Owes specifically notes that the HAPLR neglects to include electronic services, an increasingly significant and cost-effective service delivery method. Hennen acknowledges this omission and, in his defense, correctly notes that there are inadequate standards for objective measurement in this area.
Another entire area missing from the HAPLR is programming. With libraries increasingly adopting the role of community centers, use of meeting spaces and attendance at library programs are a significant part of many libraries' efforts. My workplace, the Appleton Public Library, is one of these.
As Owes rightly states, "Only through a complete picture of a library, its services, and how the needs of a community are met may the public library be viewed." While the HAPLR can be valuable in earning many libraries well-deserved credit, it can wrongly be cited as evidence that a library heavily invested in electronic services and community programs is worse than one heavily invested in high circulation and traditional reference.
Because measurements lag behind services, standards lag behind measurements, and comparative analysis lags further still, it is nigh unto impossible for any system such as the HAPLR to be up to date. Problems arise when library critics point fingers at a low "score" without keeping in perspective both this lag and the somewhat arbitrary nature of choosing and weighting some criteria while excluding others.
The Montgomery City-County Public Library has a 2006 HAPLR score of 21%; the Appleton Public Library's 2006 score is 90%. I still think the comparison is unfair. The true measure of any library is the community it serves. Arbitrary measurements like the HAPLR have their uses, but they cannot determine how any library is doing relative to the values and needs of the community it serves. You can measure by standard criteria, but not judge by them. If we don't understand the difference, we're just hapless.