There are several national systems for assessing, evaluating, or otherwise rating the quality of public libraries.
United States
Basic library statistics (not rankings) were initially maintained by the National Center for Education Statistics; that body continues to collect data for academic libraries, but administration of the Public Libraries Survey and the State Library Agencies Survey was transferred to the Institute of Museum and Library Services (IMLS) in October 2007.[1] IMLS continues to conduct public library surveys and distributes historical survey data going back to 1988.[2]
The Library Data Archives includes longitudinal data sets.[3]
HAPLR and subsequent debate
The system that would become Hennen's American Public Library Ratings (HAPLR) was first published in the January 1999 issue of American Libraries and was prepared by Thomas J. Hennen Jr., director of the Waukesha County Federated Library System in Wisconsin.[4] Libraries were ranked on 15 measures, with comparisons made within broad population categories. HAPLR was updated annually through 2010 and was the focus of widespread professional debate in the field of librarianship.
Oregon State Librarian Jim Scheppke argued that the statistics HAPLR relies on are misleading because they emphasize measures such as circulation and funding over measures such as open hours and patron satisfaction. "To give HAPLR some credit, collectively, the libraries in the top half of the list are definitely better than the libraries in the bottom half, but when it gets down to individual cases, which is what HAPLR claims to be able to do, it doesn't work."[5]
In contrast, Library Journal editor John N. Berry noted: "Unfortunately, when you or your library receives any kind of honor, it stimulates the flow of competitive hormones in your professional colleagues. This jealousy rears its ugly head in many ways. We've suffered endless tutorials on the defects in Hennen's rankings. So what? They work!"[6]
Keith Curry Lance and Marti Cox, both of the Library Research Service, took issue with HAPLR's reasoning backwards from statistics to conclusions, pointed out the redundancy of HAPLR's statistical categories, and questioned its arbitrary system of weighting criteria.[7]
Hennen responded, saying Lance and Cox seem to suggest "that the job of comparing libraries cannot be done, so I am at fault for having tried. Somehow, unique among American public or private institutions, libraries are just too varied and too local to be compared. Yet despite these assertions, the authors urge individuals to use the NCES Public Library Peer Comparison tool (nces.ed.gov/surveys/libraries/publicpeer/) to do this impossible task."[8]
A 2006 Library School Student Writing Award article questioned HAPLR's weighting of factors, its failure to account for local factors (such as a library's mission) in measuring a library's success, its failure to measure computer and Internet usage, and its lack of focus on newer methods of evaluation, such as customer satisfaction and return on investment.[9]
Ray Lyons and Neal Kaske later argued for greater recognition of the strengths and limitations of ratings.[10] They pointed out that, among other factors, imprecision in library statistics makes rating scores quite approximate, a fact rarely acknowledged by libraries receiving high ratings. The authors also noted that HAPLR calculations perform invalid mathematical operations on ordinal rankings, making comparisons of scores between libraries and between years meaningless.
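The ordinal-arithmetic objection can be illustrated with a small, hypothetical example. The sketch below is not the actual HAPLR formula, and the library names and figures are invented; it only shows how rank-based scoring discards the size of the differences it summarizes.

```python
# Illustrative sketch only: NOT the actual HAPLR calculation. It assumes a
# simplified "score from ranks" approach to show why arithmetic on ordinal
# data can mislead, as Lyons and Kaske argue.

# Hypothetical circulation per capita for three libraries.
circ_per_capita = {"Library A": 21.0, "Library B": 20.8, "Library C": 6.0}

# Convert the raw values to ordinal ranks (1 = highest).
ordered = sorted(circ_per_capita, key=circ_per_capita.get, reverse=True)
ranks = {name: position + 1 for position, name in enumerate(ordered)}

print(ranks)  # {'Library A': 1, 'Library B': 2, 'Library C': 3}

# Averaging or weighting these ranks treats the A-B gap (0.2 circulations
# per capita) and the B-C gap (14.8 circulations per capita) as equally
# large, which is why scores built this way are hard to compare across
# libraries or across years.
```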
Star Libraries system
America's Star Libraries and the Index of Public Library Service (LJ Index), an alternative system developed by Keith Curry Lance and Ray Lyons, was first introduced in the June 2008 issue of Library Journal.[11] The method rates libraries on four equally weighted per-capita statistics, with comparison groups based on total operating expenditures: library visits, circulation, program attendance, and public internet computer use.[12] The system awards 5-star, 4-star, and 3-star designations rather than numerical rankings. The creators of the LJ Index stress that it does not measure service quality, operational excellence, library effectiveness, or the degree to which a library meets existing community information needs.
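As a rough illustration of a per-capita, peer-group approach of this kind, the sketch below assumes each measure is divided by its peer-group mean and the four ratios are averaged with equal weight; the libraries, figures, and exact scaling are hypothetical and do not reproduce the published LJ Index calculation or its star cutoffs.

```python
# Minimal sketch in the spirit of a per-capita, peer-group index.
# Assumptions (not the published LJ Index formula): each per-capita measure
# is divided by its peer-group mean, and the four ratios are averaged equally.

MEASURES = ["visits", "circulation", "program_attendance", "public_internet_use"]

def per_capita(library):
    """Convert raw annual counts to per-capita figures."""
    return {m: library[m] / library["population"] for m in MEASURES}

def score(library, peer_group):
    """Average of the library's per-capita measures relative to peer-group means."""
    lib_pc = per_capita(library)
    peers_pc = [per_capita(p) for p in peer_group]
    ratios = []
    for m in MEASURES:
        peer_mean = sum(p[m] for p in peers_pc) / len(peers_pc)
        ratios.append(lib_pc[m] / peer_mean)
    return sum(ratios) / len(ratios)

# Hypothetical data: one library and two peers in the same expenditure group.
library = {"population": 50_000, "visits": 300_000, "circulation": 600_000,
           "program_attendance": 15_000, "public_internet_use": 40_000}
peers = [
    {"population": 40_000, "visits": 180_000, "circulation": 320_000,
     "program_attendance": 8_000, "public_internet_use": 20_000},
    {"population": 60_000, "visits": 270_000, "circulation": 540_000,
     "program_attendance": 12_000, "public_internet_use": 30_000},
]

print(round(score(library, peers), 2))  # > 1.0 means above the peer-group average
```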
Australia and New Zealand
There is some interest in developing an index in Australia and New Zealand.[13]
Great Britain
Great Britain adopted national standards, and in 2000 the Audit Commission began publishing both summary annual reports on library conditions and individualized ratings of libraries. The Audit Commission, an independent body, bases its reports on statistical data, long-range plans, local government commitment to the library, and a site visit, and every library is assigned a score.[14]
Germany
The Bertelsmann Foundation partners with the German Library Association to produce BIX, a library index quite similar to HAPLR. The main difference is that BIX was designed to compare one library to another both within a given year and over time, whereas HAPLR compares libraries to one another only within a given year.[15]
References
^ Nelson, Elizabeth (Winter 2007). "Library Statistics and the HAPLR Index". Library Administration & Management 21 (1): 9. ISSN 0888-4463.
^ "Honorable Mention: What Public Library National Ratings Say" (Nov/Dec 2008). Public Libraries: 36–41.
^ "The New LJ Index". Library Journal, June 15, 2008. Archived from the original on 2009-04-14. Retrieved 2019-01-31.