Keith Curry Lance and Marti Cox's thesis seems to be that the job of comparing libraries cannot be done, so I am at fault for having tried. Somehow, unique among American public or private institutions, libraries are just too varied and too local to be compared. Yet despite these assertions, the authors urge individuals to use the NCES Public Library Peer Comparison tool (nces.ed.gov/surveys/libraries/publicpeer/) to do this impossible task. To use an apropos statistical expression: go figure!

The NCES Public Library Peer Comparison tool uses eight of the 15 measures that I employ in the HAPLR ratings: five of the six input measures (FTE staff per 1,000, total spending per capita, collection spending per capita, books per capita, and periodical subscriptions per capita) and three of the nine output measures (visits per capita, circulation per capita, and reference per capita). It also provides the data needed to extract the remaining seven HAPLR measures easily. Oddly, Cox and Lance do not caution against mixing and matching potentially hyper-correlated measures when doing one's own comparisons. That, it appears, is only necessary when evaluating the HAPLR ratings. Go figure.

Cox and Lance insist that population alone is insufficient to identify a library's "true peers." Yet neither they, nor FSCS, nor NCES, nor any other agency of which I am aware has established criteria for identifying a library's "true peers." So again, go figure.

GASPING FOR MEANING

I agree that "index" may be the wrong word to have used to describe the HAPLR system; I probably should have used "scorecard." The HAPLR ratings are designed to be like ACT or SAT scores, with a theoretical range of 1 to 1,000 and most libraries scoring between 260 and 730. An index, such as the Dow Jones Index or the Consumer Price Index, can theoretically range from zero to infinity.