Trouble with usage studies Steve Black 28 Jun 1996 03:39 UTC

Al Henderson, Editor, Publishing Research Quarterly recently replied to a
post by Rose M. LaJudice concerning what constitutes low journal usage.
Mr. Henderson's observations invite discussion.

  He states, in part,
"The infamous Pitt study used a sample that indicated low usage of
journals while interviews indicated that faculty members systematically
browsed all new issues."
  I'd like to make two points.  First, faculty statements about their
systematic browsing may or may not be valid.  I would want to see this
Pitt study (anyone have the full cite?) to understand the context of the
interviews.  Second, I consider browsing and use to be different.  If an
individual browses the table of contents and looks at a few abstracts,
but decides that no articles are worth reading in full, then it is
reasonable not to count that browsing as a "use".  If the individual
reads or copies an article, the volume is more likely to be left to be
reshelved, and therefore counted as a use.

  Mr. Henderson states, "A 'scientific' study of usage would probably use
some other method to assure a given level of reliability".  I would be
very interested in hearing some specific, feasible suggestions for how to
do this.  We all know that no use study is perfect, especially in open
stacks.  However, counting volumes as they are reshelved is more accurate
than asking people what they use, and probably the best we can do with
our staffing and budgets.  And even though no use study is perfect,
doing one is far better than making decisions with no use statistics
at all.

  Finally, Mr. Henderson opened his reply to Rose M. LaJudice's query
with "This question [about what constitutes low use] is interesting
because of all the trouble caused by 'usage studies'."
  Please, Mr. Henderson, explain for us what you mean by "all the trouble
caused".

Steve Black
The College of Saint Rose
blacks@rosnet.strose.edu