Guidelines for Journal Usage (Albert Henderson) Marcia Tuttle 03 Jul 1996 12:29 UTC

---------- Forwarded message ----------
Date: Tue, 2 Jul 1996 18:38:19 EDT
From: Albert Henderson <70244.1532@COMPUSERVE.COM>
Subject: Guidelines for Journal Usage (Dorothy Milne)

Dorothy Milne <dmilne@MORGAN.UCS.MUN.CA> writes:

[snip]

<<A study by Dorothy
<<Milne and Bill Tiffany (Serials Librarian 19,3/4, 1991) indicated that
<<many patrons did not cooperate with their reshelving methodology.

<First of all, we did not use a reshelving methodology.  We used a method
<that asked patrons to tick a tag whenever they made any use of a journal
<issue - any use that would otherwise lead them to order the article by
<ILL. So - browsing an issue counted as a use, as did reading the article,
<photocopying the article, or borrowing it.  What we found was that
<users failed to tick about one-third of the time.  However, since there
<is no reason to think that the ratio of ticking to not-ticking would
<vary from one journal to another (i.e. that physicists failed to
<tick more than historians), this problem was resolved by adjusting the
<use figures upwards by a factor of 1.5.  Since the decision on which journals to
<cancel depended on a listing of titles in order of the intensity of
<their use, the method produced usable and repeatable results.
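
For concreteness, here is a minimal sketch of the adjustment and ranking
described in the quoted passage, using hypothetical titles and tick counts
(not figures from the Milne-Tiffany study):

    # Hypothetical tick counts per title; not data from the study.
    observed_ticks = {
        "Journal A": 120,
        "Journal B": 45,
        "Journal C": 8,
    }

    # If roughly one use in three goes unticked, recorded ticks are about
    # two-thirds of actual uses, so actual use is estimated as ticks * 1.5.
    UNDERCOUNT_FACTOR = 1.5

    estimated_use = {title: ticks * UNDERCOUNT_FACTOR
                     for title, ticks in observed_ticks.items()}

    # Rank titles from least to most intensively used; the least-used
    # titles become the cancellation candidates.
    for title, uses in sorted(estimated_use.items(), key=lambda kv: kv[1]):
        print(f"{title}: about {uses:.0f} estimated uses")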

That might be true if your users were a homogeneous group. But they are
not. You would have to prove that the inclination to tick or not was not
idiosyncratic to certain individuals -- not discipline-wide -- to have a
credible methodology. Was it one-third of the time, or one-third of the
users, or a single person too busy using your collections to be bothered
to tick at all? This is particularly significant where the use of a
research-level journal may be confined to a particular individual who, in
fact, may have a duty to scout and copy articles for several members of a
research unit. Your study made no effort to evaluate who performed each
use. The fault has to do with its reliance on patron cooperation, whether
shelving or ticking, and its dismissal of information from your faculty.

<<The infamous Pitt study used a sample that indicated low usage of journals
<<while interviews indicated that faculty members systematically browsed all
<<new issues.

< A number of studies have shown that _actual use_ of journals by faculty
< members and the use that faculty members _say_ they make of journals are
< quite different.  Our own results showed almost no correlation between
< the two.  In my view, the only reliable results come from measuring
< actual use.

You say we should not trust your faculty to provide accurate information about
what they use and its importance to library patrons. If I recall
correctly, your study reported that every serial in your library had been
identified by at least one faculty member as important for a teaching or
research program. Nonetheless you canceled over 20 percent of the
subscriptions.

You also indicated that the journals of eight major commercial publishers
were more cost-effective than the average and that the most prestigious
journals were not cost-effective -- flatly contradicting the thesis of
cost-per-kiloword studies published by the American Physical Society and
others.

<<There are also arithmetical complications. Suppose your dept. heads browse
<<all major journals systematically. A weekly such as NEJM will show about 50
<<uses per patron per year, while monthlies show twelve and bimonthlies show six.

< In my view, the only meaningful basis for cancellation/retention of
< journals is the cost-per-use of a title, based on the total number of
< uses per year.  If one measures use, estimates the annual total of uses,
< and then calculates an estimate of the cost-per-use (i.e. annual
< subscription cost / annual no. of uses), the distinction between
< weeklies, monthlies, quarterlies is taken into account.  This is
< neither complicated nor difficult to do.
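
As a rough illustration of the calculation described in the quoted passage
(all prices and use counts below are hypothetical), dividing annual cost by
estimated annual uses absorbs the weekly/monthly distinction raised above:

    # Hypothetical titles: (annual subscription cost in dollars,
    # observed ticks per year).  Not data from any study.
    journals = {
        "Weekly W":    (1200, 104),  # e.g. two patrons browsing every issue
        "Monthly M":   (600, 24),
        "Quarterly Q": (300, 2),
    }

    UNDERCOUNT_FACTOR = 1.5  # same upward adjustment for unticked uses

    for title, (cost, ticks) in journals.items():
        estimated_uses = ticks * UNDERCOUNT_FACTOR
        cost_per_use = cost / estimated_uses
        print(f"{title}: ${cost_per_use:.2f} per use")

    # Titles are then ranked by cost-per-use; those with the highest
    # cost-per-use become the cancellation candidates.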

You mean that this is an arithmetical exercise in 'survival of the
fittest' among journals that has nothing to do with the qualitative views
of your patrons: the use by a research department is valued no more than
the use by an undergraduate or a visitor.

<<A "scientific" study of usage would probably use some other method to
<<assure a given level of reliability.

< Our methodology was scrutinized by some very annoyed mathematicians and
< physicists (annoyed because their journals were slated for cancellation).
< They were itching to find deficiencies in the method.  They came back
< apologetic and said they found no methodological problem. (The method
< was devised by a Ph.D. scientist in the first place.)

The faults I find are in the social and behavioral aspects of the design,
which, as noted above, rely on the cooperation of patrons who do not
reliably cooperate, and in the unashamed intent of the study to justify
the cancellation of subscriptions and the high-handedness of the
administrators who simply hand you marching orders. The Pitt study was
devised by a number of PhDs. Its
use of circulation data to calculate in-house use and cost-effectiveness
was savaged by Robert M. Hayes (LIBRARY RESEARCH 3:215-260, 1981). Its
methodology was also severely criticized by Borkowski and MacLeod (LIBRARY
ACQUISITIONS: PRACTICE AND THEORY 3:125-151, 1979) and sharply questioned
by Broadus (COLLECTION MANAGEMENT 5,1/2:27-41).

< As for reliability, we have checked our results against ILL requests
< over the past 8 years and so far have found no major errors.  A recent
< collection evaluation, based on a citation analysis in chemical
< instrument analysis, offered an interesting confirmation that our
< cancellations (based on our use studies) have indeed targeted the
< low-use titles and spared the high-use titles.

< I would agree that no estimate of usage will ever be "scientifically
< accurate" - this sort of human behaviour is too difficult to analyse
< with total precision. This is why our method was based on "estimates",
< which are then used to rank the journals in sequence. A better question, in my
< opinion, is what is a library's best approach to getting reliable
< information?  As far as I know, no better method has yet appeared in the
< library literature.

Tony Stankus has recommended talking personally to each faculty member
and negotiating over each journal of interest that might be canceled.

[snip]

< Publishers are not overjoyed to have libraries judge their journals
< by the actual use they receive.  Publishers' focus on "quality" has
< included paper and printing quality, binding quality, and quality
< of an editorial board.  They have not, however, concentrated on
< the "usability" or relevance of the information they publish.  I
< sympathize with them, since this is a difficult thing to identify
< and promote.  None the less, I get the impression that publishers
< have focussed more on getting more _quantity_ (more articles, more
< issues) rather than on keeping costs down by publishing articles
< which the readership really wants to see.  Thus, their journals
< are swollen with articles that will never be read, and the costs
< of this system have spiralled out of sight.  There is little that
< libraries can do in this environment other than observe user behaviour
< and act accordingly.

Never mind publishers. My question is: do you sympathize with researchers,
students, and faculty? Does your library collection enable or limit their
opportunities? Do the members of your university have a say in how the
budget is drawn?

As for publications that are 'never read', I would encourage you to read
the recent article debunking myths about journal use by Carol Tenopir and
Donald W. King (LIBRARY JOURNAL March 15, 1996:32-35).

Increases in the volume of publications are generated by increased
research expenditures, not by publishers. Why haven't library expenditures
kept pace with research spending??

Thanks for responding to my comments.

Albert Henderson, Editor, PUBLISHING RESEARCH QUARTERLY
70244.1532@compuserve.com