Re: Journal use studies--request for info -Reply Alfred Kraemer 15 May 2000 21:25 UTC

Jennifer,

Here is a summary of the method we use for tracking journal use data. In
the course of this brief outline I'll answer some of the questions you
raised:

Continuity of tracking by barcode:

We barcode bound and unbound journals. After binding, use data for the
unbound issues is transferred to the corresponding bound volume. (A script
tallies the use by title based on the items sent in a specific bindery
shipment.)
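For illustration, here is a minimal sketch of that tallying step in Python. The input format and field names are assumptions for the example, not our actual script, which runs against Innopac item records:

```python
from collections import defaultdict

def tally_bindery_use(items):
    """Sum unbound-issue use per title for one bindery shipment.

    `items` is a list of (title, use_count) pairs, one per unbound
    issue in the shipment; the per-title totals are what would be
    transferred to the new bound volume's item record.
    """
    totals = defaultdict(int)
    for title, use_count in items:
        totals[title] += use_count
    return dict(totals)

# Hypothetical shipment: two issues of one title, one of another.
shipment = [
    ("J Clin Invest", 4),
    ("J Clin Invest", 7),
    ("Blood", 2),
]
print(tally_bindery_use(shipment))  # {'J Clin Invest': 11, 'Blood': 2}
```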

For bound items we track three kinds of use data: internal use (scanning
before reshelving), checkouts (our journals check out to departments for 2
hours), and IUSE3 (which holds the use transferred from the unbound issues
included in the bound volume). Use data for bound volumes accumulates,
meaning the previous year's totals serve as a 'baseline'. We would like to
reset the data every year, but at this point -I don't believe Innovative
has changed that yet- only the system vendor can do that.
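Because the counters accumulate, the annual figure has to be derived by subtraction. A sketch of that arithmetic (the field names are invented for the example):

```python
def total_use(record):
    """Combine the three use fields on a bound item record:
    internal use, checkouts, and IUSE3 (transferred unbound use)."""
    return record["internal_use"] + record["checkouts"] + record["iuse3"]

def annual_use(current, baseline):
    """The counters accumulate, so this year's use is the current
    combined total minus last year's combined total (the baseline)."""
    return total_use(current) - total_use(baseline)

# Hypothetical bound volume: 58 combined uses now, 41 as of last year.
this_year = {"internal_use": 40, "checkouts": 12, "iuse3": 6}
last_year = {"internal_use": 28, "checkouts": 9, "iuse3": 4}
print(annual_use(this_year, last_year))  # 17 uses this year
```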

Concerns about the validity of the data:

There were some initial objections to the way use was tallied: what if a
user reshelves the journal, or copies several articles from one issue,
etc.? However, there has been strong support from faculty for the
assumption that such discrepancies would not favor or disadvantage
specific titles, and that if journal cuts have to be made, journals with
the lowest use or a very high cost per use should be targeted. (This is
somewhat simplified for the sake of brevity.)

Evaluating the data:
There are -in essence- two different approaches I have seen in use:

1. Title-based: A total per title is used, no matter how many years of back
volumes there are for each title.
  Some libraries use an 'exclusion period' to ensure journals with less
than -let's say five years of holdings- are not cut 'prematurely'.

2. Title/year-based: Breaks down the usage by title and by year, e.g.
total usage: 2,300; last five years: 1,800. This approach is more complex
in many ways: it requires a method to reliably use the volume field to
tally use by year, etc. I have heard about this approach but have not
actually seen any results based on it from other libraries. We are taking
this approach this year because we need a better way to determine the
cost-effectiveness of our subscriptions.
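To make the two ideas concrete, here is a sketch of a title-based review that combines an exclusion period with a cost-per-use cutoff. The thresholds and record layout are illustrative assumptions, not the figures we actually use:

```python
def flag_candidates(journals, min_years=5, max_cost_per_use=10.0):
    """Flag cancellation candidates on a per-title basis.

    Titles held fewer than `min_years` are skipped (the 'exclusion
    period'), so new subscriptions are not cut prematurely. Of the
    rest, any title whose cost per use exceeds the cutoff is flagged.
    Each journal is a dict: title, years_held, total_use, annual_cost.
    """
    flagged = []
    for j in journals:
        if j["years_held"] < min_years:
            continue  # too new to judge fairly
        # Guard against division by zero for titles with no recorded use.
        cpu = j["annual_cost"] / max(j["total_use"], 1)
        if cpu > max_cost_per_use:
            flagged.append((j["title"], round(cpu, 2)))
    return flagged

# Hypothetical data: A is low-use/high-cost, B is too new, C is heavily used.
journals = [
    {"title": "A", "years_held": 8,  "total_use": 12,  "annual_cost": 900.0},
    {"title": "B", "years_held": 3,  "total_use": 2,   "annual_cost": 400.0},
    {"title": "C", "years_held": 10, "total_use": 300, "annual_cost": 1200.0},
]
print(flag_candidates(journals))  # [('A', 75.0)]
```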

I'm pretty sure there are other methods in use - and I would like to hear
about them, too!

Software used:

a. A database program like Access for comparisons and evaluations; a
spreadsheet for simpler formatting of the final reports.
Tables of 50,000+ rows can be manipulated, compared, and queried
without trouble in Access, MySQL, etc.
The SQL and pattern-matching functions in Access are also pretty good.
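The kind of query involved runs on any SQL engine. As a self-contained illustration, here is the same idea using Python's built-in sqlite3 module (the table and column names are invented for the example):

```python
import sqlite3

# In-memory database standing in for an Access/MySQL table of use data.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE journal_use (title TEXT, year INTEGER, uses INTEGER)"
)
conn.executemany(
    "INSERT INTO journal_use VALUES (?, ?, ?)",
    [("A", 1998, 40), ("A", 1999, 55), ("B", 1998, 3), ("B", 1999, 5)],
)

# Total use per title over a recent span, lowest-use titles first --
# the sort order that drives a cancellation review.
rows = conn.execute(
    """SELECT title, SUM(uses) AS total
       FROM journal_use
       WHERE year >= 1998
       GROUP BY title
       ORDER BY total ASC"""
).fetchall()
print(rows)  # [('B', 8), ('A', 95)]
```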

b. For manipulating Innopac output: a scripting language like Perl works
very well and fast. (Especially if you can run your Perl programs on the
network using your network directory.)
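As a stand-in for that scripting step, here is a short Python sketch that parses a delimited system export into records ready for loading into a database or spreadsheet. The pipe-delimited layout is invented; real Innopac export formats vary:

```python
def parse_export(lines, delimiter="|"):
    """Parse a delimited export into dicts keyed by the header row --
    the typical first step before loading use data into Access or a
    spreadsheet for evaluation."""
    header = lines[0].rstrip("\n").split(delimiter)
    records = []
    for line in lines[1:]:
        fields = line.rstrip("\n").split(delimiter)
        records.append(dict(zip(header, fields)))
    return records

# Hypothetical export: barcode, title, and internal-use count per item.
sample = [
    "barcode|title|intl_use\n",
    "30000123|J Clin Invest|17\n",
    "30000124|Blood|9\n",
]
for rec in parse_export(sample):
    print(rec["title"], rec["intl_use"])
```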

The key question: Has it been -and is it still- worthwhile?

Without question:
Over the past five years, we have made significant journal cuts (>30% total)
without a drop in overall journal usage.
Even in years when we could have continued every subscription we had, it
was generally wise to cut low-use journals and use the savings to buy
journals which showed continued high ILL use.
Some low-use cuts run into opposition - and we do invite comments on
proposed journal cuts - but overall faculty support for the usage-based
method has remained very strong.
In many instances faculty have even recommended further cuts to purchase
high ILL-use journals.
One of the biggest misconceptions I heard frequently here several years
ago was that cuts should not be made unless journal price increases forced
them. If one maintains that as a guideline for journal management, one
almost inevitably winds up with a collection that includes too many
low-use, high-cost-per-use subscriptions.

By the way, general indicators of the importance of journals, e.g. impact
factors, did not work for us. They would have favored journals with a broad
focus and would have led to the cancellation of some specialty journals
which see significant use on our campus.

Alfred Kraemer
Head, Technical Services
Medical College of Wisconsin Libraries
8701 Watertown Plank Road
Milwaukee, WI 53226

Phone: 414-456-4273
E-Mail: akraemer@mcw.edu

----- Original Message -----
From: Jennifer Sweeney <jksweeney@UCDAVIS.EDU>
To: <SERIALST@LIST.UVM.EDU>
Sent: Monday, May 15, 2000 12:31 PM
Subject: Re: Journal use studies--request for info

> We follow a similar procedure to Dana's as far as scanning everything that
> gets reshelved, item records for bound volumes in Innovative, etc.
>
> A couple of hangups we have found and not gotten past yet:
>
> Some of our librarians are concerned about the date range of "current
> issues"; as we all know, depending on your binding schedule, there could
> be anywhere from two weeks to two years of "current issues" on the shelf
> at any given time.  The average is likely around one year but I haven't
> verified this.  Dana, can I ask you what assumptions you make regarding
> the time span of current issues?  Some librarians feel that they need more
> precise usage data for current issues (specifically in
> chemistry/physics/engineering), and we are not sure how to obtain this
> conveniently in an automated way w/out barcoding every issue.
>
> Dana, could you also tell us the size of your collection and the software
> you are using to manipulate the data?
>
> We are exploring various cost per use methodologies too.  Is anyone doing
> anything experimental?  How about journal impact factors, half-life, etc.?
> Use of electronic versions of titles?  Any preferred approaches, experience
> to share out there?
>
> Jennifer Sweeney
> Library Analyst
> Shields Library
> University of California
> Davis, CA 95616
>
> voice (530) 752-5819
> fax (530) 752-6899
> jksweeney@ucdavis.edu
>
> At 02:19 PM 05/12/2000 -0400, Dana Belcher wrote:
> >How do you do it?  (Using barcodes? scanners? etc.)
> >
> >We keep continuous use statistics.  Our students have to do "pick ups" all
> >day long, so we have them scan in what they pick up before shelving.  We
> >have Innovative, and have set up item records for all current and mf
> >titles.  The bound volumes automatically have item records.  We have a
> >rolodex containing barcodes for each current/mf title that is then scanned
> >for each item picked up.
> >  For the bound, we just scan the barcode on the item.
> >
> >Do you collect data for each title for each year, or by spans of years
> >(e.g. 1980-1990)?  If so, which spans of years do you use, and why?
> >
> >We collect data for each year, June 1-May 31.  We then put the use data
> >into an allocation formula that distributes the book budget among the
> >academic depts.  The data is also used to determine if titles should be
> >kept or not.  We keep statistics separately for each medium so we can
> >see which format is being used most.  We've changed a lot of retention
> >periods because the mf or bound was not being used.  We now keep about 1/3
> >of the current titles on the shelves for 2-5 years instead of converting
> >them to mf or binding.
> >
> >What are the pros and cons of your automated method?
> >
> >Up until 1996, we kept the statistics manually like you do now.  Automate
> >as soon as possible!  It's a lot easier, and the software allows you to
> >sum, sort, and all other possibilities.  We really haven't had any
> >problems with the automation.
> >
> >Feel free to contact me personally.
> >
> >Dana Belcher, Periodicals/Acquisitions Librarian
> >East Central University
> >Linscheid Library
> >200 S. Stadium Drive
> >Ada, OK 74820
> >580-310-5564
> ><dbelcher@MAILCLERK.ECOK.EDU>
>