Evaluating digital services: an alternative reality

By reeset / In Digital Libraries

Sorry for what I’m sure will be a longish post.  This is a bit of a brain dump —

Lately, I’ve been thinking a lot about how libraries determine if services that they provide are successful.  Well, specifically, how libraries determine if digital services that they provide are successful.  And after attending DLF and listening to more than a few folks talk about very cool digital programs, I’m starting to think that libraries view digital programs as living in a kind of alternative reality, where the rules of regular evaluation and assessment don’t apply. 

Our recently retired University Librarian and I would spend a good deal of time talking about assessing digital projects.  In the months prior to her retirement, we spent a lot of time talking about digital services and the necessity for libraries to begin looking critically at the services they provide and assessing their feasibility in terms of impact and cost of operation.

I think in the library world, there is a feeling that libraries need to catch up, and in many respects, catching up is a code-word for building digital programs, creating mobile sites, creating institutional repositories, etc.  It could be a lot of things – but I think the question that sometimes gets lost is the question of whether a library should participate in those activities.  Let’s use the institutional repository as an example.  I know that libraries and librarians love to jump on the open access hobby horse (yes, I’m quite cynical about this one), but does every library need to have an institutional repository?  I’d argue no.  In fact, OSU’s IR is very successful when compared to other IR efforts in the library community, but even here, I sometimes wonder if it’s necessary for an institution of our size to maintain such a repository.  I honestly think that as repository efforts go, OSU’s is one of the best.  At the same time, I know how many resources (both infrastructure and people) go into delivering this service, and because of that, I sometimes get discouraged when I consider the substantial costs per item.  At the same time, I see how this effort becomes more important to the campus and the library with each passing year as more content finds its way into the IR.
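To make that cost concern concrete, here is a minimal back-of-the-envelope sketch of the kind of arithmetic a library might run on a repository. All of the figures below are hypothetical placeholders (not OSU’s actual numbers), and the helper functions are illustrative, not any standard assessment formula:

```python
# Back-of-the-envelope repository cost sketch.
# All inputs below are hypothetical placeholders, not real figures.

def cost_per_item(annual_cost: float, items_added: int) -> float:
    """Total annual operating cost divided by items deposited that year."""
    return annual_cost / items_added

def cost_per_use(annual_cost: float, annual_downloads: int) -> float:
    """Total annual operating cost divided by downloads/views that year."""
    return annual_cost / annual_downloads

# Hypothetical inputs: staff time plus infrastructure.
staff_cost = 1.5 * 70_000       # 1.5 FTE at an assumed loaded cost of $70k
infrastructure = 15_000         # assumed servers, storage, licenses
annual_cost = staff_cost + infrastructure   # 120,000

print(cost_per_item(annual_cost, items_added=2_000))       # -> 60.0 dollars/item
print(cost_per_use(annual_cost, annual_downloads=50_000))  # -> 2.4 dollars/use
```

Even this crude version makes the trade-off visible: the same annual cost looks very different depending on whether you divide by deposits or by actual use, which is part of why hit counts alone tell so little.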

At the same time, it seems to me that the concept of an IR is an old one.  Yes, organizations want to maintain walls around their information to demonstrate ownership and provenance, but in reality, I often times believe our patrons would be much better served by repository efforts that removed the concept of the organization.  Thinking about OSU’s repository efforts…could this work have more impact if we could leverage a larger state-wide repository effort?  For that matter, couldn’t every institution in Oregon?  And why would it have to be just statewide…again, those are fairly arbitrary borders.

I think the thing that I find myself struggling against sometimes is that libraries search for digital services to distinguish themselves and provide services that make resources more readily available for their patrons.  But we often times have approached this the same way we build traditional print collections.  We start a local program, brand it, promote it.  When in reality, the digital space provides libraries an opportunity to work outside the traditional boundaries of ownership and build more collaborative services.  And collaborative not in the sense that I run an IR and you run an IR and we have an API that lets us communicate with each other – but collaborative in the sense that we build tools that everyone shares.  We see this model happening, and it will happen more as libraries are forced to justify stretched resources, but I often times think that we could be doing so much more today.

Anyway – what has got me thinking about this lately has been talking to people about users and their digital services.  Over the past couple of months, I’ve spoken to folks building mobile sites or grant-driven projects who excitedly talk about 500 visitors a day, or 2,000 visitors daily – and who see those numbers as justifying tens of thousands of dollars in startup monies or locking up finite FTE resources.  It makes me wonder if libraries have lost their minds, and whether we should be doing better assessment of our digital collections.  Because of MarcEdit’s automatic updating, I know that it’s opened over 5,000 times daily.  A few of the map cataloging tools on my page that I don’t even update any more get a few hundred visitors a day – these are tools created in my spare time, with few resources – yet many times they have larger audiences than many digital library services being spun out by libraries.  And yet, it is these vampire projects that siphon valuable time and resources away from libraries and make truly transformative development more difficult.

This isn’t to say that libraries shouldn’t be doing things.  We should.  However, what I find myself looking for more and more at digital library conferences is librarians talking seriously about assessment of their digital assets and projects.  They are out there –


9 thoughts on “Evaluating digital services: an alternative reality”

  1. Well said, I have thought the same thing.

    What’s the solution, how do we assess if a given digital service is valuable? Just number of hits a day? How do you apply this to proposed digital services, to determine if they are worth pursuing?

  2. @Jonathan,

    I don’t think that it can be just number of hits — because there are simply going to be projects that have niche audiences. However, at the same time, there needs to be some sanity between the number of resources utilized to build and maintain a project and the population served. I think with many digital projects, libraries are trying new things. This is something that we need to be doing. However, when things don’t work or they fail to find an audience, we need to be willing to be ruthless and change directions. I think when we talk about proposed digital services — projects worth pursuing are projects that have a plan for providing assessments of their success. A project should be able to identify its audience and what would determine a successful project, as well as a plan for how the project could be decommissioned if it didn’t work. I see too many projects that start as experiments, find some level of pseudo-production, then become vampires sucking up time and money as libraries continue to support a project because they are afraid someone, somewhere, might be using it — rather than cutting their losses.


  3. Well, I run a consortial “IR,” and frankly, I don’t think it solves anything. If anything, it’s an excuse for institutions to check off the “IR” checkbox without doing appropriate soul-searching about why they have this thing, what they plan to collect for it, and how they plan to collect it.

    I’m not cynical about OA, but I’m extraordinarily cynical about how libraries approach OA. If we can’t rustle up a decent cost-per-item, what are we going to DO about that exactly? Give up?

  4. @Dorothea,

    Sure — I think a lot of what we do with digital services is to check it off the list. What’s missing in many cases is the soul searching that you mention, in addition to a commitment to shift resources. I think probably the biggest problem libraries have had with IRs has been that it’s a solution to a problem that libraries are trying to convince institutions that they have. I’ve always thought that IRs won’t work in libraries until the institutions themselves treat the idea as a core value, something that I simply haven’t seen happening. But the IR is really just one of the most visible symptoms of what I think is a much larger problem with assessment when considering digital collections and services in libraries.


  5. Nice post that resonates strongly with me. Another place where there is too little collaboration is between libraries and LIS programs. It seems to me as though you have begun to define a research/measurement need that some LIS programs might be uniquely positioned to operationalize. There are faculty and students who could be spending their research time trying to study and provide results for what is a fundamental question for those in the profession. This is a case where impact studies could be used to flesh out the somewhat meager information gained from trying to measure use.

  6. @Tim,

    I certainly agree with you there. I actually had an opportunity in Sept. to speak to the faculty at the University of North Texas, and this was one of the topics discussed. From that conversation, I came away with the feeling that this could happen if libraries and LIS programs simply engaged with each other a little more often, so that the theory learned in one’s LIS program actually matched the practice in the real world.


  7. Evaluation and assessment are always critical, especially for projects that are expensive. I’m not sure about the perception that digital library programs exist in an alternate reality, as so many digital library folks do extensive work on project planning, management, and business analysis. Project planning and management for library digital initiatives should include an upfront cost/benefit analysis. The easy use-cases for digital services are those where there’s an existing need that can be met better and more cheaply, with added value from being digital. This requires a good deal of pre-planning (often brought about by extremely strained resources) to best leverage all existing capacity. At the University of Florida, our digital services grew out of and eventually replaced preservation microfilming. The service level (based on hits, user feedback, regular usability evaluations, and integration with teaching and research) has increased while the cost per item has dropped. The IR is on the same platform (only 1 FTE of support, which is also shared with other digital projects).

    Alongside large projects with high use (the UF Digital Collections see over 20,000 hits/day), there’s a need for digital library services that serve faculty research needs with the libraries supporting and collaborating with faculty on digital projects that may have low popular usage and high academic impact. Stanford’s Spatial History group is a great example of this. The costs tend to be high for these projects, but they’re often supported with grant funds and by both the libraries and academic departments.