So, let’s start with a quick preface to my comments here. First, this is a little on the long side. Sorry. I got a bit wordy and occasionally wander a little here and there :). Second, these reflect my own opinions and observations. So, with that out of the way…
This question comes from two recent experiences. First, at Midwinter in Seattle, a number of OSU folks and I met with Innovative Interfaces regarding Encore (III’s “next generation” public interface, currently in development) and the difficulty we have accessing our data in real time without buying additional software or access (via an API or, in III’s case, III’s special XML Server). The second has been the eXtensible Catalog meeting here in Rochester, where I’ve been talking to a lot of folks who are currently looking at next-generation library tools.
Sitting here, listening to the XC project and other ongoing projects, I’m more convinced than ever that our public ILS, once the library community’s most visible public success (i.e., getting our library catalogs online), has become one of the library community’s biggest liabilities: an albatross holding back our community’s ability to innovate. The ILS, and how our patrons interact with it, shapes their view of the library. At least the part of the system that we show to the public (or would like to show to the public, like web services) has simply failed to keep up with the needs of our patrons or of the library community. The internet, and the ways in which our patrons interact with it, has moved forward while libraries have not. Our patrons have become a savvy bunch. They work with social systems to create communities of interest, oftentimes without even realizing it. Users are driving the development and evolution of many services. A perfect example of this has been Google Maps, a service that, in and of itself, isn’t too interesting in my opinion. What is interesting is the way the service has embraced user participation. Google Maps mashups litter the virtual world, to the point that the service has become a transparent part of the world that the user is creating.
So what does this have to do with libraries? Libraries up to this point simply are not participating in the space that our users currently occupy. Vendors, librarians: we are all trying to play catch-up in this space by brandishing phrases like “next generation,” though I doubt anyone really knows what that means. During one of my many conversations over the weekend, something Andrew Pace said really stuck with me: libraries don’t need a next-generation ILS; they need a current-generation system. Once we catch up, then maybe we can start looking at ways to anticipate the needs of our community. But until the library community creates a viable current-generation system and catches up, we will continue to fall further and further behind.
So how do we catch up? Is it with our vendors? Certainly, I think there is a path in which this could happen, but it would take a tremendous shift in the business models utilized by today’s ILS vendors, a shift that nevertheless needs to occur. Too many ILS systems make it very difficult for libraries to get at their data outside of a few very specific points of access. As an Innovative Interfaces library, our access points are limited to the types of services we are willing to purchase from our vendor. However, I don’t want to turn this into a rant against the current state of ILS systems. I’m not going to throw stones, because I live in a glass house that the library community created and has carefully cultivated to the present day. To a very large degree, the library community… no, I’ll qualify this, the decision makers within the library community… remember a time when moving to a vendor ILS meant better times for a library. This was before my time, but I still hear decision makers within the library community voice apprehension about library-initiated development efforts, because the community has “gone down that road” before, when many organizations spun up their own ILS systems and were then forced to maintain them over the long term. For these folks, moving away from a vendor-controlled system would be analogous to going back to the dark ages. The vendor ILS has become a security blanket for libraries; it’s the teddy bear that lets everyone sleep at night, because we know that when we wake up, our ILS will be running, and if it’s not, there’s always someone else to call.
With that said, our ILS vendors certainly aren’t doing libraries any favors. NCIP, SRU/W, OpenSearch, web services: these are just a few of the standards that libraries could easily use to standardize the flow of information into and out of the ILS, but they find little support in the current vendor community. RSS, for example, a simple protocol that most ILS vendors now support in one way or another, took years to finally appear in vendor products.
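To make the point concrete, here’s a minimal sketch (with a made-up endpoint URL) of what standards-based access looks like from the client side: an SRU search that needs nothing but an HTTP GET and an XML parser. The protocol parameters (version, operation, query, maximumRecords, recordSchema) are plain SRU; everything else here is an assumption about a server that, for most of us, doesn’t exist without an extra purchase.

```python
# A hypothetical SRU searchRetrieve request against a catalog.
# No vendor SDK, no special server product; just HTTP and XML.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

SRU_BASE = "https://catalog.example.edu/sru"  # hypothetical endpoint

def search_titles(term, max_records=10):
    params = urllib.parse.urlencode({
        "version": "1.1",
        "operation": "searchRetrieve",
        "query": 'dc.title="%s"' % term,  # CQL query
        "maximumRecords": max_records,
        "recordSchema": "marcxml",
    })
    with urllib.request.urlopen(SRU_BASE + "?" + params) as resp:
        tree = ET.parse(resp)
    ns = {"srw": "http://www.loc.gov/zing/srw/"}
    hits = tree.findtext(".//srw:numberOfRecords", namespaces=ns)
    records = tree.findall(".//srw:recordData", namespaces=ns)
    return hits, records

hits, records = search_titles("digital libraries")
print(hits, "matches;", len(records), "records returned")
```

That’s the whole client. A dozen lines of generic code is all it should take to get data out of the system; the only thing missing is a server side that vendors are willing to turn on.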
Talking to an ILS vendor, I used the analogy that the ILS business closely resembles the PC business of the late ’80s and early ’90s, when Microsoft made life difficult for third-party developers looking to build tools that competed against it. Three antitrust cases later (US, EU, and Korean), Microsoft is legally bound to produce specific documentation and protocols that allow third-party developers to compete on the same level as Microsoft itself. At which point the vendor deftly noted that they have no such requirements; i.e., don’t hold your breath. Until the ILS community is literally forced to provide standard access methods to the data within their systems, I don’t foresee a scenario in which this will ever happen, at least in the next 10 years. And why is that? Why wouldn’t the vendor community want to enable the creation of a vibrant user community? I’ll tell you: we are competitors now. The upswing in open source development within libraryland has placed the library community in the position of competing with our ILS vendors. DSpace, Umlaut, LibraryFind, XC: these projects directly compete against products that our ILS vendors are currently developing or have developed. We are encroaching on their space, and the more we encroach, the more difficult I predict our current systems will become to work with.
A good example is the open source development of not one but two mainstream open source ILS products. At this point in time, commercial vendors don’t have to worry about losing customers to open source projects like Koha and Evergreen, but this won’t always be the case. And let me just say, this isn’t a knock against Evergreen or Koha. I love both projects and am particularly infatuated with Evergreen right now. But the simple fact is that libraries have come to rely on our ILS systems (for better or worse) as acquisitions systems, serials control systems, and ERM systems, and ILS vendors have little incentive to commoditize these functions. This makes it very difficult for an organization to simply move to, or interact with, another system. For one, it’s expensive. Fortunately, the industrious folks building Evergreen will get it to the point where it is a viable option, and when it is, will the library community respond? I hope so, but I wonder which large ACRL organization will have the courage to let go of its security blanket and make the move, maybe for the second time, to using an institutionally supported ILS. But get that first large organization with the courage to switch, and I think you’ll find a critical mass waiting, and maybe, just maybe, it will finally breathe some competitive life into what has quickly become a very stale marketplace. Of course, that assumes the concept of an OPAC will still be relevant, but that’s another post, I guess.
Anyway, back to the meeting in Rochester. Looking at the projects currently being described, there is an interesting characteristic shared by nearly all “next generation” OPAC projects: all of them involve exporting the data out of the ILS. Did you get that? The software that we are currently spending tens or even hundreds of thousands of dollars on to do all kinds of magical things must be cut out of the equation when it comes to developing systems that interact with the public. I think this is the message that libraries, and those making decisions about the ILS within libraries, are missing. A quick look at the folks recognized for creating current-generation OPACs (the list isn’t long), like NCState, shows one thing in common: the ILS has become more of an inventory management system, providing information relating to an item’s status, while the data itself is moved outside of the ILS for indexing and display.
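Here’s a rough sketch of that division of labor, with made-up endpoints: searching runs against an external Solr index built from exported records, and the ILS is consulted only for live item status. The Solr select parameters are real; the ILS status URL is purely a stand-in for whatever per-vendor mechanism a library can actually get at.

```python
# Sketch: full-text search against an external Solr index built from
# exported records; the ILS answers only "where is it / is it out?".
# Both base URLs are hypothetical.
import json
import urllib.parse
import urllib.request

SOLR_SELECT = "http://indexer.example.edu:8983/solr/select"
ILS_STATUS = "https://ils.example.edu/status"  # stand-in for a vendor API

def search(query, rows=20):
    """Search happens entirely outside the ILS."""
    params = urllib.parse.urlencode({"q": query, "rows": rows, "wt": "json"})
    with urllib.request.urlopen(SOLR_SELECT + "?" + params) as resp:
        return json.load(resp)["response"]["docs"]

def availability(bib_id):
    """The ILS is reduced to an inventory system: real-time status only."""
    with urllib.request.urlopen(ILS_STATUS + "?id=" + bib_id) as resp:
        return json.load(resp)

for doc in search("digital preservation"):
    status = availability(doc["id"])
    print(doc.get("title"), "::", status.get("circ_status", "unknown"))
```

Notice what the ILS contributes in this picture: a status lookup. Everything the patron actually sees (the searching, the ranking, the display) lives outside it.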
What worries me about the current solutions being considered (like Endeca) is that they aren’t cheap and will not be available to every library. NCState’s solution, for example, still requires NCState to run their ILS as well as hold an Endeca license. XC, an ambitious project with grand goals, may suffer from the same problem. Even if the project is wildly successful and meets all its goals, implementers may still have a hard time selling their institutions on taking on a new project that likely won’t save the organization any money up front. XC partners will be required to provide money and time while still supporting their vendor systems. What concerns me most about the current path is the potential to deepen the inequities that already exist between libraries with funding and libraries without.
But projects like XC, and the preconference at Code4lib discussing Solr and Lucene, are developments that should excite and encourage the library community. As a community, we should continue to cultivate these types of projects and experimentation. In part because that’s what research organizations do: seek knowledge through research. But also to encourage the community to take a more active role in how our systems are developed and how they interact with our patrons.