Can the open source community help the ILS matter?

So, let’s start out with a preface to my comments here.  First, it’s a little on the long side.  Sorry.  I got a bit wordy and occasionally wander a little here and there :).  Second — these reflect my opinions and observations.  So with that out of the way… 

This question comes from two recent experiences.  First, at Midwinter in Seattle, a number of OSU folks and I met with Innovative Interfaces regarding Encore (III’s “next generation” public interface in development) and the difficulty that we have accessing our data in real time without buying additional software or access to the system (via an API or, in III’s case, via a special XML Server).  The second is the current eXtensible Catalog meeting here in Rochester, where I’ve been talking to a lot of folks who are currently looking at next generation library tools. 

Sitting here, listening to the XC project and other projects currently ongoing, I’m more convinced than ever that our public ILS, which was once the library community’s most visible public success (i.e., getting our library catalogs online) — has become one of the library community’s biggest liabilities — an albatross holding back our community’s ability to innovate.  The ILS, and how our patrons interact with it, shapes their view of the library.  The ILS, at least the part of the system that we show to the public (or would like to show to the public, like web services, etc.), simply has failed to keep up with our patrons’ or the library community’s needs.  The internet and the ways in which our patrons interact with it have moved forward, while libraries have not.  Our patrons have become a savvy bunch.  They work with social systems to create communities of interest, often without even realizing it.  Users are driving the development and evolution of many services.  A perfect example of this has been Google Maps.  The service in and of itself isn’t too interesting, in my opinion.  But what is interesting is the way in which the service has embraced user participation.  Google Maps mashups litter the virtual world, to the point that the service has become a transparent part of the world that the user is creating.

So what does this have to do with libraries?  Libraries up to this point simply are not participating in the space that our users currently occupy.  Vendors, librarians — we are all trying to play catch-up in this space by bandying about phrases like “next generation”, though I doubt anyone really knows what that means.  During one of my many conversations over the weekend, something that Andrew Pace said really stuck with me: libraries don’t need a next generation ILS; they need a current generation system.  Once we catch up, then maybe we can start looking at ways to anticipate the needs of our community.  But until the library community creates a viable current generation system and catches up, we will continue to fall further and further behind.

So how do we catch up?  Is it with our vendors?  Certainly, I think that there is a path in which this could happen, but it would take a tremendous shift in the current business models utilized by today’s ILS vendors — a shift that needs to occur.  Too many ILS systems make it very difficult for libraries to access their data outside of a few very specific points of access.  As an Innovative Interfaces library, our access points are limited based on the types of services we are willing to purchase from our vendor.  However, I don’t want to turn this specifically into a rant against the current state of ILS systems.  I’m not going to throw stones, because I live in a glass house that the library community created and has carefully cultivated to the present.  I think that to a very large degree, the library community… no, I’ll qualify this, the decision makers within the library community — remember the time when moving to a vendor ILS meant better times for a library.  This was before my time, but I still hear decision makers within the library community who are apprehensive of library-initiated development efforts because the community had “gone down that road” before, when many organizations spun their own ILS systems and were then forced to maintain them over the long term.  For these folks, moving away from a vendor-controlled system would be analogous to going back to the dark ages.  The vendor ILS has become a security blanket for libraries — it’s the teddy bear that lets everyone sleep at night, because we know that when we wake up, our ILS system will be running, and if it’s not, there’s always someone else to call. 

With that said, our ILS vendors certainly aren’t doing libraries any favors.  NCIP, SRU/W, OpenSearch, web services — these are just a few of the standards that could easily be accommodated to standardize the flow of information into and out of the ILS, but they find little support in the current vendor community.  RSS, for example — a simple protocol that most ILS vendors now support in one way or another — took years to finally show up in vendor products. 
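
To make that gap concrete, here is a minimal sketch of what standards-based, real-time access to catalog data could look like over SRU.  Everything here is hypothetical: the endpoint URL and the index names are illustrative assumptions, not any current vendor’s actual interface.

```python
# A hedged sketch of a standards-based (SRU) query against an ILS.
# The endpoint URL and index name below are hypothetical; no current
# vendor system is implied to expose this interface.
import urllib.parse
import urllib.request

base = "http://opac.example.edu/sru"  # hypothetical SRU endpoint
params = urllib.parse.urlencode({
    "version": "1.1",
    "operation": "searchRetrieve",
    "query": 'dc.title = "open source"',  # CQL, the SRU query language
    "maximumRecords": "10",
    "recordSchema": "marcxml",            # ask for MARCXML records back
})
with urllib.request.urlopen(base + "?" + params) as response:
    print(response.read().decode("utf-8"))  # raw searchRetrieveResponse XML
```

The point is not the specific query but the plumbing: a plain HTTP GET, a standard query grammar, and a standard record format, with no vendor-specific module purchase sitting in between.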

Talking to an ILS vendor, I used the analogy that the ILS business closely resembles the PC business of the late 80’s and early 90’s, when Microsoft made life difficult for 3rd-party developers looking to build tools that competed against them.  Three anti-trust cases later (US, EU and Korean), Microsoft is legally bound to produce specific documentation and protocols to allow 3rd-party developers the ability to compete on the same level as Microsoft themselves.  At which point, the vendor deftly noted that they have no such requirements, i.e., don’t hold your breath.  Until the ILS community is literally forced to provide standard access methods to data within their systems, I don’t foresee a scenario in which this will ever happen — at least in the next 10 years.  And why is that?  Why wouldn’t the vendor community want to enable the creation of a vibrant user community?  I’ll tell you — we are competitors now.  The upswing in open source development within libraryland has placed the library community in the position of being competitors with our ILS vendors.  DSpace, Umlaut, LibraryFind, XC — these projects directly compete against products that our ILS vendors are currently developing or have developed.  We are encroaching into their space, and the more we encroach, the more difficult I predict our current systems will become to work with. 

A good example is the open source development of not one, but two mainstream open source ILS products.  At this point in time, commercial vendors don’t have to worry about losing customers to open source projects like Koha and Evergreen, but this won’t always be the case.  And let me just say, this isn’t a knock against Evergreen or Koha.  I love both projects and am particularly infatuated with Evergreen right now — but the simple fact is that libraries have come to rely on our ILS systems (for better or worse) as acquisition systems, serials control systems, ERM systems — and ILS vendors have little incentive to commoditize these functions.  This makes it very difficult for an organization to simply move to or interact with another system.  For one, it’s expensive.  Fortunately, the industrious folks building Evergreen will get it to the point where it is a viable option, and when that happens, will the library community respond?  I hope so, but I wonder which large ACRL organization will have the courage to let go of its security blanket and make the move — maybe for the second time — to using an institutionally supported ILS.  But get that first large organization with the courage to switch, and I think you’ll find a critical mass waiting, and maybe, just maybe, it will finally breathe some competitive life into what has quickly become a very stale marketplace.  Of course, that assumes that the concept of an OPAC will still be relevant — but that’s another post, I guess.

Anyway, back to the meeting at Rochester.  Looking at the projects currently being described, there is an interesting characteristic shared by nearly all “next generation” OPAC projects: all involve exporting the data out of the ILS.  Did you get that — the software that we are currently spending tens or even hundreds of thousands of dollars on to do all kinds of magical things must be cut out of the equation when it comes to developing systems that interact with the public.  I think that this is the message that libraries, and those making decisions about the ILS within libraries, are missing.  A quick look at the folks recognized for creating current generation OPACs (the list isn’t long), like NC State, shows one thing in common — the ILS has become more of an inventory management system, providing information relating to an item’s status, while the data itself is moved outside of the ILS for indexing and display.
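
For anyone who hasn’t seen the pattern up close, a rough sketch of the export-and-reindex approach follows.  The Solr URL, the field names, and the flattened record shape are all illustrative assumptions, not NC State’s actual configuration.

```python
# A hedged sketch of the export-and-reindex pattern behind "current
# generation" OPACs: records are dumped out of the ILS, flattened to
# indexable fields, and pushed into an external engine (Solr here).
# The Solr URL and field names are illustrative assumptions.
import urllib.request
from xml.sax.saxutils import escape

records = [
    {"id": "b1234567", "title": "An Example Record", "author": "Doe, Jane"},
    # ...thousands more, exported from the ILS and flattened for indexing...
]

# build a classic Solr <add> message from the flattened records
docs = "".join(
    "<doc>"
    + "".join('<field name="%s">%s</field>' % (k, escape(v)) for k, v in rec.items())
    + "</doc>"
    for rec in records
)

req = urllib.request.Request(
    "http://localhost:8983/solr/update",  # hypothetical Solr instance
    data=("<add>%s</add>" % docs).encode("utf-8"),
    headers={"Content-Type": "text/xml"},
)
urllib.request.urlopen(req).read()  # index the batch

# a separate <commit/> makes the newly added documents searchable
req = urllib.request.Request(
    "http://localhost:8983/solr/update",
    data=b"<commit/>",
    headers={"Content-Type": "text/xml"},
)
urllib.request.urlopen(req).read()
```

Once the data lives in the external index, the ILS is consulted only for live item status — exactly the inventory-system role described above.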

What worries me about the current solutions being considered (like Endeca) is that they aren’t cheap and will not be available to every library.  NC State’s solution, for example, still requires NC State to have its ILS as well as an Endeca license.  XC, an ambitious project with grand goals, may suffer from the same problem.  Even if the program is wildly successful and meets all its goals, implementers may still have a hard time selling their institutions on taking on a new project that likely won’t save the organization any money up front.  XC partners will be required to provide money and time while still supporting their vendor systems.  What concerns me most about the current path that we are on is the potential to deepen the inequities that already exist between libraries with funding and libraries without. 

But projects like XC, and the preconference at Code4lib discussing Solr and Lucene — these are developments that should excite and encourage the library community.  As a community, we should continue to cultivate these types of projects and experimentation.  In part, because that’s what research organizations do — seek knowledge through research.  But also to encourage the community to take a more active role in how our systems are developed and how they interact with our patrons.  

–TR 


Comments

17 responses to “Can the open source community help the ILS matter?”

  1. Chris

    I can say for sure that at least 4 proprietary vendors (I won’t say commercial, because commercial is not the opposite of open source; proprietary is — a common misunderstanding) in New Zealand have lost out to Koha. There must be many more the world over that have, given there are more than 150 libraries running Koha. So perhaps it’s a little further along than it first looks.

  2. Administrator

    I certainly don’t want to undercut the success that open source ILS systems have had in the short term. It is true that outside of the US (particularly in NZ, where Koha is based), these open source systems have had more success, but the current audience has been smaller libraries that honestly just aren’t on the ACRL radar. Until a large academic or research library — an ACRL library, or a Harvard, a Cornell, an Oxford, an MIT, the Library of Congress, the British Library, etc. — essentially “validates” these open source tools by migrating off its vendor system to an open source alternative, these systems will continue to be more fringe systems. Fortunately, I think that there are organizations that want to make this jump — the problem is that they just aren’t ready at the moment when you consider all the things that libraries use their ILS systems for (ERM, acquisitions, etc.).

    –TR

  3. Joshua Ferraro

    Like the previous commenter, I’d like to make the point that open-source software is not the opposite of commercial software, though it is often framed as such. There are commercial support options for most open source projects, including both Koha and Evergreen. For instance, LibLime provides hosting, installation, data migration, on-site staff training, ongoing maintenance, ticket-based support, and development/customization services.

    The difference is that the business model for commercial open-source software is services-based rather than sales-based.

    If you’re looking to make a distinction, I’d recommend making it between ‘open source’ and ‘proprietary’ software. Otherwise, you run the risk of undermining the role that commercial entities have in the sustainability of an OSS project.

  4. Administrator

    I understand both points, but I’ll admit — I’m an idealist when it comes to open source software and communities. The community will determine whether the project succeeds or not. LibLime and others provide services around software. That’s fine. It’s basically a paid support model. Is this any better than the proprietary route? I don’t think so. Sure, you have the source code, but if you aren’t willing to work with it, you are just exchanging one cost for another. Folks are free to go that route, but an open source system must be able to succeed with or without these types of groups. It really comes down to the community that supports the project — in whatever forms that may take. With that said, I don’t think that the distinction needs to be made in this case, because I’m not willing to exchange one model (proprietary systems that we pay for) for another (open source software where we pay for commercial support). It’s not bad that the option is there for those that want it — but I think the library community needs to roll up its collective sleeves and get back into the software development business. Otherwise, the difference is just cost.

    –TR

  5. Joshua Ferraro

    That’s an interesting perspective, especially since most of LibLime’s clients hire us because it costs less for us to manage their systems than it would to develop the expertise in-house.

    Also, take a look at what’s happening in Georgia. Ask yourself why the core development team for Evergreen is forming a company (ESI), and then think about the fact that if GPLS hires developers to replace them, they will essentially be repeating the cycle (that team will likely either form their own company eventually or become employees of one of the Evergreen companies). This is happening in Georgia, where the libraries did just what you’re proposing :-).

    I should mention, on a personal note, that I started out as the sysadmin/programmer for the first library in the US to implement an open-source ILS (the seven-branch Nelsonville Public Library System in Athens County, Ohio). I tried for years to convince other librarians and administrators to support OSS in-house by hiring project managers, programmers, interface designers, trainers, support staff, etc. for their ILSes. But the response I always got was that without commercial support, the model wasn’t perceived as safe. So I decided to try another route. Ultimately, I disagree that there’s no difference between commercial OSS and commercial proprietary software. The fact that there are so many more support and maintenance options (no vendor lock-in), the fact that features are library-driven rather than vendor-driven, and the fact that the resulting software can be shared with libraries not able to afford development costs are what sell me on the OSS model. Whether the developers are getting paid directly by the library or through a support company hired by the library is irrelevant.

  6. Administrator

    Ah — but in both cases (Koha and Evergreen), part of the reason why these are not viewed as safe is because we’ve yet to hit a “tipping” point within libraryland. I’m sorry, but commercial support for OSS isn’t going to get us there. It will help, but it doesn’t build a vibrant OSS community. As I said, it simply replaces one support model with another. If Evergreen or Koha or anything else is to be successful long-term, a vibrant community will need to be built around it. I include OSU’s LibraryFind software in this group as well. I’m proud of the work we are doing — it’s rough for sure — but this tool won’t be a success unless we can build a community around it. Even if OSU spun off a company to provide support, this wouldn’t be enough. Too many people still remember what it was like to roll their own systems. This, I think, more than anything, stunts active development within libraries. Also, I think that to some degree, libraries aren’t taking their role as research institutions seriously. They support their organization’s research, but what are they doing on their own? Granted, this isn’t true of all (there are some places doing great work), but it is true of the majority. Too many are looking to get by on shortcuts. WorldCat.org is a shortcut, and maybe that will be the wave of the future for libraries. Maybe the OPAC is subsumed by OCLC and our local interfaces go away. But if this is the case, then that will be a sad day for libraries. When libraries (as in the community) give up their ability to innovate, they will make themselves expendable. This is why I’ve said it will take a large research library showing the courage to make the switch to OSS and putting development resources behind it. Momentum must come from outside the commercial community if it is to be sustained within the mainstream. I think that the library community wants to make this change, but 1) the software needs to be ready (and it is currently still a year or more off) and 2) someone needs to make the first step.

    –TR

  7. Chris

    Just responding to your community bit: I realise you are US-focused, but there *is* already a vibrant community around Koha. The project has been running for 7 years, with about 45 developers in more than 20 countries. There are libraries using and developing for Koha in Bolivia, Fiji, Poland, Estonia, India, Switzerland, New Zealand, Australia, the USA, Canada .. the list goes on.
    Every day I talk to a new Koha user or contributor. A simple Google search will turn up more than 350 libraries running Koha.
    When Koha was developed, the library and librarians specced the system, but it was built by a company.
    I’m also interested in where the “a year or more off from being ready” comes from. As one of the developers of Koha (one who has been working on it for 7 years now), I don’t think I have ever talked to you, or seen you post on the mailing lists. It would be fantastic if people like yourself would become involved and tell us what is missing, or what is not ready.

  8. Administrator

    Chris,

    I don’t want to give the impression that Koha doesn’t serve a purpose for a niche group, but yes, I work at a medium-sized academic library in the US and am thinking primarily of this group and larger. For this group, your acquisitions, circulation, serials check-in, and OPAC modules are steps behind what can currently be purchased within the proprietary community, and there is no ERM module, which becomes more important all the time. For small libraries, libraries that cannot afford a proprietary product, it’s a fine system. For larger academics, it’s got some warts. It’s also been around for a while. As you note, 7 years. I’ve seen it grow quite a bit in that time, but it’s never captured the interest of the research community here in the states.

    Evergreen is changing that. I see Evergreen as being a year, maybe two (looking at its development path), from being viable for large consortia or research institutions, primarily because it’s been able to capture some buzz within the research community. There are currently a lot of libraries that are downloading the code and giving it a spin to see how it evaluates against their current systems. If this project were to fill in some missing pieces or garner a large development partner willing to bring it up in parallel to their current system, I could see someone going this route.

  9. Joshua Ferraro

    Well, I’ve no way to compare our acquisitions, circulation, and serials modules, but I think it’s fair to say that, as far as the OPAC goes, Koha’s actually got quite a bit more functionality than the ‘research libraries’ I’ve seen.

    http://search.athenscounty.lib.oh.us

    http://library.neu.edu.tr/

    Those are two production examples of the latest Koha OPAC. At the back end, it uses a high-performance textual database engine (Zebra) that has native support for Z39.50/SRW/SRU, OpenSearch RSS feeds of any search, relevance ranking, field weighting, and stemmed queries. It includes faceted results and supports multiple query syntaxes such as CCL and CQL. The MARC support for searching is more comprehensive than any I’ve encountered (take a look at the advanced search at the Nelsonville site, for instance). It’s got native federated search capability (which you can see at the NEU site) because it’s entirely standards-based … in fact, the OPAC is basically an enhanced Z39.50 client. Zebra also scales to tens of millions of records and is in use in some very large bibliographic databases, like the Danish National Library’s, with something like 25 million records.
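
    As a rough illustration of what “RSS feeds of any search” makes possible, here is a sketch of consuming such a feed programmatically. The feed URL pattern below is an assumption for illustration, not Koha’s documented interface.

    ```python
    # Sketch: consuming an OpenSearch-style RSS feed of catalog search
    # results. The feed URL pattern is an illustrative assumption, not
    # Koha's documented interface.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    query = urllib.parse.quote("global warming")
    feed_url = "http://koha.example.org/search?q=%s&format=rss" % query  # hypothetical

    with urllib.request.urlopen(feed_url) as response:
        tree = ET.parse(response)

    # RSS 2.0 layout: each hit is a channel/item with a title and a link
    for item in tree.findall("./channel/item"):
        print(item.findtext("title"), "->", item.findtext("link"))
    ```

    Because any search is just a URL, results can be pulled into aggregators, portals, or mashups with no library-specific client code at all.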

    But anyway, your point is well taken. The research community hasn’t taken note of the OSS library systems. I’m thrilled that you see Evergreen as having the potential to change that within your community.

  10. Administrator

    Joshua,

    I’ve actually seen these (I’d seen the announcements some time ago on Code4lib) and was glad to see them. You’re right — they do catch the public interface up with systems like Sirsi and Aleph (which have provided these types of interfaces for a while) and have passed systems like III in the short term (which OSU currently uses). The thought of the catalog as basically a Z39.50 front end does make me shudder a bit, though — that’s one protocol that I’d personally like to see deleted from our collective vocabulary :). I also like the SRU/SRW support (something vendors seem to have ignored completely) — but let’s get back to my main point. Even with these improvements, I still don’t see any ILS system that has what I would consider a current generation OPAC, or anything outside what I would consider the status quo in terms of OPAC design, which I think is still the problem. The advanced search is a perfect example. It provides great searching granularity — but this isn’t what our patrons are asking for. They are asking for us to make searching easier and to make it inclusive. Transparency in the OPAC… kind of. And as I say, I have a feeling that if this problem isn’t remedied, libraries (or the organizations that fund libraries) will find it much easier to simply outsource this part of the process to something like OpenWorldCat, which I know is happening at a couple of large research institutions in the states. So this direction may not be that bad — though I’d need to be convinced.

    I think what gives me hope with Evergreen is that there is a very unorthodox aspect to it — something that has a very research-oriented, yet practical, feel to the current development. I think it complements the Koha development very nicely, which I’ve found to be much more practical and easier to set up and support for smaller organizations with limited support budgets.

    But, with all that said, I still think that the time is ripe for libraries to re-invest in their own futures (be it with the currently available systems or something new like XC) — I just hope it happens before time passes us by.

    –TR


  11. Darla Grediagin

    I wonder how much the word about OSS programs has gotten out to the smaller libraries that may not have the money to purchase a new program. In my case, my library has someone willing to put in the hours of work to get a new system up and going. (That would be me.) I didn’t learn of Koha from colleagues in the library arena but rather from my boss, who understands the support needed with OSS products.

    I consider myself well informed and on the cutting edge of technology. I implemented Koha with the help of a great tech guy and am now giving presentations at our state library conference and school technology conference.

    My desire to get more people involved with Koha in particular, and OSS in general, comes from the thought that we can spread out the costs of building a system that meets the needs of our particular libraries.

    I am fortunate in the fact that I have a supervisor who understands that we haven’t given up the upkeep costs of our old system; we’ve just moved where the funds will be spent: on the programming.

    Having the catalog up and running is my first step; my next one is to learn to program in Linux so that I can add my own changes. I am thrilled with Koha and what it opens up for people to do. Long before I learned of Koha, I felt that we should be able to have a program like this in the librarian community. Librarians by their nature like to share, so I feel that Open Source is a great place to be.

  12. […] Terry’s Worklog (Terry Reese). See the post on innovation, openness and open source: Can the open source community help the ILS matter? […]

  13. Joshua Ferraro

    1. Do Sirsi and Aleph provide native RSS feeds, OpenSearch, autodiscovery for the same, and allow download in MODS, DC, MARCXML, MARC21 (MARC8 or UTF8 encoded)? Do they offer _real_ field weighting? (Try a search on ‘it’ in the NPL catalog for an example.) Do they offer relevance ranking? How about a sane search of the elusive fixed fields in MARC (007, 008, and yes, even the leader are indexed in Koha ZOOM)?

    Do the Sirsi and Aleph systems offer SMS messaging for their patrons for overdue books, fines, etc.? Can you sort by ‘popularity’? How about limiting to ‘currently available items’? I wonder if you’ve even taken the time to actually look at the Koha ZOOM OPAC.

    2. I’ve yet to find someone who actually understands Z39.50 and doesn’t think it’s an important standard for building search engines; standards-based search engines are key to interoperability, and as far as targets go, in the library world, Z39.50 is king at the moment.

    3. I agree, the OPAC advanced search continues to be problematic, but keep in mind that the advanced search options you see on the NPL and other Koha ZOOM systems were specced out by librarians. The underlying tool certainly isn’t restricted to offering so much search granularity. If you would rather offer a simple ‘Amazon-style’ advanced search, that’s certainly possible.

  14. Administrator

    >>1. Do Sirsi and Aleph provide native RSS

    I’m not really familiar with Sirsi and Aleph, but last time I checked, Sirsi does provide RSS support and a pseudo-open API, so you could likely roll your own OpenSearch support. We are an Innovative library, and yes to RSS and the various metadata formats — it all depends on what modules you purchase from the vendor. III uses a very à la carte model, which I guess made sense at one time, but I find it fairly annoying today since it means that any new development becomes a new product to purchase. In most cases, the costs are not steep — but you get the feeling that you are being nickel-and-dimed to death, which just seems silly when you consider what one pays per year for “service” and “support”. Field weighting — to a degree. Relevancy — it’s gotten much better this year and is probably better than in any other proprietary system I’ve seen (though it still could improve a lot). Search of fixed fields — yes. You can even scope the catalog by items in the fixed fields, though in many cases, only a few of these bytes would be useful for most folks.

    >>Do the Sirsi and Aleph systems offer SMS
    Again, not familiar with Sirsi and Aleph, but if you purchase it, III does offer such functionality.

    As far as Koha goes, I’m currently running copies of both Evergreen and Koha against our catalog as a way to evaluate the products. I’ve been working with our consortium and wanted to look specifically at the current OSS offerings to see exactly how feasible they are within a consortial environment. Unfortunately, this would be a catalog for just the consortium and would require our vendor making concessions to make the data more accessible for indexing outside their consortial software — which isn’t likely. So I’ve gotten pretty familiar with both over the past month or so.

    >>2. I’ve yet to find someone who actually

    Well, you’ve met one. 🙂 As someone who wrote several servers and clients for the GIS community years ago, I can honestly say that I loathe the protocol. I first worked with it as part of the FGDC nine years ago, when Z39.50 was (and probably still is) implemented to provide a national federated search for the geo community. I find the protocol expensive and dodgy when doing very broad or very complex queries, and it is generally implemented only at the most basic level, making it only nominally useful when querying most Z39.50 servers. In creating our own search tools, I’ve found working with SRU, OpenSearch, and SOAP services much more fruitful. The problem, as you mention, is that Z39.50 is still the most widely used protocol — though this is changing as our content providers provide XML gateways for current federated search tools. In the past few months, we’ve had a number of our largest content providers submit documentation regarding web services APIs that allow us to query as if on their system, so I’m hopeful that we’ll be able to retire Z39.50 in the next 5 years or so.
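
    Part of what makes the HTTP-based alternatives more fruitful is how little machinery they require. As a rough sketch (the host below is a placeholder, not a real server), a single stateless GET against an SRU server’s “explain” operation returns a machine-readable description of its databases and indexes, where Z39.50 needs a stateful init/search/present session over a custom wire protocol:

    ```python
    # Sketch: SRU is one stateless HTTP GET, versus Z39.50's stateful
    # init/search/present session. The host below is a placeholder.
    import urllib.request

    # "explain" returns a ZeeRex XML record describing the server's
    # databases, indexes, and supported record schemas.
    url = "http://sru.example.org/catalog?version=1.1&operation=explain"
    with urllib.request.urlopen(url) as response:
        print(response.read().decode("utf-8"))
    ```
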

    While our vendor, III, does provide most of these services, what it doesn’t provide is an open API that can be developed to. This, I think, is the biggest benefit that OSS software offers. It might save you some money — though I’ve found that it really just redistributes where the funds are spent (DSpace, for example, allowed us to not purchase a vendor solution, but required the purchase of hardware support as well as a programmer’s time to manage the software initially; I think over each year, the cost is probably a push) — but what it does do is give you control over your information. What I’d like to see the OSS community push more is the openness aspect, and I’d like the conversation to move out of the developer community. To a large extent, everything that folks have posted about OSS is preaching to the choir. I’d hazard a guess that most folks that read this blog are developer-oriented. The development community has agreed that these are good tools and that their development is a good thing. Now we need to convince our directors and administrations. This is where the conversations need to occur, and the proprietary vendor community is much more adept at speaking to this group than the current development community.

    I’ve known folks whose careers have suffered over the past two years for proposing alternatives to vendor products, so I’m not convinced that the library environment is open enough to support wide-scale OSS adoption just yet. I’m not questioning the desire of those that develop the software — but the organizational desire to allow it and support it. I work at an organization that has allowed me to work on a number of OSS projects — many simply outside of the library, given my research interests — but I don’t think that my environment is typical today.

    –TR

  15. Chris

    Just out of interest, Terry, which version of Koha are you running?
    Joshua is talking about the dev_week branch of CVS, with the Zebra backend; it’s significantly different from the rel_2_2 branch, from which the 2.2.x releases are built.

  16. Administrator

    At this point, I’ve recently evaluated the 2.2.x and the 2.3 series of Koha, so it’s very likely that I could be a few increments behind. I believe the 2.3 version, though, was the first dev release of the ZOOM OPAC, which I tried more out of curiosity.

    –TR

  17. Chris

    Yeah, the 2.3.0 release was a development release, and was well buggy 🙂 Most of the bugs (hopefully all) have been fixed in CVS. The thing holding back a 2.4 release is the lack of a reliable/complete installer.

    In the meantime, rel_3_0 is rapidly progressing towards a 3.0 release. The San Ouest Provence library near Marseille has done a ton of work towards 3.0, and it’s looking pretty nifty. The thing I enjoy most about working on Koha is that almost all the features have come from requests from libraries/librarians, and have been specced by them. And in cases like San OP, and CCFLS, and Near East University (and others I’m bound to be forgetting), coded by them also. In essence, Koha evolves in the way that librarians want it to evolve, rather than in a way that vendors think is saleable.

    I agree totally with needing to convince the people who are in charge of the money that OSS is a viable option. And I’m constantly surprised that the library environment is as slow to accept OSS as it is. I think this is where Joshua was coming from when he was talking about LibLime. It was started as a way to shift some thinking, to make those who make the decisions see that it is viable. And for a lot of people, something isn’t viable unless there is commercial support for it.