Code4Lib: Feb. 15 Keynote: Evergreen development

By reeset / On / In Digital Libraries, Programming

Evergreen branching out
Bill Erickson (Systems Developer), Jason Etheridge (PINES System Support Specialist), Brad LaJeunesse (PINES System Administrator) and Mike Rylander (Database Developer)
Quick History of the PINES Consortium
*      Started as a Y2K project
o        Many libraries running older, antiquated systems.
o        Some libraries were still manual
*      Political support was available for the creation of a statewide system
PINES consortium today
*      ~250 libraries
*      ~8 million holdings
*      ~1.5 million patrons
*      ~15 million annual circulation
PINES consortium for tomorrow
*      ~300 libraries
*      10 million holdings
*      2 million patrons
*      25 million annual circulation
PINES management
*      Governed by a committee of elected library directors.
*      Georgia Public Library Service (State Library) provides support.
Why Evergreen
*      The vendor’s contract was ending, and they were unable to find a system they were happy with to support their consortium.
The Evergreen Solution
*      Open source, GPL
o        Wanted to develop a system that could spread beyond Georgia
*      Designed from the ground up to scale
*      Designed by developers without library backgrounds; they were looking for software designers, not folks with library experience.
Genesis 1.1
*      Set up the functionality requirements for the system
*      Designing processes that make sense rather than designing systems around workarounds.
Genesis 1.2
*      Chose and then developed new processes that actually meet the needs of the consortium
o        The idea here was that they wanted to convince their audience that the system would not be designed with the same limitations as their current system.
*      Design data containers that model the chosen policies
Genesis 1.3:
*      Used Open Source software as the initial building blocks
o        Postgres database (though Evergreen is not tied to it)
o        Apache Webserver
o        XUL/Mozilla Framework
o        Jabber (XMPP)
o        Ed Summers’ Perl modules
o        others….
*      Built what didn’t already exist
OpenSRF (“open surf”?)
*      Service request framework
*      OpenSRF Router provides failover and load balancing
*      Server libraries exist in Perl and C; client libraries exist in Perl, C and JavaScript
*      Provides an HTTP gateway for secure, scalable remote access
*      Allows for the development of additional gateway interfaces, such as the Open-ILS XML gateway
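As a rough illustration of the router’s failover and load-balancing role described above, here is a hypothetical sketch in Python. This is not the actual OpenSRF implementation (which is written in Perl and C); all class and service names are invented.

```python
import itertools

# Hypothetical sketch (not the real OpenSRF API): a router that load-balances
# requests across service instances and fails over when one is down.

class ServiceInstance:
    def __init__(self, name, alive=True):
        self.name = name
        self.alive = alive

    def handle(self, method, *params):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        # A real instance would dispatch to a registered method;
        # here we just echo the call for illustration.
        return f"{self.name}:{method}({', '.join(map(str, params))})"

class Router:
    """Round-robin load balancing with simple failover."""
    def __init__(self, instances):
        self.instances = instances
        self._cycle = itertools.cycle(instances)

    def request(self, method, *params):
        for _ in range(len(self.instances)):
            instance = next(self._cycle)
            try:
                return instance.handle(method, *params)
            except ConnectionError:
                continue  # fail over to the next instance
        raise RuntimeError("no live instances for request")

router = Router([ServiceInstance("srv-a"),
                 ServiceInstance("srv-b", alive=False),
                 ServiceInstance("srv-c")])
print(router.request("opensrf.echo", "hello"))  # served by srv-a
print(router.request("opensrf.echo", "hello"))  # srv-b is down; fails over to srv-c
```

In the real system this brokering happens over Jabber (XMPP) rather than in-process, but the failover logic is conceptually similar.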
OpenSRF Methods
*      Create a useful function
*      Wrap it in language-specific idioms
*      Register the method with the local OpenSRF application instance.
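The steps above can be sketched as follows. This is an illustrative mock-up of the registration pattern in Python, not the real OpenSRF API; the application and method names are invented.

```python
# Illustrative sketch only -- not the actual OpenSRF registration API.
class Application:
    def __init__(self, name):
        self.name = name
        self.methods = {}

    def register_method(self, api_name, func):
        # Step 3 from the notes: register the function with the local
        # application instance under a public API name.
        self.methods[api_name] = func

    def dispatch(self, api_name, *params):
        return self.methods[api_name](*params)

# Step 1: a useful function.
def add(a, b):
    return a + b

# Step 3: register it so clients can call it by API name.
app = Application("opensrf.math")
app.register_method("opensrf.math.add", add)
print(app.dispatch("opensrf.math.add", 2, 3))  # 5
```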
Development Schedule
*      Alpha Release: July 2005
*      Beta Release: Spring 2006
*      Production Release: Summer 2006
*      PINES go-live date: Fall 2006
Questions:
*      How many developers are working on this tool?
~3 full-time developers writing code, with a few other individuals doing some coding on various open source components
*      How are acquisitions and ordering being handled?
Evergreen will not initially include an acquisitions or serials module.
*      Will it be a phased switch over?
No, everything will go live at once.
*      How are loading and unloading of patron records being handled?
They currently don’t believe it will be all that difficult.  However, it doesn’t appear that they are considering long-term loading and unloading of data.
*      Why not try to enhance an existing open source project?
In general, they believed that the problem was scale.  So rather than enhance an existing system, they wanted to start from scratch.
*      Given that folks building the ILS didn’t have a librarian background, did they consider throwing out MARC?
In general, they would have liked to, but were unable to rid themselves of it completely.  However, they don’t actually utilize the MARC data; the MARC record itself is kept only for storage.  Rather, they utilize only the pieces of data that populate the search tables.
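A rough sketch of the approach described in that answer: the raw MARC record is stored untouched, while only selected pieces are flattened into a search table. The tags, subfields, and values here are made up for illustration and do not reflect Evergreen’s actual schema.

```python
# Hypothetical sketch: keep the raw MARC blob for storage, but index only
# extracted pieces in search tables.
raw_marc = "...full MARC record bytes..."  # stored as-is, never searched

# Extracted field/subfield values (tags chosen for illustration).
record_fields = {
    ("245", "a"): "Open source software :",
    ("245", "b"): "a primer.",
    ("100", "a"): "Doe, Jane.",
}

def build_search_row(fields):
    """Flatten selected MARC fields into columns for a search table."""
    title = " ".join(v for (tag, _), v in fields.items() if tag == "245")
    author = fields.get(("100", "a"), "")
    return {"title": title, "author": author}

row = build_search_row(record_fields)
print(row["title"])   # Open source software : a primer.
print(row["author"])  # Doe, Jane.
```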
*      Relevance Ranking?
They are using a Postgres plugin for the base ranking and then modify that relevancy score to give an item a bump based on criteria from their organization.
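A minimal sketch of the described bump, assuming a multiplicative adjustment on top of the base score from the database; the criteria names and weights are invented for illustration.

```python
# Sketch of the ranking tweak described above: take a base relevance score
# (which in practice would come from a Postgres ranking plugin) and apply
# organization-defined bumps. Criteria and weights here are made up.
def adjusted_relevance(base_score, item, org_criteria):
    score = base_score
    for criterion, bump in org_criteria.items():
        if item.get(criterion):
            score *= bump
    return score

item = {"locally_held": True, "recent": False}
org_criteria = {"locally_held": 1.5, "recent": 1.2}
print(adjusted_relevance(2.0, item, org_criteria))  # 3.0
```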