Working with the Koha ILS HTTP API
I’ve spent the last week working with the Koha API, using it as an example for MarcEdit’s direct ILS integration platform. After working with it for a while and pushing some data through it, I have a few brief thoughts.
- I was pleasantly surprised at how easy the API was to work with. The need for good authentication often stymies many an otherwise good API design because the process of performing and maintaining authentication becomes so painful. I found the cookie-jar approach that Koha implemented very simple to support and work with. What’s more, error responses from the API tended to show up as HTTP status codes, so it was easy to handle them using existing HTTP tools.
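To illustrate, here is a minimal sketch of that cookie-jar login flow using only the Python standard library. The endpoint path and parameter names follow the wiki’s authentication example; the server URL and credentials are placeholders, and the live request is shown in a comment rather than executed.

```python
# Sketch of the cookie-jar login flow, assuming the /svc/authentication
# endpoint and the userid/password parameters documented on the Koha wiki.
import http.cookiejar
import urllib.parse
import urllib.request

BASE_URL = "http://koha.example.org"  # placeholder Koha instance


def make_session():
    """Return an opener that stores and replays cookies (the 'cookie jar')."""
    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    return opener, jar


def login_request(userid, password):
    """Build the POST that the authentication service expects."""
    body = urllib.parse.urlencode({"userid": userid, "password": password}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/cgi-bin/koha/svc/authentication", data=body
    )


# Usage against a live server (not executed here):
#   opener, jar = make_session()
#   opener.open(login_request("user", "pass"))
#   # Koha's session cookie now rides along on every later opener.open(...)
```

Because the cookie processor is attached to the opener, every subsequent call on the same opener carries the session cookie automatically, which is what makes this scheme painless to maintain.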
- While the API is easy to use, it’s also really, really sparse. There isn’t a facility for deleting records, and I’m not sure whether the API offers an easy way to modify holdings for a set of records. I do know you can create items, but I’m not sure if that is a one-off that occurs when you pass an entire bib record for update, or if there is a separate API just for item data. Search is also disappointing. There is a specific API for retrieving an individual record’s data, but the search API is essentially Z39.50 (or SRU). I’m not particularly enamored with either, though Z39.50 works (and I’m told it’s fairly universal in terms of implementation). I’ve never really liked SRU, so it didn’t hurt my feelings too much to skip it. However, after spending time working with the Summon search API for other projects here at Oregon State, I was disappointed that search wasn’t something the API specifically addressed.
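As a concrete example of that record-level orientation, fetching a single bib record is just a GET against the /svc/bib service. The sketch below is hypothetical: the path follows the wiki, but the `items=1` parameter for embedding item data is my assumption and worth verifying against the documentation.

```python
# Hypothetical URL builder for the single-record retrieval service.
BASE_URL = "http://koha.example.org"  # placeholder Koha instance


def bib_url(biblionumber, include_items=False):
    """Build the GET URL for one bib record (returned as MARCXML).

    The items=1 query parameter is an assumption about how item data can be
    requested alongside the bib record; check the wiki before relying on it.
    """
    url = f"{BASE_URL}/cgi-bin/koha/svc/bib/{biblionumber}"
    if include_items:
        url += "?items=1"
    return url
```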
- The API documentation leaves much to be desired. I was primarily using the wiki (http://wiki.koha-community.org/wiki/Koha_/svc/_HTTP_API), which consists of a single page on the API. The page provides some simple demonstrations of usage, which are really helpful. What is less helpful is the lack of information about what happens when an error occurs. The authentication API returns an XML file with a status message, but all the other APIs return HTTP status codes. Given the authentication response, this caught me a little by surprise; it would be nice if that behavior were documented somewhere.
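To make that asymmetry concrete, a client effectively needs two different success checks, sketched below. The XML shape follows the wiki’s authentication example; since the wiki does not say which HTTP codes the other services use, the second check only distinguishes the 2xx range from everything else.

```python
# Two success checks, one per response style the API uses.
import xml.etree.ElementTree as ET


def auth_succeeded(xml_text):
    """The authentication service reports its result inside the XML body."""
    return ET.fromstring(xml_text).findtext("status") == "ok"


def svc_succeeded(http_status):
    """Every other /svc call signals failure purely via the HTTP status code."""
    return 200 <= http_status < 300
```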
- One thing I can’t find in the documentation, so I really can’t answer the question, is the impact of the API on system resources. The API seems to be geared toward working with individual records, while MarcEdit is a batch-records tool. So, in my testing, I tried to see what would happen if I uploaded 1010 records through the API. The process finished, but sluggishly: the upload slowed considerably as records were fed through the API, and pushing records at a high rate appeared to be affecting overall system performance. More curious, after the process finished I had to wait about 15 minutes for all the records to make it through the workflow. I’m assuming the API queues items coming into the system, but this made it very difficult to verify a successful upload: the API was reporting success, yet the data changes were not visible for a considerable amount of time. Since I’ve never worked in a library that ran Koha in a production environment, I’m not sure whether this type of record queuing is normal, but a better description of what is happening would have been nice in the documentation. When I first started working with the API, I actually thought the data updates were failing, because I expected the changes to appear in the system in real time; my experience seemed to indicate that they do not.
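Given that queuing behavior, a batch client probably needs both throttling and a verification pass after the fact. The sketch below is hypothetical: `upload_record` and `fetch_record` stand in for the real /svc calls, and the timing values are only guesses based on the run described above.

```python
# Throttled batch upload with a visibility check, assuming the API queues
# incoming records and reports success before the changes are visible.
import time


def upload_batch(records, upload_record, fetch_record,
                 delay=0.5, verify_timeout=900):
    """Push records one at a time, then poll until the changes become visible.

    upload_record(record) -> record_id and fetch_record(record_id) are
    placeholders for the real /svc calls; verify_timeout allows for the
    roughly 15-minute lag observed before queued updates appeared.
    """
    ids = []
    for record in records:
        ids.append(upload_record(record))
        time.sleep(delay)  # throttle: rapid-fire uploads degraded the server

    pending = set(ids)
    deadline = time.monotonic() + verify_timeout
    while pending and time.monotonic() < deadline:
        # drop any record that has become fetchable since the last pass
        pending = {rid for rid in pending if fetch_record(rid) is None}
        if pending:
            time.sleep(10)  # success was reported before data was visible
    return pending  # anything left here never became visible in time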
Anyway, those are my quick thoughts. I should caveat these notes by saying I have never worked at a library that runs Koha in production, so some of these behaviors may be common knowledge.