This question has shown up in my email box a number of times over the past couple of days. My guess is that it’s related to the YouTube videos recently posted demonstrating how to set up and use MarcEdit directly with Alma.
Folks have been curious how this work was done, and whether it would be possible to do this kind of integration with their local ILS system. As I was answering these questions, it dawned on me that others may be interested in this information as well — especially if they are planning to speak to their ILS vendor. So, here are some common questions currently being asked, and my answers.
How are you integrating MarcEdit with the ILS?
About 3 years ago, the folks at Koha approached me. A number of their users make use of MarcEdit and had wondered if it would be possible to have MarcEdit work directly with their ILS system. I love the folks over in that community — they are consistently putting out great work, and had just recently developed a REST-based API that provided read/write operations into the database. Working with a few folks (who happen to be at ByWater, another great group of people), I was provided with documentation, a testing system, and a few people willing to give it a go — so I started working to see how difficult it would be. And the whole time I was doing this, I kept thinking: it would be really nice if I could do this kind of thing with our Innovative Interfaces (III) catalog. While III didn’t offer an API at the time (and for the record, as of 4/17/2017, they still don’t offer a viable API for their product outside of a toy API dealing primarily with patron and circulation information), I started to think beyond Koha and realized that I had an opportunity not just to create a Koha-specific plugin, but to use this integration as a model for an integration framework in MarcEdit. And that’s what I did. MarcEdit’s integration framework can potentially handle the following operations (assuming the system’s API provides them):
Bibliographic and Holdings Records Search and Retrieval — search can be via API call, SRU, or Z39.50
Bibliographic and Holdings Records creation and update
Item record management
I’ve added tooling directly into MarcEdit that supports the above functionality, allowing me to plug and play an ILS based on the API that they provide. The benefit is that this code is available in all versions of MarcEdit, so once the integration is created, it works in the Windows version, the Linux version, and the Mac version without any additional work. If a community was interested in building a more robust integration client, then I/they could look at developing a plugin — but this would be outside of the integration framework, and takes a significant amount of work to make cross-platform compatible (given the significant differences in UI development between Windows, the MacOS, and Linux).
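To make the search piece concrete, here is a minimal sketch of the kind of SRU request such a framework might issue. The endpoint URL is a placeholder I made up for the example, but the query parameters follow the SRU 1.1 searchRetrieve operation.

```python
from urllib.parse import urlencode

def build_sru_url(base_url, cql_query, max_records=10):
    """Build an SRU 1.1 searchRetrieve URL for a bibliographic search."""
    params = {
        "version": "1.1",
        "operation": "searchRetrieve",
        "query": cql_query,            # CQL, e.g. 'dc.title="weaving the web"'
        "maximumRecords": max_records,
        "recordSchema": "marcxml",     # ask the server for MARCXML records
    }
    return base_url + "?" + urlencode(params)

# The endpoint below is a placeholder, not a real ILS server.
url = build_sru_url("https://ils.example.edu/sru", 'dc.title="weaving the web"')
```

An integration would then fetch that URL, parse the MARCXML out of the searchRetrieve response, and hand the records to the editor; the same search could go over Z39.50 or a vendor API instead.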
This sounds great, what do you need to integrate my ILS with MarcEdit?
This has been one of the most common questions I’ve received this weekend. Folks have watched or read about the Alma integration and wondered if I could do it with their ILS. My general answer, and I mean this, is that I’m willing to integrate any ILS system with MarcEdit, so long as it provides API endpoints that make it possible to:
Search for bibliographic data (holdings data is a plus)
Allow for the creation and update of bibliographic data
Utilize an application-friendly authentication process that, ideally, allows the tool to determine user permissions
This is a pretty low bar. Basically, an API just needs to be present; and if there is one, then integrating the ILS with MarcEdit is pretty straightforward.
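As an illustration of the create/update half of that bar, here is a hedged sketch of what a REST-style bib update might look like from the client side. The `/bibs/{id}` endpoint shape and the bearer-token auth are assumptions invented for the example, not any particular vendor’s API.

```python
import urllib.request

def build_bib_update_request(base_url, bib_id, marcxml, token):
    """Build (but don't send) a PUT request that updates one bib record.

    The endpoint layout and auth scheme here are illustrative only.
    """
    req = urllib.request.Request(
        url=f"{base_url}/bibs/{bib_id}",
        data=marcxml.encode("utf-8"),
        method="PUT",
    )
    req.add_header("Content-Type", "application/xml")
    req.add_header("Authorization", f"Bearer {token}")
    return req  # against a real API: urllib.request.urlopen(req)

req = build_bib_update_request(
    "https://ils.example.edu/api/v1", "b1234567",
    "<record>...</record>", "PLACEHOLDER-TOKEN")
```

If a system exposes a search endpoint, something shaped like the request above, and a sane way to get that token, the integration work on my end is mostly plumbing.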
OK, so my ILS system has an API, what else do I need to do?
This is where it gets a bit trickier. ILS vendors tend not to work well with folks who are not their customers or other corporations. I’m generally neither, and for the purposes of this type of development, I’ll always be neither. This means that getting this work done generally requires a local organization within a particular ILS community to champion the development — by that, I mean either providing the introductions to the necessary people at the ILS vendor, or providing access to a local sandbox so that development can occur. This is how the Alma integration was first initiated. There were some interested folks at the University of Maryland who spent a lot of time working with me and with ExLibris to make it possible for me to do this integration work. Of course, after the work got started and gained some interest, ExLibris reached out directly, which ultimately made this a much easier process. I’m rarely impressed by our ILS vendor community, but on this point I’ve been impressed by the individuals at ExLibris. While it took a little while to get the process started, they have open documentation, and once we got started, they were very approachable in answering questions. I’ve never used their systems, and I’ve had other dealings with the company that have been less positive, but here, ExLibris’s open approach to documentation is something I wish other ILS vendors would emulate.
I’ve checked, we have an API and our library would be happy to work with you…but we’ll need you to sign an NDA because the ILS API isn’t open
Ah, I neglected to mention above one of my deal-breakers, and why I have not, at present, worked with the APIs that I know are available in systems like Sirsi. I won’t sign an NDA. In fact, in most cases, I’ll likely publish the integration code for those that are interested. But more importantly, and this I can’t stress enough, I will not build an integration into MarcEdit for an ILS system where the API must be purchased as an add-on service, or requires an organization to purchase a license to “unlock” API access. API access is a core part of any system, and the ability to interact, update, and develop new workflows should be available to every user. I have no problem with ILS vendors building closed-source systems (MarcEdit is closed source, even though I release large portions of its components into the public domain to simplify supporting the tool), but if you are going to develop a closed-source tool, you have a responsibility to open up your APIs and provide meaningful gateways into the application to enable innovation. And let’s face it, ILS systems have sucked at this, much to the library community’s detriment. This really needs to change, and while the ability to integrate with a tiny, insignificant tool like MarcEdit isn’t going to make an ILS system more open, I get to make my own choices here: I will only put development time into integration efforts for ILS systems that understand that their community needs choices and that actively embrace their communities’ ability to innovate. What this means, in practical terms, is that if your ILS system requires you or me to sign an NDA to work with the API, I’m out. If your ILS system requires you or its customers to pay for access to the API through an additional license, training, or an add-on to the system (and this one particularly annoys me), I’m out.
As an individual, you are welcome to develop the integrations yourself as a MarcEdit plugin, and I’m happy to answer questions and help individuals through that process, but I will not do the integration work in MarcEdit itself.
I’ve checked, my ILS system API meets the above requirements, how do we proceed?
Get in touch with me at email@example.com. The actual integration work is pretty insignificant (I’m just plugging things into the integration framework); usually, the most time-consuming part is getting access to a test system and documenting the process.
Over the past few months, I’ve fielded a lot of questions from colleagues at libraries either looking for or starting the process of selecting a new ILS system. It’s a good time for it, as all the major vendors in the ILS space are shopping new systems and currently trying to court customers away from their competitors. And while I have only ever worked at libraries running III systems, working on MarcEdit has given me the opportunity to work with, and migrate, libraries from many different systems.
When I get these questions and talk to folks, I think they are often disappointed that I don’t have a pat answer — I don’t think there is one right system out there for libraries. With this new crop of offerings, each has different pain points, and honestly, figuring out your points of tolerance goes a long way toward determining which system is likely to be the best fit. Of course, even with that information, the right choice for an institution may not be the “right” choice…and this is where I’ve been thinking about III.
While at Oregon State University, I made no secret that I thought the business model (à la carte) and system design (closed box) of the past regime were bad for libraries in general, but especially bad for our library. And over the years, much of the work we did in the library was to figure out ways to minimize our reliance on the ILS as a public-facing system, and essentially write it out of our infrastructure. Now, that was not always something that could be done easily or elegantly, but nearly every project that required access to data from the ILS started with the question of how to get the data out of the system so we could do something with it.
I know that my experience isn’t unique. There is a reason why Millennium has long been a punch line in the library development community — which is why it is interesting to me to watch how Innovative’s new management and the library community react to Sierra, and how that history and reputation can become one of those pain points.
I guess, for full disclosure: in moving to The Ohio State University Libraries, I found myself back at an Innovative library (and one working through a migration to Sierra). This was a bit of a U-turn, since Oregon State University, as part of the Orbis Cascade Alliance, was migrating to ExLibris’s Alma product. So I’d already started to make the mental shift to begin working with and getting to know the ExLibris community. But I find myself back in the III fold, and again find myself thinking about how the ILS fits into the library’s overall infrastructure.
However, after taking a year-long sabbatical from the III community, one thing has struck me about the current discussions I hear around Sierra, and I think it underlines one of the major challenges III’s new management is going to face moving into the future.
Since rejoining the III community, the most common thing I hear when people discuss Sierra is that it’s basically just a spiffed-up version of Millennium. I heard this when we looked at it as part of the Orbis Cascade Alliance, and I hear it now talking to folks in OhioLINK thinking about their current migrations. The problem with this sentiment is that I honestly don’t think it’s a fair statement. I won’t get into all the reasons why, but it glosses over some of the important work that has been done in Sierra. Unfortunately for III, that important work isn’t work that users or staff will see; rather, it happened at the fundamental system level. By moving away from their legacy web server and database and adopting Apache and Postgres, III has given III libraries a reliable way to read their metadata. For the first time in a long time, I’m looking at the ILS as a place I may actually be able to mine data from, rather than simply something I would generally ignore.
The problem I see for Innovative’s management at this point is twofold:
First, they have a messaging problem. III wants to talk about the leap forward they made with Sierra, but the really interesting stuff that would make those gains easy to see (things like read/write API access) doesn’t exist at this point. Right now, all the improvements are hidden from view, and honestly, unless III provides some reasons for people to care, they will be changes made for a small niche of people willing to work with Postgres (or at least willing to replicate Postgres and index the data into a tool like Solr for better response time and more flexibility in report writing), and the current sentiment around Sierra will become the reality (regardless of whether it is true).
Secondly, III has a trust problem. III’s previous regime had a tendency to fragment their system to the point that the running joke was that if you wanted to do something interesting with the software, you had to buy X — and the only question was how many zeros it would cost. III’s à la carte pricing works for a lot of people (I think; I mean, it must, for someone), but I think they need to re-evaluate what is part of the core system. The API, for example, needs to be part of the core system. Likewise, given a history of unfulfilled promises around API development, III needs to do this development in the open. Right now, I’ve been hearing about the API development for 2 years, yet the community is still waiting for some kind of document highlighting what will be included. I’d argue more of this work needs to be done in public.
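The Postgres-to-Solr route mentioned above can be sketched briefly: once a bib row has been read out of a replicated database, indexing it is mostly a matter of mapping columns to Solr fields. The column and field names below are illustrative placeholders; the real Sierra SQL schema (the `sierra_view` views) should be checked against III’s own documentation.

```python
import json

def bib_row_to_solr_doc(row):
    """Map one bib row (id, title, author, year) to a Solr JSON document.

    Field names use Solr's dynamic-field conventions (_t text, _i int);
    the source columns are placeholders, not the actual Sierra schema.
    """
    record_id, title, author, year = row
    return {
        "id": f"bib-{record_id}",
        "title_t": title,
        "author_t": author,
        "pub_year_i": year,
    }

docs = [bib_row_to_solr_doc((420907986000, "Weaving the Web", "Berners-Lee, Tim", 1999))]
payload = json.dumps(docs)  # POST body for Solr's /solr/<core>/update handler
```

This is the "small niche" work in a nutshell: a nightly query, a mapping function like the one above, and a batch POST to Solr buys fast, flexible report writing that the ILS itself doesn’t offer.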
What I’ll be interested in, especially now that I’m back at a III library, is how they work to manage these challenges. III currently serves a lot of libraries, but I don’t think these challenges can be overstated. In the past two weeks, I’ve spoken to 8 libraries, all currently Innovative, all seriously looking to migrate to something else. In talking about their pain points, it’s clear that these libraries have started to wonder how much longer they can wait for III to make the changes necessary to allow III libraries to re-engage with their ILS systems.
At the same time, I’m hopeful — maybe more so than I’ve been in a very long time. I’ve spoken with many of the new leaders at III, and I think they understand the problem. I’m also seeing movement toward collaborations that simply wouldn’t have been possible under the previous management — so it will be really interesting to see how this turns out. It will definitely be something worth watching.
So this is what it has come to: two iconic library software vendors (III and OCLC, that is) wrestling in court. Not that I think this is all that surprising…too bad…but not particularly surprising. In fact, I think that many folks in the library community probably saw this coming. For me, the canary in the coal mine, so to speak, has been the OCLC record use policy revision process. Starting with the first attempt in Nov. 2008, which plain sucked, and finishing with the present revision, which pretty much guaranteed a lawsuit, the record use policy has been an interesting illustration of what is both working and broken at OCLC. Why? Because the record use policy linked the use of the records to OCLC’s WorldCat service. The policy, as it stands, doesn’t just include a list of rights and responsibilities, but spells out why it is needed…to protect the WorldCat database. My personal opinion, as written here, is that any record use policy should have been written separately from WorldCat. Joining the two together is problematic, and this lawsuit demonstrates how.
As more and more time passes, I’m convinced that OCLC, as it exists today, is of two minds: the membership mind and the vendor mind. The problem that libraries and library vendors face is that, in many circumstances, the vendor side of OCLC is unduly influencing the membership side of the organization. I think the final record use policy is a good example of this, as OCLC placed a number of artificial walls around the WorldCat database — walls that really do nothing but protect OCLC’s web-scale initiatives.
This is one of the reasons why I suggested, back in Nov. 2008, that OCLC consider breaking itself up (http://blog.reeset.net/archives/579). The problem, as I saw it then, is that as long as OCLC continues to move and compete within the vendor space, this tension between OCLC as a membership-governed co-op and OCLC as a vendor organization will be present. It will become increasingly difficult for OCLC to make decisions solely with respect to the membership’s long-term needs if those decisions must also be measured against the products and services that OCLC’s vendor operations handle. Regardless of how this lawsuit with III is resolved, I don’t think it will be the end of the litigation. Over the past year, I’ve encountered too many vendors that are becoming more open with their feelings that OCLC is unfairly hiding behind its status as a tax-exempt organization to monopolize the library space. If OCLC can demonstrate that its web-scale services will work for larger ACRL libraries, I think more and more vendors will continue to push back. Vendors will continue to make their displeasure known, and the membership is going to have to ask itself how many lawsuits it’s willing to endure.
I still believe that the simplest solution to this, however, would be breaking up the OCLC organization. I think it would ultimately be good for OCLC and for libraries. It would allow OCLC’s vendor units to compete without having to worry about potential lawsuits, and it would give OCLC’s research arms the ability to do research without the ever-constant need to productize their work. And there are actually a number of precedents for this type of thinking. Universities, for one, do this all the time: publicly funded research or development is commercialized under umbrella organizations. Why couldn’t OCLC do the same?
In some ways, I see this lawsuit mirroring the current discussions related to net neutrality: should all information be treated as equal on the web? In a sense, that’s what III is asking OCLC to provide — a kind of net neutrality for libraries: the idea that WorldCat represents a core information pipe within the library community, and should be made open to the entire library community (both vendor and non-vendor) for a reasonable fee. Libraries currently pay a fee for access (our memberships); likewise, vendors could pay a reasonable fee to access and develop against the WorldCat bibliographic and holdings database.
To take this further: when considering net neutrality, I believe the library community would be nearly unanimous in supporting it as a necessary requirement for the future. It’s a problem we feel we have a stake in, since librarians ultimately trade in information. Likewise, I think libraries will need to decide what type of organization they want OCLC to be. As a membership organization, libraries still have some power to shape the long-term vision of the OCLC cooperative. So I wonder when librarians will start to push OCLC to embrace the same tenets of open and fair access to data that we currently demand from other vendors and information providers. OCLC is ours, and how we handle this resource will ultimately determine how others view the library community as a whole. In the end, will we push OCLC to reflect the community’s larger vision and our professional ethics with respect to open data and cooperation, or will the larger information community simply look at the library community as a bunch of hypocrites — hypocrites who demand open data from other communities but aren’t willing to reciprocate when the data is our own?
I’m running a little bit behind here, but the 14th annual NWIUG came to an end on Friday, and a couple of interesting tidbits came out of it. Probably the most welcome came during Betsy Graham’s keynote early the first morning, when she detailed the changes coming in Encore 3.0. Encore, for the uninitiated, is Innovative Interfaces’ web 2.0 solution. As of this point, Encore 3.0 is scheduled to include an API to allow users to query directly against the Encore platform. This is one of those things I’ve been asking III for over the last 7 years (pre- and post-Encore), and I’m glad to see them making this move. It’s certainly welcome. Of course, API access to their system will only come as part of Encore — so if you are a III library, this announcement is only helpful if you decide to utilize their Encore software.
A couple of other notes. I gave a keynote as well, discussing moving the ILS into the network space and what that might look like within the Pacific Northwest (since we have some established partnerships that might make this easier). The more I’ve worked with our ILS, the more I’m convinced that there really isn’t a compelling need any longer for a local ILS system; I’m more interested in seeing libraries consolidate systems while we wait for someone to develop a networked alternative. In that vein, I’m curious to know what III is doing to position itself to survive in this space. Their development model is still very client-focused — so I would be curious to see how they view their own future here.
Funny day. Great story. I’ve been hanging out in Minnesota attending DLF and got a funny message. Apparently, my absence at this year’s IUG was noticed, and a few folks wanted to know if Innovative had somehow banned me from attending. I found this an odd statement for two reasons. First, I’m not sure what the author of this message had in mind. The first thought I had was an episode of The Simpsons in which Homer is voted to negotiate with Mr. Burns to keep the plant’s dental plan. After Homer fails to submit to Mr. Burns’s original offer, we get a scene with Homer at home.
Homer: Who’s there?
Voice: Hired goons.
[Homer opens the door to find two hired goons waiting to take him to Mr. Burns]
Is this the scenario that was being imagined? Probably not. I imagine what probably prompted the question was some recent events and announcements to come out of the Pacific Northwest, specifically those having to do with Summit, a consortium made up of Oregon and Washington libraries. For the past 13 years (if I remember correctly, Summit started my freshman year at the University of Oregon in 1995), the Summit consortium (known as the Orbis consortium for the first 9 or so years) has utilized III’s INN-Reach software. As of next fall, this will change, as the consortium and OCLC have entered into an agreement to build a consortial version of WorldCat Local. This decision has put some strain on the relationship between III and the consortium (and between III and its members) over the past year — but it really is time to move on. In the end, we simply didn’t share the same needs and were moving down two different paths. On one hand, the consortium had a real desire to purchase a solution that provided much greater API access for purposes of interoperability and local development. With OCLC’s product, this appears to be something that will be available to the consortium and its members — especially as OCLC’s grid services become a reality. III, on the other hand — well, I think they will eventually come around to making an API available; they are just doing it much more slowly than we would like. But more than anything, I think it comes down to a difference of philosophy. III, for better or worse, still sees the library catalog as the central resource of a library’s infrastructure, while Summit and its members are starting to see it as one small piece of a much larger whole. Because of that, the two groups place different emphases on things like API access, NCIP support, OAI support, etc. — essentially, services that would ease the flow of data into and out of the library system.
For Summit, not having the ability to use or develop these services became a deal breaker (and for III, our request for them was a deal breaker as well).
Secondly, and this is one of those things that slightly concerns me as a customer and member of the IUG community: there was an expectation by some that the above consortial changes would lead a company to blackball a group of members. Is that really how the IUG community looks at its relationship with III — one built on a foundation of eggshells? Or does it say something about librarians, who tend to treat their vendors with kid gloves? I think it shows a little of both — because people in the IUG community are nervous (I hear it all the time) that III will bring the hammer down on an institution if it criticizes their products, which certainly feeds into the library community’s built-in timidity in how we work with our vendors. But now I’m getting off topic. 🙂
So, for the record, the reason I’m not at the IUG is because I’m in Minneapolis attending DLF. So, no, III didn’t send Rocko to my office in Corvallis (at least, not yet) — sorry to disappoint.
I’ve become more and more convinced over the past year, talking to directors, that for OSS development to be accepted as part of the library community, it’s going to have to become a mainstream service. Too much R&D in libraries is done as individual, student, or demo projects. To a large degree, front-line workers and developers within the library community have a healthy bent toward OSS. But organizational attitudes change more slowly, and those are the ones that tend to matter. So, I’m going to be taking a different course over this next year — at least within my own small part of the world.
Over the past three months, I’ve been leading a group looking at next-generation ILS services for our regional consortium, Summit. Summit is a consortium made up of 33 academic libraries throughout Oregon and Washington — all of them Innovative systems, due to the fact that III’s consortial software really only works with III libraries. In looking at the various options available, we’ve tried to keep an open mind. I’ve been running copies of Koha and Evergreen over the past month to look at current functionality within a very untraditional consortial setting, and folks have spoken to vendors like Endeca, AquaBrowser, III, and OCLC, as well as others. In all, the process has shown me a couple of things.
First, given that this decision will cover just the consortial database, our options are somewhat limited. III doesn’t make the process of having an outside vendor interact with the INN-Reach system easy — though we’ve been told it could be done. This means our options are migrating off III as a group (I can’t see that happening), partnering with III (what I think many would consider the safer, least disruptive choice), or working closely with OCLC — though the second and third options don’t hold much appeal to me personally.
Which leads me to number 2: while the consortium has more than enough talent to develop an in-house solution, the organizational infrastructure simply doesn’t exist to allow such a solution to be considered.
This second realization is what struck me most. I spend a great deal of my time helping folks within the Pacific Northwest implement tools around their ILS — but there really isn’t a centralized or formalized R&D process within the consortium — and for a group this large, that seems a shame. There is a lot of talent tied up within the 33 member organizations; the question is how to get at it.
Well, I’ve got an idea. While my group really cannot make a recommendation on the currently available software (we can talk about what’s available and what I believe to be the future trends), I can advise that we formalize an R&D group within the consortium. Fortunately, Summit is hiring a digital library coordinator — and I think that this position would be perfect to lead this group. I envision a committee that could be used to:
coordinate Summit development efforts and investigate options like SOPAC, metasearching within a consortial environment, OpenURL within a consortial environment, etc.
provide Summit with shared development resources — allowing member libraries to help drive the development of services, while distributing the R&D load among member libraries
advocate for OSS and an active R&D agenda to the member libraries’ directors and the Summit executive board.
In all honesty, I think the third item is the most important. The proprietary vendor community is very adept at dealing with the library community at a high level, and this allows them to shape the overall environment within an organization. My hope is that by creating a formal working group within the consortium and identifying that this work is indeed important, we can help lead an attitude shift within the Pacific Northwest.
Will it work? Who knows. I’ve floated the idea by a few folks — some on other committees, some familiar with the current makeup of the executive committee — and the overall mood isn’t optimistic. The biggest challenge to overcome is the idea that one’s library doesn’t have any special skills (or any bodies) to offer. If R&D is valued at an organization, resources and people can be found.
Anyway, my hope is that the recommendations that come out of this study will help move this conversation forward. As I said, there is a lot of talent in the Pacific Northwest — it’s time we started tapping into it as a group and seeing what can be accomplished within a consortium when everyone contributes.
Well, I got off to a bit of a slow start today. I stayed with my brother and sister-in-law in Vancouver, WA and had to make the trip across the river back into Portland. The sessions started at 9:30 am, so I took off around 9 figuring that 30 minutes would be plenty of time. However, I was wrong. It took ~45 minutes just to travel the 4 miles on I-5 to get out of Vancouver and into Portland. Final travel time, ~1 1/2 hours. So instead of 9:30, I showed up at 10:30 am, which means I missed the first session of the day entitled: One-stop shopping for journal holdings: the ideal and the reality. Fortunately, we had a lot of folks from OSU at the event, so I’m sure one of our group had an opportunity to take in this session.
The rest of the day, I spent either speaking (on 2 topics) or preparing to speak. One topic was III’s Global Update functionality, an Innovative-specific application for database maintenance. The second continued this recent spate of evangelism I’ve been engaged in regarding the need to require our vendors to provide open APIs. My talk was entitled Being Innovative without Innovative, and I thought it went well. I actually recorded it, but I’m never sure what I can and can’t post (III’s user groups, both national and regional, follow some courtesy rules when dealing with III topics), so I’ll have to see if I can post it.
A discussion of why and how the King County Library brought up AquaBrowser. AquaBrowser is an interesting application, but to be honest, I’m not sure what to think about it. What I did find interesting, however, was how they sync data between AquaBrowser and III. I have a lot of methods I use to extract data — but none of them would scale to exporting our catalog on a nightly basis. So what I really enjoyed was hearing about III’s MarcOut tool. Apparently, this tool provides a simplified method for extracting your MARC data. It’s a tool I’m unfamiliar with — so I’m going to be spending some time chatting with the help desk to figure out what it is and how we can make use of it.
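For what it’s worth, a full-catalog export doesn’t have to be loaded into memory to be processed: MARC (ISO 2709) records end with the record terminator byte 0x1D, so a nightly dump can be walked record by record as a stream. A minimal sketch:

```python
RECORD_TERMINATOR = b"\x1d"  # ISO 2709 record terminator

def iter_marc_records(stream, chunk_size=1 << 20):
    """Yield raw MARC records from a binary stream, one record at a time."""
    buffer = b""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        buffer += chunk
        # Everything before a terminator is a complete record;
        # the tail stays in the buffer until the next chunk arrives.
        *records, buffer = buffer.split(RECORD_TERMINATOR)
        for record in records:
            yield record + RECORD_TERMINATOR
    if buffer:  # trailing bytes with no terminator (a truncated export)
        yield buffer

# Usage: count records in a nightly export without reading it all at once
# with open("full_dump.mrc", "rb") as fh:
#     total = sum(1 for _ in iter_marc_records(fh))
```

This is only the transport layer, of course; a real sync process would still need to parse each record and diff it against what the discovery layer already has.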
Sion Romaine and Linda Pitts, University of Washington
This session focused on the implementation of MARC holdings within III and UW’s process of converting their free-text holdings into MFHDs. The presentation gave a very quick overview of the MFHD format, as well as some information on the problems they encountered, both in moving the free-text data and in dealing with some III quirks in how the holdings information is rendered.
I actually felt a little bad attending this session. About 8 months ago, UW had asked if I would be willing to help them do an automated conversion of these records. At the time, I had time to work on it and spent time talking with them about the various things needed to do the conversion automagically. I’ve done this in the past for libraries — but it takes a lot of time to get done, and unfortunately, their desired start for the conversion landed in my busy time (June – August), so I couldn’t dedicate the time to work as closely with them on this as I would have liked.
Thank god, this should be the last of my travel for some time. Thankfully, this is a local conference — the Northwest Innovative Usergroup. I’m actually presenting two topics, one related specifically to III’s products and one where I’m going to be doing a little evangelism for open access within our ILS (good luck, I know — III and open access seem to go together like oil and water).
The NWIUG conference is actually an interesting user group. It’s different from the national conference in that there are a lot fewer III staff presentations — so you tend to get a lot of information from actual users of the system, and you can see some interesting things folks are doing.
At the same time, this is an III user group, which is reflected in the keynote. This year, Dinah Sanders gave a talk on the future of the WebPac, Innovative’s public interface. The discussion centered around Encore, III’s next-generation web OPAC. Encore will include a number of web 2.0-ish features like user tagging and comments. Will this be free? I doubt it. Will it be interesting? For public libraries, yes. For academics — interesting, but I’d be curious to see how useful. Unfortunately, I think there is a dangerous side to Encore as well. It will integrate all III packages, like their federated search, OpenURL, etc. Basically, it encourages vendor lock-in, as the integration only works with III products — so it just makes a bigger siloed data store. And unfortunately, I’m sure there will be a number of people ready to drink the III Kool-Aid. The one bright spot is that III says Encore will rely on web services. From what I’ve heard, there isn’t an interest at this point in making these web services available for public consumption, but I’m sure that could change.
We’ll see. From my perspective, III is on the clock. I don’t see the OPAC as having a future in libraries. It won’t go away right away, but I firmly believe that libraries need to stop spending money on it and start looking at other solutions. III could make life much easier for Innovative libraries by providing an open API — but so far, they haven’t, and if they don’t, I predict they will start losing relevance within libraries. I give them 5 years. If they can’t recognize this shift from black-box development to open architectures — well, I’d be concerned about their future, particularly in the academic market, where libraries have development resources. Tick, tick.