(Note: I started this post last night, but had to put it away so I could get some rest before a 6 am flight. I finished the remainder while waiting for my flight.)
So, after getting up way too early, I staggered down to the LITA Top Tech Trends discussion this morning. Unfortunately, it seemed like a number of other folks did the same thing, so I only ended up hanging out for a little bit. I just don’t have the stamina in the morning to live through cramped quarters, poor broadband and no caffeine. I get enough of that when I fly (which I get to do tomorrow). Fortunately, a number of folks who had been asked to provide tech trends have begun (or have been) posting their lists, and some folks who braved the early morning hours have started blogging their responses (here). I personally wasn’t asked to provide my list of tech trends, but I’m going to anyway, as well as comment on a few of the trends either posted or discussed during the meeting. Remember, this is just one nut’s list, so take it for what it is.
- Ultra-light and small PCs (Referenced from Karen Coombs)
Karen is one of a number of folks who have taken note of the wide range of low-cost computers currently being made available to the general public. These machines, which run between $189 and $400, provide low-cost, portable computing that has the potential to bring computers to a wider audience. I’ll admit, I’m personally not sold on these machines, in part because of the customer base they are aiming for. Companies like Asus note that machines such as the Eee PC are primarily targeted at users looking for a portable second machine, and at kids and the elderly looking for a machine simply to surf the web. A look at the specifications for many of these low-cost machines shows Celeron-class processors with 512 MB of RAM and poor graphics processing. Is this good enough for browsing the web? I’d argue no. The current and future web is a rich environment, built on CSS, XML, XSLT, Flash, Java, etc. I think what people seem to forget is that this rich content takes a fair number of resources simply to view. Case in point: I set up a copy of CentOS on a 1.2 GHz Centrino with 512 MB of RAM and a generic graphics card (8 MB of shared memory), and while I could use this machine to browse the web and do office work with OpenOffice, I certainly wouldn’t want to. Just running the Linux shell was painful, web browsing was clunky and office work was basically unusable; the machine’s capabilities were surpassed right out of the box. Is this the type of resource I’d want to be lending to my patrons? Probably not, since I wouldn’t want my patrons to associate my library’s technical expertise with sub-standard resources. Does this mean that ultra-portables will not be in vogue this year and the next? Well, I didn’t say that.
A look at the success the iPhone is having (a pocket PC retailing for close to $1500 without a contract) seems to indicate that users want, and are willing, to pay a premium for portability, so long as that portability doesn’t come at too high a price.
- Branding outside services as our own (and branding in general)
There was a little bit of talk about this: the idea of moving specific services outside the library to services like Google or Amazon and, essentially, rebranding them. This makes some sense. However, I always cringe when we start talking about branding and how to make the library more visible. From my perspective, the library is already too visible, i.e., intrusive into our users’ lives. Libraries want to be noticed, and we want our patrons and organizations to see where the library gives them value. It’s a necessary evil in times when competition for budget dollars is high. However, I think it does our users a disservice. Personally, I’d like to see the library become less visible, providing users direct access to information without the need to have the library’s fingerprints all over the process. We can make services that are transparent (or mostly transparent), and we should.
The same goes for our vendors. I’ll use III as an example only because we are an Innovative library, so I’m more familiar with their software. By all rights, Encore is a serviceable product that will likely make III a lot of money. However, in the public instances currently available (Michigan State, Nashville Public Library), the III branding is actually larger than that of the library (when the library’s branding shows up at all). And this is in no way unique to III. Do patrons care what software is being used? I doubt it. Should they care? No. They should simply be concerned that it works, and works in a way that doesn’t get in their way. From my perspective, branding is just one more thing that gets in the way.
- Collections as services will change the way libraries do collection development
I’m surprised that we don’t hear more about this, but I’m honestly of the opinion that metadata portability and the ability for libraries to build their collections as web services will change the way libraries do collection development. In the past, collection development focused primarily on what could be physically or digitally acquired. However, as more organizations move content online (particularly primary resources), libraries will be able to shift from an acquisitions model to a services model. Protocols like OAI-PMH make it possible (and relatively simple) for libraries to actively “collect” content from their peer institutions in ways that were never possible in the past.
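To give a sense of how lightweight this kind of harvesting is, here’s a minimal sketch that parses an OAI-PMH ListRecords response and pulls out record identifiers and Dublin Core titles. The sample XML and the example endpoint in the comment are illustrative only, not from any particular repository:

```python
import xml.etree.ElementTree as ET

# Namespaces used in a standard OAI-PMH response carrying Dublin Core metadata
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def parse_list_records(xml_text):
    """Extract (identifier, title) pairs from an OAI-PMH ListRecords response."""
    root = ET.fromstring(xml_text)
    records = []
    for record in root.iter(OAI + "record"):
        identifier = record.findtext(f"{OAI}header/{OAI}identifier")
        # Grab the first dc:title in the record's metadata, if any
        title = next((t.text for t in record.iter(DC + "title")), None)
        records.append((identifier, title))
    return records

# A trimmed-down sample response; a real harvester would fetch this over HTTP
# from a repository endpoint, e.g. http://example.org/oai?verb=ListRecords
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:example.org:1</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Pioneer Diaries, 1850-1860</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

print(parse_list_records(SAMPLE))
# [('oai:example.org:1', 'Pioneer Diaries, 1850-1860')]
```

A real harvester would loop on the protocol’s resumptionToken to page through a large repository, but the parsing above is essentially all there is to “collecting” a peer institution’s metadata.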
- Increased move to outside library IT and increased love for hosted services (whether we want them or not)
While it has taken a great deal of time, I think it is fair to say that libraries are more open to the idea of using open source software than ever before. In the short term, this has been a boon for library IT departments, which have seen an investment in hardware and programmer support. I think this investment in programming support will be short-lived. In some respects, I see libraries going through their own version of the .COM boom (just without all the money). Open source is suddenly in vogue. Sexy programs like Evergreen have made a great deal of noise and inroads into a very traditionally vendor-oriented community. People are excited, and that excitement is being made manifest in the growing number of software development positions being offered within libraries. However, at some point, I see the bubble bursting. And why? Because most libraries will come to realize either 1) that having a programmer on staff is prohibitively expensive or 2) that the library will be bled dry by what I’ve heard Kyle Banerjee call vampire services. What is a vampire service? A vampire service is a service that consumes a disproportionate amount of resources but will not die (generally for political reasons). One of the dangers for libraries employing developers is the inclination to develop services as part of a grant or grandiose vision that eventually become vampire services. They bleed an organization dry and build a culture that is distrustful of all in-house development (see our current caution in looking at open source ILS systems; it wasn’t too long ago that a number of institutions used locally developed [or open] ILS systems, and the pain associated with those early products still affects our opinions of non-vendor ILS software today).
But here’s the good news. Will all software development positions within libraries go away? No. In fact, I’d like to think that as positions within individual organizations become scarcer, consortia will step into the vacated space. Like many of our other services moving to the network level, I think the centralization of library development efforts would be a very positive outcome, in that it would help increase collaboration between organizations and reduce the number of projects that are all trying to re-invent the same wheel. I think of our own consortium in Oregon and Washington, Summit, and the dynamic organization it could become if only its member institutions were willing to give over some of their autonomy and funding to create a research and development branch within the consortium. Much of the current development work (not all) could be moved up to the consortium level, allowing more members to directly benefit from the work done.
At the same time, I see an increase in hosted services on the horizon. I think that folks like LibLime really get it. Their hosted services for small to medium-sized libraries presumably reduce LibLime’s costs to manage and maintain the software, and free those hosted libraries from the need to worry about hardware and support issues. When you look at the future of open source in libraries, I think this is it. For every one organization willing to run open source within their library, there will be five others that will only be able to feasibly support that infrastructure if it is outsourced as a hosted service. We will see a number of open source projects move in this direction. Hosted services for DSpace, Fedora, metasearch, the ILS: these will all continue to emerge and grow throughout this year and over the next five years. And we will see the vendor space start to react to this phenomenon as well. A number of vendors, like III, already provide hosted services. However, I see them making a much more aggressive push to compel their users (higher licensing fees, etc.) to move to a hosted service model.
- OCLC will continue down the path to becoming just another vendor
I’d like nothing more than to be wrong, but I don’t think I am. Whether it’s this year, the next or the year after that, OCLC will continue to alienate its member institutions, eventually losing the privileged status libraries have granted it throughout the years and becoming just another vendor (though a powerful one). Over the last two years, we’ve seen a lot of happenings come out of Dublin, Ohio: the merger with RLG, the hiring of many talented librarians, WorldCat.org, WorldCat Local and OCLC’s newest initiatives centered on their grid services. OCLC is amassing a great deal of capital (money, data, members), and I think we will see how they intend to leverage this capital this year and the next. How they leverage it will go a long way toward deciding what type of company OCLC will be from here forward. Already, grumblings are being heard within the library development community as OCLC continues to move to build new revenue streams from web services made possible only through the contribution of metadata records from member libraries. As this process continues, I think you will continue to hear grumblings from libraries who believe that these services should be made freely available to members, since it was member dollars and time that provided OCLC exclusively with the data necessary to develop these services. **Sidebar: this is something we shouldn’t overlook. If your library is an OCLC member, you should be paying close attention to how OCLC develops its grid services. Remember, OCLC is supposed to be a member-driven organization. It’s your organization. Hold it accountable and make your voice heard when it comes to how these services are implemented. Remember, OCLC only exists through the cooperative efforts of both OCLC and the thousands of member libraries that contribute metadata to the database.** Unfortunately, I’m not sure what OCLC could do at this point to retain this position of privilege.
Already, too many people that I talk to see OCLC as just another vendor that doesn’t necessarily have the best interests of the library community at heart. I’d like to think that they are wrong — that OCLC still remains an organization dedicated to furthering libraries and not just OCLC. But at this point, I’m not sure we know (or they know). What we do know is that there are a number of dedicated individuals that came to OCLC because they wanted to help move libraries forward — let’s hope OCLC will continue to let them do so. And we watch, and wait.
Anyway, that’s my list of trends.