May 05 2007
 

It’s been a little while since I last looked at the progress being made by the MONO group with regard to GUI development, and since someone asked, I thought I’d see how it was working.  The current stable version of MONO tested is 1.2.3.  Testing was done on CentOS.  And what did I find?

  1. All major parts of the core MarcEdit program work under the current implementation of MONO, save a few quirks.  This means that with a little documentation and a bootloader to load necessary items to the GAC, you could run MarcEdit on a ‘Nix platform using Mono 1.2.3.
  2. Ancillary programs like the Script Maker, the Delimited Text Translator, and the Z39.50 client…some of these work, though none work as designed, because I need to pipe the call through Mono, which isn’t happening at this time.  This is something I need to fix (and it’s an easy one).
  3. I need to find a good Unicode font for my Linux build.  I tested the MARC8-UTF8 and UTF8-MARC8 conversions and they work like a peach.  However, I don’t have a font to read the data in UTF8.
  4. Obviously, the program isn’t quite as polished on ‘Nix under Mono.  This is noticeable mostly in status windows and messaging windows.  Also, sometimes window sizes are a little off.  This doesn’t affect functionality; it just bugs me personally since I know how it’s supposed to work.
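To illustrate the fix mentioned in item 2: on Windows a .NET helper executable can be launched directly, but on a ‘Nix platform the call has to be routed through the Mono runtime. A minimal sketch of that dispatch logic, with a placeholder executable name (not MarcEdit’s actual helper), might look like this:

```python
import sys

def build_launch_command(exe_path, args=()):
    """Return the argv list needed to launch a .NET executable.

    On Windows the runtime executes .exe files directly; elsewhere
    the call is piped through the Mono runtime instead.
    """
    cmd = [exe_path, *args]
    if not sys.platform.startswith("win"):
        cmd = ["mono", *cmd]  # route the call through Mono on 'Nix
    return cmd

# On Linux this yields: ['mono', 'ScriptMaker.exe', '--help']
print(build_launch_command("ScriptMaker.exe", ["--help"]))
```

The resulting list can be handed straight to `subprocess.run`, so the rest of the program never needs to know which runtime is underneath.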

So, what does it look like?  Here are some screenshots.

 
Extract Selected MARC records

 


MarcEditor

 


MarcBreaker (MARC=>MARC21XML)

 
MarcEditor Field Count

 

So, where do I go from here?  Well, I think it’s probably time to put out some documentation for those brave souls willing to run this beast on a ‘Nix platform and let me know how it all works.  I’m in the process of putting together a tar ball, some instructions, and a pre-installer that will handle configuring setup files, GAC installation, etc. for users.  How long will this take?  Probably a few days, mostly because I’m going to see if I can’t clean up a few display things, and Jeremy has me chained to my computer working on a few LibraryFind issues so we can post the source for 0.8.  However, I really do need to get it ready this week, mostly because I will be at the Innovative Users Group May 14-17, co-presenting a session on batch editing vendor data.  Since there will be a MarcEdit section with examples, I thought I’d do the samples on ‘Nix just to make it fun (and hopefully it will work). :) 
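For the curious, here’s roughly what that pre-installer needs to generate: one `gacutil -i` call per shared assembly (Mono’s `gacutil -i` registers an assembly in the GAC), plus a line showing how to launch the main program through `mono`. This is only a sketch; the install path and assembly names below are hypothetical placeholders, not MarcEdit’s real file list:

```python
def preinstall_commands(install_dir, assemblies):
    """Build the shell commands a pre-installer would emit:
    register each shared assembly in Mono's GAC, then launch
    the main program through the Mono runtime."""
    cmds = [f"gacutil -i {install_dir}/{dll}" for dll in assemblies]
    cmds.append(f"mono {install_dir}/MarcEdit.exe")
    return cmds

# Hypothetical layout -- real assembly names would come from the tar ball.
for line in preinstall_commands("/usr/local/marcedit",
                                ["Shared1.dll", "Shared2.dll"]):
    print(line)
```

Emitting the commands as text (rather than running them directly) makes it easy for a cautious user to review what the installer would do before executing it.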

–TR

 Posted at 11:20 pm
May 02 2007
 

So this is downright embarrassing.  For the second time in about a month, I’ve basically just tipped over on my bike and fallen off.  The first time it happened, I had gotten cut off and couldn’t get my feet out of my cleats fast enough to keep from falling.  That was less embarrassing since it wasn’t completely my fault, though the blood stains on my shoes from that fall mock me still.

Today, I was coming into work and, while coasting, started fishing around in a pocket for my keys.  Well, it was wet, I wasn’t paying attention, and I missed the ridge where the road and the loading dock at the library come together…and off I went.  For someone who rides as often as I do, you’d think I’d be spending more time on my bike and less time on the ground.  :) 

–TR

 Posted at 10:16 am
May 01 2007
 

So OCLC and the UW finally pulled off the cover and unveiled WorldCat Local to the world. A number of folks will write about the things that this does very well. The faceting is nice and clean, as is the layout. Some of the ajaxy features (like holdings retrieval) are a little clunky and, I’ve found, a bit dodgy as I play with them on different browsers and operating systems, but for mainstream users they will be fine. I like the integration with other OCLC services; this is one of the real value-added benefits that OCLC can offer that most other systems likely cannot. Also, I’m assuming that as OCLC continues to develop this service, API access will become available for member institutions, allowing institutions using WorldCat Local to embed their data into other systems.

However, looking at the beta, I also have a number of questions that I don’t have quick answers to either. Here they are in no particular order.

  1. Localization: OCLC’s catalog model utilizes a single master record. This means that local information relating to access restrictions, local notes, and call numbers becomes marginalized. At this point, the beta doesn’t appear to have a call number search, which makes sense: this is local information that OCLC’s database wouldn’t have. But there is other data, too: local subjects not found in the OCLC master record, enhanced notes…all things that catalogers create in MARC records to surface items queried via keyword, subject, or title, and all missing from WorldCat Local at this point. Do we need this local information? Maybe not; this is a good experiment to find out.
  2. OCLC’ization of libraries: At present, libraries contract out a lot of services to OCLC: cataloging, ILL, online reference. Adding the catalog/federated search would seemingly follow this trend. But at what point does the library stop being an individual entity and simply become an OCLC reseller? This isn’t a hit at OCLC, just a question.
  3. Related to that, I have questions regarding scale of operations. Since OCLC’s move to a unified platform for FirstSearch and its Connexion cataloging environment, I would characterize operations as…fragile. Rock-solid services that seemed to never have downtime now experience problems on a pretty regular basis. In the grand scheme of things, not being able to catalog for a couple of hours isn’t that big of a deal. However, once that is moved to the catalog level, it is. If one’s library catalog is down for an extended period, people notice. If downtime becomes systemic, people will complain. Centralized indexes like Google work because of the large data centers they have around the country to deal with issues of scale. FirstSearch/Connexion are member-only services. Libraries traditionally throttle access to their resources (either through purchasing a specific number of queries or ports, or simply by how they present the resource), so it doesn’t fill me with confidence that OCLC’s current services have so many access problems. Again, in the large scheme of things, these services are almost always available, but it makes a big difference when we are talking about uptime for staff versus uptime for the public. Which brings me back to load. Were WorldCat Local to be used as the default catalog, OCLC would suddenly need to handle a much larger number of queries. Rather than dealing with 1000 FirstSearch queries, OCLC would be required to handle hundreds of thousands of queries daily from multiple member institutions. At this point, I question their ability to work with the library community at that scale, but maybe OCLC has been building data centers that we are not currently aware of.
  4. Widening the digital divide: One thing OCLC’s WorldCat Local does is present itself as an index for all library collections. The “find near you” feature suggests that it knows if the title is in “your” library. Well, no: it knows if the title exists in your library if your library is an OCLC member and subscribes to FirstSearch. Essentially, this amounts to an OCLC tax (hey, just like Microsoft) for being in their index. For many institutions with funding woes, this tax could become too high, and the cost will be a widening gap between the haves and have-nots, as those that have can be part of the OCLC index and those that have not are thrown to the wolves…I mean, their ILS vendor.
  5. Cost: I’m curious to see how OCLC prices this service, since it doesn’t replace an already existing service but will be an add-on. How, you say? Well, most ILS vendors don’t price their webpacs as separate line items; they’re just part of the system. So you can’t stop paying for that part of the system and shift those cost savings to OCLC. About the only thing you can do is stop paying for system add-ons. Would this replace a federated search tool? Probably not. OCLC’s tool searches brief records and abstracts for its article search, which is very different from the full-text searches that you can get from most federated search tools. Plus, this search only encompasses things that currently exist in the OCLC universe; until that universe expands, there will be too many items that it misses. The only thing that I see it replacing are tools like III’s Encore…so maybe that’s enough, maybe not. I think OCLC will need to think very hard about how it charges for this service, in part because libraries already spend such a large percentage of their budgets on OCLC services. I’m not certain, but I would guess that we pay OCLC more over the course of a year than any other vendor, including our ILS provider. I’m wondering if adding WorldCat Local wouldn’t add to the annual sticker shock associated with paying for these services.
  6. Library Brain Drain: I’ve talked with a few people about this. OCLC is collecting a lot of talented library project managers from the library community. While this gives me a lot of hope that many of the questions that I have will eventually have answers — it worries me for the library community. Top administrators are very important and there is a very shallow pool of library administrators that actually understand and can envision next generation digital library services. I worry what this poaching will mean for my profession.

Anyway, I guess that’s a long way to say that I’ll be interested in watching and seeing how this service continues to develop.

–TR

 Posted at 1:10 pm