Jan 29, 2012

So I’ve been spending my time making a few changes to my proof-of-concept cataloging application for my phone.  A few things I’ve learned along the way:

  1. No matter how good the OCR is, I’m not sure it ever gets to a point where you can just happily scan a catalog card and get all the data perfectly.  You can thank ISBD punctuation for that.
  2. Setting holdings data in OCLC is much easier than you’d think it would be, thanks to Z39.50 Extended Services.
  3. Adding a barcode reader really was easier than I thought it would be.

Right now, the proof of concept allows users to search OCLC (and set holdings there, using their own login credentials) or to search and download records from the Library of Congress.  You can scan a barcode to get the record, or you can photograph a catalog card and let the program attempt to disassemble the metadata to determine the best search profile.  Of the two methods, this one is obviously the dodgier, but it’s interesting to see how it works and how the OCR incrementally improves. 
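To give a flavor of what “disassembling the metadata” looks like, here is a minimal sketch of the kind of parsing involved.  This is not the app’s actual code: the SearchProfile type and the regular expressions are illustrative stand-ins, and real OCR output from a catalog card is considerably noisier than this assumes.

```csharp
using System;
using System.Linq;
using System.Text.RegularExpressions;

// Illustrative container for the search terms pulled from an OCR'd card.
public class SearchProfile
{
    public string Isbn;
    public string Lccn;
    public string Title;
}

public static class CardParser
{
    // Build a rough search profile from OCR'd catalog card text.
    public static SearchProfile Parse(string ocrText)
    {
        var profile = new SearchProfile();

        // An ISBN is the most reliable hook when the card happens to carry one.
        var isbn = Regex.Match(ocrText,
            @"\b(?:ISBN[:\s]*)?(97[89][\d\- ]{10,16}|\d[\d\- ]{8,12}[\dXx])\b");
        if (isbn.Success)
            profile.Isbn = isbn.Groups[1].Value.Replace("-", "").Replace(" ", "");

        // Older cards often carry an LCCN-style number in a lower corner.
        var lccn = Regex.Match(ocrText, @"\b(\d{2}-\d{4,6})\b");
        if (lccn.Success)
            profile.Lccn = lccn.Groups[1].Value;

        // Fall back to the first substantial line as a title keyword guess;
        // ISBD punctuation means this is only ever a guess.
        profile.Title = ocrText
            .Split('\n')
            .Select(l => l.Trim())
            .FirstOrDefault(l => l.Length > 10 && l.Any(char.IsLetter));

        return profile;
    }
}
```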

As I’ve been working on this, I’ve been wondering what the real-life implications of a project like this might be.  Obviously, one of the goals was to make catalog cards easier to recon.  But the ability to use the phone as a barcode scanner and catalog on the fly also makes me wonder whether a tool like this could be used while shelf reading, at the point of acquisition, or at a circulation desk when working with a book that has no record. 

Another benefit of this work is that, since the code is written in C#, I’m starting to think about how I might co-opt some of it in MarcEdit.  The idea is that a user could drop a set of images into a folder, and MarcEdit could OCR those images and use the resulting data to automatically retrieve records for that content.  I’m not sure how reasonable an idea this really is at this point, given the limitations of OCR, but from a technical standpoint I have all the components I would need to make it happen.  So who knows, maybe this work will spawn something new and innovative yet.  We’ll see.
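If that MarcEdit idea were to materialize, the batch side might look something like the sketch below.  This is purely hypothetical: OcrImage and RetrieveRecordFromText are stand-ins for whatever OCR service and record lookup would actually get wired in, and nothing here is part of MarcEdit today.

```csharp
using System;
using System.IO;

public static class BatchRecon
{
    // Walk a drop folder of card/title-page images, OCR each one, and hand
    // the text off to a record lookup, writing any matches to an output file.
    public static void ProcessFolder(string dropFolder, string outputFile)
    {
        using (var writer = new StreamWriter(outputFile))
        {
            foreach (var image in Directory.GetFiles(dropFolder, "*.jpg"))
            {
                string ocrText = OcrImage(image);                 // hypothetical OCR call
                string record = RetrieveRecordFromText(ocrText);  // hypothetical lookup

                if (record != null)
                    writer.WriteLine(record);
                else
                    Console.WriteLine("No match for: " + Path.GetFileName(image));
            }
        }
    }

    // Stubs standing in for the real services; not part of MarcEdit's API.
    static string OcrImage(string path) { throw new NotImplementedException(); }
    static string RetrieveRecordFromText(string text) { throw new NotImplementedException(); }
}
```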

 

–TR

 Posted at 8:44 am
Jan 22, 2012

The latest MarcEdit update has been baked and pushed out the door.  If you are running a current version of MarcEdit, you can expect to see the program prompt you for an update (unless you’ve disabled that functionality).  Otherwise, you can find the update at: http://people.oregonstate.edu/~reeset/marcedit/html/downloads.html.  Originally, this update was planned to be primarily cosmetic, with two small bug fixes.  However, after working with a colleague on some large HathiTrust metadata files, a few other updates ended up squeezing in.  So what’s changed?  See below:

  1. Enhancement: MARCXML => MARC enhancements.  When translating from MARCXML
    to MARC, MarcEdit will now truncate records whose data exceeds the
    99,999-byte record limit and split fields whose data exceeds the 9,999-byte
    field limit.  If either operation occurs, MarcEdit will recode the 008/38 to
    an "s".  This enhancement only affects the MARCXML=>MARC conversion
    function; however, that means that any function that converts data to MARC
    through MARCXML is affected by this change.  (A rough sketch of this length
    handling appears after the list.) 

    I discussed this change at more length here, but essentially it was necessitated because I occasionally run into XML data that I’d like to translate into MARC but that is simply too large.  The change allows MarcEdit, when translating data through the MARCXML=>MARC process, to automatically adjust records that would otherwise be generated as invalid (as currently happens).  If you’d like to see how MarcEdit handles these types of errors, you can look at a sample file at: http://people.oregonstate.edu/~reeset/marcedit/anonymous/long_xml.xml.  This file has 3 MARCXML records.  The first one is roughly 3 times too large for a traditional MARC record, thanks to the many 9xx fields in the record.  Prior to this update, MarcEdit would generate a record with an incorrect record length in the leader (it would calculate the length and then take only the first 5 digits of that value; since the true length runs to more than 5 digits, the recorded length would be wrong).  After this update, MarcEdit will truncate fields once the record limit has been reached and notify the user through the UI that the truncation took place, in addition to the 008 modification mentioned above.

  2. Bug Fix: Swap Field function:  Under certain rare conditions, moving data
    from a control field to a variable field resulted in the delimiter value
    being dropped from the swapped data.
  3. Bug Fix: Set Font function — when the function fails, the program will now exit the function gracefully and render the font in its default state.
  4. Enhancement: The Validator has been augmented so that invalid records in
    .mrk files can be identified outside of the MarcEditor.
  5. Enhancement: Added a new Change Case shortcut that allows users to set the
    initial character of a subfield to upper case, without modifying the case
    of any other characters in the subfield.

So that’s it for the updates.  The MARCXML=>MARC changes are significant ones, but hopefully useful.  I know they will be welcomed at OSU, since we occasionally run into fields that are too long when harvesting our ETD records from DSpace to generate MARC records for the catalog.

–TR

 Posted at 10:46 pm
Jan 21, 2012

One of the benefits of moving the MARCXML=>MARC translation algorithm away from XSLT to an inline function is the ability to provide some sanity checking beyond simple XML validation.  One of the issues I see periodically when working with XML conversions is the need to code data truncation into my XSLT stylesheets.  For example, the ETD process that we use with DSpace looks for the abstract and makes sure that the data in the abstract doesn’t exceed the 9,999-byte limit for a MARC field. 

Recently, however, I ran into a different problem that doesn’t come up often, but showed up when working with some data provided by the HathiTrust.  Some colleagues were given a large sample of data (32 GB of MARCXML) to do some research into providing better identification of government documents records.  The new MarcEdit MARCXML process is able to make short work of this 32 GB file, translating the data into MARC in ~20 minutes.  The problem, however, is that some of these records are too long.  For reasons I cannot understand, the HathiTrust data includes a local 9xx field that, from the context, appears to be item information.  Unfortunately, some records include thousands of items, meaning that when the data is translated, the resulting record is too large (it exceeds the total length of 99,999 bytes). 

However, because of the new MARCXML process, I’ve been able to create a workaround for situations like this.  When processing MARCXML data, MarcEdit will internally track the record length of a translated record.  If that record would exceed the maximum record length, MarcEdit will truncate the record by dropping fields off the end.  The program will also modify the 008/38 byte, setting the value to “s” to flag that the record has been modified, and will visually notify the user that a truncation occurred by turning the results panel purple.

[Screenshot: MarcEdit results panel highlighted in purple to flag that a truncation occurred]
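For the curious, here is a rough sketch of the record-level side of this workaround.  Again, this is not MarcEdit’s internal code; it just illustrates the idea of computing the ISO 2709 record length, dropping trailing fields until the record fits under the 99,999-byte ceiling, and reporting back so the 008/38 can be recoded and the user warned.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text;

public class MarcField
{
    public string Tag;   // e.g. "245" or a local "955"
    public string Data;  // indicators + subfield data for variable fields
}

public static class RecordTruncation
{
    const int MaxRecordBytes = 99999;
    const int LeaderLength = 24;
    const int DirectoryEntryLength = 12;

    // Drop fields from the end of the record until the computed ISO 2709
    // length fits the 5-digit record length the leader can express.
    // Returns true if anything was dropped so the caller can recode 008/38.
    public static bool TruncateIfNeeded(List<MarcField> fields)
    {
        bool truncated = false;
        while (ComputeRecordLength(fields) > MaxRecordBytes && fields.Count > 0)
        {
            fields.RemoveAt(fields.Count - 1);
            truncated = true;
        }
        return truncated;
    }

    static int ComputeRecordLength(List<MarcField> fields)
    {
        // Each field costs a directory entry plus its data and a field
        // terminator; add the leader, the terminator that closes the
        // directory, and the record terminator.
        int dataBytes = fields.Sum(f =>
            DirectoryEntryLength + Encoding.UTF8.GetByteCount(f.Data) + 1);
        return LeaderLength + dataBytes + 1 /* end of directory */ + 1 /* end of record */;
    }
}
```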

While I generally take a hands-off approach to modifying MARC data during translation, this seems to be a good compromise for dealing with what is now a rare situation, but one I predict will become all too common as more data is created in systems without MARC’s record limitations.

These changes to the translation engine will ship with the next MarcEdit update (scheduled for 1/23/2012), when I’ll post an announcement and include a small record set that demonstrates the new functionality. Hopefully folks will find these changes useful, especially as technical services departments find themselves having to deal with more and more non-MARC metadata.

–TR

 Posted at 1:10 am
Jan 19, 2012

While I was at the PASIG conference this last weekend, a number of people talked about the death of the hard drive, at least for our personal portable devices.  The popularity of ultrabooks and small-form-factor notebooks came up many times, with the argument that personal computing will move more and more away from local copies toward cloud-based storage because:

  • Solid-state drives provide the instant-on performance that people want in their portable devices
  • The expense of solid-state drives and their currently small capacities will eventually push storage off the local device and into the cloud.

While I certainly agree that this will likely continue to be a trend (look at how tools like Dropbox are changing the way researchers store and share their data), I think many of the folks at PASIG may be too quick to overlook some of the very cool developments in SSD technology that allow for micro form factors, letting ultraportables support both an SSD and a traditional spinning drive.  Of course, I’m talking about the current work being done with mSATA drives. 

Currently, there are very few mainstream systems that support mSATA technology, which is unfortunate because these really are cool devices.  The two best are probably produced by Intel, which offers 40 GB and 80 GB flavors of its drive (http://ark.intel.com/products/56547/Intel-SSD-310-Series-(80GB-mSATA-3Gbs-34nm-MLC)).  When I was shopping for a replacement laptop this last month, I was looking specifically for a device that had both an SSD and a traditional drive.  My requirement that the system be compact and under 4 lbs made this a difficult search.  In doing my research, though, I stumbled upon the Intel mSATA drives. 

Now, SSDs are small to begin with, but the mSATA drives are downright tiny.  The image below, taken from a review of these devices, shows just how small.  In fact, when I ordered one, I had a hard time believing that they had really fit an 80 GB drive on a card not much bigger than a quarter.  Yet they did.


(Image linked from http://hothardware.com/Reviews/Intel-310-Series-80GB-SSD-Review/)

So how well does this work?  From my limited experience with it (about 2 weeks) – great.  Intel provides a set of disk tools that allow you to migrate your current partitions onto the SSD; however, I chose to do a fresh install.  Installing Windows and all my programs onto the SSD cost me ~35 GB.  With a little symlinking, I moved all the data directories to the traditional hard drive (500 GB), leaving the SSD for just the operating system and programs.  Then I tested.

When I first received the laptop, I did some startup and shutdown testing.  On a clean system, the laptop, running an i7 with 8 GB of RAM, took approximately 35 seconds for Windows 7 to finish its startup cycle.  Not bad, but not great.  Additionally, on a full charge, the system would run for ~3.7 hours on the battery (not good).  Running the Windows Experience Index tests gave the 500 GB, 7200 rpm drive a 6.2 (out of 7.9) performance score.

After installing the mSATA drive and making it the primary boot device, I gave the tests another whirl, and the difference was striking.  First, on the Windows Experience Index testing, there was a significant difference in rating.  With the SSD as the primary system disk, the tests gave the Intel 80 GB mSATA drive a score of 7.7 (out of 7.9) – a pretty high score.  So what does that mean in real life?  Well, let’s start with boot times.  From a cold boot, Windows 7 now takes approximately 5-7 seconds.  Closing the lid and opening it back up has essentially become instant on (for a while, I was wondering if the system was actually going to sleep when I closed the lid, because it was on as soon as I opened it).  And finally, battery life: under heavy use at the PASIG conference, I got nearly 8 hours on a single charge. 

While the move away from local disks may indeed happen in the near future, my recent laptop purchase showed me that, for those who want a very high performance system in a small form factor, it is possible to have the best of both worlds by using these emerging SSD technologies to build very high performance (and relatively low-cost) portable systems.

–TR

 Posted at 1:26 am
Jan 8, 2012

One of my boys has really been on a bit of a writing jag lately.  I’m guessing it’s because I write a lot for my job (and their mom writes a lot as well) and they have a lot of stories they’d like to tell.  Well, my youngest asked me to set up the computer so that he could type a story there.  So I fired up Word and let him have at it.  About an hour later, he came over and asked me how he could save his work.  Without thinking, I told him to click on the little disk picture in the upper left-hand corner.  At which point he looked at me like I had two heads, because he’s never seen a disk.

My first reaction was to chuckle a little and then show him where the button was.  But as I sit here looking at MarcEdit, and thinking about the icon palette I use in my own applications, it got me thinking.  So I started opening applications on my computer and looking at how they represent the “save” action.  Nearly universally, the save icon is still represented by some iteration of a disk (on my Windows and Linux systems).  (I should note that MarcEdit uses a folder with an arrow pointing downward for save.)  Talking to my boys (hardly a representative sample), I started asking about other icons on the palette, and one of the things that strikes me is that many of the graphical representations we still use for actions are relics of a bygone technical past.  My oldest son knew which icon was used for saving, but the image was meaningless to him.  He’d learned how to save because someone had taught him which image to push.  In fact, I think the idea of having “disks” sounded somewhat cool, until I told him that the disks were about the size of his little brother’s hand and didn’t have enough space to hold his PowerPoint.  At that point, we all agreed that his 4 GB jump drive was much better.   But to get back to my point, it was interesting that the save icon wasn’t something he knew intuitively, and it certainly isn’t something my youngest son knew intuitively.

This does bring up a question.  Technology is changing, and if we are developing interfaces so that they can be used intuitively by our users, which users become our baseline when doing interface development?  Let’s use the save button as an example.  If we created a save button today, it would likely look much different, but would today’s users intuitively understand what it was?  My guess is no (actually, it’s not a guess: when I changed the save button in MarcEdit, it was quite confusing for a number of people).  So an image of a nearly extinct technology remains the mainstream representation of how we “save” content within most applications.

But this isn’t just an application design question.  Libraries perpetuate these kinds of relic technologies as well.  A great example, in my mind, is the ILS.  While this isn’t universally true, the ILS is essentially an electronic representation of a card catalog.  Users come to the library and generally can find things in the ILS, but really, how many would consider their ILS intuitive?  How many libraries have classes, online tutorials, etc., to teach users how to find things efficiently in their ILS?  I’m sure most still do.  We have these classes because we have to.  Our ILSs simply don’t work logically when considered against current search technology.   And why should they?  They were built to be electronic card catalogs, not the search engines (or even product engines like Amazon) that people use today.  Those tools work by building transparent connections to other related options.  ILSs work by building connections through subject headings, or, as I heard someone in the library say, by using “what the hell’s a subject heading.”  :)

Of course, changing things is easier said than done.  We have large groups of legacy users who would be just as confused if today’s interfaces were actually created to be more intuitive (i.e., more representative of today’s technology).  [I think this is the same thing we say about why we still use MARC.]  So we have two pain points: new users who won’t understand today’s interfaces because they lack the institutional memory required to make sense of them, and legacy users who are currently writing and designing the interfaces and tools in use today.  Guess who wins. 

I’ve got to believe that there is a way around this tension between legacy users with institutional memory and new users looking for intuitive interfaces.  Do we develop multiple interfaces (classic and contemporary), or maybe do something different altogether?  I’m really not sure, but as I think about how I do my own coding and consider the projects I’m working on, this is one of those things I’m going to start thinking about.  We talk about taking bias out of the research process…well, this institutional memory represents its own type of bias that essentially poisons the design process.  I think one of my New Year’s resolutions will be working on ways to eliminate this type of bias as I consider my own interface design projects.

–TR

 Posted at 9:40 pm
Jan 8, 2012

Over the past few years, I’ve owned a number of different smartphones.  I’ve had an iPhone, an Android (the first one, in fact), and now a Windows Phone 7 device.  I have to admit, they are all great, especially when I compare them to my old BlackBerry.  What you can do with each of these devices is quite cool.  One of my favorite aspects of these phones is how easy it is to hack on them.  When I had my iPhone, I spent some time learning Objective-C and writing a few simple iOS apps.  I did the same thing with Android and Java.  However, now that I have a Windows Phone, I find that I have many more opportunities to write applications for it because there’s no learning curve…I already use both Silverlight and C# in some personal coding projects. 

So why do I bring this up?  Well, one of the things I’ve been thinking about is how these little microcomputers that fit in our pockets can potentially be used in libraries.  There are some obvious uses (making our catalogs more mobile, using geolocation within a building to help users navigate to a book, etc.), but what I’m more interested in is how we can make staff life a little easier with these devices.  Looking around our library, one area where I can definitely see these kinds of devices making a big impact is cataloging and technical services – more specifically, eliminating the need to perform recon within cataloging and technical services. 

Traveling around a few libraries in my immediate area, one thing I’ve found is that many libraries still have small card catalogs.  These are often for materials that have yet to be reconned: older journal titles and monographs.  Many libraries also have large gift shelves, and areas in the stacks themselves, that remain uncataloged.  It would be nice if we could take these microcomputers, fully equipped with digital cameras, and photograph ourselves out of this problem.  The difficulties, of course, relate to OCR and the conversion of that data into MARC itself…or maybe they aren’t difficulties at all.

I’ve been doing a little bit of playing around (well, more than a little bit), and here’s what I’ve found.  It’s easy to do OCR on the web, for free.  Folks may not realize it, but the Google Docs API provides a free OCR service.  So does Microsoft.  Working with the camera on a smartphone, it’s easy to send a snapshot of a book title page or catalog card to one of these OCR services and return the results to the phone.  Using MarcEdit (because it’s written in C#, MarcEdit can be compiled to run on a Windows Phone – I’ve done it), I’m able to use the MARCEngine to take that OCR data, retrieve data from Amazon, the Library of Congress, or another library catalog, massage the data, and upload it to my catalog – all from my phone.  Pretty cool stuff. 
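To give a sense of the flow without reproducing the actual app code, here is a rough sketch of the phone-side pipeline.  The IOcrService and IRecordSource interfaces are hypothetical stand-ins for whichever OCR endpoint and search target (OCLC, LC, Amazon) get plugged in; only the general shape of the pipeline reflects what the proof of concept does.

```csharp
using System;
using System.Text.RegularExpressions;

// Hypothetical wrappers: the real app talks to an external OCR service and
// to its search targets through MarcEdit's MARCEngine; neither API is
// reproduced here.
public interface IOcrService { string Recognize(byte[] imageBytes); }
public interface IRecordSource { string FindMarcRecord(string query); }

public class CardCapturePipeline
{
    readonly IOcrService _ocr;
    readonly IRecordSource _source;

    public CardCapturePipeline(IOcrService ocr, IRecordSource source)
    {
        _ocr = ocr;
        _source = source;
    }

    // Photo in, candidate MARC record out (or null if nothing matched).
    public string Process(byte[] photo)
    {
        // 1. Ship the snapshot off to the OCR service.
        string text = _ocr.Recognize(photo);

        // 2. Pull out the strongest identifier we can find; fall back to the
        //    raw text as a keyword query when no ISBN is present.
        Match isbn = Regex.Match(text, @"97[89][\d\- ]{10,16}|\b\d{9}[\dXx]\b");
        string query = isbn.Success
            ? isbn.Value.Replace("-", "").Replace(" ", "")
            : text;

        // 3. Ask the search target for a record to massage and upload.
        return _source.FindMarcRecord(query);
    }
}
```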

Right now, this work all remains in the research stages…it’s rough.  The UI is sad, and the parsing of the OCR’d data could be much better.  But the interesting thing is that it does work.  Does it have real applicability in the library world?  Maybe, maybe not.  I’m just not sure whether enough material still needing recon exists for this type of application to be necessary.  But what this type of experimentation does show is that libraries probably should be looking at these little microcomputers as more than consumer devices (i.e., more than a way to change how our users interact with our services) and consider how these devices may change the way libraries perform their own work. 

BTW, if folks are interested in this recon project – my intention is to talk about it at C4L this Feb during a lightning talk.  Ideally, I’ll have it cleaned up enough to show it off, and maybe, if there is interest, talk to some folks about how they can run something like this on their own Windows Phone 7 device. 

–TR

 Posted at 5:29 pm