Tracking Santa

 Family, Santa, snow
Dec 24, 2008
 

This has been a tradition in our house for the past four years: each Christmas Eve, the boys and I use NORAD to “track” Santa.  It’s a neat, fun little thing that NORAD does, and the kids really enjoy it.  If you’ve never seen it, check it out: http://www.noradsanta.org/en/home.html

 Posted at 11:05 am
Dec 24, 2008
 

While LibraryFind has always supported SOAP, LibraryFind 0.9 will finish the process of adding JSON APIs for all of the SOAP-based APIs.  This process started in LibraryFind 0.8.5.3 and continued in release 0.8.5.8, but it will be completed in 0.9.  I’m working up documentation for the JSON calls (which basically emulate the SOAP calls for simplicity), so I’ll post the link once it’s finished.
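
To give a flavor of what this will look like, here is a minimal sketch of calling one of the JSON APIs from Ruby.  The endpoint path, parameter names, and response fields below are hypothetical placeholders; the real method names mirror the existing SOAP calls and will be spelled out in the documentation.

    require 'rubygems'
    require 'net/http'
    require 'cgi'
    require 'json'

    # Hypothetical endpoint and parameters -- the real names mirror the SOAP API
    # and will be listed in the LibraryFind 0.9 documentation.
    host = "libraryfind.example.edu"
    path = "/api/search.json?query=#{CGI.escape('open source metadata')}&max=10"

    response = Net::HTTP.get_response(host, path)
    results  = JSON.parse(response.body)

    # Assumes the service returns an array of record hashes with a "title" key.
    results.each { |record| puts record["title"] }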

 

–TR

 Posted at 2:09 am
Dec 24, 2008
 

One of the tenets behind LibraryFind has always been that LibraryFind would only query materials that provide some kind of standard search protocol.  However, there are many sites that provide API access, but not standard API access like OpenSearch.  For example, a user wanting to query Yahoo or Flickr (where many libraries are starting to build collections) would previously have been unable to use LibraryFind to query these resources.  That will change with LibraryFind 0.9, which introduces a custom connector framework that will allow users (including OSU) to develop custom connectors within LibraryFind to resources that expose stable, formalized APIs.

Configuring these new resources is easy.  In the collection administration screen (note, this might change slightly), a user simply sets the connection type to connector and then names the connector in the Host field.  Beyond that, the user doesn’t need to define any other elements (though they can).

Admin Interface Example:

[Screenshot: collection administration screen]

Once set, the application will utilize the connector like any other standard search class.  So, for example, I created a test group and queried my name using our IR, Flickr, and Yahoo.  Using these elements, I retrieved the following:

[Screenshot: combined search results]

Here you can see an integration of Internet resources (from Yahoo), images (from Flickr), and articles (from our IR).  Bringing Internet resources into the results complicates relevancy ranking (in part because there is so little metadata about the items being retrieved), but that’s something I’ll worry about as we start to work with these items within the result set.

So how will this work?  Well, I thought about going the plugin route (since Rails already provides a good model), but instead decided that I wanted to keep these custom search classes near the predefined search classes.  So, in the environment.rb file, I defined an additional load_path under models (custom_connectors).  Within this directory, users can drop their homemade custom connectors for use by the application.
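
As a rough sketch (the exact line in the shipped environment.rb may differ slightly), adding that load path in a Rails 2.x application looks something like this:

    # config/environment.rb
    Rails::Initializer.run do |config|
      # ... existing LibraryFind configuration ...

      # Pick up any connector classes dropped into app/models/custom_connectors
      config.load_paths += %W( #{RAILS_ROOT}/app/models/custom_connectors )
    end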

The connectors themselves must use the same format as the general search connector.  Within the directory, I’ll include an example connector, but in a nutshell, the code generally looks like the following:

    # LibraryFind - Quality find done better.
    # Copyright (C) 2007 Oregon State University
    #
    # This program is free software; you can redistribute it and/or modify it under
    # the terms of the GNU General Public License as published by the Free Software
    # Foundation; either version 2 of the License, or (at your option) any later
    # version.
    #
    # This program is distributed in the hope that it will be useful, but WITHOUT
    # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
    # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
    #
    # You should have received a copy of the GNU General Public License along with
    # this program; if not, write to the Free Software Foundation, Inc., 59 Temple
    # Place, Suite 330, Boston, MA 02111-1307 USA
    #
    # Questions or comments on this program may be addressed to:
    #
    # LibraryFind
    # 121 The Valley Library
    # Corvallis OR 97331-4501
    #
    # http://libraryfind.org

    require 'rubygems'

    class ExampleSearchClass < ActionController::Base
      @cObject = nil
      @pkeyword = ""
      @feed_id = 0
      @search_id = 0

      # Entry point called by the application.  The method name and parameter list
      # shown here are illustrative -- they follow the pattern used by the bundled
      # search classes (collection object, query terms, maximum records, cache id,
      # action type); match the signature used by the example connectors shipped
      # in custom_connectors/.
      def self.search(_collect, _qstring, _max, _last_id, _action_type)
        logger.debug("collection entered")
        @cObject = _collect
        @pkeyword = _qstring.join(" ")
        @feed_id = _collect.id
        @search_id = _last_id
        my_id = _last_id    # id handed back to the caller along with the hit count
        _lrecord = nil
        _lprint = false

        begin
          # Perform the search against the remote API.
          results = your_search(@pkeyword, _max.to_i)
        rescue Exception => bang
          if _action_type != nil
            _lxml = ""
            logger.debug("ID: " + _last_id.to_s)
            return my_id, 0
          else
            return nil
          end
        end

        if results != nil
          begin
            # Replace parse_yahoo with your own parsing routine (this example was
            # adapted from the Yahoo connector).
            _lrecord = parse_yahoo(results)
          rescue Exception => bang
            if _action_type != nil
              _lxml = ""
              return my_id, 0
            else
              return nil
            end
          end

          _lxml = CachedSearch.build_cache_xml(_lrecord)

          _lprint = true if _lxml != nil
          _lxml = "" if _lxml == nil

          #============================================
          # Add this info into the cache database
          #============================================
          if _last_id.nil?
            # FIXME: raise an error
            logger.debug("Error: _last_id should not be nil")
          else
            status = LIBRARYFIND_CACHE_OK
            status = LIBRARYFIND_CACHE_EMPTY if _lprint != true
            # (the cache write itself is handled the same way as in the bundled
            # search classes and is omitted from this example)
          end
        else
          _lxml = ""
        end

        if _action_type != nil
          if _lrecord != nil
            return my_id, _lrecord.length
          else
            return my_id, 0
          end
        else
          return _lrecord
        end
      end

      def self.strip_escaped_html(str, allow = [''])
        str = str.gsub("&#38;lt;", "<")
        str = str.gsub("&#38;gt;", ">")
        str = str.gsub("&lt;", "<")
        str = str.gsub("&gt;", ">")
        str = str.strip
        allow_arr = allow.join('|') << '|\/'
        str = str.gsub(/<(\/|\s)*[^(#{allow_arr})][^>]*>/, ' ')
        str = str.gsub("<", "&lt;")
        str = str.gsub(">", "&gt;")
        return str
      end

      def self.your_search(query, max)
        # Replace yourquery with a call to the remote API; it should return the
        # raw response (XML, JSON, etc.) to be parsed below.
        xml = yourquery(query, max)

        _record = []
        _x = 0
        _start_time = Time.now()

        # Parse your data: loop through your results and populate a Record for
        # each hit.  The _your* and _hit_count values below are placeholders for
        # data taken from the response.
        nodes = []    # TODO: extract the result nodes from the parsed response
        nodes.each { |item|
          begin
            record = Record.new()
            record.vendor_name = @cObject.alt_name
            record.ptitle = normalize(_yourtitle)
            record.title = normalize(_yourtitle)
            record.atitle = ""
            record.issn = ""
            record.isbn = ""
            record.abstract = normalize(_yourdescription)
            record.date = ""
            record.author = normalize(_yourauthors)
            record.link = ""
            record.doi = ""
            record.openurl = ""
            record.direct_url = normalize(_yourlink)
            record.static_url = ""
            record.subject = normalize(_yoursubjects)
            record.publisher = ""
            record.callnum = ""
            record.vendor_url = normalize(@cObject.vendor_url)
            record.material_type = normalize(@cObject.mat_type)
            record.volume = ""
            record.issue = ""
            record.page = ""
            record.number = ""
            record.start = _start_time.to_f
            record.end = Time.now().to_f
            record.hits = _hit_count
            _record[_x] = record
            _x = _x + 1
          rescue Exception => bang
            logger.debug(bang)
            next
          end
        }
        return _record
      end

      def self.normalize(_string)
        return _string.gsub(/\W+$/, "") if _string != nil
        return ""
      end
    end

 

At a minimum, the custom_connectors directory will include yahoo_search_class.rb and flickr_search_class.rb, which provide sample code for users wanting to see how a custom connector may be created.

Anyway, as I continue marching toward the release of the 0.9 code base, I’ll keep posting about some of the new functionality folks should expect to see in the new version.

 

–TR

 Posted at 2:00 am

Family Pictures

 Family
Dec 19, 2008
 

[Photo: IMG_0135]

 

So, I’ve posted a couple of pictures on my Facebook page for folks wanting to see what the family is up to.  For those interested, you can find them there.

 Posted at 8:56 am
Dec 19, 2008
 

While the idea of having snow on Christmas still sounds like a good idea, I have to admit I’m ready to start seeing the white stuff melt away.  Unlike many other states (or places that get snow with some regularity), Oregon simply doesn’t, so a little bit of snow turns normal, decent drivers into morons.  Yesterday I was lucky to make it home in one piece after someone tried to park their car in my back seat.  Of course, it’s not just the drivers.  Because we get snow so rarely in the Valley, the State and its cities simply are not prepared to remove it when it does come.  You won’t see snow plows clearing roads or streets being de-iced (unless you live in Portland, and even that is a crap shoot).  So, when the snow shows up, it stays.  Fortunately, I have a four-wheel drive with chains, so I can get around most anywhere if I need to, though that will likely be tested today since I need to make a trip over the mountains (otherwise, I have two little boys who will be sadly missing time with the grandparents and opening presents).  Anyway, the point.  Yesterday we’d gotten some rain (freezing rain), but for the most part all the snow had disappeared and it looked like life was returning to normal.  Today, I walk outside and this is what I find:

[Photo: IMG_0161]

And for me, the big downer to all this has been a real lack of cycling time.  For the most part, the snow has been too deep (or icy) for me to spend time out on the highway biking — but at least the boys and I will get to have another snowball fight this morning. :)

 

–TR

 Posted at 8:32 am
Dec 18, 2008
 

So far so good.  A number of people have updated without snags.  However (there had to be one, right?), if you are a new MarcEdit 5.x user (installed today) or someone using the MarcEngine5.Query object, I’ve pushed a new update that will solve a missing file and object issue.  If you updated and saw no errors, and you don’t use the MarcEngine5.Query object (i.e., you are not running a VBScript or anything else that relies on it), you are good.  But I was so worried about giving existing users a smooth transition that I accidentally left unmarked a file that is required for new users.  You’d know if the error affected you because you would have seen it the first time you ran MarcEdit.

Both the Setup and Runtimes files have been uploaded.

As always, if you run into any problems, just give me a holler.

–TR

 Posted at 11:37 pm
Dec 18, 2008
 

One thing I did forget to highlight in the update [in reference to: http://blog.reeset.net/archives/597]: a number of changes were made to the Delimited Text Translator, the most notable being a fix that allows data saved in UTF-16 or UTF-32 to be converted to UTF-8 so that it can be used to generate MARC21 data.
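
MarcEdit itself is a .NET application, so the snippet below is only a language-neutral illustration of the kind of re-encoding involved, not MarcEdit’s actual code: a short Ruby sketch that converts a UTF-16 delimited file to UTF-8 with Iconv before any further processing.  The file names are hypothetical.

    require 'iconv'

    # Illustration only: re-encode a tab-delimited file saved as UTF-16 into UTF-8
    # so that downstream MARC21 generation only has to deal with one encoding.
    utf16_data = File.open("records_utf16.txt", "rb") { |f| f.read }
    utf8_data  = Iconv.conv("UTF-8", "UTF-16", utf16_data)
    File.open("records_utf8.txt", "w") { |f| f.write(utf8_data) }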

 

–TR

 Posted at 11:41 am
Dec 18, 2008
 

It’s taken a little while to get this update finished, but I’m finally ready to make it available to the general public.  So what’s in the update?  Actually quite a bit.  I won’t go into a long list, but I will say that there were some bug fixes (mostly related to UI or convenience functions), a couple of new API functions (the documentation has been updated for these), and so on.

The big change, however, is that I’ve spent a good deal of time reworking MarcEdit so that it will work better on Vista or group-managed systems.  The problem people on these systems were having was that MarcEdit traditionally saved all of its data (config, temp) into the Program Directory.  Since group-managed systems (and Vista by default) restrict access to this folder, I’ve reworked MarcEdit so that all of the config and user-defined folders/files have been moved to the roaming Application Data directory (on 2000/XP this is c:\documents and settings\[username]\Application Data\marcedit; on Vista, c:\users\[username]\AppData\Roaming\marcedit).  Really, the only directory not being moved is the xslt directory, and that’s because this information needs to be shared by all users working with MarcEdit.

The upside of this change is that MarcEdit should be easier to manage for people on group-managed systems.  Also, MarcEdit is now profile-aware, so multiple users on the same system can each have their own MarcEdit profile.

So why did the update take so long?  Well, it’s the user base.  If there weren’t a large user community, this wouldn’t be a big deal, as I wouldn’t need to worry about migrating existing user data.  However, since this process really needs to be transparent, a lot of time was spent making sure that the migration of existing data happens smoothly, so hopefully there won’t be any hiccups (or at least, any that do appear will be few and far between).

What’s next:

If you have an idea for MarcEdit, give me a holler.  I always like to hear users’ suggestions.  In the meantime, here’s what I’m working on for my next update.

  1. Support for ‡biblios.net, an open data repository of MARC data.  I’m a firm believer that the metadata we create within the library community should be made as freely available as possible, without usage restrictions, and to some degree that’s what the ‡biblios.net project is about.  Very likely, I’ll add support for this service as a plug-in to MarcEdit.  This will allow users to search and download records and, more importantly, provide a way to automate batch upload of bibliographic metadata to the ‡biblios.net network.  For libraries wanting to make sure that their metadata exists in a venue that will always be accessible without restriction, the batch upload tool should let them fold this into their existing workflows.
  2. I’d like to add the same support for Open Library, though I’m not sure whether they currently provide automated ingestion mechanisms, so we’ll see.
  3. Z39.50: I’ve been working on this for a little while and will make available a version of the Z39.50 client that allows users to query multiple resources at once.  At this point, I’m not sure whether I’ll limit the number of targets to 3-5 or leave it unlimited; I guess performance testing will help me make that decision.
  4. Enhanced XML metadata editing.  I’m thinking about adding an XML editor to MarcEdit to give users who work with XML data and need to do light editing a way of doing that quickly.

 

Download the new version of MarcEdit at: http://oregonstate.edu/~reeset/marcedit/software/development/MarcEdit51_Setup.exe
Download the redistributables (developer’s build) at: http://oregonstate.edu/~reeset/marcedit/software/development/MarcEdit51_Runtimes.exe

–TR

 Posted at 11:30 am