Oct 29, 2008
 

So, I got off easy this year. Unlike past years, when the boys have asked for pumpkin themes that took me hours to carve, this year they picked fairly simple images (and patterns). So here they are:

Kenny’s Pumpkin:

[Photos: 100_1026, 100_1027]

 

Nathan’s Pumpkin (a skull with spider webs — he picked it):

[Photos: 100_1019, 100_1020]

 

–TR

Posted at 11:03 pm
Oct 19, 2008
 

I’m running a little bit behind here, but the 14th annual NWIUG conference came to an end on Friday, and a couple of interesting tidbits came out of it. Probably the most welcome one came during Betsy Graham’s keynote early the first morning, when she detailed the changes coming in Encore 3.0. Encore, for the uninitiated, is Innovative Interfaces’ web 2.0 discovery product. As of this point, Encore 3.0 is scheduled to include an API that will allow users to query directly against the Encore platform. This is one of those things that I’ve been asking III for over the last 7 years (pre- and post-Encore), and I’m glad to see them making this move. It’s certainly welcome. Of course, API access to their system will only come as part of Encore — so if you run a III system, this announcement is only helpful if you decide to utilize their Encore software.
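(For the curious: an API like this is appealing because it would let libraries script searches against their discovery layer instead of screen-scraping it. Since the Encore API hasn’t shipped or been documented yet, the sketch below is purely hypothetical; the endpoint, parameters, and response format are invented for illustration only.)

```python
# Purely hypothetical sketch: the Encore API has not shipped or been
# documented, so this endpoint and its parameters are invented to show
# what direct query access against a discovery platform could enable.
import json
import urllib.parse
import urllib.request

def encore_search(base_url, term, page=1):
    """Query an imagined Encore search endpoint and return parsed JSON."""
    params = urllib.parse.urlencode({'q': term, 'page': page, 'fmt': 'json'})
    with urllib.request.urlopen(f'{base_url}/api/search?{params}') as resp:
        return json.load(resp)

# e.g.: hits = encore_search('https://encore.example.edu', 'digital preservation')
```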

A couple of other notes. I gave a keynote as well, discussing moving the ILS into the network space and what that might look like within the Pacific Northwest (since we have some established partnerships that might make this easier). The more I’ve worked with our ILS, the more I’m convinced that there really isn’t a compelling need any longer for a local ILS system; I’m more interested in seeing libraries consolidate systems while we wait for someone to develop a networked alternative. In that vein, I’m curious to know what III is doing to position itself to survive within this space. Their development model is still very client-focused — so I would be curious to see how they view their own future here.

–TR

Posted at 2:58 pm
Oct 19, 2008
 

A friend of mine gave this to me the other day — it’s an old picture of Kenny, probably from when he was 14 or 15 months old, so we would have still been in Eugene at the time. To see him now, you’d have to wonder where all that blonde hair went. :)

 

[Photo: ScannedImage]

 

–TR

Posted at 2:39 pm
Oct 13, 2008
 

After returning home from Vermont, I had to make good on a promise that I made to my boys back in June — so we packed up and headed down to Disneyland. For Nathan and Kenny, this was their first time at Disneyland. In fact, for Nathan, it was his first time on an airplane. And what I found was that Nathan likes to fly. He thought it was all a big race, and that when we took off, we were racing all the other planes at the airport down to Disneyland. Pretty funny.

Oct. 11th

This was our first day to see the Mouse and everyone was excited, which you might be able to see from this picture.

[Photo: 100_0933]

 

The first thing that we did in Disneyland was head to Toon Town. This is where Mickey “lives”, and it has lots of little places for the boys to play. There we saw and did lots of fun stuff, like riding the Gadget roller coaster and Roger Rabbit’s spinning cab ride, and looking at all the little houses. The boys also took some time to play on the little cars.

[Photos: 100_0935, 100_0941]

 

We were actually just about ready to leave Toon Town when someone dropped by to say hi…Pluto!  When Nathan saw Pluto he decided that he wanted to give him a hug.

[Photo: 100_0944]

and then Kenny…

[Photo: 100_0946]

followed quickly by Goofy and Minnie

[Photos: 100_0947, 100_0949]

The rest of the day was spent riding rides. We rode lots of kids’ rides (the tea cups, story time, the Mad Hatter, etc.), some not-so-kids’ rides (Thunder Mountain, Splash Mountain) and those in between (Star Tours). The funny thing is that Nathan is my little daredevil. On Thunder Mountain, he was sitting beside me yelling faster, faster while the train ran down the mountain, and he would have ridden it again and again had the line not been so long. Pretty funny. And the Buzz Lightyear ride, that was a favorite because you got to have ray guns and shoot at the bad guys. We kept trying to see who could score the most points. I liked the Buzz Lightyear ride as well, at least up to the point where I broke my digital camera. I’d set it down on the floor, and it got pinched against the console and shattered the LCD screen. Dandy. Fortunately, it still takes pictures — I just can’t see any of them, which means I’ll likely be taking lots of duplicate pictures over the next few days to make sure that the pictures I’m taking are getting saved correctly.

Anyway — that’s pretty much day one.  I have a few more pictures (prior to shattering the screen), so here are a few from the rest of the day.

[Photos: 100_0943, 100_0953, 100_0963, 100_0964, 100_0965, 100_0967]

 

Oct. 12

On the second day, we thought we would try a few of the rides in Disneyland that we had missed, and then go over to California Adventure to see what that park had to offer. In general, we had been holding off on the California park partly because Nathan would be too small for a number of the rides I knew he would want to ride. And he was so proud of being a big boy that I didn’t want to disappoint him.

So on to the day. We went over to Disneyland, and the first ride that we tried to get on was Nemo, but I swear, the lines there are just crazy — and I can’t figure out why. Since we didn’t want to stand in line for an hour for Nemo, we went looking for other fun. To start, we went on the Buzz Lightyear ride again, followed by a ride on Star Tours. Kenny really liked Star Tours, so that one was for him. Then, while walking over towards Splash Mountain to get a FastPass, we saw that Space Mountain was pretty much wide open. Not surprisingly, Nathan was game for a ride on the roller coaster, Kenny was a little unsure, and Alyce was ready to give it a go. After the ride, Nathan was ready to go again, Alyce was walking around on wobbly legs, and Kenny told me that he never wanted to ride Space Mountain again. :) But I was impressed that everyone was willing to ride it with me — because I’d long since given up hope that I’d get anyone to ride Space Mountain with me. My little thrill seeker and I rode Splash Mountain one more time, and then we grabbed some lunch before going to California Adventure.

At California Adventure, we did a lot of rides just for Nathan — partly because Nathan got cheated out of one of the rides. They have this ride called Grizzly River Run, which is pretty much a ride where you just get wet. Nathan really wanted to ride it, but you have to be 42 inches tall, and he was just too short. He was really disappointed, but he was a champ about it. Because of that, we rode a lot of other kids’ rides and then stayed for the Pixar parade. After the parade, we went back to Disneyland to ride Thunder Mountain one more time (I’d promised Nathan), and then we headed back to the hotel. Overall, I think that everyone had a great time, so it was definitely worth the trip. Anyway, here are a bunch of pictures from our second day.

 

[Photo: 100_0979]
Kenny after riding Grizzly River Run

[Photos: 100_0983, 100_0987, 100_0990, 100_0995, 100_1008, 100_1011, 100_1012]

 

–TR

Posted at 11:43 pm
Oct 10, 2008
 

As all good things must come to an end, so too has the ReadEx Digital Institute closed for another year. As always, this symposium offers an opportunity to connect with colleagues from various institutions and professional backgrounds to discuss some of the issues that we wrestle with as we develop and manage our digital collections. There were a number of sessions over the two days, and below are some brief (and sometimes not so brief) thoughts on each (in addition to some of the fun stuff I did in between).

Day 1:

  • David Seaman, Dartmouth College
    From Ponderous Perfection to the Perpetual Beta: Library Services in an Age of Superabundant Information

    Aside from the fact that I don’t like this image or idea of perpetual beta (I’ve explained why here: http://blog.reeset.net/archives/566), David talked about a process that Dartmouth College went through this year to determine how the library could become much more nimble within our current information landscape. Like many organizations, the administration at Dartmouth was feeling like it was falling farther and farther behind as it struggled to build systems and digital collections to meet the needs of current users. As David put it, the results of the process really confirmed something they already knew: the staff at Dartmouth would like to take more chances and move out a little closer to the bleeding edge, but they felt they needed a license from the administration, and from their patrons, to take those chances. And David hopes that this license is what comes out of the year-long assessment: that staff at his institution can move past the paralysis of perfection and start to build patron services with a very iterative approach, one that allows the library to deliver new services and improvements to patrons very quickly.

    From David’s talk, I asked a question that I would later answer in my own talk: what is the library community’s big win — the thing that allows the community to cover our sins as we start to put out new services that might not be perfect? A number of technical commentators discuss this concept when considering the dilution of “beta” development at Google and Microsoft (the two really are quite similar). Google has its search/advertising functionality, Microsoft has Windows and Office — but what’s the library community’s? I’ll let you consider this for yourself and tell you what I believe it is later below.

  • Meg Bellinger, Yale University
    The Collections Collaborative:  Putting content into the flow

    Meg’s talk primarily centered on Yale and some new directions they are taking to continue growing their digital presence and brand. A good deal of this work will be overseen by Meg, who discussed a new department that has been created to support digital content.

  • Paul Duguid, University of California, Berkeley
    The World according to Grep:  Seeing text through the search box

    I’ll admit, this was a great talk, in which Paul Duguid, a faculty member at the iSchool at Berkeley, discussed the ways in which the search box shapes our world view. His talk touched on a number of topics, including Google, Google Books, iTunes, etc.

  • Steven Daniel, ReadEx
    The Serial Set goes to the movies:  Movie Screenplays and the 1912 Senate Titanic Hearings

    I’ve had the opportunity to see Steven talk a number of times, and I always enjoy his musings. Essentially, Steven is an expert when it comes to finding information within the Serial Set, and each year he mines it and comes up with topics to discuss at this symposium. This year, he discussed the Senate hearings on the Titanic. These hearing records exist within the Serial Set, and apparently, over time, as Hollywood has made movies about the Titanic, filmmakers have used the descriptions, accounts, etc. as the basis for their films. It was interesting: many of these movies take a different view of how events played out, but the descriptions, dialog, etc. vary little during the key moments of the tragedy.

 

Day 2:

  • Henry Snyder, University of California, Riverside
    Libraries and digitization:  forced marriage, marriage of convenience or love match?

    Henry is someone who describes himself as semi-retired, but I can only hope that at his age, I’ll still be as deeply informed and involved as he is. Henry has spoken at the Digital Institute before, and this year he provided a thought-provoking talk that spanned the history of digitization in libraries — much of which he has seen and participated in during his lifetime.

  • Ray Siemens, University of Victoria
    A digital humanities approach to understanding the electric book

    This year was the year of the instructor. Ray was the third teaching faculty member asked to speak at the symposium, and he spoke specifically about the rise of the electric book and how it impacts the humanities researcher. In some sense, he was the perfect follow-up to Henry Snyder’s talk, as he picked up where Henry left off to discuss research and study currently ongoing in Canada related to digital humanities.

  • Grant Barrett, The Double-Tongued Dictionary
    Research Techniques in Digital Text:  Beyond “nifty” and on to “useful”

    Grant’s talk was unique in that it offered the perspective of a user of our digital resources. Grant is an independent researcher who uses the digital resources that universities and others make available to inform his work.

  • Terry Reese, Oregon State University
    [Title not important]

    The title of my talk really wasn’t that important because, well, it was the title of my talk prior to my rewriting it the night before. After the first day of talks, I felt a real need to continue the conversation that David started and answer my earlier question about the library community’s big win and what it means for the digital library community. Essentially, my talk was a little scattershot, I thought, with the first half focusing on some analysis that I’ve been doing over the past 6 months regarding who actually uses our digital collections. I think that in general, we understand that the materials we digitize are now available globally, so some global users are accessing them, but I think most people believe that the largest users of their digital collections are their traditional user community. That’s not necessarily the case (or, I think, won’t be the case as time moves forward). Watching the access logs for our own institutional repository revealed a growing number of invisible users, users with no relationship to the University, our local community, or even the state of Oregon, making use of the content stored in the IR. For months, Oregon users made up only about a quarter or so of the total, with the next-largest user groups coming from outside the United States, from countries like Canada and India. Taken in total, usage from outside the United States was oftentimes greater than usage from inside it (granted, we are comparing a very large area to a very small one proportionately, but this still surprised me), and nearly 90% of users found the materials in our IR through either a search engine or a direct link from another resource (a rough sketch of this kind of log analysis follows below). The implication is that, very quickly, the primary users of our digital collections could cease to be those we traditionally develop for, our campus or local community. How does this affect the overall development of services, the collections selected for digitization, etc.? Interesting questions to ponder.

    And so I asked what this means for our development. Originally, I’d planned on talking about search. To a large degree, I think search is pretty easy. Search across resources, domains, etc. Discovery is hard. Actually helping users find the content they are looking for, and sorting through the noise for the patron, is the hard part. This is some of the work we are doing with LibraryFind, so I figured I had some things I could say. But after two days of talks, much of the conversation had settled on search and how it must be made better, and I think we ignored the less sexy but actually more important problem with digital collections, one that really has no answer at this point: preservation. Above, when I talked about the library’s big win, our “product” that forgives the multitude of sins — well, I believe that it’s content, or more precisely, the preservation of content. Because of libraries, content that would have been lost to time continues to exist throughout the world. Libraries preserve content, and we actively seek out content to acquire and preserve for future generations, because we think in library time, or perpetual preservation of materials. This, I think, is the big win. However, as I look at the current state of preservation services (both enterprise and academic), I think that we live in a time where more information could simply disappear than ever before. The digital collections we build have profound impacts on the world around us, but the irony is that they are created on media with a shelf life of only a couple of years. From the moment a document is born digital, it begins to rot. And while we have been able to design systems that allow for byte-level preservation (in many cases; see the fixity sketch below), we (and by we, I mean everyone, library or not) will continue to struggle with migrating binary content forward as technologies and standards change. Preservation is hard for a number of reasons, cost and resources being just two of the variables that we can only sort of control and plan for. As time goes forward, organizations will spend millions of dollars just to keep the disks spinning, paying for energy/cooling costs and replacement media alone. A big problem. But one that I would argue isn’t just an institutional one, but a cultural one as well. I think that digital preservation will have to take on many of the same characteristics as the preservation of our analog materials — i.e., be done cooperatively within the library community.
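Two quick sketches to ground the points above; neither is the actual code behind my numbers, and the paths, hostnames, and folder names in both are hypothetical. First, the referrer analysis: figuring out how users reach the IR is mostly a matter of walking the web server’s access logs and bucketing the referrer field.

```python
# Rough sketch of the referrer analysis described above. Not the actual
# scripts behind the IR numbers; the log path and hostname are hypothetical.
import re
from collections import Counter

SEARCH_ENGINES = re.compile(r'(google|yahoo|msn|live|ask)\.', re.I)
# Apache combined log format ends with: "referer" "user-agent"
REFERRER = re.compile(r'"(?P<ref>[^"]*)"\s+"[^"]*"$')

def classify(ref):
    """Bucket a referrer string into the categories discussed above."""
    if not ref or ref == '-':
        return 'direct link / bookmark'
    if SEARCH_ENGINES.search(ref):
        return 'search engine'
    if 'ir.example.edu' in ref:          # hypothetical IR hostname
        return 'internal navigation'
    return 'link from another resource'

counts = Counter()
with open('access.log') as log:          # hypothetical log file
    for line in log:
        m = REFERRER.search(line)
        if m:
            counts[classify(m.group('ref'))] += 1

total = sum(counts.values()) or 1
for bucket, n in counts.most_common():
    print(f'{bucket}: {n} ({100.0 * n / total:.1f}%)')
```

Second, the byte-level piece of preservation that systems already do reasonably well is fixity: record a checksum for every file, re-hash on a schedule, and flag anything that changed or disappeared. The format-migration problem argued above has no ten-line answer, but the fixity loop looks roughly like this:

```python
# Minimal fixity-audit sketch: the byte-level piece of preservation.
# Collection paths and the manifest layout are hypothetical.
import hashlib
import os

def file_digest(path, algo='sha256', chunk=1 << 20):
    """Hash a file in chunks so large objects need not fit in memory."""
    h = hashlib.new(algo)
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(chunk), b''):
            h.update(block)
    return h.hexdigest()

def build_manifest(root):
    """Record a checksum for every file under a collection directory."""
    manifest = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            manifest[os.path.relpath(path, root)] = file_digest(path)
    return manifest

def audit(root, manifest):
    """Re-hash everything and report files that rotted or disappeared."""
    for rel, expected in manifest.items():
        path = os.path.join(root, rel)
        if not os.path.exists(path):
            print('MISSING:', rel)
        elif file_digest(path) != expected:
            print('CHANGED:', rel)
```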

 

So folks don’t think this entire trip was all work, work, work — I can say that prior to the conference, I traveled up to Middlebury College to visit a good friend, do some hiking and drive around NY. I posted a few comments on that part of my trip at: http://blog.reeset.net/archives/564

 

–TR

Posted at 2:40 pm
Oct 8, 2008
 

An interesting question was brought up by David Seaman here at the Readex Digital Institute. David provided the opening keynote for the conference and, in it, discussed a process that Dartmouth College went through this year to consider how the library can become more nimble within our networked world. How the library can be given a license, if you will, to release services that aren’t perfect and are iterative in their development cycle. And while I can certainly see where David is coming from, I don’t think that the logical outcome is a service model that lives within perpetual beta.

I think that part of my objection to this phrase comes from my development background. As a developer, I’m a firm believer in a very iterative approach to development. Those that use MarcEdit can probably attest (sometimes to their dismay) that updates can come with varying frequency as the program adapts to changes within the metadata community. Since MarcEdit doesn’t follow a point-release cycle in the traditional sense, could it be a beta application? I guess that might depend on your definition of beta, as it certainly would seem to meet the criteria that Google applies to many of its services. However, there is a difference, I think. While I take an iterative approach to development, I also want to convey to users that I will support this resource into the future — and “beta” doesn’t say that. My personal opinion is that services that languish in beta are doing two things:

  1. telling users that these services are potentially not long-term, so they could be gone tomorrow
  2. giving users a weak apology for the problems that might exist in the software (and deflecting blame for those issues, because hey, it’s beta).

So, I don’t see this as the road to nimble development. Instead, when I heard David’s talk, I heard something else. I believe libraries fail to innovate because, as a group, we are insecure that our users will leave us if we fail. And when I hear talks like David’s, that’s what I hear his organization saying. They are asking the library administrators for permission to fail, because what is technological research but a string of failures in search of a solution? We learn through our failures, but I think that as a community, our fear of failing before our users can paralyze us. The fear of failing before an administration that does not support this type of research and discovery can paralyze innovation by smart, energetic people within an organization. A lot of people, I think, look at Google, their labs, their beta services, and say, yes, that is a model we should emulate. But I don’t think they fully understand that within this model, Google builds failure into its development plan as an acceptable outcome. If libraries want to emulate anything from the Googles or the Microsofts of the world, they should emulate that, and engender that type of discovery and research within their libraries.

I am actually very fortunate at Oregon State University in that I have a director and an administration that understand that for our library to meet the needs of our users now and in the future, we cannot be standing still. And while the path we take might not always be the correct one, the point is that we are always moving, always learning, and always refining our understanding of what our users want today and tomorrow. What I’d like to see for David’s library is that kind of environment — and I wish him luck in it.

–TR

Posted at 9:53 am
Oct 8, 2008
 

LibraryFind 0.8.5.2 has officially been tagged and posted to the libraryfind.org website. You can get the tarball here: http://libraryfind.org/release-0.8.5.2.tar.gz. This release has admittedly been a long time coming. What’s held it up? Well, primarily work that we were doing for people interested in using LibraryFind outside of the public view. Of course, some of the results of that work have been integrated into the 0.8.5.2 build, which I think makes the overall application better and more reliable.

The other thing that slowed the release of 0.8.5.2 was the parallel development of 0.9.0. The 0.9.0 branch represents a different direction in the UI — in that the UI will be much more responsive, allowing users to stop a query at any point and keep the results returned so far, see queries and search status, etc. Because 0.9.0 is, in many ways, a redesign of the UI framework, making it all work together took more time as well. Fortunately, at this point, the 0.9.0 test branch is also feature complete, so the turnaround between the 0.8.5.2 and 0.9.0 builds should be short.
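For the curious, the general pattern behind that kind of stop-anytime UI is to run each search target on its own thread and treat whatever has arrived as presentable at any moment. Here is a sketch of that pattern in Python; it is the generic idea, not LibraryFind’s actual Rails implementation, and the target names and timings are made up.

```python
# Generic sketch of a stoppable federated search: each target runs on its
# own thread, and partial results are usable the moment they arrive.
# This is not LibraryFind's actual implementation; names are hypothetical.
import queue
import threading
import time

def search_target(name, results, stop):
    """Stand-in for one search connector (Z39.50, OAI, XML gateway, ...)."""
    time.sleep(len(name) % 3 + 1)        # simulate variable response times
    if not stop.is_set():                # a stopped search discards late hits
        results.put((name, [f'hit 1 from {name}', f'hit 2 from {name}']))

def federated_search(targets, timeout=10.0):
    results, stop = queue.Queue(), threading.Event()
    for t in targets:
        threading.Thread(target=search_target, args=(t, results, stop),
                         daemon=True).start()
    collected = {}
    deadline = time.time() + timeout
    while len(collected) < len(targets) and time.time() < deadline:
        try:
            name, hits = results.get(timeout=0.25)
            collected[name] = hits       # partial results, usable right away
        except queue.Empty:
            pass                         # in a real UI, the user's "stop"
                                         # button would call stop.set() here
    stop.set()                           # tell any stragglers to give up
    return collected                     # whatever arrived before the stop

print(federated_search(['catalog', 'ejournals', 'ir']))
```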

–TR

Posted at 9:06 am
Oct 7, 2008
 

I’ve spent the past few quiet days in Vermont ahead of the ReadEx Digital Institute. I’ve had the opportunity to attend and speak at this little shindig for a number of years, and this year is no different. I actually like this get-together a lot, for a couple of reasons. First, you get a really diverse range of librarians and libraries attending. The audience can range from the very technical to the not so technical — but everyone shares an interest in digital libraries, preservation and digitization in general. Secondly, it gives me an opportunity to spend time in New England. What a wonderful place to visit. I’ve fallen in love a number of times with the hiking in this area, and like previous years, I took a couple of days prior to the Digital Institute to head to the back country and do a little bit of hiking.

This year, my hiking partner and I went for something a little more leisurely and hiked up something called the Roosters Coop (I think). Not very high (the book said 2700 ft, I think), but really scenic. On the climb up, we came across a waterfall off trail and decided to take a closer look:

[Photo: 100_0904]

and then decided to go off trail a little more and climb up the side of the waterfall (it’s quite a bit higher than the picture shows). It ended up being a little bit slicker than either of us would have thought, but the view from the top was worth it.

[Photos: 100_0914, 100_0911]

After fooling around on the waterfall and making it back down without falling, we started back up the hill. The climb opens up into two vistas. One overlooks Keene Valley (a small town), while the other looks out more towards Giant Mt. Throughout the climb we were rained on and hailed on, and at the vistas we enjoyed lots of sun. A great day. And while I didn’t get it in the pictures, I got to see snow on the mountains as well. Around 3500 ft, there was visible snow on the mountain tops — which made me wish we’d picked a little bigger mountain. :)

One thing I certainly love about this area is you can so tell that it’s fall.  The leaves change color in a way that just doesn’t happen in the Pacific NW.  Anyway, here are a few more pictures.

[Photos: 100_0921, 100_0929, 100_0925, 100_0917]

 

Of course, it’s easy to relax on a trip like this, when I know that Batman is at home holding down the fort. :)

[Photo: 100_0900]

–TR

Posted at 3:22 pm
Oct 7, 2008
 

I don’t usually post information about pending updates, but I want to prep users for an upcoming change. In all versions of MarcEdit, the application installs all application and configuration files by default to the Program Files/MarcEdit 5.0/ directory. To help make MarcEdit friendlier for Vista and group-managed XP systems, the next build is going to separate the program files from the configuration files. Why do this? Well, on Vista and group-managed XP systems, the Program Files directory is read-only for all users but workstation administrators, and for the first time, I’m seeing a tipping point where more users are not doing everyday tasks logged in as administrators.

Unfortunately, making this work in MarcEdit is tricky, both because of the large established user base and because of some other application needs, so I’ve been taking my time trying to work out a process that will be pretty much transparent to the user. In the coming update, the MarcEdit application will do three things:

1) On install, it will migrate your current configurations to the roaming user application data directory (XP: c:\documents and settings\[user]\Application Data\MarcEdit).

2) On install, MarcEdit will keep a shadow copy of your configuration files in the MarcEdit program directory under /shadow. These files will essentially be used for back-up purposes.

3) It will remove the current directories at the program files/marcedit 5.0/ level (since they will have been migrated and backed up).

At present, I have a number of users trying out the new migration process and providing feedback, so by the time this makes it out as an update, the process shouldn’t be noticeable. When I post the update (soon), I’ll include notes on the migration process, as well as a few shortcuts and diagnostic links that will help users get to these files more quickly when needed (for example, if there’s a need to modify XSLT files).
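To make the three steps above concrete, here is a minimal sketch of the migration logic. MarcEdit itself is a .NET application, so this Python is illustration only, and the configuration folder names under the install directory are assumptions.

```python
# Illustrative sketch of the three migration steps described above.
# MarcEdit is a .NET application; this is not its actual code, and the
# configuration directory names here are hypothetical.
import os
import shutil

PROGRAM_DIR = r'C:\Program Files\MarcEdit 5.0'
CONFIG_DIRS = ['configs', 'xslt']    # hypothetical config folder names
ROAMING_DIR = os.path.join(os.environ['APPDATA'], 'MarcEdit')
SHADOW_DIR = os.path.join(PROGRAM_DIR, 'shadow')

def migrate():
    for name in CONFIG_DIRS:
        src = os.path.join(PROGRAM_DIR, name)
        if not os.path.isdir(src):
            continue
        # 1) migrate the live copy to the per-user roaming profile
        shutil.copytree(src, os.path.join(ROAMING_DIR, name),
                        dirs_exist_ok=True)
        # 2) keep a shadow copy under the program directory for back-up
        shutil.copytree(src, os.path.join(SHADOW_DIR, name),
                        dirs_exist_ok=True)
        # 3) remove the old copy from the Program Files tree
        shutil.rmtree(src)

if __name__ == '__main__':
    migrate()
```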

–TR

Posted at 3:16 pm