What I did this week

 Family
Aug 26, 2010
 

So, with the summer winding down, I decided to take the family on a bit of a vacation – and then took a vacation day on my own with my dad.  Here are a few pictures of the adventures.

Travelling with the family

Because of my injury this summer, my wife and I have been taking the kids all over Oregon to do a little hiking and explore some of the real treasures in the state.  This week, we decided to go and look at a few things that you cannot find anywhere but in Oregon – the John Day Fossil Beds and the Painted Hills.

The John Day Fossil Beds are an incredible treasure in the state.  It is one of the few places in the world (maybe the only place in the world) that contains fossils from four continuous eras, from around 65 Ma to 5 Ma.  Because of this, the research done at John Day is often used to check and correlate research done elsewhere in the world.  Aside from being unique, it is incredibly beautiful.  The layers in the rocks represent different eras of lava or ash flows, creating incredible blues, reds, yellows and greens.  What’s more, fossils are basically just lying on the ground.  Scratch the surface and you’ll find fossils of plants that haven’t existed in Oregon for millions of years.

Within the fossil beds are what are called the Painted Hills.  These represent some of the oldest geologic layers and are simply brilliant.  Here are a few pictures:

[Photo P1010269: In the Blue Basin]

[Photo P1010295: Research office in the Visitors Center, where you can watch them studying collected fossils]

[Photo P1010301: Painted Hills]

[Photo P1010300: Painted Hills]

[Photo P1010361: Clarno Unit Palisades (Petrified Forest)]

[Photo P1010347: Boys finding fossils in Fossil, OR]

My mini Vacation – Climbing South Sister

After driving all over eastern Oregon, my wife dropped me off in Bend, Oregon, so I could hitch a ride with my dad to South Sister.  South Sister is the 3rd highest peak in Oregon at 10,300+ ft.  The trail to the top is non-technical, but difficult.  From the trailhead, climbers gain nearly 6,000 ft over sharp, loose volcanic rock.

Leaving a little after 7 am, my dad and I managed to grind our way up the hill in about 4 to 4 1/2 hours.  And our reward for all the work – nearly 60 mph winds at the top of the mountain that literally nearly blew me off the peak.  So we decided not to spend much time at the top, but did manage to get some great pictures before retreating to a safer area.

The way down the hill was tricky.  While we were climbing, we didn’t think anything of the loose rocks, but on the way down they made the descent a bit more difficult – even more so for me, since my right arm isn’t anywhere close to 100%.  Fortunately, we made it through the loose stuff, then ran down the rest of the hill, so coming off the mountain only took about 2 hours at a brisk jog.  Here are a few of the pictures from the top:

[Photo P1010385: Looking south]

[Photo P1010384: Devils Lake (I think)]

We were actually really fortunate that we got up and down the mountain when we did.  After getting to the bottom of the trail, we looked up and the top of the mountain was completely covered with clouds.  On the Sisters, that’s not a good thing.  Even though the freezing level was likely around 14,000 ft today, it was very likely that anyone still on the mountain had to deal with high winds, freezing rain and some really nasty weather.  So we really lucked out.

–TR

 Posted by at 9:11 pm
Aug 20, 2010
 

I was glancing through Slashdot this morning and ran across this article by the Wall Street Journal (http://online.wsj.com/article/SB10001424052748704554104575435243350910792.html) that discusses how advertising embedded in ebooks could be in our future. The article looks at some of the trends that point to this being true (patent applications by Amazon, advertising in other media, etc.).

I realize that advertising embedded in our electronic media isn’t something new. Probably the most ingenious use is product placement in movies, where the camera lingers on a computer brand, or an actor only orders a specific drink or drives a specific car. We get used to them and filter them out. If done well, the viewer never really knows that they’ve sat through a number of mini-commercials while sitting in the theater. But could that same experience translate to a book? I don’t know – and I’m not sure I’d want it to. Imagine reading Douglas Adams’ Restaurant at the End of the Universe.  You come to the page where they are in the restaurant and the cow has come out to their table to suggest different parts of itself to be eaten – a hilarious section of the book. Somehow, though, I imagine that some of the fun of the book might be muted if on the next page a big advertisement for McDonald’s showed up, maybe with the caption, “Even our cows are lovin’ it.”

Speaking only for myself, I find that the reading experience is much different than the viewing experience. It is a much more intimate experience because, as readers, it is up to us to use our imagination to build the picture in our heads. Understanding how I read, I wonder how intrusive I would find such embedded (or linked) advertisements if they were suddenly part of the reading experience. It would also be interesting to see how advertising was placed in an ebook. Since most ebook readers are able to be online, would this give advertisers another method of providing targeted advertising based on your reading profile? I’m not so certain that’s a direction I’d be fond of either.

Maybe if (when) advertising comes to the ebook platform, we’ll see different purchasing levels. Kind of like the iTunes option to purchase DRM or DRM-free versions of music – you’d have the option to purchase an ebook subsidized by ads, or an ebook without such interruptions for a few dollars more.  I guess we shall see.

–TR

 Posted by at 7:19 am
Aug 19, 2010
 

With my medical restrictions lifted, I made my first commute of the summer on my bike. And while I’ve ridden that road hundreds of times, it has never looked better.

TR

 Posted by at 7:49 pm

MakeCheckDigit Plugin

 MarcEdit
Aug 19, 2010
 

I was asked earlier this year if there was a way to automatically generate check digits in MarcEdit.  After looking at the problem, I decided to build a quick plug-in that could be used to generate such digits.  The plug-in currently supports the Codabar format.  The interface is fairly simple:

[Screenshot: the MakeCheckDigit plugin interface]

To use it, just install the plugin using the plugin manager and then run it from within the MarcEditor.
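For the curious, check digits of this sort are typically computed with a mod-10 (Luhn-style) algorithm, the scheme commonly used for 14-digit Codabar library barcodes.  Below is a minimal sketch in Python; note that this is an illustration of the general technique under the assumption that this is the scheme in play – it is not the plugin’s actual code (MarcEdit is not written in Python):

```python
# A mod-10 (Luhn-style) check digit, as commonly used for Codabar
# library barcodes. Hypothetical illustration only; not MarcEdit's code.
def codabar_check_digit(digits: str) -> str:
    total = 0
    # Walk right-to-left, doubling every other digit starting with the
    # rightmost data digit; two-digit products have their digits summed
    # (e.g. 7 * 2 = 14 -> 1 + 4 = 5, same as subtracting 9).
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

# A 13-digit barcode body plus its computed check digit yields the
# familiar 14-digit patron/item barcode.
body = "3123400123456"
print(body + codabar_check_digit(body))
```

The barcode body above is made up for the example; the point is simply that the final digit is derived from the preceding ones, so a scanner (or a tool like this plugin) can detect mistyped numbers.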

–TR

 Posted by at 12:01 am
Aug 18, 2010
 

I visited my doctor today and after a set of x-rays, found that my elbow has successfully healed. Restrictions have been lifted and I plan on celebrating tomorrow by riding the recumbent for the first time in almost 3 months.

Today has been a very good day.

 Posted by at 9:44 am
Aug 18, 2010
 

Maybe it’s because I have two young children, maybe it’s because I honestly think that we don’t do as good a job as we think we do serving those so-called digital natives…but I have a fascination with the Internet generation.  Recently, I read John Palfrey’s book Born Digital. The book is speculative, making a number of assumptions regarding the Internet generation – some related to privacy and the erosion thereof, some to the new collaborative environment that digital natives participate in.

Now, as I say, this topic interests me primarily because of my two sons.  I’m sure I’ve mentioned this before, but I’m fascinated by the world they will grow up in.  They will never know what it is like not to be constantly connected – not to have access to high-speed wireless internet at home, a variety of computers and computer platforms, cell phones, social media, etc., etc.

While it comes in degrees, I think that many people outside of this age group look at digital natives through a skewed lens.  By and large, I think they view this group as more technically savvy (or comfortable) because they interact on social media sites, because they are heavy information consumers, and because, in many ways, they see the virtual world as an extension of the physical.  I think we see this (or want to see this) because we ourselves are so thrown by the immense changes the Internet has brought on our world.

For non-digital natives, I think the Internet occupies this magical space.  We see it as a revolutionary idea, a transformative medium that breaks down boundaries.  However, I have become more and more convinced that digital natives don’t share this wonder for the Internet.  And why should they?  The Internet didn’t change their world.  It’s simply a part of it – background noise that provides them with a communication medium.  In many ways, I think the Internet is analogous to this generation’s telephone.  It’s an important communication tool, but it has become just a part of everyday life.

So why do I care at all about this?  Well, more and more universities (and libraries) are looking at ways to make their information relevant to digital natives.  They are building online classes, invading Facebook, building social media sites – all to emulate an environment that will excite and interest these digital natives.  Does this hit the mark?  I’m not sure it does.  I think that, by and large, we (the non-digital natives) are building classes and environments that we assume people would want to use, but building them from our own perspective of the Internet (that it’s this radically transforming medium).

Interestingly, within the last few years, a number of research studies have started to look specifically at this topic.  Recently, two such studies were conducted in Germany.  Both provide some interesting perspective on how digital natives view the virtual world and make some comments on what this means for digital education as a whole.

–TR

 Posted by at 7:00 am
Aug 17, 2010
 

I wanted to highlight a series of articles currently published on Wired discussing the changing nature of the Web.  The articles are entitled:

Personally, I find these kinds of articles quite fascinating, as I’m constantly struck by the changing nature of how we interact with the web.  I can remember when I started my undergraduate degree at the University of Oregon (yes, my secret shame – I am an Oregon Duck alumnus.  Forgive me, Saint Benny the Beaver, of the holy Reser Stadium), where navigating the Web was something that almost happened by accident.  Sure, there were search engines like Lycos and AltaVista, but actually finding information on the early web was akin to going to a dictionary, flipping the pages, sticking your finger in the book and hoping it landed on a word you were interested in.  At the same time, the early days of the Web were rough, unpolished and exciting, because everyone had an equal voice.

As the medium matured, tools were developed to help find content as Google and the like indexed the web and helped people sort through the plethora of information.  People could now scan the web for content and receive pages back from (practically) anywhere on the web.  Searching the web was almost a democratic process, as material from John Doe’s website might be placed side by side with NASA’s, with the searcher (and the ranking algorithm) deciding which pages provided the most relevant snippets of information.

Today, however, I think we are seeing a sanitizing of the web…and I don’t think it will be for the better.  While platforms like the iPad, Android and the iPhone are wildly successful, the application-centric nature of these devices is changing the nature of the web.  One of the draws (and drawbacks) of these new devices and platforms is that they create information gardens, if you will.  Device users no longer have to wade through much of the noise on the Web, because the device, the platform and its applications sanitize and package appropriate content back to the user.

This packaging and sanitizing of information is something to think about – especially for libraries and universities as we decide how to navigate this changing space.  What effect does this have on education (as more content moves online, but only specific voices are let through the walls) and on future researchers?

–TR

 Posted by at 3:46 pm
Aug 17, 2010
 

This change was implemented in June 2010, but I wanted to draw users’ attention to it, especially since I just commented on this change via the MarcEdit listserv.

As many people know, the state of Unicode usage in MARC, well, stinks.  While many systems utilize UTF-8 in their records, the UTF-8 notation that is used isn’t actually “used” in the real world.  In order to allow for lossless conversion between UTF-8 and MARC-8, the recommended UTF-8 notation in MARC is to utilize combining glyphs (KD notation), so you have two unique characters placed side by side to create a single combined character.  Within an ILS system, I’m fairly certain that vendors normalize this data to the “KC” or “C” UTF-8 notations to turn these multiple glyphs into a single UTF-8 character for indexing purposes.  However, when dealing with XML data or utilizing MARC UTF-8 data outside of the system, data formatted in the KD notation is getting to be problematic – specifically for international users, who tend to utilize the “KC” or “C” flavors.
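To illustrate the difference between these notations: the KD-style (decomposed) form stores a base letter followed by a combining mark, while the C-style (composed) form stores a single precomposed code point.  The KC/KD/C/D names correspond to the Unicode normalization forms NFKC/NFKD/NFC/NFD.  Here is a small sketch using Python’s standard unicodedata module (MarcEdit itself is not written in Python; this just demonstrates the underlying concept):

```python
import unicodedata

# "é" in composed ("C"/NFC-style) form: one precomposed code point.
composed = "\u00e9"

# The same letter in decomposed ("KD"/NFKD-style) form:
# "e" (U+0065) followed by a combining acute accent (U+0301).
decomposed = unicodedata.normalize("NFKD", composed)

print(len(composed))    # 1 code point
print(len(decomposed))  # 2 code points

# Canonical normalization round-trips losslessly, which is why a tool
# can store one form internally and emit the other on request.
assert unicodedata.normalize("NFC", decomposed) == composed
```

This round-trip property is exactly what makes it possible to offer users a canonical (composed) output while still converting back to the decomposed form when a lossless MARC-8 translation is needed.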

Prior to June 2010, MarcEdit only supported the KD notation when translating data between MARC-8 and UTF-8.  Since this was the notation specified in the LC specs, this was the notation supported.  However, over the past half year, I’ve been receiving more and more requests from international users who are desperate to shed this albatross and utilize a more canonical normalization.  So, to that end, I’ve added an option to the MarcEdit preferences that gives users the option to select the type of Unicode normalization utilized.

[Screenshot: the Unicode normalization option in the MarcEdit preferences]

By default, the program will still utilize the KD notation, since that still remains the recommended specification for lossless conversions between MARC-8 and UTF-8, but I’m hoping that as more and more users, vendors and others call for support for more canonical encodings, LC will drop the current KD recommendations in favor of the more canonical flavors.

Finally, one last note.  The reason LC presently recommends the KD notation is to ensure a lossless conversion when encoding between MARC-8 and UTF-8.  In order to ensure that lossless conversion can still occur, MarcEdit will continue to convert data internally to the KD notation when a translation from UTF-8 to MARC-8 is requested.  This allows users to utilize the canonical notation while still being backward compatible with the older MARC-8 specification.  The normalization switch occurs behind the scenes and is transparent to the user.

As I noted, this change occurred in June 2010, so if you are interested in taking advantage of the new Unicode notations, simply access the MarcEdit preferences and select the new option.

–TR

 Posted by at 11:00 am
Aug 16, 2010
 

So this is what it has come to…two iconic library software vendors (III and OCLC, that is) wrestling in court.  Not that I think this is all that surprising…too bad…but not particularly surprising.  In fact, I think that many folks in the library community probably saw this coming.  For me, the canary in the mine, so to speak, has been the OCLC record use policy revision process.  Starting with the first attempt in Nov. 2008, which plain sucked, and finishing with the present revision, which pretty much guaranteed a lawsuit, the record use policy has been an interesting illustration of what is both working and broken with OCLC.  Why?  Because the record use policy linked the use of the records to OCLC’s WorldCat service.  The policy, as it stands, doesn’t include just a list of rights and responsibilities, but spells out why it is needed…to protect the WorldCat database.  My personal opinion, as written here, is that any record use policy should have been written separate from WorldCat.  Joining the two together is problematic, and this lawsuit demonstrates how.

As more and more time passes, I’m convinced that OCLC, as it exists today, is of two minds: the membership mind and the vendor mind.  The problem that libraries and library vendors face is that in many circumstances, the vendor side of OCLC is unduly influencing the membership side of the organization.  I think the final record use policy is a good example of this, as OCLC placed a number of artificial walls around the WorldCat database – walls that really do nothing but protect OCLC’s web-scale initiatives.

This is one of the reasons why I suggested the need for OCLC to consider breaking itself up (http://blog.reeset.net/archives/579) back in Nov. 2008.  The problem, as I saw it then, is that as long as OCLC continues to move and compete within the vendor space, this tension between itself as a membership-governed coop and as a vendor organization will be present.  It will become increasingly difficult for OCLC to make decisions solely in respect to the membership’s long-term needs if those decisions must also be measured against the products and services that OCLC’s vendor operations handle.  Regardless of how this lawsuit with III is resolved, I don’t think it will be the end of the litigation.  Over the past year, I’ve encountered too many vendors that are becoming more open with their feelings that OCLC is unfairly hiding behind its status as a tax-exempt organization to monopolize the library space.  If OCLC can demonstrate that its web-scale services will work for larger ACRL libraries, I think more and more vendors will push back.  Vendors will continue to make their displeasure known, and the membership is going to have to ask itself how many lawsuits it’s willing to endure.

I still believe that the simplest solution to this, however, would be breaking up the OCLC organization.  I think it would ultimately be good for OCLC and libraries.  It would allow OCLC’s vendor units to compete without having to worry about potential lawsuits, and it would give OCLC’s research arms the ability to do research without the ever-constant need to productize their work.  And there are actually a number of precedents for this type of thinking.  Universities, for one, do this all the time: publicly funded research or development is commercialized under umbrella organizations.  Why couldn’t OCLC do the same?

In some ways, I see this lawsuit mirroring the current discussions related to net neutrality.  Should all information be treated as equal on the web?  In a sense, that’s what III is asking OCLC to provide: a kind of net neutrality for libraries – the idea that WorldCat represents a core information pipe within the library community and should be made open to the entire library community (both vendor and non-vendor) for a reasonable fee.  Libraries currently pay a fee for access (our memberships) – likewise, vendors could pay a reasonable fee to access and develop against the WorldCat bibliographic and holdings database.

To take this further, when considering net neutrality, I believe the library community would be nearly unanimous in supporting it as a necessary requirement for the future.  It’s a problem we feel we have a stake in, since librarians ultimately trade in information.  Likewise, I think libraries will need to decide what type of organization they want OCLC to be.  As a membership organization, libraries still have some power to shape the long-term vision of the OCLC cooperative.  So I wonder when librarians will start to push OCLC to embrace the same tenets of open and fair access to data that we currently demand from other vendors/information providers.  OCLC is ours, and how we handle this resource will ultimately determine how others view the library community as a whole.  In the end, will we push OCLC to reflect the community’s larger vision and our professional ethics in respect to open data and cooperation, or will the larger information community simply look at the library community as a bunch of hypocrites – hypocrites that demand open data from other communities but aren’t willing to reciprocate when the data is our own?

–TR

 Posted by at 7:00 pm

MarcEdit update

 MarcEdit
Aug 15, 2010
 

I posted a minor MarcEdit update this evening.  The changes were minimal, mostly cosmetic.  The main issues corrected were:

  1. Z39.50 batch file checking:  If the user file wasn’t generated on install, the program would throw an error.  The program will now autogenerate that file if it is missing.
  2. Window cleanup:  When running the MarcEditor and using the Find/Find All functions, a number of windows can be opened and left running.  The program will now close those windows for the user when the application shuts down.

 

If you have a current version of MarcEdit, the program will automatically update itself.  However, if you need to pick up the update manually, you can find it at:

–TR

 Posted by at 10:58 pm