For Christmas, I posted an updated version of MarcEdit. It included a number of new features, the biggest being an update to the Merge Records tool. This is the first part of a two-part update process; I’m hoping the second part will be completed by New Year’s.
Anyway, the matching tool has been updated so that if a set of records is missing control numbers for matching, you can utilize a multiple-matching approach. When MARC21 is selected from the dropdown box, the program will use the following items for matching:
- LDR position 6
- LDR position 7
- 008 date 1 and date 2
- 008 lang
- 008 form
- 245 a-n
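As a rough sketch of how a composite key built from these fields might work (the `make_match_key` helper and the normalization are hypothetical illustrations of the idea, not MarcEdit’s actual code; the character positions are the standard MARC 21 leader/008 offsets for books):

```python
def make_match_key(leader, field_008, title_245):
    """Build a composite match key from the same elements the
    multi-part matching uses. Hypothetical sketch, not MarcEdit's
    real implementation."""
    return "|".join([
        leader[6],                        # LDR/06: type of record
        leader[7],                        # LDR/07: bibliographic level
        field_008[7:11],                  # 008 date 1
        field_008[11:15],                 # 008 date 2
        field_008[35:38],                 # 008 language
        field_008[23],                    # 008 form of item (books)
        title_245.lower().strip(" /."),   # normalized 245 $a-$n
    ])

ldr = "00000cam a2200000 a 4500"
f008 = "091020s2009    xxua          000 0 eng d"
print(make_match_key(ldr, f008, "Dogs vs. cats /"))
```

Two records that agree on all of these elements produce the same key, even when neither carries a usable control number; differing punctuation or capitalization in the 245 is smoothed out by the normalization step.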
The updated screen is shown below.
I’m not sure many people realize it, but there is a completely command-line-driven version of MarcEdit. It provides an easy-to-use command-line tool that can be combined with other cmd tools to pipe data or processes together.
One of the fundamental differences between cmarcedit and the MarcEdit GUI application is that any error encountered when using the cmarcedit tool is a stop condition. However, in order to make cmarcedit a functional replacement for the older MARCBreakr tool (created by LC), I’ve added a new switch, -pd. This switch changes the program so that errors are no longer stop conditions; instead, they are logged as errors, but the records are still output. For example, here is a run over a file that includes malformed record data (records that cannot be processed through the loose record algorithm):
c:\net_marcedit\C#\MProgram\MarcEdit\bin\Debug>cmarcedit.exe -s f:\orig.mrc -d f:\orig_rev.mrk -break -pd
72 records have been processed in 0.790045
Records processed using the loose breaking algorithm will produce the following output (again, something that used to be a stop condition):
Structure error, loose breaking function used…Record: 1
1 records have been processed in 0.707040 seconds.
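Conceptually, a loose breaking routine ignores the (possibly wrong) record-length value in each leader and instead splits the stream on the ISO 2709 record terminator (0x1D), so structurally damaged records pass through instead of halting the run. A minimal sketch of that idea, assuming raw binary MARC input (the function and its logic are my illustration, not MarcEdit’s actual code):

```python
RECORD_TERMINATOR = b"\x1d"  # ISO 2709 end-of-record marker

def loose_break(raw):
    """Split a stream of MARC records on the record terminator,
    ignoring the record-length field in each leader. Damaged
    records are kept rather than treated as stop conditions."""
    records = []
    for chunk in raw.split(RECORD_TERMINATOR):
        if chunk.strip():
            records.append(chunk + RECORD_TERMINATOR)
    return records

# Even if the first leader claims a bogus length, the terminator
# still delimits the two records:
data = b"00099cam...rest of record 1\x1d00512cam...record 2\x1d"
print(len(loose_break(data)))  # 2
```

A strict parser would read the five-digit length from each leader and seek by it, which is exactly what fails on malformed data; splitting on the terminator trades that precision for resilience.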
This is one of many changes that will happen by the next update.
Hardcover: 224 pages
Publisher: Andrews McMeel Publishing; 1 edition (October 20, 2009)
Product Dimensions: 10.1 x 8.4 x 1.1 inches
I had read this book some time ago. My wife, who is quite the prolific book blogger, had received this book for something or other, and, I’m not sure why, but I was annoyed with our cats and happy to see that someone else thought dogs were obviously the better companion. (Truth be told, I’m pretty ambivalent toward the cats – though I’m pretty sure they are ambivalent toward just about everything around them – so it breaks even.) Anyway, the book was a fun one: lots of full pictures of dogs and cats, with often funny yet witty observations about dogs, cats, and the people who love them. And in the end, I felt completely justified in my favoritism toward my happy, 70 lb. Lab puppy.
Anyway, what made me think of this book was that I found it today. We were putting up Christmas decorations and clearing off one of the tables, and there it was. So I started perusing the book again. Again, it made me smile and look at my dog a bit more warmly, especially the section where they talk about a dog’s biggest weakness – that being its nature to love its family unconditionally. Regardless of how poorly you treat your doggy, that pup is going to keep coming back to you, because once you are their family, they will be loyal to a fault. If that’s their biggest weakness, then I’ll take my dog any day of the week.
I’ve finally gotten around to re-introducing a function from the 4.x version of MarcEdit into the 5.x series: the ability to edit subsets of records and then save the data back into the data file where the subset resides. For example, say you have a large set of records, but you really only want to work with the records that have specific characteristics. In the past, if you wanted to do this in MarcEdit, you’d have to extract the data and then delete those records from the file, because MarcEdit wouldn’t let you save the subset data back into the original source file. Well, as of Dec. 6th, that changes. Now, if you have a file and you want to edit a subset, you can use the following procedure:
- Open your data in the MarcEditor
- Select File/Select Individual Records to Make
- In the dialog, import the records, search for the elements to change, and then extract the records.
- Change those records in the MarcEditor
- To save the data back to the original source file, simply save the records. To save the data to a separate file, select Save As.
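The interesting part of this workflow is the last step: edited subset records replace their originals in the source file, while untouched records pass through unchanged. A minimal sketch of that merge-back idea (the `save_subset` helper and the id-keyed data shapes are my assumptions for illustration; MarcEdit’s actual implementation may differ):

```python
def save_subset(original, edited_subset):
    """Merge an edited subset of records back into the full file.
    `original` is a list of (record_id, record) pairs in file order;
    `edited_subset` maps record_id -> edited record. Records not in
    the subset are written back unchanged."""
    merged = []
    for rec_id, rec in original:
        merged.append((rec_id, edited_subset.get(rec_id, rec)))
    return merged

full = [(1, "rec one"), (2, "rec two"), (3, "rec three")]
edits = {2: "rec two (edited)"}
print(save_subset(full, edits))
```

The key design point is that file order and the non-subset records are preserved, which is why saving in the MarcEditor can write straight back to the original source file.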
And that’s it. I’ve posted a video explanation of this new behavior on YouTube. You can find it at:
Extract and Edit subsets of records
As noted above, this change will be live as of the Dec. 6th MarcEdit update.
While my team didn’t win the Civil War, I’ve had a good time this year dyeing my hair various shades of orange. For the Civil War, I decided to go with more of an ’80s style. This was after the game with the family.
Sorry for what I’m sure will be a longish post. This is a bit of a brain dump —
Lately, I’ve been thinking a lot about how libraries determine if services that they provide are successful. Well, specifically, how libraries determine if digital services that they provide are successful. And after attending DLF and listening to more than a few folks talk about very cool digital programs, I’m starting to think that libraries view digital programs as living in a kind of alternative reality, where the rules of regular evaluation and assessment don’t apply.
Our recently retired University Librarian and I would spend a good deal of time talking about assessing digital projects. In the months prior to her retirement, we spent a lot of time talking about digital services and the necessity for libraries to begin looking critically at the services we provide and to assess their feasibility in terms of impact and cost of operation.
I think in the library world there is a feeling that libraries need to catch up, and in many respects, catching up is a code word for building digital programs, creating mobile sites, creating institutional repositories, etc. It could be a lot of things – but I think the question that sometimes gets lost is whether a library should participate in those activities at all. Let’s use the institutional repository as an example. I know that libraries and librarians love to jump on the open access hobby horse (yes, I’m quite cynical about this one), but does every library need to have an institutional repository? I’d argue no. In fact, OSU’s IR is very successful when compared to other IR efforts in the library community, but even here, I sometimes wonder whether it’s necessary for an institution of our size to maintain such a repository. I honestly think that as repository efforts go, OSU’s is one of the best. At the same time, I know how many resources (both infrastructure and people) go into delivering this service, and because of that, I sometimes get discouraged when I consider the substantial cost per item. Still, I see this effort become more important to the campus and the library with each passing year as more content finds its way into the IR.
At the same time, it seems to me that the concept of an IR is an old one. Yes, organizations want to maintain walls around their information to demonstrate ownership and provenance, but in reality, I often believe our patrons would be much better served by repository efforts that removed the concept of the organization. Thinking about OSU’s repository efforts…could this work have more impact if we could leverage a larger statewide repository effort? For that matter, couldn’t every institution in Oregon? And why would it have to be just statewide…again, those are fairly arbitrary borders.
I think the thing I find myself struggling against sometimes is that libraries search for digital services to distinguish themselves and to make resources more readily available to their patrons. But we have often approached this the same way we build traditional print collections: we start a local program, brand it, and promote it. In reality, the digital space gives libraries an opportunity to work outside the traditional boundaries of ownership and build more collaborative services – collaborative not in the sense that I run an IR and you run an IR and we have an API that lets us communicate with each other, but in the sense that we build tools that everyone shares. We see this model happening, and it will happen more as libraries are forced to justify stretched resources, but I often think we could be doing so much more today.
Anyway – what has me thinking about this lately has been talking to people about users and their digital services. Over the past couple of months, the number of times I’ve spoken to folks building mobile sites or grant-driven projects who excitedly talk about 500 visitors a day, or 2,000 visitors daily – and see those numbers as justifying tens of thousands of dollars in startup money, or locking up finite FTE resources – makes me wonder if libraries have lost their minds, and whether we should be doing better assessment of our digital collections. Because of MarcEdit’s automatic updating, I know that it’s opened over 5,000 times daily. A few of my map cataloging tools on my page, which I don’t even update any more, get a few hundred visitors a day – these are tools created in my spare time with few resources, yet many times they have larger audiences than many digital library services being spun out by libraries. And yet it is these vampire projects that siphon valuable time and resources away from libraries and make really transformative development more difficult.
This isn’t to say that libraries shouldn’t be doing things. We should. However, what I find myself looking for more and more at digital library conferences is librarians talking seriously about assessment of their digital assets and projects. They are out there –