Feb 14, 2016

The topic of charactersets is likely something most North American catalogers rarely give a second thought to.  Our tools and systems are built around a very Anglo-centric world-view that assumes data is primarily structured in MARC21, and recorded in either MARC-8 or UTF-8.  However, once you get outside of North America, the question of characterset, and even MARC flavor for that matter, becomes much more relevant.  While many programmers and catalogers who work with library data would like to believe that most data follows a fairly regular set of common rules and encodings – the reality is that it doesn’t.  MARC21 is the primary MARC encoding for North American and many European libraries – but it is just one of 40+ different flavors of MARC, and while MARC-8 and UTF-8 are the predominant charactersets in libraries coding in MARC21, move outside of North America and OCLC, and you will run into Big5, Cyrillic (codepage 1251), Central European (codepage 1250), ISO-5426, Arabic (codepage 1256), and a range of other localized codepages in use today.  So while UTF-8 and MARC-8 are the predominant encodings in countries using MARC21, a large portion of the international metadata community still relies on localized codepages when encoding their library metadata.  And this can be a problem for any North American library looking to utilize metadata encoded in one of these local codepages, or to share data with a library utilizing one of them.

For years, MarcEdit has included a number of tools for handling this soup of character encodings – tools that work at different levels to allow the tool to handle data from across the spectrum of different metadata rules, encodings, and markups.  These get broken into two different types of processing algorithms.

Characterset Identification:

This algorithm is internal to MarcEdit and vital to how the tool handles data at a byte level.  When working with file streams for rendering, the tool needs to decide if the data is in UTF-8 or something else (for mnemonic processing) – otherwise, data won’t render correctly in the graphical interface.  For a long time (and honestly, this is still true today), the byte in the LDR of a MARC21 record that indicates whether a record is encoded in UTF-8 or something else simply hasn’t been reliable.  It’s getting better, but a good number of systems and tools simply forget (or ignore) this value.  More important for MarcEdit, this value is only useful for MARC21 – the encoding byte sits in a different field/position within each flavor of MARC.  In order for MarcEdit to handle this correctly, a small, fast algorithm needed to be created that could reliably identify UTF-8 data at the binary level.  And that’s what’s used – a heuristic algorithm that reads bytes to determine if the characterset might be UTF-8 or something else.

Might be?  Sadly, yes.  There is no way to definitively auto-detect a characterset.  It just can’t happen.  Codepages reuse the same codepoints – they just assign different characters to those codepoints depending on which encoding is in use.  So a tool won’t know how to display textual data without first knowing the set of codepoint rules the data was encoded under.  It’s a real pain in the backside.
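To make the ambiguity concrete, here’s a quick illustration (Python here, not MarcEdit’s own code) – the same six bytes are perfectly valid in several codepages, they just spell different things in each:

```python
# The same bytes read under three different codepages. Nothing in the
# bytes themselves says which reading was intended.
raw = bytes([0xC4, 0xE0, 0xED, 0xED, 0xFB, 0xE5])

print(raw.decode("cp1251"))   # Cyrillic reading: "Данные" (Russian for "data")
print(raw.decode("cp1250"))   # Central European reading: accented Latin letters
print(raw.decode("latin-1"))  # Western European reading: different Latin letters
```

All three decodes succeed without error – which is exactly why a tool can’t just “try decoding” to find the right answer.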

To solve this problem, MarcEdit uses the following code in an identification function:

            private const int RET_VAL_UTF_8 = 1;
            private const int RET_VAL_ANSI = 2;
            private const int RET_VAL_ERROR = 3;

            // Scans a byte array and reports whether it is structurally valid UTF-8.
            private int DetectEncoding(byte[] p)
            {
                int iEType = RET_VAL_ANSI;
                try
                {
                    int x = 0;
                    while (x < p.Length)
                    {
                        int lLen;
                        if (p[x] <= 0x7F) { x++; continue; }       // single-byte ASCII
                        else if ((p[x] & 0xE0) == 0xC0) lLen = 2;  // 2-byte sequence
                        else if ((p[x] & 0xF0) == 0xE0) lLen = 3;  // 3-byte sequence
                        else if ((p[x] & 0xF8) == 0xF0) lLen = 4;  // 4-byte sequence
                        else if ((p[x] & 0xFC) == 0xF8) lLen = 5;  // 5-byte sequence
                        else if ((p[x] & 0xFE) == 0xFC) lLen = 6;  // 6-byte sequence
                        else return RET_VAL_ANSI;                  // invalid lead byte
                        x++;
                        while (lLen > 1)
                        {
                            // every continuation byte must look like 10xxxxxx
                            if (x >= p.Length || (p[x] & 0xC0) != 0x80)
                                return RET_VAL_ERROR;
                            x++;
                            lLen--;
                        }
                        iEType = RET_VAL_UTF_8;  // saw a valid multi-byte sequence
                    }
                }
                catch (System.Exception)
                {
                    iEType = RET_VAL_ERROR;
                }
                return iEType;
            }

This function allows the tool to quickly evaluate any data at a byte level and identify whether that data might be UTF-8 or not – which is really handy for my purposes.

Character Conversion

MarcEdit has also included a tool that allows users to convert data from one character encoding to another.


This tool requires users to identify the original characterset encoding of the file to be converted.  Without that information, MarcEdit would have no idea which set of rules to apply when shifting the data around, based on how characters have been assigned to their various codepoints.  Unfortunately, a common problem I hear from librarians – especially librarians in the United States who don’t have to deal with this problem regularly – is that they don’t know the file’s original characterset encoding, or how to find it.  It’s a common problem – especially when retrieving data from some Eastern European and Asian publishers.  In many of these cases, users send me files, and based on my experience looking at different encodings, I can make a couple of educated guesses and generally figure out how the data might be encoded.
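Under the hood, a character conversion is conceptually simple – decode under the source codepage, then re-encode to the target.  This little sketch (Python, not MarcEdit’s actual code) shows both the operation and why the original encoding matters so much:

```python
# Decode bytes under the source codepage, then re-encode to the target.
# Guess the source wrong and every non-ASCII character is mistranslated.
def convert(data: bytes, source: str, target: str = "utf-8") -> bytes:
    return data.decode(source).encode(target)

cyrillic = "Москва".encode("cp1251")   # Cyrillic text in codepage 1251

good = convert(cyrillic, "cp1251")     # correct source encoding
bad = convert(cyrillic, "cp1250")      # wrong guess: Central European

print(good.decode("utf-8"))            # Москва
print(bad.decode("utf-8"))             # mojibake: accented Latin letters
```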

Automatic Character Detection

Obviously, it would be nice if MarcEdit could provide some kind of automatic characterset detection.  The problem is that this is a process that is always fraught with errors.  Since there is no way to definitively determine the characterset of a file or data simply by looking at the binary data – we are left having to guess.  And this is where heuristics come in again.

Current generation web browsers automatically set character encodings when rendering pages.  They do this based on the presence of metadata in the header, information from the server, and a heuristic analysis of the data prior to rendering.  This is why everyone has seen pages that the browser believes are in one character set but are actually in another, making the data unreadable when it renders.  Still, the process browsers currently use is, as sad as this may be, the best we’ve got at the moment.

And so, I’m going to be pulling this functionality into MarcEdit.  Mozilla has made the algorithm that they use public, and some folks have ported that code into C#.  The library can be found on GitHub here: https://github.com/errepi/ude.  I’ve tested it – it works pretty well, though it is not even close to perfect.  Unfortunately, this type of process works best when you have lots of data to evaluate – but most MARC records are just a few thousand bytes, which just isn’t enough data for a proper analysis.  However, it does provide something – and maybe that something will provide a way for users working with data in an unknown character encoding to actually figure out how their data might be encoded.

The new character detection tools will be added to the next official update of MarcEdit (all versions).


And as I noted – this is being added to give users one more tool for evaluating their records.  While detection may still only be a best guess – it’s likely a pretty good guess.

The MARC8 problem

Of course, not all is candy and unicorns.  MARC-8, the lingua franca for a wide range of ILS systems and libraries – well, it complicates things.  Unlike many of the localized codepages, which are well-defined standards in use by a wide range of users and communities around the world – MARC-8 is not.  MARC-8 is essentially a made-up encoding – it simply doesn’t exist outside the small world of MARC21 libraries.  To a heuristic parser evaluating character encoding, MARC-8 looks like one of four different charactersets: US-ASCII, Codepage 1252, ISO-8859, and UTF-8.  The problem is that MARC-8, as an escape-based encoding, reuses parts of a couple of different encodings.  This really complicates the identification of MARC-8, especially in a world where other encodings may (and probably will) be present.  To that end, I’ve had to add a secondary set of heuristics that evaluate data after detection, so that if the data is identified as one of these four types, some additional evaluation is done looking specifically for MARC-8’s fingerprints.  This allows, most of the time, for MARC-8 data to be correctly identified – but again, not always.  It just looks too much like other standard character encodings.  Again, it’s a good reminder that this tool is just a best guess at the characterset encoding of a set of records – not a definitive answer.
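For the curious, the sort of fingerprinting I’m describing looks roughly like this (a simplified Python sketch, not MarcEdit’s actual implementation – the escape values come from LC’s MARC-8 documentation):

```python
# MARC-8 switches character sets with ESC (0x1B) sequences, and ANSEL
# combining diacritics live in 0xE0-0xFF and come *before* the letter
# they modify. Both patterns are unusual in cp1252 or UTF-8 text.
MARC8_ESCAPES = (b"\x1bg", b"\x1bb", b"\x1bp", b"\x1bs",
                 b"\x1b(B", b"\x1b(N", b"\x1b(2", b"\x1b(3", b"\x1b$1")

def looks_like_marc8(data: bytes) -> bool:
    if any(esc in data for esc in MARC8_ESCAPES):
        return True                    # known escape sequence: strong hint
    for i in range(len(data) - 1):
        # a high byte (candidate ANSEL diacritic) immediately before a letter
        if data[i] >= 0xE0 and 0x41 <= data[i + 1] <= 0x7A:
            return True
    return False
```

The diacritic-before-letter test is also why this can only ever be a hint – a cp1252 byte like 0xE9 (é) followed by a letter produces exactly the same pattern, which is the look-alike problem described above.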

Honestly, I know a lot of people would like to see MARC as a data structure retired.  They write about it, talk about it, hope that BibFrame might actually do it.  I get their point – MARC as a structure isn’t well suited for the way we process metadata today.  Most programmers simply don’t work with formats like MARC, and fewer tools exist that make MARC easy to work with.  Likewise, most evolving metadata models recognize that metadata lives within a larger context, and are taking advantage of semantic linking to encourage the linking of knowledge across communities.  These are things libraries would like in their metadata models as well, and libraries will get there, though I think in baby steps.  When you consider what a train-wreck RDA adoption and development was for what we got out of it (at a practical level) – making a radical move like BibFrame will require a radical change (and maybe an event that causes that change).

But I think that there is a bigger problem that needs more immediate action.  The continued reliance on MARC-8 actually poses a bigger threat to the long-term health of library metadata.  MARC, as a structure, is easy to parse.  MARC-8, as a character encoding, is essentially a virus – one that we are continuing to let corrupt our data and lock it away from future generations.  The sooner we can toss this encoding on the trash heap, the better it will be for everyone – especially since we are likely one generation away from losing the knowledge of how this made-up character encoding actually works.  And when that happens, it won’t matter how the record data is structured – because we won’t be able to read it anyway.


 Posted at 3:48 pm
Feb 06, 2016

Would this be the super bowl edition? Super-duper update? I don’t know – but I am planning an update. Here’s what I’m hoping to accomplish for this update (2/7/2016):

MarcEdit (Windows/Linux)

· Z39.50/SRU Enhancement: Enable user defined profiles and schemas within the SRU configuration. Status: Complete

· Z39.50/SRU Enhancement: Allow SRU searches to be completed as part of the batch tool. Status: ToDo

· Build Links: Updating rules file and updating components to remove the last hardcode elements. Status: Complete

· MarcValidators: Updating rules file Status: Complete

· RDA Bug Fix: 260 conversion – rare occasions when {} are present, you may lose a character Status: Complete

· RDA Enhancement: 260 conversion – cleaned up the code Status: Complete

· Jump List Enhancement: Selections in the jump list remain highlighted Status: Complete

· Script Wizard Bug Fix: Corrected error in the generator that was adding an extra “=” when using the conditional arguments. Status: Complete

MarcEdit Linux

· MarcEdit expects /home/[username] to be present…when it’s not, the application data is being lost, causing problems with the program. Updating this to allow the program to fall back to the application directory/shadow directory. Status: Testing

MarcEdit OSX

· RDA Fix [crash error when encountering invalid data] Status: Testing

· Z39.50 Bug: Raw Queries failing Status: Complete

· Command-line MarcEdit: Porting the Command line version of marcedit (cmarcedit). Status: Testing

· Installer – Installer needs to be changed to allow individual installation of the GUI MarcEdit and the Command-line version of MarcEdit. These two versions share the same configuration data. Status: ToDo


 Posted at 5:09 am
Jan 25, 2016

I’ve posted an update for all versions – changes noted here:

The significant change was a shift in how the linked data processing works.  I’ve moved from hard-coded values to a rules file.  You can read about that here: http://blog.reeset.net/archives/1887

If you need to download the file, you can get it from the automated update tool or from: http://marcedit.reeset.net/downloads.


 Posted at 10:04 pm
Jan 25, 2016

One of the changes in the current MarcEdit update is the introduction of a linked data rules file to help the program understand what data elements should be processed for automatic URL generation, and how that data should be treated.  The Rules file is found in the Configs directory and is called: linked_data_profile.xml




The rules file is pretty straightforward.  At this point, I haven’t created a schema for it, but I will, to make defining data easier.  Until then, I’ve added references in the header of the document to note fields and values. 

Here’s a small snippet of the file:

<?xml version="1.0" encoding="UTF-8"?>
    rules block:
        top level: field
                type: authority, bibliographic, authority|bibliographic
            tag (required):
                Value: Field value
                Description: field to process
            subfield (required):
                Value: Subfield codes
                Description: subfields to use for matching
            index (optional):
                Values: subfield code or empty
                Description: field that denotes index
            atomize (optional):
                Values: 1 or empty
                Description: determines if field should be broken up for uri disambiguation
            special_instructions (optional):
                Values: name|subject|mixed
                Description: special instructions to improve normalization for names and subjects. 
            uri (required):
                Values: subfield code to include a url
                Description: Used to determine which subfield is used to embed a URI
            vocab (optional):
                Values (see supported vocabularies section)
                Description: when no index is supplied, you can predefine a supported index
  Supported Vocabularies:
    Value: lcshac
    Description: LC Childrens Subjects
    Value: lcdgt
    Description: LC Demographic Terms
    Value: lcsh
    Description: LC Subjects
    Value: lctmg
    Description: TGM
    Value: aat
    Description: Getty Arts and Architecture Thesaurus
    Value: ulan
    Description: Getty ULAN
    Value: lcgft
    Description: LC Genre Forms
    Value: lcmpt
    Description: LC Medium Performance Thesaurus
    Value: naf
    Description: LC NACO Terms
    Value: naf_lcsh
    Description: lcsh/naf combined indexes
    Value: mesh
    Description: MESH indexes
    <field type="bibliographic">

The rules file is pretty straightforward.  You have a field element where you define a type.  Acceptable values are: authority, bibliographic, authority|bibliographic.  This tells the tool which type of record the processing rules apply to.  Second, you define a tag, the subfields to process when evaluating for linking, a uri field (this is the subfield used when outputting the URI), special instructions (if there are any), whether the field is atomized (i.e., broken up so that you have one concept per URI), and vocab (to preset a default vocabulary for processing).  So for example, say a user wanted to atomize a field that currently isn’t defined as such – they would just find the processing block for the field and add <atomize>1</atomize> into the block – and that’s it.
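To make that a bit more concrete – a field block built from the elements described above might look something like this (the tag and values here are made up for illustration; check the linked_data_profile.xml that ships with MarcEdit for the actual entries):

```xml
<field type="bibliographic">
    <tag>650</tag>
    <subfield>abvxyz</subfield>
    <index>2</index>
    <special_instructions>subject</special_instructions>
    <atomize>1</atomize>
    <uri>0</uri>
</field>
```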

The idea behind this rules file is to support the work of a PCC Task Force while they are testing embedding of URIs in MARC records.  By shifting from a compiled solution to a rules based solution, I can provide immediate feedback and it should make the process easier to customize and test. 

An important note – these rules will change.  They are pretty well defined for bibliographic data, but authority data is still being worked out. 


 Posted at 10:01 pm

MarcEdit Update (all versions)

Jan 18, 2016

This update includes a new tool, changes to the merge tool, and a behavior change in the MARCEngine.  You can see the change log at:

You can get the update through MarcEdit’s automated update mechanism or from: http://marcedit.reeset.net/downloads/


 Posted at 3:05 pm

MarcEdit and OpenRefine

Jan 16, 2016

There have been a number of workshops and presentations that I’ve seen floating around that talk about ways of using MarcEdit and OpenRefine together when doing record editing.  OpenRefine, for folks that might not be familiar, used to be known as Google Refine, and is a handy tool for working with messy data.  While there is a lot of potential overlap between the types of edits available in MarcEdit and OpenRefine, the strength of the tool is that it gives you access to your data via a tabular interface, making it easy to find variations in metadata, relationships, and patterns.

For most folks working with MarcEdit and OpenRefine together, the biggest challenge is moving the data back and forth.  MARC binary data isn’t supported by OpenRefine, and MarcEdit’s mnemonic format isn’t well suited for OpenRefine’s import options either.  And once the data has been put into OpenRefine, getting it back out and turned into MARC can be difficult for first-time users as well.

Because I’m a firm believer that users should use the tool that they are most comfortable with – I’ve been talking to a few OpenRefine users to think about how I could make the process of moving data between the two systems easier.  And to that end, I’ll be adding to MarcEdit a toolset that will facilitate the export and import of MARC (and MarcEdit’s mnemonic) data in formats that OpenRefine can parse and easily generate.  I’ve implemented this functionality in two places – one as a standalone application found on the Main MarcEdit Window, and one as part of the MarcEditor – which will automatically convert or import data directly into the MarcEditor Window.

Exporting Data from MarcEdit

As noted above, there will be two methods of exporting data from MarcEdit into one of two formats for import into OpenRefine.  Presently, MarcEdit supports generating either json or tab delimited format.  These are two formats that OpenRefine can import to create a new project.

OpenRefine Option from the Main Window

OpenRefine Export/Import Tool.

If I have a MARC file and I want to export it for use in OpenRefine – I would use the following steps:

  1. Open MarcEdit
  2. Select Tools/OpenRefine/Export from the menu
  3. Enter my Source File (either a marc or mnemonic file)
  4. My Save File – MarcEdit supports export in json or tsv (tab delimited)
  5. Select Process

This will generate a file that can be used for importing into OpenRefine.  A couple notes about that process.  When importing via tab delimited format – you will want to unselect the options that do number interpretation.  I’d also uncheck the option to turn blanks into nulls, and make sure the option is selected that retains blank rows.  These are useful on export and reimport into MarcEdit.  When using JSON as the file format – you will want to make sure after import to order your columns as TAG, Indicators, Content.  I’ve found OpenRefine will mix this order, even though the json data is structured in this order.

Once you’ve made the changes to your data – Select the export option in OpenRefine and select the export tab delimited option.  This is the file format MarcEdit can turn back into either MARC or the mnemonic file format.  Please note – I’d recommend always going back to the mnemonic file format until you are comfortable with the process to ensure that the import process worked like you expected.
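If you’re curious what the tabular shape looks like, here’s a rough sketch (Python, not MarcEdit’s actual exporter) of turning mnemonic lines into TAG / Indicators / Content rows – the same three columns the OpenRefine project ends up with:

```python
# Split each mnemonic line into (tag, indicators, content). Control
# fields below 010 have no indicators. (The LDR line is ignored here
# to keep the sketch short.)
def mnemonic_to_rows(text: str):
    rows = []
    for line in text.splitlines():
        if not line.startswith("=") or line.startswith("=LDR"):
            continue
        tag, rest = line[1:4], line[6:]   # two spaces follow the tag
        if tag < "010":
            rows.append((tag, "", rest))
        else:
            rows.append((tag, rest[:2], rest[2:]))
    return rows

record = "=245  10$aExample title /$cAn Author.\n=008  030101s2003"
for tag, ind, content in mnemonic_to_rows(record):
    print(tag, ind, content, sep="\t")
```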

And that’s it.  I’ve recorded a video on YouTube walking through these steps – you can find it here:

This of course just shows how to move data between the two systems.  If you want to learn more about how to work with the data once it’s in OpenRefine, I’d recommend one of the many excellent workshops that I’ve been seeing put on at conferences and via webinars by a wide range of talented metadata librarians.

*** Update ***

In addition to the tool itself, I’ve set it up so that it can be selected as one of the user-defined tools on the front page for quick access.  This way, if this is one of the tools you use often, you can get right to it.

MarcEdit's Start Window Preferences with new OpenRefine Data Transfer Tool Option


Main Window with OpenRefine Data Transfer Tool


 Posted at 6:27 pm

MarcEdit Update (all versions)

Jan 10, 2016

I decided to celebrate my absence from ALA’s Midwinter by doing a little coding.  I’ve uploaded updates for all versions of MarcEdit, though the Mac version has experienced the most significant revisions.  The changes:

Windows/Linux ChangeLog:

OSX ChangeLog:

You can get the update from the Downloads page (http://marcedit.reeset.net/downloads) or using the automated updating tools within MarcEdit.



 Posted at 8:39 pm

MarcEdit Mac: Verify URLs

Jan 10, 2016

In the Windows/Linux version — one of the oldest tools has been the ability to validate URLs.  This tool generates a report providing the HTTP status codes returned for URLs in a record set.  This didn’t make the initial migration — but it has been added to the current OSX version of MarcEdit.

To find the resource, you open the main window and select the menu:

MarcEdit Mac: Main Window Menu -- Verify URLs


Once selected, it works a lot like the Windows/Linux version.  You have two report types (HTML/XML), you can define a title field, and you can also set the fields to check.  By default, MarcEdit selects all.  To change this — you just need to add each new field/subfield combination on a new line.

MarcEdit Mac: Verify URLs screen


Questions, let me know.


 Posted at 8:36 pm
Jan 10, 2016

One of the functions that didn’t make the initial migration cut in the MarcEditor was the ability to edit the 006/008 in a graphical interface.  I’ve added this back into the OSX version.  You can find it in the Edit Menu:

MarcEdit Mac -- Edit 006/008 Menu Location


Invoking the tool works a little differently than the Windows/Linux version.  Just put your cursor into the field that you want to edit, and then select Edit.  MarcEdit will then read your record data and generate an edit form based on the material format selected (or the material format from the record if editing).

MarcEdit Mac -- Edit 006/008 Screen


Questions — let me know.


 Posted at 8:30 pm
Jan 09, 2016

A couple of interesting questions this week got me thinking about a couple of enhancements to MarcEdit.  I’m not sure these are things that other folks will make use of often, but I can see them being really useful for answering questions that come up on the listserv.

The particular question that got me thinking about this today was the following scenario:

The user has two fields – an 099 that includes data that needs to be retained, and then an 830$v that needs to be placed into the 099.  The 830$v has trailing punctuation that will need to be removed. 

Example data:
=830  \\$aSeries Title $v 12-031.

The final data output should be:
=099  \\$aELECTRONIC RESOURCE 12-031
=830  \\$aSeries Title $v 12-031.

With the current tools, you can do this but it would require multiple steps.  Using the current build new field tool, you could create the pattern for the data:
=099  \\$a{099$a} {830$v}

This would lead to an output of:
=099  \\$aELECTRONIC RESOURCE 12-031.

To remove the period – you could use a replace function and fix the $a at the same time.  You could have also made the ELECTRONIC RESOURCE string a constant in the build new field – but the problem is that you’d have to know that this was the only data that ever showed up in the 099$a (and it probably won’t be).

So thinking about this problem, I’ve been thinking about how I might be able to add a few processing “macros” into the pattern language – and that’s what I’ve done.  At this point, I’ve added the following commands:

  • replace(find,replace)
  • trim(chars)
  • trimend(chars)
  • trimstart(chars)
  • substring(start,length)

The way these have been implemented – the commands are stackable, but they are also very rigid in structure.  The commands are case sensitive (command labels are all lower case), and in places where you have multiple parameters – there are no spaces after the commas. 

So how does this work – here are some examples (not full patterns):

As you can see in the patterns, the commands are initialized by adding “.command” to the end of the field pattern.  So how would we apply this to the user story above?  It’s easy:
=099  \\$a{099$a.replace("DATA","RESOURCE")} {830$v.trimend(".")}

And that would be it.  With this single pattern, we can run the replacement on the data in the 099$a and trim the data in the 830$v. 
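To illustrate what the commands do, here’s a behavioral sketch (Python, not MarcEdit’s actual parser – details like whether trim takes a set of characters or an exact string are my assumption):

```python
# Apply a stack of pattern commands to a field value, in order.
def apply_commands(value: str, commands):
    for name, args in commands:
        if name == "replace":
            value = value.replace(args[0], args[1])
        elif name == "trim":
            value = value.strip(args[0])
        elif name == "trimend":
            value = value.rstrip(args[0])
        elif name == "trimstart":
            value = value.lstrip(args[0])
        elif name == "substring":
            start, length = int(args[0]), int(args[1])
            value = value[start:start + length]
    return value

# {830$v.trimend(".")} on the example data:
print(apply_commands("12-031.", [("trimend", (".",))]))
# Stacked commands: a trim followed by a replace:
print(apply_commands(" ELECTRONIC DATA ", [("trim", (" ",)),
                                           ("replace", ("DATA", "RESOURCE"))]))
```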

Now, I realize that this syntax might not be the easiest for everyone right out of the gate, but as I said, I’m hoping it will be useful for folks interested in learning the new options – and I’m really excited to have this in my toolkit for answering questions posed on the listserv.

This has been implemented in all versions of MarcEdit, and will be part of this weekend’s update.


 Posted at 9:17 pm