Oct 27, 2009
 

Yesterday, I posted pictures of my family's first three jack-o-lanterns.  These were Kenny, Nathan, and Alyce's pumpkins – but I didn't have one.  So, I went to the store and picked up a new pumpkin so I could carve my own jack-o-lantern.  I had an idea of what I wanted – over the past two years, I've been interested in doing a Simpsons-themed pumpkin – but I've always been busy cutting the boys' pumpkins.  Well, this year, I finally got a chance to carve out my Devil Flanders pumpkin.  I think it turned out well – pictures below.

Devil Flanders in the light

 IMG_1585

Devil Flanders in the dark

IMG_1586

 

–TR

 Posted by at 10:56 pm
Oct 26, 2009
 

As is the habit here at my house, the boys and I finally got our jack-o-lanterns finished for this year.  Well, mostly finished.  I still need to make one for work – but I got the important ones finished. 

The boys really look forward to our pumpkin carving nights.  We spend a good deal of time planning and picking out just the right pumpkins.  This year, we picked our pumpkins from a local pumpkin patch – a wheelbarrow full.  Kenny picked the biggest, a 45 lb beast of a pumpkin, while Nathan picked three white pumpkins.  We had one more at home – so that's what we were carving tonight.

At the pumpkin patch, they had a lot of the fun stuff: a corn maze, hay rides, some hay pyramids, etc.  When we were in the corn maze, I tried to convince them that we had to hurry because of the corn zombies – but I don't think anyone believed me.  They were too cool for that.  Of course, there are some scary corn mazes around our area that I took people to while in college that reduced many a man to tears (which is funny) – but I doubt the boys are ready for that (or that I'm ready for staying up because of the nightmares).

Anyway, our new pumpkins.  We carved three designs from five pumpkins.  One is the Jolly Roger, one is Aang from Avatar: The Last Airbender, and three stacked together make up our jack-o-lantern pumpkin (this was what Nathan wanted).  So here they are:

IMG_1570

IMG_1571

 IMG_1572

 IMG_1573

 IMG_1574

 IMG_1575 IMG_1576

 

This one is Aang, the Last Airbender

 IMG_1577

This is the glowing Aang

IMG_1578 IMG_1580

IMG_1582

IMG_1583

 

–TR

 Posted by at 11:39 pm
Oct 20, 2009
 

I’m just about to the point where I have this work completed and will be ready to send it out to a few people for testing.  However, I want to provide some details so folks have an idea how this will work (even if you’re not that interested).

Paging:

The idea here is that loading the entire data file into an edit window is a big waste of resources and a performance killer.  So, rather than load all the data, we load small snippets of it, but allow users to search the entire file or page through it.  At this point, here’s what this looks like:

image

This is a sample using a 109 MB file.  Previously, this would have consumed over 450 MB of virtual memory to open, and editing would be limited.  Using the paging approach, memory allocation is down to 37 MB – essentially the memory allocated when the program opens (thanks to the need to initialize the .NET framework).

image

This is a big difference, and it shows.  But how exactly does this work, so that performance doesn’t suffer as you page through files?

Well, here’s the process when paging. 

  1. The user selects a file to open
  2. MarcEdit opens the file, and does the following preprocessing steps
    1. Is Preview mode selected –> If yes, open in Preview mode
    2. Is Preview mode turned off –> If yes, continue to paging
      1. Pull the configuration option that defines number of records per page (found on the preferences dialog)
      2. Pre-process the file.  Preprocessing does the following
        1. Determine number of records in the file
        2. Determine number of pages to display
        3. Create an internal memory map of the file, capturing a structure of start and end positions within the file for a set of pages.

 

The most important part of the paging process is the pre-processing that occurs on the file.  In order to do paging (at the record level), MarcEdit must read the file and determine how many records it contains.  This means that when you open a large file, there will be an initial pause while the file is pre-processed – but once this preprocessing is done, the program should not need to do it again unless the file is reloaded (through a global edit, etc.).  How long will it take?  This is hard to say.  The process that I use is fairly optimized, uses buffers, etc.  So, for example, on the 109 MB file example above, preprocessing took approximately 2 seconds.  I think that this is fair.  However, once the processing is done, each page, no matter where it sits in the file, should be addressable in under a second (or right at 1 second for allocation and render).  For my 109 MB test file, page rendering averages 0.7 seconds.  I’m happy with this.
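To make the pre-processing step concrete, here is a rough sketch (in Python, purely for illustration – MarcEdit itself is .NET, and the function name, buffer size, and default page size below are my assumptions, not MarcEdit’s actual code).  The idea is a single buffered pass over the file that records where each record ends (MARC21 records end with the 0x1D record terminator) and then groups those offsets into per-page byte ranges:

```python
RECORD_TERMINATOR = b"\x1d"   # MARC21 record terminator byte

def build_page_map(path, records_per_page=500, bufsize=64 * 1024):
    """One buffered pass over the file: note the byte offset just past
    every record terminator, then group records into (start, end) byte
    ranges, one tuple per page."""
    ends = []
    offset = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(bufsize)
            if not chunk:
                break
            i = chunk.find(RECORD_TERMINATOR)
            while i != -1:
                ends.append(offset + i + 1)
                i = chunk.find(RECORD_TERMINATOR, i + 1)
            offset += len(chunk)
    # len(ends) is the record count; group records into pages
    pages = []
    start = 0
    for n in range(0, len(ends), records_per_page):
        end = ends[min(n + records_per_page, len(ends)) - 1]
        pages.append((start, end))
        start = end
    return pages
```

With a map like this in hand, rendering any page is just a seek to `start` and a read of `end - start` bytes – which is why page position in the file shouldn’t matter much for render time.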

Saving/edits:

I knew when doing this that saving and handling edits on paged data would be one of the biggest issues with this method.  The primary reason is that in most cases, the obvious method would be to create a shadow copy (memory mapped file) of the original and save changes to it as the user paged through and made edits.  The problem with this approach is that, since we are dealing with records (not characters), each edit would need to be saved, re-preprocessed (because file positions would change), and then re-rendered.  When I attempted this approach on my 109 MB test file, paging jumped to nearly 6 seconds per page because of all the work being done to save and reprocess the file.  Obviously, that’s not acceptable.  So, I’ve decided to use a different approach.  Internally, I’ve added an enumerated structure that stores a page number and a file pointer.  As pages are changed, a temporary file is created that stores just that modified page.  As MarcEdit pages through the file, it checks the enumerator to see if a modified page exists before pulling it from the source.  This way, if you change page 1, then move to page 2 and go back to page 1, you’d see your changes – which would be pulled directly from the shadow buffer.  These temp files will be stored and will then be rectified when:

  1. The user saves a file
  2. The user completes a global edit function (because these always require a full save – even if it is to an internal shadow file).
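The page-shadowing scheme described above can be sketched roughly as follows.  This is an illustrative sketch only – the class and method names are my own, not MarcEdit’s internals – but it shows the key ideas: a map from page number to a temp file holding that page’s edits, a load path that checks the map before falling back to the source file, and a rectify step that merges everything on save.

```python
import os
import tempfile

class PageShadow:
    """Tracks edited pages in per-page temp files (names illustrative)."""

    def __init__(self, source_path, page_ranges):
        self.source_path = source_path
        self.page_ranges = page_ranges   # list of (start, end) byte offsets
        self._shadow = {}                # page number -> temp file path

    def save_page(self, page_no, data):
        """Store an edited page in its own temp file, replacing any
        earlier edit of the same page."""
        fd, path = tempfile.mkstemp(suffix=".page")
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        old = self._shadow.pop(page_no, None)
        if old:
            os.unlink(old)
        self._shadow[page_no] = path

    def load_page(self, page_no):
        """Check the shadow map first; fall back to the source file."""
        shadow = self._shadow.get(page_no)
        if shadow:
            with open(shadow, "rb") as f:
                return f.read()
        start, end = self.page_ranges[page_no]
        with open(self.source_path, "rb") as f:
            f.seek(start)
            return f.read(end - start)

    def rectify(self, out_path):
        """On save (or global edit), write every page in order,
        preferring shadow copies over the original source."""
        with open(out_path, "wb") as out:
            for page_no in range(len(self.page_ranges)):
                out.write(self.load_page(page_no))
```

The design point here is that an edit touches only one small temp file, so page navigation stays fast; the expensive full-file write is deferred to the two rectify cases listed above.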

Using this approach, paging isn’t affected by edits to pages, and saving appears to work fine. 

Anyway, that’s the approach that I’m working with right now.  As I say, I’m hoping to wrap up this work tonight/tomorrow, and assuming that happens, I’ll be posting a test version for those brave souls who want to give this a whirl and give me feedback.  While I’m at it, I’m going to add one more tool – a debugger switch which will allow you to capture a log file that stores variable states at critical moments.  This is something that I’ve been wanting – as it should help me when people ask for debugging help.

 

–TR

 Posted by at 1:23 pm
Oct 15, 2009
 

I asked this question on the MarcEdit Listserv, but will post it here as well.  Below is the message and images of the wireframes that are mentioned.  If you have an opinion – feel free to join the list and let me know, or if you like, you can contact me directly at: terry.reese@oregonstate.edu

 

******* Forwarded Message from the MarcEdit-L Archive **********

I have a question and I’m hoping that the collective wisdom of the MarcEdit-L list can help me solve it.  I’ve got an update for MarcEdit that I’ve been sitting on for about a month because I have a specific issue (usability mostly) that I’m trying to solve.  I have an idea how to do it, but it will change the way that you edit MARC records in the editor (at least, how they are displayed), and before I go forward, I wanted to quickly take the community’s pulse on this.

The problem

So let’s start with an explanation of the problem.  As folks who have worked with both MarcEdit 4.x and MarcEdit 5.x know, the Editor’s ability to load a lot of data is much different between the two.  In MarcEdit 4.x, the application utilized a custom edit control written in assembly for loading and editing records in the MarcEditor.  This allowed users to load very large files (150 MB or so) into the editor without a noticeable change in speed when adding new data to the editor, resizing windows, etc.  In MarcEdit 5.x, I made a conscious decision to utilize all .NET components to preserve the ability to port MarcEdit to the Linux and Mac platforms (Linux will be officially completed at the next release, btw) – however, this had implications for the editor in two ways: 1) loading rich content into the editor has a much higher memory cost, and 2) this higher memory cost has a definite effect on performance (loading and editing).  This is why I introduced the preview mode – a read-only mode that allows users to load a snippet of the file and then make their global edits.  For my usage of MarcEdit, this worked beautifully – but I’m finding that a number of users have workflows that require them to load the entire file and perform single-record edits, which is, I’ll admit, painful once files get close to 8–10 MB in size – changes in the editing window are often made with a delay (i.e., you type a word, there’s a pause, then the data catches up).  This also affects screen resizing, etc.  Tied to this problem are the various character encodings that MarcEdit supports (it goes beyond MARC8 and UTF8).  These also cause issues with memory usage depending on the encoding in use – and honestly, this is one of the big reasons for the move away from the assembly components in MarcEdit 4.x: that component simply didn’t do Unicode well, and Unicode is the future of MARC.  The current component in MarcEdit does Unicode very well, but certain scripts give Windows fits rendering them (performance-wise) – so it’s a problem, and one that I’d like to solve.

Solutions

Anyway, that’s the problem I’m looking to solve.  I’m looking for a solution that will allow users to make individual record changes on large datasets within the MarcEditor, and do so in a way that allows the editor to gracefully handle memory management and performance.  The present solution, the one that is completely untenable, is to load all the data into an edit control.  On my test machines, I can load files up to ~150 MB in size into the control (your mileage will vary due to virtual memory restrictions and available RAM), but it comes at a huge cost.  In Windows (and virtual languages like .NET especially), rendering content virtually is expensive.  Memory consumed is roughly 4x the source – so rendering 150 MB of data costs my system ~600 MB of virtual RAM.  Painful, and performance shows it.  This is why the preview mode is there.  But let’s say you are dealing with a smaller dataset, something in the 8–10 MB range.  You are still consuming close to 40 MB to render the data – and performance can suffer depending on hardware and memory available.  If you need to make individual record changes on a batch in that size range, making those changes may be frustrating, as you may indeed have to deal with a delay entering data while the system re-buffers available memory to handle the work.  I’m pretty sure that everyone who’s had this happen agrees that this needs to change (I’ve heard from 3 people recently who have been experiencing this problem and are trying to figure out how to make it work within existing workflows), and I’m sure there are others who have not spoken up, or who may still use MarcEdit 4.x for very specific tasks simply because its handling of larger files for individual record editing was better (which is fair, but becomes less and less of a reliable solution as more data becomes available in UTF8).

So I’ve been thinking about this a lot over the past month, writing some test code and developing some wireframes, and I want to present some options and get some feedback.  Essentially, there are two ways that I think I can deal with this issue.  One is to essentially provide real-time random access to large files [not preferred], so that the only data loaded into the editor would be the data available within the memory buffer.  This would likely be the ideal solution, but it is also the most difficult to write, simply because all data would need to be mapped to temporary buffers, tracked, etc.  Also, when dealing with really large files, the random access will not be immediate, meaning that as you move further down the file, paging down may become more labored.  The benefit, however, is that the memory footprint would be much, much lower, so performance for general, individual record editing should improve greatly.  It also would most closely resemble the way that MarcEdit currently provides editing within the MarcEditor.  All data would appear to be loaded in a Notepad-like interface – you’d page down and scroll down just as you do now.  I’m not sure how this would affect Find and Replace – but I’m sure we could make it work. 

And while the above may be the more ideal, it’s not the one that I’m leaning towards (hence this message).  I’ve been thinking a lot about how MARC records are represented in MarcEdit, how they are edited, etc., and I’m beginning to believe that when working with a large set of MARC records, the best solution wouldn’t be to simply provide a complete picture of all loaded records, but to display groups of records, with the ability to page through a recordset.  I’ve attached some wireframes to illustrate this point in the attached PowerPoint.  In slide 1, I’ve provided a demo of how I think the editing may look (ignore the menus and icons – these are just part of my test code).  Essentially, users would define how many records they want to display per “page”.  I’m thinking that the sweet spot would likely be about 500 – but I’d make this user-defined.  MarcEdit can then, very quickly, determine how many records are in the file and break the record set up into pages.  MarcEdit would then only load one page of records at a time.  This gives users the ability to quickly do individual edits of records, reduces the memory footprint, and greatly improves the overall experience of using large data files.  It also takes system memory limitations completely out of the equation, as only a small block of records is displayed at any given time.

Using this system would also let me rethink how we do finds within a recordset.  At present, when you use the find tool, MarcEdit has to enumerate over the entire record set, and this is, for all intents and purposes, a very memory-intensive operation.  Slow, too, if you have a lot of records.  In this new model, I’d add a new button to the Find dialog – Find All (see slide 2).  When Find All is used, what would be generated is a report of all occurrences of the needle found within the record set.  The report would show the criteria in context, with the ability to jump to the specific page where the text was found.  Personally, I think that this could be a big improvement over the current find, as users would immediately be able to see all the cases in which a criterion exists without having to jump through the entire file.  Additionally, this type of design would allow me to start rethinking the MarcEditor itself, so that record set editing could be done with pages (so you could, for example, spawn a new page within a new MarcEditor tab so pages could be compared [see slide 3]).  I think that this type of design could eventually lead to some fairly interesting enhancements – but I also recognize that it will be different.  It represents a different way to view and edit records in MarcEdit – though this change really only affects how you edit records individually (since global editing is done differently). 
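A Find All report over paged records might look something like the following sketch.  This is purely illustrative – the function name and the shape of the results are my own invention, not MarcEdit’s API – but it shows the basic shape: scan each page for the needle and return (page number, snippet-in-context) pairs, so the report can offer a jump straight to the page containing each match.

```python
def find_all(pages, needle, context=20):
    """pages: list of page strings (one string per rendered page).
    Returns (page_no, snippet) tuples, where snippet shows the match
    with up to `context` characters on either side."""
    hits = []
    for page_no, text in enumerate(pages):
        start = 0
        while True:
            i = text.find(needle, start)
            if i == -1:
                break
            lo = max(0, i - context)
            hi = min(len(text), i + len(needle) + context)
            hits.append((page_no, text[lo:hi]))
            start = i + len(needle)  # continue past this occurrence
    return hits
```

Because each page is searched independently, the scan never needs more than one page of text in memory at a time – which is the same property that makes the paged editor itself cheap.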

Finally, implementation – if I move down the above path, I can integrate the current test code into the existing MarcEdit application with little work.  I could wrap up my update and not have to really worry about introducing regression errors.  If I try to implement the first solution, all bets are off in terms of when it would be done.  It would represent a major change to how data is handled within the program, and I’d have to step back, re-write a lot of code, and then find some willing users to try it, because there would be a significant chance of regression errors.

Anyway, that’s my idea.  I think it addresses a known weakness in the program, makes individual record editing better, and does so without causing too much interruption to the user.  And, if successful, it may allow me to slowly remove the preview mode from the MarcEditor, as it would no longer be needed.

How can you help

If you stayed with me this long and looked at the wireframes, you are probably wondering how you can help.  Well, I’m looking for comments and ideas on this.  MarcEdit is a very community-oriented project.  I’d say that over 90% of the work that goes into the program is done at the community’s request.  This is an issue that I know has been raised by members of the user community, and I really want to make the community involved in the decision.  I’m definitely open to other suggestions, including suggestions for how to tweak the wireframes (since I recognize that there are many places where usability could be improved) – but that’s kind of where I’m at right now. 

Thanks everyone who made it this far,

–TR

********************************
Terry Reese
Gray Family Chair
for Innovative Library Services
121 Valley Libraries
Corvallis, OR 97331
tel: 541.737.6384
********************************

 

Wireframes:

 

Slide 1

 

Slide1

 

 

 

Slide 2

 Slide2

 

 

Slide 3

 

Slide3

 Posted by at 2:20 pm

MarcEdit Listserv

Oct 15, 2009
 

So, the good folks at George Mason University have offered to host a MarcEdit Listserv.  If you are interested, you can find it here: http://www.lsoft.com/scripts/wl.exe?SL1=MARCEDIT-L&H=MAIL04.GMU.EDU

This list would be a great place for folks looking to ask questions.  I’m one of many moderators of the list, so if a question is asked, I (or someone) will try to answer it.  What I’m most excited about is that this will create a searchable archive for folks looking for help.

–TR

 Posted by at 2:10 pm