Archive for August, 2010

Enter the Spork

Monday, August 23rd, 2010

Over the last eight months I have been actively working for Equinox, under contract from OHIONET, on a new project called FulfILLment. Its goal is to create a hybrid physical/virtual union catalog and ILL system for seamlessly sharing resources between libraries, regardless of the ILS each library happens to use.

The thinking behind FulfILLment is simple — take the power and scalability of the Evergreen circulation environment, where we have nearly full a priori knowledge of global system state and strong algorithms to help get items to patrons, and project that up to an ILL environment which, heretofore, has typically had little global state information.

Evergreen and FulfILLment have been, at the code level, the same project thus far. Many of the recent improvements to Evergreen that I’ve been involved with can be credited, partially if not completely, to work on FulfILLment, including in-db ingest and import rulesets, search speed enhancements, true facets, and new features in BibTemplate. This symbiotic relationship will, of course, continue, because much of what the two systems do is very similar at a high level.

Even accepting that Evergreen and FulfILLment will facilitate similar ends at the institutions that use them — specifically, getting items into the hands of users — and will share a great deal of internal code and structure, we’ve now reached a point where the details of many of the common goals of the two have been tackled. And so, on August 2, 2010, Evergreen grew a spork.

FulfILLment now has its own identity and will rise or fall in its own Subversion repository, on its own server, with its own mailing lists and (though I hope there will be a lot of crossover) its own community.

It’s not a f-f-f … f-f-f … you know, that f-word, because FulfILLment will not compete with Evergreen. They will serve different purposes and constituencies, and there will always be things one can do that the other cannot. And, they will feed (on) each other, both in terms of specific code and conceptual design, moving forward. FulfILLment is, in the best possible sense of the term, a derivative project based on Evergreen.

So that’s the code part, but Open Source is about the community, right? This is an open call to all: jump right in! Grab the code (not much different than trunk Evergreen today, but that will be changing fast), join the mailing lists (not much traffic, but if you join then that can change!), hop in the IRC channel (#fulfillment on FreeNode). Dip your toe in, ask questions. This should be a fun ride — it was the first time around with Evergreen — and the more the merrier.


Update on First and Second Quarter FulfILLment Development

Monday, August 2nd, 2010

On July 22, 2010, Mike Rylander, VP of Research and Design for Equinox Software, Inc. (ESI) presented a webinar on the development that has occurred during the second quarter of the FulfILLment project. This is a two year project with an anticipated completion date of December 2011. Mike Rylander is the Lead Developer on the project, with the assistance of Michael Smith, ESI Developer, and Suzannah Lipscomb, ESI Project Manager.

During the second quarter, Mike and Michael continued to build upon the foundation established in the first quarter. Development occurs, sometimes simultaneously, at three different levels: database (or bottom), business logic (or middle) and user interface (UI). Much like building a house, necessary infrastructure must be created and put in place so that desired functionality can later be built on top or extended.

Database and user interface development occurred during the first quarter. The database work covered three areas: record ownership, the Next Generation Discovery Interface (NGDI), and automated record ingest.

For record ownership, fields were added to track which institutions own a title. This will allow record visibility to be filtered by owning institution, regardless of item availability, which is unknown or non-authoritative at search time; visibility checking of individual copies will come later in the process. Infrastructure was also built for the permissions required to edit bib records directly in FulfILLment, and for using record ownership instead of copy visibility when filtering search results.

The completed NGDI work covers the generalized query infrastructure and true faceting support. The replacement query infrastructure, required for most search-related features, was built, as was the functionality to cache and deliver query-specific faceting data.

For automated record ingest, rule sets were developed that will be applied to incoming records to facilitate normalization and data insertion. Additionally, an in-database ingest of incoming and updated records will speed up record caching and make it more flexible, while automated pipelining will allow full and differential record additions to be ingested without human intervention.

UI development also occurred in the first quarter. An ILL requests (holds) list was developed that will allow patrons to view the status of, and change parameters on, ILL requests (holds) placed through the FulfILLment NGDI. Interface extensions were also built to support placing holds directly on metarecords from the main result screen.
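The ownership-based filtering described above can be illustrated with a minimal sketch. Everything here is hypothetical: the `RECORD_OWNERS` mapping and `visible_records` function are invented names standing in for FulfILLment's actual schema and APIs, and a real implementation would do this inside the database rather than in application code.

```python
# Hypothetical sketch of search-result filtering by record ownership
# rather than copy visibility. All names are illustrative only.

# Each bib record lists the institutions that own at least one copy.
# Copy availability is deliberately ignored, because it is unknown or
# non-authoritative at search time.
RECORD_OWNERS = {
    101: {"LIB_A", "LIB_B"},
    102: {"LIB_C"},
    103: set(),  # a record with no recorded ownership is never visible
}

def visible_records(search_hits, patron_scope):
    """Keep only hits owned by an institution in the patron's scope."""
    return [
        rec_id for rec_id in search_hits
        if RECORD_OWNERS.get(rec_id, set()) & patron_scope
    ]

# A patron scoped to LIB_A sees only record 101 from these hits.
hits = visible_records([101, 102, 103], {"LIB_A"})
```

The point of the design is that this filter needs only ownership data, which is known a priori, so search results can be trimmed without consulting live copy status at every remote ILS.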

Development on the database in the second quarter continued to build on the first quarter's work. For example, the developers worked on breaking up search logic in preparation for record-level visibility testing, which depends on the NGDI work completed in the first quarter. Development was also completed that will allow item visibility information to be cached for non-authoritative record visibility display. This will make searching faster because of the distinction between record ownership and copy location: caching based on record ownership, rather than only on items involved in ILL transactions, makes the caching process more efficient and the display process faster.

General testing of the Ruby Jangle Core implementation began, with Evergreen (EG) targeted as the downstream source; initial results indicated that the Jangle core is robust. Some progress on user caching was made through the EG prototype connector's implementation of the Jangle Actor Application Programming Interfaces (APIs); approximately 30 to 40 percent of that actor API work is complete, and work on the EG Connector has begun. In preparation for the upcoming work on the LAI Connectors, dataflow diagram and design work for the connectors was completed.

More work was completed on NGDI, specifically infrastructure and UI widgets for skinning the search/discovery interfaces. The automated record ingest development from the first quarter was streamlined by extending the rule sets and their application, while providing and extending APIs for foreign record ingest. The API work allowed Mike, at the request of OHIONET, to upload approximately 90,000 metarecords and optimize them so that a variety of metarecords may be examined in detail. Finally, more work was completed on cache purging, allowing item and patron data to be purged on a schedule so that stale cached data expires automatically.
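The scheduled cache purging mentioned above can be sketched as a simple timestamp-based expiry. This is an assumption-laden illustration, not FulfILLment's actual mechanism: the one-hour TTL, the cache layout, and the `purge_stale` function are all invented for the example.

```python
import time

# Hypothetical sketch of scheduled cache purging: cached item and
# patron data older than a TTL is expired automatically, so stale
# entries never linger. All names and values are illustrative only.
CACHE_TTL_SECONDS = 3600  # assumed one-hour freshness window

def purge_stale(cache, now=None, ttl=CACHE_TTL_SECONDS):
    """Remove entries whose cached_at timestamp is older than ttl."""
    now = time.time() if now is None else now
    stale = [key for key, entry in cache.items()
             if now - entry["cached_at"] > ttl]
    for key in stale:
        del cache[key]
    return stale  # report what was expired, e.g. for logging

cache = {
    "item:1":   {"cached_at": 0,    "data": "old item record"},
    "patron:7": {"cached_at": 5000, "data": "recent patron record"},
}
purge_stale(cache, now=5400)  # expires item:1, keeps patron:7
```

Running a function like this from a scheduler (cron, for instance) is one straightforward way to get the "automatic expiry of stale, cached data" behavior the update describes.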
In terms of UI work, the EG results screen has now been stabilized, a result of the work on the ingest pipeline and on how records are pulled in. BibTemplate functionality was added that provides a richer display and allows Google Book Browse to be turned on. Finally, developer documentation covering the specifics of this development in greater technical detail is in progress.