Posts Tagged ‘FulfILLment development’

This week in FulfILLment…Patron Management and Patron Privacy!

Thursday, January 12th, 2012

Have you been wondering how FulfILLment will handle patron management and privacy? If so, you’ve come to the right place. Read on for an overview of both.

Patron Management in FulfILLment:

Did you know…?

  • FulfILLment can automatically create patron records on the fly when an ILL is initiated by a patron, or by a staff member on behalf of a patron, by pulling information directly from the patron’s home ILS.
  • Standard patron fields will include: names, home library, barcode, DOB, email address, 3 phone fields, unlimited physical addresses, profile, standing, barred, expiration, group, and statistical categories (stat cats). A statistical category is a reportable field in Evergreen Open Source ILS that allows staff to organize and gather information about sets of patrons and/or copies. Examples of commonly used patron statistical categories include residency, age, school district or department. Examples of commonly used copy statistical categories include funding, genre or reading level. (Thanks to Shae Tetterton, Project Manager, Equinox Software, Inc. for this description of stat cats!)
  • FulfILLment will support searching on all patron fields.
  • FulfILLment will prevent patrons from requesting material for which they already have an active ILL request.
  • FulfILLment can track patron fines, charges and overdues.
  • FulfILLment allows staff to create policies that determine and control patron eligibility. These policies are flexible down to specific libraries or specific item types.
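To picture how policy-driven eligibility like this might work, here is a minimal sketch in Python. This is purely illustrative: the names (`Policy`, `Patron`, `is_eligible`) and the rules shown are invented for the example and are not FulfILLment’s actual API; the duplicate-request and barred-patron checks mirror the bullets above.

```python
# Hypothetical sketch of policy-scoped patron eligibility. Names and
# fields are invented for illustration; not FulfILLment's actual API.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Policy:
    library: Optional[str] = None    # None means the policy applies to all libraries
    item_type: Optional[str] = None  # None means it applies to all item types
    max_active_requests: int = 10
    allow_barred: bool = False

@dataclass
class Patron:
    home_library: str
    barred: bool = False
    active_requests: List[str] = field(default_factory=list)  # titles with active ILLs

def is_eligible(patron, policy, library, item_type, title):
    # A policy only constrains requests that fall inside its scope.
    if policy.library and policy.library != library:
        return True  # out of scope; this policy does not block the request
    if policy.item_type and policy.item_type != item_type:
        return True
    if patron.barred and not policy.allow_barred:
        return False
    # Prevent duplicate requests for material already on active ILL.
    if title in patron.active_requests:
        return False
    return len(patron.active_requests) < policy.max_active_requests
```

In a real system the policy lookup and duplicate check would happen against the database, but the scoping idea (library- and item-type-specific rules) is the same.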

Patron Privacy in FulfILLment:

Did you know…?

  • FulfILLment can be set to remove patron information after ILL transactions are complete.
  • FulfILLment can be set to require patrons to confirm, at ILL initiation, that they consent to sharing their data with a foreign library.
  • FulfILLment can be set to retain as much or as little patron data as local policy requires.
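A configurable retention policy like the one described above could be pictured roughly as follows. This is an invented sketch, not FulfILLment’s implementation: the function name, the transaction structure, and the `retention_days` setting are all hypothetical.

```python
# Illustrative sketch: strip patron data from completed ILL transactions
# once a locally configured retention window has passed. All names here
# are hypothetical, not FulfILLment's actual schema or settings.
from datetime import datetime, timedelta

def purge_completed_transactions(transactions, retention_days, now=None):
    """Remove patron fields from transactions completed more than
    retention_days ago; retention_days=0 purges immediately on completion."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retention_days)
    for txn in transactions:
        if txn.get("completed_at") and txn["completed_at"] <= cutoff:
            txn["patron"] = None  # drop identifying data, keep the transaction record
    return transactions
```

The key design point is that the transaction itself survives for statistics and reporting; only the link to the patron is removed, and local policy controls when.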

Be sure to check back next week for another update. The next post will cover the exciting realm of Policies and Permissions.

Update on First and Second Quarter FulfILLment Development

Monday, August 2nd, 2010

On July 22, 2010, Mike Rylander, VP of Research and Design for Equinox Software, Inc. (ESI) presented a webinar on the development that has occurred during the second quarter of the FulfILLment project. This is a two year project with an anticipated completion date of December 2011. Mike Rylander is the Lead Developer on the project, with the assistance of Michael Smith, ESI Developer, and Suzannah Lipscomb, ESI Project Manager.

During the second quarter, Mike and Michael continued to build upon the foundation established in the first quarter. Development occurs, sometimes simultaneously, at three different levels: database (or bottom), business logic (or middle) and user interface (UI). Much like building a house, necessary infrastructure must be created and put in place so that desired functionality can later be built on top or extended.

Database and user interface development occurred during the first quarter. The database development covered three areas: record ownership, the Next Generation Discovery Interface (NGDI), and automated record ingest.

  • Record ownership: Fields were added to track record ownership. This will allow filtering of record visibility based on the institutions that own a title, regardless of item availability, which is unknown or not authoritative at search time. Later in the project, visibility checking of individual copies will be added. Infrastructure was also built to support the permissions required to edit bib records directly in FulfILLment and to use record ownership, rather than copy visibility, for search result filtering.
  • NGDI: The completed work covers the generalized query infrastructure and true faceting support. The replacement query infrastructure, required for most search-related features, was built, along with functionality to cache and deliver query-specific faceting data.
  • Automated record ingest: Rule sets were developed that will be applied to incoming records to facilitate normalization and data insertion. An in-database ingest of incoming and updated records will speed up and increase the flexibility of record caching, while automated pipelining will allow full and differential record additions to be ingested without human intervention.

UI development also occurred in the first quarter. The ILL requests (holds) list was developed, which will allow patrons to view the status of, and change the parameters of, ILL requests placed through the FulfILLment NGDI. Interface extensions were also built to support placing holds directly on metarecords from the main result screen.
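The ownership-based filtering described above can be sketched in miniature. This is a deliberate simplification with invented names; the real work lives at the database layer, but the idea is the same: filter search results by which institutions own a title, since item availability is not authoritative at search time.

```python
# Hypothetical sketch: filter search results by record ownership rather
# than copy visibility. Ownership is known at search time; per-copy
# availability is checked later in the request process. Names invented.
def visible_records(records, ownership, requesting_libraries):
    """Return the record IDs owned by any of the requesting libraries.
    `ownership` maps record_id -> set of owning library codes."""
    wanted = set(requesting_libraries)
    return [r for r in records if ownership.get(r, set()) & wanted]
```

In production this would be a query predicate over the ownership fields added in the first quarter, not an in-memory filter, but it shows why ownership data alone is enough to decide visibility.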

Development on the database in the second quarter continued to build on the first quarter’s work. For example, the developers worked on breaking up search logic in preparation for record-level visibility testing, which depends on the NGDI work completed in the first quarter. Development was also completed to cache item visibility information for non-authoritative record visibility display. This will result in faster searching because of the distinction between record ownership and copy location: caching based on record ownership, rather than only on items involved in ILL transactions, makes the caching process more efficient and the display process faster.

General testing of the Ruby Jangle Core implementation began, with Evergreen (EG) targeted as the downstream source; initial testing indicated that the Jangle Core is robust. Some progress on user caching was made with the EG prototype connector implementation of the Jangle Actor Application Programming Interfaces (APIs); approximately 30 to 40 percent of the Actor API work for the EG Connector has been completed. In preparation for work on the LAI Connectors, dataflow diagram and design work for the connectors was completed.

More work was completed on the NGDI, specifically infrastructure and UI widgets for skinning search/discovery interfaces. The automated record ingest development from the first quarter was streamlined by extending the rule sets and their application, while APIs for foreign record ingest were provided and extended. This API work allowed Mike, at the request of OhioNet, to upload approximately 90,000 metarecords and optimize them so that a variety of metarecords may be examined in detail. Further work on cache purging will allow item and patron data to be purged on a schedule, automatically expiring stale cached data.

In terms of UI work, the EG results screen is now stable, a result of the work on the ingest pipeline and how records are pulled in. Bibtemplate functionality was added, which provides a richer display and allows Google Book Browse to be turned on. Finally, developer documentation is in progress with more technical detail on the specifics of this development.
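The scheduled cache purging mentioned above can be pictured as a simple expiry sweep. This is an invented illustration, not the actual FulfILLment implementation: the cache shape and function name are hypothetical, and in practice this would run as a periodic job.

```python
# Illustrative sketch: expire stale cached item/patron data on a schedule.
# The cache layout and names are hypothetical, not FulfILLment's.
import time

def purge_stale_entries(cache, max_age_seconds, now=None):
    """Remove cache entries older than max_age_seconds.
    `cache` maps key -> (value, inserted_at_epoch_seconds)."""
    now = now if now is not None else time.time()
    stale = [k for k, (_, ts) in cache.items() if now - ts > max_age_seconds]
    for k in stale:
        del cache[k]
    return len(stale)  # number of entries expired
```

Run on a schedule, a sweep like this keeps non-authoritative cached data (item availability, patron details) from lingering past its useful life, which serves both performance and the privacy goals described in the earlier post.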