The ABCs of Digitizing Section A

I’m not sure anyone who currently works in the library has any idea when the phrase “Section A” was first coined as a call number for small manuscript collections. Before the library’s renovation, before we barcoded all our books and boxes — back when the Rubenstein was still RBMSCL, and our reading room carpet was a very bright blue — there was a range of boxes holding single-folder manuscript collections, arranged alphabetically by collection creator. And this range was called Section A.

Box 175 of Section A

Presumably there used to be a Section B, Section C, and so on — and it could be that the old shelf ranges were tracked this way, I’m not sure — but the only one that has persisted through all our subsequent stacks moves and barcoding projects has been Section A. Today there are about 3900 small collections held in 175 boxes that make up the Section A call number. We continue to add new single-folder collections to this call number, although thanks to the miracle of barcodes in the catalog, we no longer have to shift files to keep things in perfect alphabetical order.

The collections themselves have no relationship to one another except that they are all small. Each collection has a distinct provenance, and the range of topics and time periods is enormous — we have everything from the 17th to the 21st century filed in Section A boxes. Small manuscript collections can also contain a variety of formats: correspondence, writings, receipts, diaries or other volumes, accounts, some photographs, drawings, printed ephemera, and so on. The bang-for-your-buck ratio is pretty high in Section A: though small, the collections tend to be well-described, meaning that there are regular reproduction and reference requests. Section A is used so often that in 2016, Rubenstein Research Services staff approached Digital Collections to propose a mass digitization project, repurposing the existing catalog description into digital collections within our repository. This will allow remote researchers to browse all the collections easily, and also reduce repetitive reproduction requests.

This project has been met with both enthusiasm and trepidation from staff since last summer, when we began to develop a cross-departmental plan to appraise, enhance the description of, and digitize the 3900 small manuscript collections housed in Section A. It took us a bit of time, partially due to the migration and other pressing IT priorities, but this month we are celebrating a major milestone: we have finally launched our first two Section A collections, meant to serve as a proof of concept as well as a chance for us to firmly define the project’s goals and scope. Check them out: Abolitionist Speech, approximately 1850, and the A. Brouseau and Co. Records, 1864-1866. (Appropriately, we started by digitizing collections that begin with the letter A.)

A. Brouseau & Co. Records carpet receipts, 1865

Why has it been so complicated? First, the sheer number of collections is daunting; while there are plenty of digital collections with huge item counts already in the repository, they tend to come from a single archival collection or a small handful of them. Each newly digitized Section A collection will be a new collection in the repository, which has significant workflow repercussions for the Digital Collections team. There is no unifying thread for Section A collections, so we are not able to apply metadata in batch as we normally would for, say, outdoor advertising or women’s diaries. Rubenstein Research Services and Library Conservation Department staff have been going box by box through the collections (there are about 25 collections per box) to identify out-of-scope collections (typically reference material, not primary sources), preservation concerns, and copyright concerns. Collections flagged for any of these reasons are excluded from digitization.

Technical Services staff are also reviewing and editing the Section A collections’ description. This project has led us to enhance some of our oldest catalog records: updating titles, adding subject or name access, and upgrading the records to RDA, a relatively new descriptive standard. Using scripts and batch processes (details on GitHub), the refreshed MARC records are converted to EAD files for each collection, and the digitized folder is linked through ArchivesSpace, our collection management system. We crosswalk the catalog’s name and subject access data to both the finding aid and the repository’s metadata fields, allowing each collection to be discoverable through the Rubenstein finding aid portal, the Duke Libraries catalog, and the Duke Digital Repository.
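The actual conversion scripts live on GitHub; purely as an illustration of the general approach (and not that production code), here is a minimal Python sketch that pulls a title and access points out of a MARC record with pymarc and writes a bare-bones EAD skeleton. The file names, fields, and subfields shown are simplified assumptions.

```python
# A minimal, hypothetical sketch of the MARC-to-EAD step described above
# (not the production scripts linked from GitHub). It reads one catalog
# record with pymarc and writes a bare-bones EAD file with a title and
# subject/name access points; the field and subfield choices are simplified.
import xml.etree.ElementTree as ET
from pymarc import MARCReader

def marc_to_ead(marc_path, ead_path):
    with open(marc_path, 'rb') as fh:
        record = next(MARCReader(fh))          # one small collection per record

    ead = ET.Element('ead')
    archdesc = ET.SubElement(ead, 'archdesc', level='collection')
    did = ET.SubElement(archdesc, 'did')

    # Collection title from the 245 field (assumes the record has one).
    title = record['245']
    ET.SubElement(did, 'unittitle').text = ' '.join(
        title.get_subfields('a', 'b', 'f'))

    # Crosswalk name and subject headings into <controlaccess>.
    controlaccess = ET.SubElement(archdesc, 'controlaccess')
    for tag, element in (('600', 'persname'), ('610', 'corpname'),
                         ('650', 'subject'), ('651', 'geogname')):
        for field in record.get_fields(tag):
            ET.SubElement(controlaccess, element).text = ' -- '.join(
                field.get_subfields('a', 'x', 'y', 'z'))

    ET.ElementTree(ead).write(ead_path, encoding='utf-8', xml_declaration=True)

marc_to_ead('section_a_record.mrc', 'section_a_record.xml')
```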

It has been really exciting to see the first two collections go live, and there are many more already digitized and just waiting in the wings for us to automate some of our linking and publishing processes. Another future development that we expect will speed up the project is a batch ingest feature for collections entering the repository. With over 3000 collections to ingest, we are eager to streamline our processes and make things as efficient as possible. Stay tuned here for more updates on the Section A project, and keep an eye on Digital Collections if you’d like to explore some of these newly-digitized collections.

The Research Data Team: Hitting the Ground Running

There has been a lot of blogging over the last year about the Duke Digital Repository’s development and implementation, about its growth as a platform and a program, and about the creation of new positions to support research data management and curation. My fellow digital content analyst also recently posted about how we four new hires have been creating and refining our research data curation workflow since beginning our positions at Duke this past January. It’s obviously been (and continues to be) a very busy time here for the repository team at Duke Libraries, seasoned and new staff alike.

Besides the research data workflows between our two departments, what other things have the data management consultants and the digital content analysts been doing? In short, we’ve been busy!

 

In addition to envisioning stakeholder needs (an exercise we do continuously), we’ve received and ingested several data collections this year, which has also given us an opportunity to learn from experience. We have been tracking and documenting the types of data we’re receiving, the various needs that these types of data and their depositors have, how we approach those needs (including investigating and implementing any additional tools that may help us address them), how our repository displays the data and associated metadata, and the time spent on our management and curation tasks. Some of this documentation takes the form of spreadsheets, some of draft policies that will first be reviewed by the library’s research data working group and then by a program committee, and some of simple brain dumps for things that require a further, more structured investigation by developers, the metadata architect, subject librarians, and other stakeholders. These documents live in either our shared online folder or our shared Box account and, if a wider Duke Libraries and/or public audience is required, are moved to our departments’ content collaboration platforms (currently Confluence/Jira and Basecamp). The collaborative environments of these platforms support the dynamic nature of our work, particularly as our program takes form.

We also value face-to-face discussion, so we hold weekly meetings to talk through all of this work (we prefer to meet outside when the weather is nice, partly because squirrels are awesome).

One of the most exciting, and at times challenging, aspects of where we are is that we are essentially starting from the ground up and therefore able to develop procedures and features (and re-develop, and on and on again) until we find fits that best accommodate our users and their data. We rely heavily on each other’s knowledge about the research data field, and we also engage in periodic environmental scans of other institutions that offer data management and curation services.

When we began in January, we all considered the first 6-9 months as a “pilot phase”, though this description may not be accurate. In the minds of the data management consultants and the digital content analysts, we’re here and ready. Will we run into situations that require an adjustment to our procedures? Absolutely. It’s the nature of our work. Do we want feedback from the Duke community about how our services are (or are not) meeting their needs? Without a doubt. And will the DDR team continue to identify and implement features to better meet end-user needs? Certainly. We fully expect to adjust and readjust our tools and services, with the overall goal of fulfilling future needs before they’re even evident to our users. So, as always, keep watching to see how we grow!

A Tough Nut to Crack: Developing Digital Repositories

Folks, developing digital repositories is hard.  There are so many different layers of complexity built into the stack, compounded by the unique variety of end-users, or stakeholders, that we serve.

Consider the breadth of this work:

Starting at the bottom of the stack, you have our Preservation layer.  This is where we capture your bits, and ensure the long-term preservation of your digital assets.  But it goes well beyond just logging a single record in a database.  It involves capturing the data stream, writing that file and all associated files (metadata) to storage, replicating the data to various geographically dispersed servers, validating the ingest, logging the validation, ensuring successful recovery of replicated assets, and more.
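As a simplified illustration of just one of those tasks, validating replicated assets against recorded checksums, the Python sketch below hashes each local file and compares it to a previously stored manifest. The manifest format and file layout here are assumptions for the example, not our actual implementation.

```python
# Illustrative only: compare current file checksums against a stored manifest
# (a JSON map of filename -> sha256) and report anything that no longer matches.
import hashlib
import json
from pathlib import Path

def sha256(path, chunk_size=1024 * 1024):
    digest = hashlib.sha256()
    with open(path, 'rb') as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

def check_fixity(asset_dir, manifest_path):
    manifest = json.loads(Path(manifest_path).read_text())
    failures = []
    for name, recorded in manifest.items():
        current = sha256(Path(asset_dir) / name)
        if current != recorded:
            failures.append({'file': name, 'expected': recorded, 'found': current})
    return failures        # an empty list means every asset still matches its checksum
```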

All of that comes post-ingest.  I’ll not even belabor the complexities of data modeling and ingest here, but you get the idea… it’s hairy stuff.  Receiving and massaging a highly diverse body of data into a package appropriate for homogeneous ingest is a monumental effort in normalization.

Move up the stack into our Curation layer.  Currently we have a single administrative application that facilitates management and curatorial activities for our digital objects following ingest.  Roles and access controls can be managed here, in addition to various types of metadata (descriptive information about the item), and so on.  There are a variety of other applications managed at this layer, which interact with, and store, various values that fuel display and functionality within the user interface.  This layer is quickly evolving in a way that necessitates diversification.  We have found that a single monolithic application is not a one-size-fits-all solution for our stakeholders who are in the business of data production and curation; it is at this layer that we are getting increasingly more pressure to integrate and inter-operate with a myriad of other tools and platforms for resource and data management.  This is tricky business, as each of these tools handles data in different ways.

Finally, we have the Discovery layer.  The user interface.  This is what the public sees and consumes.  It’s where access to ingested materials occurs.  It is itself an application requiring significant custom development to meet the needs of various programs and collections of materials.  It is tightly coupled with the Curation layer, and therefore highly complex and customized to meet the needs of different focal areas.  Search functionality is yet another piece of complexity that requires maintenance and customization of a central index.  Nothing is OOTB (out of the box).  Everything requires configuration and customization.
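To make the search piece a little more concrete: discovery interfaces like this are commonly backed by a central Solr index, and the public application translates a user’s search into a fielded query against it. The sketch below is hypothetical, core name and field names included, and just shows the flavor of that translation using the pysolr client.

```python
# Hypothetical example of the kind of fielded query a discovery interface
# sends to its central Solr index; the core URL and field names are made up.
import pysolr

solr = pysolr.Solr('http://localhost:8983/solr/repository', timeout=10)

results = solr.search(
    'civil rights',                                             # the user's search terms
    **{
        'defType': 'edismax',
        'qf': 'title_tesim^10 description_tesim subject_tesim',  # boost title matches
        'fq': 'visibility_ssi:open',                              # only publicly visible objects
        'rows': 20,
    })

for doc in results:
    print(doc.get('id'), doc.get('title_tesim'))
```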

And ALL of this, all of it, is inter-related.  Highly coupled and complex.  Few things come as easy wins, and our work often challenges foundational assumptions laid down long before.  It’s an exercise in balancing technical debt and moving forward without re-inventing the wheel every six months.

What I have presented here is a simplistic view of our software eco-system.  It’s just a snapshot of the various puzzle pieces that support the operation of a production repository.  In general, digital repositories are still fairly new on the scene.  No one has them figured out entirely and everyone does them a little bit differently.  There’s a strength to that which manifests in diverse platforms and a breadth of development possibilities.  There’s a weakness to it because there is no cookie-cutter approach that defines an easy path forward.

So it’s an exercise in evolution.  In iteration.  In patience.  In requirements definition.  We’re not always going to get it right, and our efforts will take time and experimentation, but we’re constantly working to improve, to enhance, and to mature our repository platform to meet the growing and evolving needs of our University.

So, here’s to many years of hard work ahead!  And many successful collaborations with our Duke community to realize our repository’s future.  We’re ready if you are!

Multispectral Imaging: What’s it good for?

At the beginning of March, the multispectral imaging working group presented details about the imaging system and the group’s progress so far to other library staff at a First Wednesday event. Representatives from Conservation Services, Data and Visualization Services, the Digital Production Center, the Duke Collaboratory for Classics Computing (DC3), and the Rubenstein Library each shared their involvement and interest in the imaging technology. Our presentation attempted to answer some basic questions about how the equipment works and how it can be used to benefit the scholarly community. You can view a video of that presentation here.

Some of the images we have already shared illustrate a basic benefit or goal of spectral imaging for books and manuscripts: making obscured text visible. But what else can this technology tell us about the objects in our collections? As a library conservator, I am very interested in the ways that this technology can provide more information about the composition and condition of objects, as well as inform conservation treatment decisions and document their efficacy.

Conservators and conservation scientists have been using spectral imaging to help distinguish between and characterize materials for some time. For example, pigments, adhesives, or coatings may appear very different under ultraviolet or infrared radiation. Many labs have the equipment to image under a few wavelengths of light, but our new imaging system allows us to capture images at a much broader range of wavelengths and compare them in an image stack.

Adhesive samples under visible and UV light.
(Photo credit Art Conservation Department, SUNY Buffalo State)

Spectral imaging can help to identify the materials used to make or repair an object by the way those materials react under different light sources. Correct identification of components is important in making the best conservation treatment decisions and might also be used to establish the relative age of a particular object or to verify its authenticity. While spectral imaging offers the promise of a non-destructive tool for identification, it does have limitations, and other analytical techniques may be required.

Pigment and dye-based inks under visible and infrared light.
(Photo credit Image Permanence Institute)

Multispectral imaging offers new opportunities to evaluate and document the condition of objects within our collections. Previous repairs may be so well executed or intentionally obscured that the location or extent of the repair is not obvious under visible light. Areas of paper or parchment that have been replaced or have added reinforcements (such as linings) may appear different from the original when viewed under UV radiation. Spectral imaging can also provide better visual documentation of the degradation of inks (see image below) or of damage from mold or water that is not apparent under visible light.

Iron gall ink degradation.
(Jantz MS#124, Rubenstein Library)

This imaging equipment also provides opportunities to better measure the effectiveness of the treatments that conservators perform in-house. For example, a common treatment in our lab is the removal of pressure-sensitive tape repairs from paper documents using solvents. Spectral imaging before, during, and after treatment could document how effectively the solvents or other techniques employed remove the tape carrier and adhesive from the paper.

Tape removal before and during treatment under visible and UV light.
(Photo credit Art Conservation Department, SUNY Buffalo State)

Staff from the Conservation Services department have a long history of participating in the library’s digitization program in order to ensure the safety of fragile collection materials. Our department will continue to serve in this capacity for special collections objects undergoing multispectral imaging to answer specific research questions; however, we are also excited to employ this same technology to better care for the cultural heritage within our collections.

______

Want to learn even more about MSI at DUL?

 

Going with the Flow: building a research data curation workflow

Why research data? Data generated by scholars in the course of investigation are increasingly being recognized as outputs nearly equal in importance to the scholarly publications they support. Among other benefits, the open sharing of research data reinforces unfettered intellectual inquiry, fosters reproducibility and broader analysis, and permits the creation of new data sets when data from multiple sources are combined. Data sharing, though, starts with data curation.

In January of this year, Duke University Libraries brought on four new staff members–two Research Data Management Consultants and two Digital Content Analysts–to engage in this curatorial effort, and we have spent the last few months mapping out and refining a research data curation workflow to ensure best practices are applied to managing data before, during, and after ingest into the Duke Digital Repository.

What does this workflow entail? A high level overview of the process looks something like the following:

After collecting their data, the researcher will take what steps they can to prepare it for deposit. This generally means tasks like cleaning and de-identifying the data, arranging files in a structure expected by the system, and compiling documentation to ensure that the data is comprehensible to future researchers. The Research Data Management Consultants will be on hand to help guide these efforts and provide researchers with feedback about data management best practices as they prepare their materials.

Our form for metadata capture

Depositors will then be asked to complete a metadata form and electronically sign a deposit agreement defining the terms of deposit. After we receive this information, someone from our team will invite the depositor to transfer their files to us, usually through Box.

Consultant tasks

At this stage, the Research Data Management Consultants will begin a preliminary review of the researcher’s data: performing a cursory examination for personally identifying or protected health information, inspecting the researcher’s documentation for clarity and completeness, analyzing the submitted metadata for compliance with the research data application profile, and evaluating file formats for preservation suitability. If they have any concerns, they will contact the researcher with suggestions about ways to better align the deposit with best practices.
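One small piece of that review, evaluating file formats for preservation suitability, might look something like the toy sketch below; the list of preferred extensions is purely illustrative and not our actual policy.

```python
# Toy illustration: flag files whose extensions fall outside a (hypothetical)
# list of preservation-friendly formats so a consultant can follow up.
from pathlib import Path

PREFERRED_FORMATS = {'.csv', '.txt', '.pdf', '.tif', '.xml', '.json'}   # illustrative only

def flag_risky_formats(deposit_dir):
    return sorted(
        str(path) for path in Path(deposit_dir).rglob('*')
        if path.is_file() and path.suffix.lower() not in PREFERRED_FORMATS
    )

print(flag_risky_formats('incoming_deposit'))
```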

Analyst tasks

When the deposit is in good shape, the Research Data Management Consultants will notify the Digital Content Analysts, who will finalize the file arrangement and migrate some file formats, generate and normalize any necessary or missing metadata, ingest the files into the repository, and assign the deposit a DOI. After the ingest is complete, the Digital Content Analysts will carry out some quality assurance on the data to verify that the deposit was appropriately and coherently structured and that metadata has been correctly assigned. When this is confirmed, they will publish the data in the repository and notify the depositor.
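For the DOI step, a minimal sketch of what minting could look like against the DataCite REST API is below; the repository’s actual DOI integration may differ, and the payload fields and credentials shown are simplified assumptions.

```python
# Hypothetical sketch of registering a draft DOI for a newly ingested dataset
# via the DataCite REST API (not necessarily the repository's real integration).
import requests

DATACITE_URL = 'https://api.datacite.org/dois'   # use api.test.datacite.org while testing

def register_doi(prefix, title, creators, year, landing_url, auth):
    payload = {
        'data': {
            'type': 'dois',
            'attributes': {
                'prefix': prefix,
                'titles': [{'title': title}],
                'creators': [{'name': name} for name in creators],
                'publisher': 'Duke University Libraries',
                'publicationYear': year,
                'types': {'resourceTypeGeneral': 'Dataset'},
                'url': landing_url,
            },
        }
    }
    response = requests.post(
        DATACITE_URL, json=payload, auth=auth,
        headers={'Content-Type': 'application/vnd.api+json'})
    response.raise_for_status()
    return response.json()['data']['id']         # the newly minted DOI

# Example call with placeholder values (hypothetical prefix and credentials):
# doi = register_doi('10.xxxx', 'Example Research Dataset', ['Doe, Jane'],
#                    2017, 'https://repository.example.edu/datasets/123',
#                    auth=('REPOSITORY_ACCOUNT', 'PASSWORD'))
```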

Of course, this workflow isn’t a finished piece–we hope to continue to clarify and optimize the process as we develop relationships with researchers at Duke and receive more data. The Research Data Management Consultants in particular are enthusiastic about the opportunity to engage with scholars earlier in the research life cycle in order to help them better incorporate data curation standards in the beginning phases of their projects. All of us are looking forward to growing into our new roles, while helping to preserve Duke’s research output for some time to come.

Rethinking Repositories at CNI Spring ’17

One of the main areas of emphasis for the CNI Spring 2017 meeting was “new strategies and approaches for institutional repositories (IR).” A few of us at UNC and Duke decided to plug into the zeitgeist by proposing a panel to reflect on some of the ways that we have been rethinking – or even just thinking about – our repositories.


Photographing the Movement

Ella Baker, 1964, Danny Lyon, Memories of the Southern Civil Rights Movement 12, dektol.wordpress.com

You never know what to expect with the SNCC Digital Gateway Project. This three-and-a-half-year collaboration with veterans of the Civil Rights Movement, scholars, and archivists has brought constant surprises, one of which came this past January.

The SNCC Digital Gateway website went live on December 13, 2016, which would have been the 113th birthday of Ella Baker, one of the 20th century’s most influential activists, who brought the Student Nonviolent Coordinating Committee (SNCC) into being out of the student sit-in movement in 1960. The SNCC Digital Gateway tells the story of how young activists in SNCC united with local people in the Deep South to build a grassroots movement for change that empowered the Black community and transformed the nation.

Bob Dylan, Courtland Cox, Pete Seeger, and James Forman sitting outside the SNCC office in Greenwood, Mississippi, July 1963, Danny Lyon, dektol.wordpress.com

Over 175 SNCC staff, local activists, mentors, and allies are profiled on the site. The entire project is built on open communication and collaboration between movement veterans and scholars, so after the website went live, SNCC Legacy Project president Courtland Cox sent all living SNCC veterans a link to their profile and requested their feedback. And this is where the unexpected happened. Danny Lyon, SNCC’s first staff photographer, wrote back, offering the use of his photographs in the SNCC Digital Gateway.

Photograph of Danny Lyon with his Nikon F Reflex in Chicago, 1960, Danny Lyon, Magnum Photos / dektol.wordpress.com

In many ways, Danny Lyon’s photos are the iconic photos of SNCC. In the summer of 1962, Lyon, then a student at the University of Chicago, hitchhiked to Cairo, Illinois with his camera to photograph the desegregation movement that SNCC was helping to organize. After SNCC’s executive secretary James Forman brought Lyon onto staff, he spent the next two years documenting SNCC organizing work and demonstrations across the South. Many SNCC posters and pamphlets featured Lyon’s photographs, helping SNCC develop its public image and garner sympathy for the Movement. Julian Bond described Lyon’s photos as helping “to make the movement move.”

For Danny Lyon to offer these images to the SNCC Digital Gateway was something like winning the lottery. The ties between SNCC veterans run deep, and Lyon explained that he wanted to help. Due to the website’s sustainability standards, our sources for photographs have been limited. While the number of movement-related digital collections is growing, the vast majority are made up of documents, not images. Lyon agreed to give us digital copies of any of his movement photos for use on the site. These include rarely photographed people, like Prathia Hall, Worth Long, Euvester Simpson, and many others.

We spent two months working with Danny Lyon and the people at Magnum Photos to identify images, hammer out terms of use, and embed the photos in the site. Today, the SNCC Digital Gateway website proudly features over 70 of Danny Lyon’s photographs. Spend some time, and check them out. And thank you, Mr. Danny Lyon!

James Forman leads singing with staffers in the SNCC office on Raymond Street, 1963, Danny Lyon, Memories of the Southern Civil Rights Movement 123, dektol.wordpress.com

Nuts, Bolts, and Bits: Further Down the Preservation Path

It’s been a while since we last wrote about the preservation architecture underlying the repository in Preservation Architecture: Phase 2 – Moving Forward with Duke Digital Repository.

Iceberg. (Flickr user: pere)

We’ve made some terrific progress in the interim, but most of it is invisible to our users, not unlike our chilly friends, icebergs.

Let’s take a brief tour to surface some of these changes!

 

Policy and Procedure Development

The recently formed Digital Preservation Advisory Group has been working on policies and procedures to bring the DDR into compliance with the ISO 16363 Audit and Certification of Trustworthy Digital Repositories Minimum Criteria. We’ve been working through diverse policy areas: how embargoes may be set; how often fixity must be checked and reported to stakeholders; in what situations content may be removed and who must be involved in that decision; and what conditions necessitate a ‘tombstone’ to explain the removal of an object.  Some of these policies are internal and some have already been made publicly available; for example, see our Deaccession Policy and our Preservation Policy.  We’ve made great progress thanks to the fantastic example set by our friends at the Purdue University Research Repository and others.

Preservation Infrastructure

Duke, DuraCloud, and Glacier

Durham, North Carolina, is a lovely city: close to the mountains and the beach, and full of fantastic restaurants!  Sometimes, though, your digital assets just need to get away from it all.  Digital preservation demands some geographic diversity; no repository wants all of its data to be subject to the same hurricane, of course!  That’s why we’ve partnered with DuraCloud, a preservation-focused cloud provider, to store copies of our digital assets in geographically diverse locations.  Our data now enjoys homes at Duke, at DuraCloud, and in Amazon Glacier!

To bring transparency to the process of remotely replicating our assets and validating both the local and remote copies, we’ve recently implemented a process that externalizes these tasks from Fedora and delivers scheduled reports to stakeholders detailing the health of their assets.

 

Research and Development

The DDR has grown tremendously in the last year, and with it has grown the need to standardize and scale to demand.  Writing Python to arrange files to conform to our Standard Ingest Format was a perfectly reasonable solution in early 2016.  Likewise, programmatic reformatting of endangered file formats wasn’t feasible with the resources available at the time.  We also didn’t need to worry about traffic scaling back then.  Times have changed!
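That early arrangement script isn’t reproduced here, but a hypothetical sketch of the same kind of one-off Python, copying loose files into a data/ directory and writing a checksum manifest, gives a sense of what “perfectly reasonable in early 2016” looked like. The package layout and manifest name are assumptions for illustration, not the actual Standard Ingest Format specification.

```python
# Hypothetical one-off arrangement script: copy loose files into a data/
# directory and write a sha256 manifest alongside them. The layout shown is
# illustrative, not the real Standard Ingest Format.
import hashlib
import shutil
from pathlib import Path

def arrange_for_ingest(source_dir, package_dir):
    source, package = Path(source_dir), Path(package_dir)
    data_dir = package / 'data'
    data_dir.mkdir(parents=True, exist_ok=True)

    manifest_lines = []
    for item in sorted(source.iterdir()):
        if item.is_file():
            dest = data_dir / item.name
            shutil.copy2(item, dest)
            checksum = hashlib.sha256(dest.read_bytes()).hexdigest()
            manifest_lines.append(f'{checksum}  data/{item.name}')

    (package / 'manifest-sha256.txt').write_text('\n'.join(manifest_lines) + '\n')

arrange_for_ingest('loose_scans', 'ingest_package')
```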

DDR staff are exploring tools to allow non-developers to easily ingest large amounts of material, as well as methods to identify and migrate files to better-supported formats.  We are also planning for more sustainable and durable architecture, such as increased inter-application messaging, which will allow us to move processes that have been handled within the repository out to external servers.

Let’s Get Small: a tribute to the mighty microcassette

In past posts, I’ve paid homage to the audio ancestors with riffs on such endangered–some might say extinct–formats as DAT and Minidisc.  This week we turn our attention to the smallest (and perhaps the cutest) tape format of them all:  the Microcassette.

Introduced by the Olympus Corporation in 1969, the Microcassette used the same tape width (3.81 mm) as the more common Philips Compact Cassette but housed it in a much smaller and less robust plastic shell.  The Microcassette also spooled from right to left (opposite from the compact cassette) and used slower recording speeds of 2.4 and 1.2 cm/s.  The speed adjustment, allowing for longer uninterrupted recording times, could be toggled on the recorder itself.  For instance, the original MC60 Microcassette allowed for 30 minutes of recorded content per “side” at standard speed and 60 minutes per side at low speed.

The microcassette was mostly used for recording voice–e.g. lectures, interviews, and memos.  The thin tape (prone to stretching) and slow recording speeds made for a low-fidelity result that was perfectly adequate for the aforementioned applications, but not up to the task of capturing the wide dynamic and frequency range of music.  As a result, the microcassette was the go-to format for cheap, portable, hand-held recording in the days before the smartphone and digital recording.  It was standard to see a cluster of these around the lectern in a college classroom as late as the mid-1990s.  Many of the recorders featured voice-activated recording (to prevent capturing “dead air”) and continuously variable playback speed to make transcription easier.

The tiny tapes were also commonly used in telephone answering machines and dictation machines.

As you may have guessed, the rise of digital recording, handheld devices, and cheap data storage quickly relegated the microcassette to a museum piece by the early 21st century.  While the compact cassette has enjoyed a resurgence as a hip medium for underground music, the poor audio quality and durability of the microcassette have largely doomed it to oblivion except among the most willful obscurantists.  Still, many Rubenstein Library collections contain these little guys as carriers of valuable primary source material.  That means we’re holding onto our Microcassette player for the long haul in all of its atavistic glory.

Image by the author. Other images in this post taken from Wikimedia Commons (https://commons.wikimedia.org/wiki/Category:Microcassette).

 

Notes from the Duke University Libraries Digital Projects Team