Respectfully Yours: A Deep Dive into Digitizing the Booker T. Washington Collection

Post authored by Jen Jordan, Digital Collections Intern.

Hello, readers. This marks my third and final blog post as the Digital Collections intern, a position I began in June of last year.* Over the course of this internship I have been fortunate to gain experience in nearly every step of the digitization and digital collections processes. One of the things I’ve come to appreciate most about the workflows I’ve learned is how well they accommodate the variety of collection materials that pass through. When unique cases arise, there is space to consider them. I’d like to describe one such case, involving a remarkable collection.

Cheyne, C.E. “Booker T. Washington sitting and holding books,” 1903. 2 photographs on 1 mount : gelatin silver print ; sheets 14 x 10 cm. In Washington, D.C., Library of Congress Prints and Photographs Division.

In early October I arrived to work in the Digital Production Center (DPC) and was excited to see that the Booker T. Washington correspondence, 1903-1916, 1933 and undated, was next up in the queue for digitization. The collection is small, containing mostly letters exchanged between Washington, W. E. B. DuBois, and a host of other prominent leaders in the Black community during the early 1900s. A 2003 article published in Duke Magazine shortly after the Washington collection was donated to the John Hope Franklin Research Center provides a summary of the collection and the events it covers.

Arranged chronologically, the papers were stacked neatly in a small box, each letter sealed in a protective sleeve, presumably after undergoing extensive conservation treatments to remediate water and mildew damage. As I scanned the pages, I made a note to learn more about the relationship between Washington and DuBois, as well as the events the collection is centered around—the Carnegie Hall Conference and the formation of the short-lived Committee of Twelve for the Advancement of the Interests of the Negro Race. When I did follow up, I was surprised to find that remarkably little has been written about either.

As I’ve mentioned before, there is little time to actually look at materials when we scan them, but the process can reveal broad themes and tone. Many of the names in the letters were unfamiliar to me, but I observed extensive discussion between DuBois and Washington regarding who would be invited to the conference and included in the Committee of Twelve. I later learned that this collection documents what would be the final attempt at collaboration between DuBois and Washington.

Washington to Browne, 21 July 1904, South Weymouth, Massachusetts

Once scanned, the digital surrogates pass through several stages in the DPC before they are prepared for ingest into the Duke Digital Repository (DDR); you can read a comprehensive overview of the DPC digitization workflow here. Fulfilling patron requests is the top priority, so after patrons receive the requested materials, it might be some time before the files are submitted for ingest to the DDR. Because of this, I was fortunate to be on the receiving end of the BTW collection in late January. By then I was gaining experience in the actual creation of digital collections—basically everything that happens with the files once the DPC signals that they are ready to move into long-term storage.

There are a few different ways that new digital collections are created. Thus far, most of my experience has been with the files produced through patron requests handled by the DPC. These tend to be smaller in size and have a simple file structure. The files are migrated into the DDR, into either a new or existing collection, after which file counts are checked, and identifiers assigned. The collection is then reviewed by one of a few different folks with RL Technical Services. Noah Huffman conducted the review in this case, after which he asked if we might consider itemizing the collection, given the letter-level descriptive metadata available in the collection guide. 

I’d like to pause for a moment to discuss the tricky nature of “itemness,” and how the meaning can shift between RL and DCCS. If you reference the collection guide linked in the second paragraph, you will see that the BTW collection received item-level description during processing—with each letter constituting an item in the collection. The physical arrangement of the papers does not reflect the itemized intellectual arrangement, as the letters are grouped together in the box they are housed in. When fulfilling patron reproduction requests, itemness is generally dictated by physical arrangement, in what is called the folder-level model; materials housed together are treated as a single unit. So in this case, because the letters were grouped together inside of the box, the box was treated as the folder, or item. If, however, each letter in the box were housed within its own folder, then each folder would be considered an item. To be clear, the papers were housed according to best practices; my intent is simply to describe how the processes between the two departments sometimes diverge.

Processing archival collections is labor intensive, so it’s increasingly uncommon to see item-level description. Collections can sit unprocessed in “backlog” for many years, and though the depth of that backlog varies by institution, even well-resourced archives confront the problem of backlog. Enter: More Product, Less Process (MPLP), introduced by Mark Greene and Dennis Meissner in a 2005 article as a means to address the growing problem. They called on archivists to prioritize access over meticulous arrangement and description.  

The spirit of folder-level digitization is quite similar to MPLP, as it enables the DPC to provide access to a broader selection of collection materials digitized through patron requests, and it also simplifies the process of putting the materials online for public access. Most of the time, the DPC’s approach to itemness aligns closely with the level of description given during processing of the collection, but the inevitable variance found between archival collections requires a degree of flexibility from those working to provide access to them. Numerous examples of digital collections that received item-level description can be found in the DDR, but those are generally tied to planned efforts to digitize specific collections. 

Because the BTW collection was digitized as an item, the digital files were grouped together in a single folder, which translated to a single landing page in the DDR’s public user interface. Itemizing the collection would give each item/letter its own landing page, with the potential to add unique metadata. Similarly, when users navigate the RL collection guide, embedded digital surrogates appear for each item. A moment ago I described the utility of More Product, Less Process. There are times, however, when it seems right to do more. Given the research value of this collection, as well as its relatively small size, the decision to proceed with itemization was unanimous.

Itemizing the collection was fairly straightforward. Noah shared a spreadsheet with metadata from the collection guide. There were 108 items, each item’s title identifying the letter’s sender and recipient, along with the location and date it was sent. Given the collection’s chronological physical arrangement, it was fairly simple to work through the files and assign them to new folders. Once that was finished, I selected additional descriptive metadata terms to add to the spreadsheet, in accordance with the DDR Metadata Application Profile. Because there was a known sender and recipient for almost every letter, my goal was to identify any additional name authority records not included in the collection guide. This would provide an additional access point by which to navigate the collection. It would also help me identify death dates for the creators, which helps determine copyright status. I think the added time and effort was well worth it.
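Because the scans and the collection guide share the same chronological order, the file-to-item mapping can be dealt out sequentially. Here is a minimal Python sketch of that idea; all of the file names, column names, and page counts below are invented for illustration, not the collection’s actual data:

```python
# Sketch: deal sequentially numbered scan files out to per-item folders,
# consuming each item's page count in spreadsheet (chronological) order.
import csv
import io
import os
import shutil
import tempfile

# Stand-in for the metadata spreadsheet exported from the collection guide.
metadata_csv = io.StringIO(
    "item_id,title,pages\n"
    "item_001,Washington to DuBois 12 Feb 1904,2\n"
    "item_002,DuBois to Washington 27 Feb 1904,3\n"
)

workdir = tempfile.mkdtemp()
# Simulate the flat folder of scans produced by folder-level digitization.
scans = [f"btw_{n:04d}.tif" for n in range(1, 6)]  # 5 page images total
for name in scans:
    open(os.path.join(workdir, name), "w").close()

assignments = {}
cursor = 0
for row in csv.DictReader(metadata_csv):
    item_dir = os.path.join(workdir, row["item_id"])
    os.makedirs(item_dir, exist_ok=True)
    # Move this item's pages into its new folder.
    for _ in range(int(row["pages"])):
        shutil.move(os.path.join(workdir, scans[cursor]), item_dir)
        assignments.setdefault(row["item_id"], []).append(scans[cursor])
        cursor += 1

print(assignments)
```

In practice the page counts and identifiers would come from the real spreadsheet, and the result would be spot-checked against the collection guide before ingest.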

This isn’t the space for analysis, but I do hope you’re inspired to spend some time with this fascinating collection. Primary source materials offer an important path to understanding history, and this particular collection captures the planning and aftermath of an event that hasn’t received much analysis. There is more coverage of what came after; Washington and DuBois parted ways, after which DuBois became a founding member of the Niagara Movement. Though also short-lived, it is considered a precursor to the NAACP, which many members of the Niagara Movement would go on to join. A significant portion of W. E. B. DuBois’s correspondence has been digitized and made available to view through UMass Amherst. It contains many additional letters concerning the Carnegie Conference and Committee of Twelve, offering additional context and perspective, particularly in letters that were surely never intended for Washington’s eyes. What I found most fascinating, though, was the evidence of less public (and less adversarial) collaboration between the two men.

The additional review and research required by the itemization and metadata creation was such a fascinating and valuable experience. This is true on a professional level as it offered the opportunity to do something new, but I also felt moved to try to understand more about the cast of characters who appear in this important collection. That endeavor extended far beyond the hours of my internship, and I found myself wondering if this was what the obsessive pursuit of a historian’s work is like. In any case, I am grateful to have learned more, and also reminded that there is so much more work to do.

Click here to view the Booker T. Washington correspondence in the Duke Digital Repository.

*Indeed, this marks my final post in this role, as my internship concludes at the end of April, after which I will move on to a permanent position. Happily, I won’t be going far, as I’ve been selected to remain with DCCS as one of the next Repository Services Analysts!    

Sources

Cheyne, C.E. “Booker T. Washington sitting and holding books,” 1903. 2 photographs on 1 mount : gelatin silver print ; sheets 14 x 10 cm. In Washington, D.C., Library of Congress Prints and Photographs Division. Accessed April 5, 2022. https://www.loc.gov/pictures/item/2004672766/

 

We are Hiring: 2 Repository Services Analysts

Duke University Libraries (DUL) is recruiting two (2) Repository Services Analysts to ingest and help collaboratively manage content in their digital preservation systems and platforms. The analysts will partner with the Research Data Curation, Digital Collections, and Scholarly Communications Programs, as well as other library and campus partners, to provide digital curation and preservation services. The Repository Services Analyst role is an excellent early career opportunity for anyone who enjoys managing large sets of data and/or files, working with colleagues across an organization, preserving unique data and library collections, and learning new skills.

DUL will hold an open Zoom session where prospective candidates can join anonymously and ask questions. This session will take place on Wednesday, January 12 at 1 p.m. EST; the link to join is posted on the libraries’ job advertisement.

The Research Data Curation Program has grown significantly in recent years, and DUL is seeking candidates who want to grow their skills in this area. DUL is a member of the Data Curation Network (DCN), which provides opportunities for cross-institutional collaboration, curation training, and hands-on data curation practice. These skills are essential for anyone who wants to pursue a career in research data curation. 

Ideal Repository Services Analyst applicants have been exposed to digital asset management tools and techniques such as command line scripting. They can communicate functional system requirements between groups with varying types of expertise, enjoy working with varied kinds of data and collections, and love solving problems. Applicants should also be comfortable collaboratively managing a shared portfolio of digital curation services and projects, as the two positions work closely together. The successful candidates will join the Digital Collections and Curation Services department (within the Digital Strategies and Technology Division).

Please refer to the DUL’s job posting for position requirements and application instructions.

Wars of Aliens, Men, and Women: or, Some Things we Digitized in the DPC this Year

Post authored by Jen Jordan, Digital Collections Intern. 

As another strange year nears its end, I’m going out on a limb to assume that I’m not the only one around here challenged by a lack of focus. With that in mind, I’m going to keep things relatively light (or relatively unfocused) and take you readers on a short tour of items that have passed through the Digital Production Center (DPC) this year. 

Shortly before the arrival of COVID-19, the DPC implemented a folder-level model for digitization. This model was not developed in anticipation of a life-altering pandemic, but it was well-suited to meet the needs of researchers who, for a time, were unable to visit the Rubenstein Library to view materials in person. You can read about the implementation of folder-level digitization and its broader impact here. To summarize, before spring of 2020 it was standard practice to fill patron requests by imaging only the item needed (e.g., a single page within a folder). Now, the default practice is to digitize the entire folder of materials. This has produced a variety of positive outcomes for stakeholders in the Duke University Libraries and broader research community, but for the purpose of this blog, I’d like to describe my experience interacting with materials in this way.

Digitization is time consuming, so the objective is to move as quickly as possible while maintaining a high level of accuracy. There isn’t much time for meaningful engagement with collection items, but context reveals itself in bits and pieces. Themes rise to the surface when working with large folders of material on a single topic, and sometimes the image on the page demands to be noticed. 

Even while working quickly, one would be hard-pressed to overlook this Vietnam-era anti-war message. One might imagine that was by design. From the Student Activism Reference collection: https://repository.duke.edu/dc/uastuactrc.

On more than one occasion I’ve found myself thinking about the similarities between scanning and browsing a social media app like Instagram. Stick with me here! Broadly speaking, both offer an endless stream of visual stimuli with little opportunity for meaningful engagement in the moment. Social media, when used strategically, can be world-expanding. Work in the DPC has been similarly world-expanding, but instead of an algorithm curating my experience, the information that I encounter on any given day is curated by patron requests for digitization. Also similar to social media is the range of internal responses triggered over the course of a work day, and sometimes in the span of a single minute. Amusement, joy, shock, sorrow—it all comes up.

I started keeping notes on collection materials and topics to revisit on my own time. Sometimes I was motivated by a stray fascination with the subject matter. Other times I encountered collections relating to prominent historical figures or events that I realized I should probably know a bit more about.

 

Image from the WSPU Scrapbook.

First-wave feminism was one such topic. It was a movement I knew little about, but the DPC has digitized numerous items relating to women’s suffrage and other feminist issues at the turn of the 20th century. I was particularly intrigued by the radical leanings of the UK’s Women’s Social and Political Union (WSPU), organized by Emmeline Pankhurst to fight for the right to vote. When I started looking at newspaper clippings pasted into a scrapbook documenting WSPU activities, I was initially distracted by the amusing choice of words (“Coronation chair damaged by wild women’s bomb”). Curious to learn more, I went home and read about the WSPU. The following excerpt is from a speech by Pankhurst in which she provides justification for the militant tactics employed by the WSPU:

I want to say here and now that the only justification for violence, the only justification for damage to property, the only justification for risk to the comfort of other human beings is the fact that you have tried all other available means and have failed to secure justice. I tell you that in Great Britain there is no other way…

Pankhurst argued that men had to take the right to vote through war, so why shouldn’t women also resort to violence and destruction? And so they did.

As Rubenstein Library is home to the Sallie Bingham Center, it’s unsurprising that the DPC digitizes a fair amount of material on women’s issues. To share a few more examples, I appreciate the juxtaposition of the following two images, both of which I find funny, and yet sad.

Source collection: Young woman’s scrapbook, 1900-1905 and n.d.

The advertisement to the right is pasted inside a young woman’s scrapbook dated 1900-1905. It contains information on topics such as etiquette, how to manage a household, and how to be a good wife. Are we to gather that proper shade cloth is necessary to keep a man happy?

In contrast, the image below and to the left is from the book L’amour libre, in which French feminist Madeleine Vernet describes prostitution and marriage as the same kind of prison, with “free love” as the only answer. Some might call that a hyperbolic comparison, but after perusing the young woman’s scrapbook, I’m not so sure. I’m just thankful to have been born a woman near the end of the 20th century and not the start of it.

From the book L’amour libre by Madeleine Vernet

This may be difficult to believe, but I didn’t set out to write a blog so focused on struggle. The reality, however, is that our special collections are full of struggle. That’s not all there is, of course, but I’m glad this material is preserved. It holds many lessons, some of which we still have yet to learn. 

I think we can all agree that 2021 was, well, a challenging year. I’d be remiss not to close with a common foe we might all rally around. As we move into 2022 and beyond, venturing ever deeper into space, we may encounter this enemy sooner than we imagined…

Image from an illustrated 1906 French translation of H.G. Wells’s ‘War of the Worlds’.

Sources:

Pankhurst, Emmeline. Why We Are Militant: A Speech Delivered by Mrs. Pankhurst in New York, October 21, 1913. London: Women’s Press, 1914. Print.

“‘Prayers for Prisoners’ and church protests.” Historic England, n.d., https://historicengland.org.uk/research/inclusive-heritage/womens-history/suffrage/church-protests/ 

 

Rethinking Our Approach to Website Content Management

Fourteen-hundred pages with 70 different authors, all sharing information about library services, resources, and policies — over the past eight years, any interested library staff member has been able to post and edit content on the Duke University Libraries (DUL) website. Staff have been able to work independently, using their own initiative to share information that they thought would be helpful to the people who use our website.

Unfortunately, DUL has had no structure for coordinating this work or even for providing training to people undertaking this work. This individualistic approach has led to a complex website often containing inconsistent or outdated information. And this is all about to change.

Our new approach

We are implementing a team-based approach to manage our website content by establishing the Web Editorial Board (WEB), comprising 22 staff members from departments throughout DUL. The Editors serving on WEB will be the only people who will have hands-on access to create or edit content on our website. We recognize that our primary website is a core publication of DUL, and having this select group of Editors work together as a team will ensure that our content is cared for, cohesive, and current. Our Editors have already undertaken training on topics such as writing for the web, creating accessible content, editing someone else’s content, and using our content management system.

Our Editors will apply their training to improve the quality and consistency of our website. As they undertake this work, they will collaborate with other Editors within WEB as well as with subject matter experts from across the libraries. All staff at DUL will be able to request changes, contribute ideas, and share feedback with WEB using either a standard form or by contacting Editors directly.

The scope of work undertaken by WEB includes:

  • Editing, formatting, and maintaining all content on DUL’s Drupal-based website
  • Writing new content
  • Retiring deprecated content
  • Reviewing, editing, and formatting content submitted to WEB by DUL staff, and consulting with subject matter experts within DUL
  • Deepening their expertise in how to write and format website content through continuing education

While there are times when all 22 Editors will meet together to address common issues or collaborate on site-wide projects, much of the work undertaken by WEB will be organized around sub-teams that we refer to as content neighborhoods, each one meeting monthly and focused on maintaining different sections of our website. Our eight sub-teams range in size from two to five people. Having sub-teams ensures that our Editors will be able to mutually support one another in their work.

Initially, Editors on WEB will serve for a two-year term, after which some members will rotate off so that new members can rotate on. Over time it will be helpful to balance continuity in membership with the inclusion of fresh viewpoints.

WEB was created following a recommendation developed by DUL’s Web Experience Team (WebX), the group that provides high-level governance for all of our web platforms. Based on this WebX recommendation, the DUL Executive Group issued a charge for WEB in the spring and WEB began its orientation and training during the summer of 2021. Members of WEB will soon be assisting in our migration from Drupal 7 to Drupal 9 by making key updates to content prior to the migration. Once we complete our migration to Drupal 9 in March 2022, we will then limit hands-on access to create or edit content in Drupal to the members of WEB.

The charge establishing WEB contains additional information about WEB’s work, the names of those serving on WEB, and the content neighborhoods they are focusing on.

FOLIO Update

Duke is using FOLIO in production! We have eight apps that we’re using in production. For our electronic resources management, we are using Agreements, Licenses, Organizations, Users, and Settings. Those apps went live in July of 2020, even with the pandemic in full force! In July of 2021, we launched Courses and Inventory so that professors and students could store and access electronic reserves material. In Summer 2022, we plan to launch the eUsage app that will allow us to link to vendor sites and bring our eUsage statistics into one place.

In Summer 2023, we plan to launch the rest of FOLIO, moving all of our acquisitions, cataloging, and circulation functions into their respective apps. FOLIO currently includes 20 apps in total. We’re almost halfway there!

To learn more about FOLIO, you can visit FOLIO.org, the FOLIO wikispace, or the documentation portal.

To learn more about our local project, visit FOLIO@Duke and read our newsletters!

FOLIO@Duke Newsletter v. 3 no. 3

 

Sometimes You Feel Like a Nutch: The Un-Googlification of a Library Search Service

Quick—when was the last time you went a full day without using a Google product or service? How many years ago was that day?

We all know Google has permeated so many facets of our personal and professional lives. A lot of times, using a Google something-or-other is your organization’s best option to get a job done, given your available resources. If you ever searched the Duke Libraries website at any point over the past seventeen years, you were using Google.

It’s really no secret that when you have a website with a lot of pages, you need to provide a search box so people can actually find things. Even the earliest version of the library website known to the Wayback Machine–from “way back” in 1997–had a search box. In those days, search was powered by the in-house supported Texis Webinator. Google did not yet exist.

July 24, 2004 was an eventful day for the library IT staff. We went live with a shiny new Integrated Library System from Ex Libris called Aleph (which, to this day, we are still working to replace). On that very same day, we launched a new library website, and in the top-right corner of the masthead on that site was–for the very first time–a Google search box.

2004 version of the library website, with a Google search box in the masthead.

Years went by. We redesigned the website several times. Interface trends came and went. But one thing remained constant: there was a search box on the site, and if you used it, somewhere on the next page you were going to get search results from a Google index.

That all changed in summer 2021, when we implemented Nutch…

Nutch logo

Why Not Google?

Google Programmable Search Engine (recently rebranded from “Google Custom Search Engine”), is easy to use. It’s “free.” It’s fast, familiar, and being a Google thing, it’s unbeatable at search relevancy. So why ditch it now? Well…

  • Protecting patron privacy has always been a core library value. Recent initiatives at Duke Libraries and beyond have helped us to refocus our efforts around ensuring that we meet our obligations in this area.
  • Google’s service changed recently, and creating a new engine now involves some major hoop-jumping to be able to use it ad-free.
  • It doesn’t work in China, where we actually have a Duke campus, and a library.
  • The results are capped at 100 per query. Google prioritizes speed and page 1 relevancy, but it won’t give you a precise hit count nor an exhaustive list of results.
  • It’s a black box. You don’t really get to see why pages get ranked higher or lower than others.
  • There’s a search API you could potentially build around, but if you exceed 100 searches/day, you have to start paying to use it.

What’s Nutch?

Apache Nutch is open source web crawler software written in Java. It’s been around for nearly 20 years–almost as long as Google. It supports out-of-the-box integration with Apache Solr for indexing.
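As a concrete picture of what that crawler actually does, a single Nutch crawl round is a cycle of discrete steps, each its own command. The sketch below uses Nutch’s stock CLI; the directory names are illustrative, and exact flags vary by Nutch version (the bundled bin/crawl script normally wraps this whole loop for you):

```shell
# One round of the Nutch crawl cycle (illustrative paths; see bin/crawl)
bin/nutch inject crawl/crawldb urls/              # seed the crawl database
bin/nutch generate crawl/crawldb crawl/segments   # pick URLs due for fetching
s=$(ls -d crawl/segments/* | tail -1)             # newest segment
bin/nutch fetch "$s"                              # download the pages
bin/nutch parse "$s"                              # extract text and outlinks
bin/nutch updatedb crawl/crawldb "$s"             # record results, schedule recrawls
bin/nutch index crawl/crawldb "$s"                # push parsed docs to Solr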

Diagram showing how Nutch works.
Slide from Sebastian Nagel’s “Web Crawling With Apache Nutch” presentation at ApacheCon EU 2014.

What’s So Good About Nutch?

  • Solr. Our IT staff have grown quite accustomed to the Solr search platform over the past decade; we already support around ten different applications that use it under the hood.
  • Self-Hosted. You run it yourself, so you’re in complete control of the data being crawled, collected, and indexed. User search data is not being collected by a third party like Google.
  • Configurable. You have a lot of control over how it works. All our configs are in a public code repository so we have record of what we have changed and why.

What are the Drawbacks to Using Nutch?

  • Maintenance. Using open source software requires a commitment of IT staff resources to build and maintain over time. It’s free, but it’s not really free.
  • Interface. Nutch doesn’t come with a user interface to actually use the indexed data from the crawls; you have to build a web application. Here’s ours.
  • Relevancy. Though Google considers such factors as page popularity and in-link counts to deem pages more relevant than others for a particular query, Nutch can’t. Or, at least, its optional features that attempt to do so are flawed enough that not using them gets us better results. So we rely on other factors for our relevancy algorithm, like the segment of the site a page resides in, URL slugs, page titles, subheading text, inlink text, and more.
  • Documentation. Some open source platforms have really clear, easy to understand instruction manuals online to help you understand how to use them. Nutch is not one of those platforms.
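For illustration, relevancy factors like the ones listed above are commonly expressed as field boosts in a Solr edismax query. The sketch below is hypothetical: the field names, boost weights, and query are invented, not our actual configuration:

```
# Hypothetical Solr edismax parameters: weight page titles and headings
# above body text, and boost pages living in a high-traffic site section.
defType=edismax
q=study spaces
qf=title^10 subheadings^4 inlink_text^3 content^1
bq=site_section:(using-the-library)^2
```

Because the boosts live in the request (or in the Solr request handler config) rather than a black box, they can be tuned and version-controlled as relevancy problems surface.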

How Does Nutch Work at Duke?

The main Duke University Libraries website is hosted in Drupal, where we manage around 1,500 webpages. But the full scope of what we crawl for library website searching is more than ten times that size. This includes pages from our blogs, LibGuides, exhibits, staff directory, and more. All told: 16,000 pages of content.

Searching from the website masthead or the default “All” box in the tabbed section on our homepage brings you to the QuickSearch results page.

Two boxes on the library homepage will search QuickSearch.
Use either of these search boxes to search QuickSearch.

You’ll see a search results page rendered by our QuickSearch app. It includes sections of results from various places, like articles, books & media, and more. One of the sections is “Our Website” — it shows the relevant pages that we’ve crawled with Nutch.

A QuickSearch page showing results in various boxes
QuickSearch results page includes a section of results from “Our Website”

You can just search the website specifically if you’re not interested in all those other resources.

Search results from the library website search box.
An example website-only search.

Three pieces work in concert to enable searching the website: Nutch, Solr, and QuickSearch. Here’s what they do:

Nutch

  • Crawls web pages that we want to include in the website search.
  • Parses HTML content; writes it to Solr fields.
  • Includes configuration for what pages to include/exclude, crawler settings, and field mappings.
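The include/exclude scoping mentioned above is typically controlled by a seed URL list plus Nutch’s regex-urlfilter.txt. Here is a hypothetical sketch of that file; the patterns are illustrative, not our actual rules:

```
# conf/regex-urlfilter.txt — evaluated top-down; first matching rule wins.
# Skip binary formats the site search shouldn't index.
-\.(gif|jpg|png|pdf|zip)$
# Skip URLs with query strings (often duplicate views of the same page).
-[?*!@=]
# Allow the library's own hosts...
+^https://library\.duke\.edu/
+^https://blogs\.library\.duke\.edu/
# ...and exclude everything else.
-.
```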

Solr

  • Index & document store for crawled website content.

QuickSearch

  • Queries the Solr index and renders matching pages in the “Our Website” section of the search results.

Crawls happen every night to pick up new pages and changes to existing ones. We use an “adaptive fetch schedule,” so by default each page gets recrawled every 30 days. If a page changes frequently, it’ll get re-crawled sooner automatically.
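The adaptive schedule is enabled in Nutch’s nutch-site.xml. The property names below are Nutch’s documented ones, but the interval values shown are illustrative rather than our exact settings:

```xml
<!-- nutch-site.xml: swap the default fixed schedule for adaptive recrawling -->
<property>
  <name>db.fetch.schedule.class</name>
  <value>org.apache.nutch.crawl.AdaptiveFetchSchedule</value>
</property>
<property>
  <!-- starting recrawl interval: 30 days, expressed in seconds -->
  <name>db.fetch.interval.default</name>
  <value>2592000</value>
</property>
<property>
  <!-- never recrawl a page more often than once a day -->
  <name>db.fetch.schedule.adaptive.min_interval</name>
  <value>86400</value>
</property>
```

With this schedule, each page’s interval shrinks when Nutch detects changes on refetch and grows when the page is unchanged, within the configured bounds.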

Summary

Overall, we’re satisfied with how the switch to Nutch has been working out for us. The initial setup was challenging, but it has been running reliably without needing much in the way of developer intervention. Here’s hoping that continues!


Many thanks to Derrek Croney and Cory Lown for their help implementing Nutch at Duke, and to Kevin Beswick (NC State University Libraries) for consulting with our team.

The Shortest Year

Featured image – screenshot from the Sunset Tripod2 project charter.

Realizing that my most recent post here went up more than a year ago, I pause to reflect. What even happened over these last twelve months? Pandemic and vaccine, election and insurrection, mandates and mayhem – outside of our work bubble, October 2020 to October 2021 has been a churn of unprecedented and often dark happenings. Bitstreams, however, broadcasts from inside the bubble, where we have modeled cooperation and productivity, met many milestones, and kept our collective cool, despite working nearly 100% remotely as a team, with our stakeholders, and across organizational lines.

Last October, I wrote about Sunsetting Tripod2, a homegrown platform for our digital collections and archival finding aids that was also the final service we had running on a physical server. “Firm plans,” I said we had for the work that remained. Still, in looking toward that setting sun, I worried about “all sorts of comical and embarrassing misestimations by myself on the pages of this very blog over the years.” I was optimistic, but cautiously so, that we would banish the ghosts of Django-based systems past.

Reader, I have returned to Bitstreams to tell you that we did it. Sometime in Q1 of 2021, we said so long, farewell, adieu to Tripod2. It was a good feeling, like when you get your laundry folded, or your teeth cleaned, only better.

However, we did more in the past year than just power down exhausted old servers. What follows are a few highlights from the work of the Digital Strategies and Technology division of Duke University Libraries’ software developers, and our collaborators (whom we cannot thank or praise enough) over the past twelve months. 

In November, Digital Projects Developer Sean Aery posted Implementing ArcLight: A Reflection. The work of replacing and improving upon our implementation for the Rubenstein Library’s collection guides was one of the main components that allowed us to turn off Tripod2. We actually completed it in July of 2020, but that team earned its Q4 victory laps, including Sean’s post and a session at Blacklight Summit a few days after my own post last October.

As the new year began, the MorphoSource team rolled out version 2.0 of that platform. MorphoSource Repository Developer Jocelyn Triplett shared A Preview of MorphoSource 2 Beta in these pages on January 20. The launch took place on February 1.

One project we had underway as I was writing last October was the integration of Globus, a transfer service for large datasets, into the Duke Research Data Repository. We completed that work in Q1 of 2021, prompting our colleague, Senior Research Data Management Consultant Sophia Lafferty-Hess, to post Share More Data in the Duke Research Data Repository! in a neighboring location that shares our charming cul-de-sac of library blogs.

The seventeen months since the murder of George Floyd have seen major changes in how we think and talk about race in the Libraries. We committed ourselves to the DUL Racial Justice Roadmap, a pathway for recognizing and attacking the pervasive influence of white supremacy in our society, in higher education, at Duke, in the field of librarianship, in our library, in the field of information technology, and in our own IT practices. During this time, members of our division have also participated broadly in DiversifyIT, a campus-wide group of IT professionals who seek to foster a culture of inclusion “by providing professional development, networking, and outreach opportunities.”

Digital Projects Developer Michael Daul shared his own point of view with great thoughtfulness in his April post, What does it mean to be an actively antiracist developer? He touched on representation in the IT industry, acknowledging bias, being aware of one’s own patterns of communication, and bringing these ideas to the systems we build and maintain. 

One of the ideas that Michael identified for software development is web accessibility; as he wrote, we can “promote the benefits of building accessible interfaces that follow the practices of universal design.” We put that idea into action a few months later, as Sean described in precise technical terms in his July post, Automated Accessibility Testing and Continuous Integration. Currently that process applies to the ArcLight platform, but when we have a chance, we’ll see if we can expand it to other services.

The question of when we’ll have that chance is a big one, as it hinges on the undertaking that now dominates our attention. Over the past year we have ramped up on the migration of our website from Drupal 7 to Drupal 9, to head off the end-of-life for 7. This project has transformed into the raging beast that our colleagues at NC State Libraries warned us it would become, at Code4Lib Southeast in May of 2019.

They warned us – Screenshot from “Drupal 7 to Drupal 8: Our Journey,” by Erik Olson and Meredith Wynn of NC State Libraries’ User Experience Department, presented at Code4Lib Southeast in May of 2019.

We are on a path to complete the Drupal migration in March 2022 – we have “firm plans,” you could say – and I’m certain that its various aspects will come to feature in Bitstreams in due time. For now I will mention that it spawned two sub-projects that have challenged our team over the past six months or so, both of which involve refactoring functionality previously implemented as Drupal modules into standalone Rails applications:

  1. Quicksearch, aka unified search, aka “Bento search” – see Michael’s Bento is Coming! from 2014 – is now a standalone app; it also uses the open-source tool Apache Nutch, rather than Google CSE.
  2. The staff directory app that went live in 2019, which Michael wrote about in Building a new Staff Directory, also no longer runs as a Drupal module.

Each of these implementations was necessary to prepare the way for a massive migration of theme and content that will take place over the coming months. 

Screenshot of a Jira issue related to the Decouple Staff Directory project.

When it’s done, maybe we’ll have a chance to catch our breath. Who can really say? I could not have guessed a year ago where we’d be now, and anyway, the period of the last twelve months gets my nod as the shortest year ever. Assuming we’re here, whatever “here” means in the age of remote/hybrid/flexible work arrangements, then I expect we’ll be burning down backlogs, refactoring this or that, deploying some service, and making firm plans for something grand.

Using an M1 Mac for development work

Due to a battery issue with my work laptop (an Intel-based MacBook Pro), I had an opportunity to try using a newer (ARM-based) M1 Mac to do development work. Since roughly a year had passed since these new machines were introduced, I assumed the kinks would have been generally worked out, and I was excited to give my speedy new M1 Mac Mini a test run at some serious work. However, upon trying to make some updates to a recent project (by the way, we launched our new staff directory!) I ran into many stumbling blocks.

M1 Mac Mini ensconced beneath multitudes of cables in my home office

My first step with a new machine was to get my development environment set up. On my old laptop I’d typically use Homebrew for managing packages and RVM (and previously rbenv) for Ruby version management in different projects. I tried installing the tools normally and ran into multitudes of weirdness. Some guides suggested setting up a parallel version of Homebrew (ibrew) using Rosetta (a translation layer for running Intel-native code). So I tried that – and then ran into all kinds of issues with managing Ruby versions. Oh, and also, apparently RVM / rbenv are no longer cool and you should be using chruby or asdf. So I tried those too, and ran into more problems. In the end, I stumbled on this amazing script by Moncef Belyamani. It was really simple to run and it just worked. Yay – working dev environment!
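For anyone curious, the parallel-Homebrew approach those guides describe looks roughly like this (a sketch, not what I ended up using; the /usr/local path and the ibrew alias are conventions from those guides, not part of our setup):

```shell
# Install Rosetta 2, Apple's Intel-to-ARM translation layer
softwareupdate --install-rosetta --agree-to-license

# Install a second, Intel-native Homebrew under /usr/local
# (ARM-native Homebrew lives in /opt/homebrew, so the two can coexist)
arch -x86_64 /bin/bash -c \
  "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Alias the Intel copy so it's easy to tell which one you're invoking
alias ibrew='arch -x86_64 /usr/local/bin/brew'
```

Packages installed via ibrew run under Rosetta, which sidesteps ARM build failures at the cost of a slower, more confusing dual-toolchain setup.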

We’ve been using Docker extensively in our library projects over the past few years, and the Staff Directory was set up to run inside a container on our local machines. So my next step was to get Docker up and running. The light research I’d done suggested that Docker was more or less working with M1 Macs, so I dove in thinking things would go smoothly. I installed Docker Desktop (hopefully not a bad idea) and tried to build the project, but bundle install failed. The Staff Directory project is built in Ruby on Rails and, in this instance, was using the therubyracer gem, which embeds the V8 JavaScript library. However, I learned that the particular version of the V8 library used by therubyracer is not compiled for ARM, which breaks the build. And as you tend to do when running into questions like these, I went down a rabbit hole of potential workarounds. I tried manually installing a different version of the V8 library and pointing the bundle process at it, but never quite got it working. I also explored using a different gem (like mini_racer) that compiles correctly for ARM, or just using Node instead of V8, but neither was a good option for this project. So I was stuck.
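The gem swap I explored would have looked something like this in the Gemfile (a sketch of the approach, not a change we shipped):

```ruby
# Gemfile
# gem 'therubyracer'  # embeds a prebuilt libv8 with no ARM build
gem 'mini_racer'      # compiles V8 from source, including for arm64
```

The catch is that mini_racer isn’t a drop-in replacement: code that goes through ExecJS generally keeps working, but anything calling therubyracer’s API directly would need changes, which is part of why it wasn’t a good fit here.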

Building the Staff Directory app in Docker

My next attempt at a solution was setting up a remote Docker host. I’ve got a file server at home running TrueNAS, so I was able to easily spin up an Ubuntu VM on that machine and set up Docker there. You could do something similar using Duke’s VCM service. I followed various guides, set up user accounts and permissions, generated SSH keys, and with some trial and error finally got things running correctly. You can set up a context for a remote Docker host and switch to it (something like: docker context use ubuntu), and then your subsequent Docker commands point to that remote, making development work entirely seamless. It’s kind of amazing. And it worked great when testing with a hello-world app like whoami. Running docker run --rm -it -p 80:80 containous/whoami worked flawlessly. But anything more complicated, like an app that used two containers, as was the case with the Staff Directory app, seemed to break. So, stuck again.
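For anyone trying the same thing, the remote-host setup boils down to a couple of commands (a sketch; ip.of.VM.machine stands in for the VM’s actual address, and key-based SSH access to the docker user must already be working):

```shell
# Create a Docker context that talks to the VM's engine over SSH
docker context create ubuntu --docker "host=ssh://docker@ip.of.VM.machine"

# Switch to it; subsequent docker commands now run on the remote host
docker context use ubuntu
docker run --rm -it -p 80:80 containous/whoami
```

Switching back is just docker context use default, so it’s easy to flip between local and remote engines.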

After consulting with a few of my brilliant colleagues, another option was suggested, and this ended up being the best workaround: take the same Ubuntu VM and, instead of setting it up as a Docker remote host, use it as the development server and set up a tunnel connection to it (something like: ssh -N -L localhost:8080:localhost:80 docker@ip.of.VM.machine) so that I could view running webpages at localhost:8080. This approach requires the extra step of pushing code up to the git repository from the Mac and then pulling it back down on the VM, but that only takes a few extra keystrokes. And having a viable dev environment is well worth the hassle, IMHO!

As Apple moves away from Intel-based machines – rumors indicate that the new MacBook Pros coming out this fall will be ARM-only – I think these development issues will start to be talked about more widely. And hopefully some smart people will be able to get everything working well with ARM. But in the meantime, running Docker on a Linux VM via a tunnel connection seems like a relatively painless way to ensure that more complicated Docker/Rails projects can be worked on locally using an M1 Mac.

Good News from the DPC: Digitization of Behind the Veil Tapes is Underway

This post was written by Jen Jordan, a graduate student at Simmons University studying Library Science with a concentration in Archives Management. She is the Digital Collections intern with the Digital Collections and Curation Services Department. Jen will complete her master’s degree in December 2021.

The Digital Production Center (DPC) is thrilled to announce that work is underway on a three-year National Endowment for the Humanities (NEH) grant-funded project to digitize the entirety of Behind the Veil: Documenting African-American Life in the Jim Crow South, an oral history project that produced 1,260 interviews spanning more than 1,800 audio cassette tapes. Accompanying the 2,000-plus hours of audio is a sizable collection of visual materials (e.g., photographic prints and slides) that complement the recorded voices.

We are here to summarize the logistical details relating to the digitization of this incredible collection. To learn more about its historical significance and the grant that is funding this project, titled “Documenting African American Life in the Jim Crow South: Digital Access to the Behind the Veil Project Archive,” please take some time to read the July announcement written by John Gartrell, Director of the John Hope Franklin Research Center and Principal Investigator for this project. Co-Principal Investigator of this grant is Giao Luong Baker, Digital Production Services Manager.

Digitizing Behind the Veil (BTV) will require, in part, the services of outside vendors to handle the audio digitization and subsequent captioning of the recordings. While the DPC regularly digitizes audio recordings, we are not equipped to do so at this scale (while balancing other existing priorities). The folks at Rubenstein Library have already been hard at work double-checking the inventory to ensure that each cassette tape and its case are labeled with identifiers. The DPC then received the tapes, filling 48 archival boxes, along with a digitization guide (i.e., an Excel spreadsheet) containing detailed metadata for each tape in the collection. Upon receiving the tapes, DPC staff set about boxing them for shipment to the vendor. As of this writing, the boxes are snugly wrapped on a pallet in Perkins Shipping & Receiving, where they will soon begin their journey to a digital format.

The wait has begun! In eight to twelve weeks we anticipate receiving the digital files, at which point we will perform quality control (QC) on each one before sending them off for captioning. As the captions are returned, we will run through a second round of QC. From there, the files will be ingested into the Duke Digital Repository, at which point our job is complete. Of course, we still have the visual materials to contend with, but we’ll save that for another blog! 

As we creep closer to the two-year mark of the COVID-19 pandemic and the varying degrees of restrictions that have come with it, the DPC will continue to focus on fulfilling patron reproduction requests, which have comprised the bulk of our work for some time now. We are proud to support researchers by facilitating digital access to materials, and we are equally excited to have begun work on a project of the scale and cultural impact that is Behind the Veil. When finished, this collection will be accessible for all to learn from and meditate on—and that’s what it’s all about. 

 

Notes from the Duke University Libraries Digital Projects Team