Resonance: the reinforcement or prolongation of sound by reflection from a surface or by the synchronous vibration of a neighboring object
Nearly four months have passed since I moved to Durham from my hometown of Chicago to join Duke's Digital Collections & Curation Services team. With feelings of reflection and nostalgia, I have been thinking about the stories and memories that journeys create.
I have always believed a library to be the perfect place to discover another's story. Libraries and digital collections are dynamic storytelling channels that connect people through narrative and memory. What are libraries if not places dedicated to memories? Memory made incarnate in the turn of a page, the capturing of an image.
Memory is sensation.
In my mind memory is ethereal – wispy and nebulous. Like trying to grasp mist or fog only to be left with the shimmer of dew on your hands. Until one focuses on a detail, then the vision sharpens. Such as the soothing warmth of a pet’s fur. A trace of familiar perfume in the air as a stranger walks by. Hearing the lilt of an accent from your hometown. That heavy, sticky feeling on a muggy summer day.
Memories are made of moments.
I do not recall the first time I visited a library. However, one day my parents took me to the library and I checked out 11 books on dinosaurs. As a child I was fascinated by them. Due to watching so much of The Land Before Time and Jurassic Park no doubt. One of the books had beautiful full-length pullout diagrams. I remember this.
Experiences tether individuals together across time and place. Place, like the telling of a story, is subjective. It holds a finite precision that is absent in the vagueness and vastness of space. This personal aspect is what captures a person when a tale is well told. A corresponding chord is struck, and the story resounds as listeners see themselves reflected.
When a narrative reaches someone with whom it resonates, its impact can be amplified beyond any expectations.
Last week, it was brought to our attention that Duke Digital Collections recently passed 100,000 individual items in the Duke Digital Repository! To celebrate, I want to highlight some of the most recent materials digitized and uploaded from our Section A project. In the past, Bitstreams has blogged about what Section A is and what it means, but it's been a couple of years since that post, and a little refresher couldn't hurt.
What is Section A?
In 2016, the staff of Rubenstein Research Services proposed a mass digitization project of Section A, the umbrella term for 175 boxes of historic materials that users often request: manuscripts, correspondence, receipts, diaries, drawings, and more. These boxes contain around 3,900 small collections, each with its own workflow. Every box needs consultation with Rubenstein Research Services, review by Library Conservation staff, review by Technical Services, metadata updates, and more, all to make sure the collections can be launched and hosted within the Duke Digital Repository.
In the two years since that blog post, so much has happened! The first two Section A collections went live as a proof of concept and as a way to define what the digitization project would be and what it would look like. We've added over 500 more collections from Section A since then, which somehow barely scratches the surface of the entire project! We're digitizing the collections in alphabetical order, and even after all the collections that have gone online, we are still only on the letter "C"!
Nonetheless, there are already plenty of materials to check out and enjoy. I was a student of history in college, so in this blog post I want to particularly highlight some of the historic materials from the latter half of the 19th century.
Showing off some of Section A
In 1869, after her work as a nurse in the Civil War, Clara Barton traveled around Europe to Geneva, Switzerland and Corsica, France. Included in the Duke Digital Collections is her diary and calling cards from her time there. These pages detail where she visited and stayed throughout the year. She also wrote about her views on the different European countries, how Americans and Europeans compare, and more. Despite her storied career and her many travels that year, Miss Barton felt that “I have accomplished very little in a year”, and hoped that in 1870, she “may be accounted worthy once more to take my place among the workers of the world, either in my own country or in some other”.
Back in America, around 1900, the Rev. John Malachi Bowden began dictating and documenting his experiences as a Confederate soldier during the Civil War, one of many soldiers a nurse like Miss Barton might have treated. Although Bowden says he was not necessarily a secessionist at the start of the Civil War, he joined the 2nd Georgia Regiment in August 1861 after Georgia had seceded. During his time in the regiment, he fought in the Battles of Fredericksburg, Gettysburg, Spotsylvania Court House, and more. In 1864, Union forces captured Bowden and held him as a prisoner at Maryland's Point Lookout Prison, and he describes in great detail what life was like as a POW before his eventual release. He writes that he was "so indignant at being in a Federal prison" that he refused to cut his hair. His hair eventually grew to be shoulder-length, "somewhat like Buffalo Bill's."
Speaking of whom, Duke Digital Collections also has some material from Buffalo Bill (William Frederick Cody), courtesy of the Section A initiative. A showman and entertainer who performed in cowboy shows throughout the latter half of the 19th century, Buffalo Bill was enormously popular wherever he went. In this collection, he writes to a Brother Miner about how he invited seventy-five of his “old Brothers” from Bedford, VA to visit him in Roanoke. There is also a brief itinerary of future shows throughout North Carolina and South Carolina. This includes a stop here in Durham, NC a few weeks after Bill wrote this letter.
Around this time, Walter Clark, associate justice of the North Carolina Supreme Court, began writing his own histories of North Carolina throughout the 18th and 19th centuries. Three of Clark’s articles prepared for the University Magazine of the University of North Carolina have been digitized as part of Section A. This includes an article entitled “North Carolina in War”, where he made note of the Generals from North Carolina engaged in every war up to that point. It’s possible that John Malachi Bowden was once on the battlefield alongside some of these generals mentioned in Clark’s writings. This type of synergy in our collection is what makes Section A so exciting to dive into.
As the new Still Image Digitization Specialist at the Duke Digital Production Center, seeing projects like this take off in such a spectacular way is near and dear to my heart. Even just the four collections I’ve highlighted here have been so informative. We still have so many more Section A boxes to digitize and host online. It’s so exciting to think of what we might find and what we’ll digitize for all the world to see. Our work never stops, so remember to stay updated on Duke Digital Collections to see some of these newly digitized collections as they become available.
Looking for something to keep you company on your summer vacation? Why not direct your devices to Duke Digital Collections! Seriously! Here are a few of the compelling collections we debuted earlier this spring, and we have more coming in late June.
These maps and accompanying two-volume report document the infrastructure of Durham's Hayti-Elizabeth Street neighborhood prior to the construction of the Durham Freeway, as well as the justifications for the redevelopment of the area. This is an excellent resource for folks studying Durham history and/or the urban renewal initiatives of the mid-20th century.
We launched 8 collections of photograph albums created by African American soldiers serving in the military across the world including Japan, Vietnam and Iowa. Together these albums help “document the complexity of the African American military experience” (Bennett Carpenter from his blog post, “War in Black and White: African American Soldiers’ Photograph Albums”).
This photograph album contains pictures taken by Sir Percy Molesworth Sykes during his travels in a mountainous region of Central Asia, now the Xinjiang Uyghur Autonomous Region of China, with his sister, Ella Sykes. According to the collection guide, the album's "images are large, crisp, and rich with detail, offering views of a remote area and its culture during tensions in the decades following the Russo-Turkish War".
Our work never stops, and we have several large projects in the works that are scheduled to launch by the end of June: the first batch of video recordings from the Memory Project; the incredible photographs from the Sidney Gamble collection, which we are busy migrating into the digital repository; and one last batch of Radio Haiti recordings.
Keeping in touch
We launch new digital collections just about every quarter, and have been investigating new ways to promote our collections as part of an assessment project. We are thinking of starting a newsletter – would you subscribe? What other ways would you like to keep in touch with Duke Digital Collections? Post a comment or contact me directly.
It takes a lot to build and publish digital collections, as you can see from the variety and scope of the blog posts here on Bitstreams. We all have internal workflows and tools we use to make our jobs easier and more efficient. The number and scale of activities going on behind the scenes is mind-boggling, and we would never be able to do as much as we do if we didn't continually refine our workflows and create tools and systems that help manage our data and work. Some of these tools are big, like the Duke Digital Repository (DDR), with its public, staff, and backend interfaces used to preserve, secure, and provide access to digital resources. Others are small, like scripts built to transform ArchivesSpace output into starter digitization guides. In the Digital Production Center (DPC) we use a homegrown tool that not only tracks production statistics but also helps with project projections and with isolating problems that occur during the digitization process. This tool is a relational database affectionately named the Daily Work Report, and it has collected over nine years of data on nearly every project in that time.
A long time ago, in a newly minted DPC, supervisors and other library staff often asked me: "How long will that take?", "How many students will we need to digitize this collection?", "What will the data footprint of this project be?", "How fast does this scanner go?", "How many scans did we do last year?", "How many items is that?". I used to answer with general information, anecdotal evidence, and some manual hunting down of numbers, but it became more and more difficult as the number of projects multiplied, our services grew, capture devices multiplied, and project types expanded to include preservation projects, donor requests, patron requests, and exhibits. Answering these seemingly simple questions became more complicated and time-consuming as the department grew. I thought to myself: I need a simple way to track the work being done on these projects that would help me answer these recurring questions.
We were already using a FileMaker Pro database with a GUI interface as a checkout system to assign students batches of material to scan, but it was only tracking which student worked on which material. I decided I could build out this concept to include all of the data points needed to answer the questions above. I chose Microsoft Access because it was a common tool installed on every workstation in the department, I had used it before, and classes and instructional videos abound if I wanted to do anything fancy.
Enter the Daily Work Report (DWR). I created a number of discrete tables to hold various types of data: project names, digitization tasks, employee names, and so on. These tables feed a datasheet represented as a form, which allows dropdown lists and auto-filling for rapid, consistent entry of information.
At the end of each shift students and professionals alike fill out the DWR for each task they performed on each project and how long they worked on each task. These range from the obvious tasks of scanning and quality control to more minute tasks of derivative creation, equipment cleaning, calibration, documentation, material transfer, file movement, file renaming, ingest prep, and ingest.
Some of these tasks may seem minor, possibly too insignificant to record, but they add up: to roughly 30% of the time it takes to complete a project. When projecting the time a project will take, we collect scanning and quality-control data from a similar project, calculate the time, and add 30%.
Table: common digitization tasks as an overall percentage of project time, including three separate rounds of quality control.
New Project Estimates
Using the Daily Work Report’s Datasheet View, the database can be filtered by project, then by the “Scanning” task to get the total number of scans and the hours worked to complete those scans. The same can be done for the Quality Control task. With this information the average number of scans per hour can be calculated for the project and applied to the new project estimate.
Gather information from an existing project that is most similar to the project you are creating the estimate for. For example, if you need to develop an estimate for a collection of bound volumes that will be captured on the Zeutschel you should find a similar collection in the DWR to run your numbers.
Gather data from an existing project:

Scanning
Number of scans = 3,473
Number of hours = 78.5
3,473 / 78.5 = 44.2 scans/hr

Quality Control
Number of scans = 3,473
Number of hours = 52.75
3,473 / 52.75 = 65.8 scans/hr
Apply the per-hour rates to the new project:
Estimated number of scans: 7,800
Scanning: 7,800 / 44.2 scans/hr = 176.5 hrs
QC: 7,800 / 65.8 scans/hr = 118.5 hrs
Total: 295 hrs
+ 30%: 88.5 hrs
Grand Total: 383.5 hrs
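The estimate arithmetic above is simple enough to capture in a few lines. Here is a rough sketch in Python (a hypothetical helper, not part of the DWR itself, with purely illustrative numbers):

```python
def estimate_hours(num_scans, scan_rate, qc_rate, overhead=0.30):
    """Estimate total project hours from per-hour production rates.

    overhead covers the roughly 30% of project time spent on minor
    tasks (calibration, file movement, ingest prep, and so on).
    """
    scanning_hrs = num_scans / scan_rate
    qc_hrs = num_scans / qc_rate
    return (scanning_hrs + qc_hrs) * (1 + overhead)

# Illustrative numbers: 5,000 scans, scanning at 40 scans/hr,
# quality control at 60 scans/hr.
print(round(estimate_hours(5000, scan_rate=40.0, qc_rate=60.0), 1))  # 270.8
```

In practice the two rates would come from filtering a similar completed project in the Daily Work Report, as described above.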
Rolling Production Rate
When an update is required for an ongoing project, the Daily Work Report can be used to see how much has been done and to calculate how much longer it will take. The number of images scanned in a collection can be found by filtering by project, then by the "Scanning" task; that number can then be subtracted from the total number of scans in the project. Then, using a similar project as above, you can calculate the production rate and estimate the number of hours it will take to complete the project.
Number of scans in the project = 7,800
Number of scans completed = 4,951
Number of scans left to do = 7,800 – 4,951 = 2,849
Scanning time to completion
Number of scans left = 2,849
2,849 / 42.4 scans/hr = 67.2 hrs
Number of files to QC in the project = 7,800
Number of files completed = 3,712
Number of files left to do = 7,800 – 3,712 = 4,088
QC hours to completion
Number of files left to QC = 4,088
4,088 / 68.8 files/hr = 59.4 hrs
The amount of time left to complete the project
Scanning – 67.2 hrs
Quality Control – 59.4 hrs
Total = 126.6 hrs
+ 30% = 38 hrs
Grand Total = 164.6 hrs
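The rolling update boils down to the same division. A quick sketch of the calculation, using the numbers from the example above:

```python
def hours_remaining(total_items, items_done, rate_per_hr):
    """Hours needed to finish the remaining items at a known production rate."""
    return (total_items - items_done) / rate_per_hr

scan_hrs = hours_remaining(7800, 4951, 42.4)   # 2,849 scans left
qc_hrs = hours_remaining(7800, 3712, 68.8)     # 4,088 files left to QC
grand_total = (scan_hrs + qc_hrs) * 1.3        # add the 30% overhead
print(round(scan_hrs, 1), round(qc_hrs, 1), round(grand_total, 1))  # 67.2 59.4 164.6
```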
Isolate an error
Errors inevitably occur during most digitization projects. The DWR can be used to identify how widespread the error is by using a combination of filtering, the digitization guide (which is an inventory of images captured along with other metadata about the capture process), and inspecting the images. As an example, a set of files may be found to have no color profile. The digitization guide can be used to identify the day the erroneous images were created and who created them. The DWR can be used to filter by the scanner operator and date to see if the error is isolated to a particular person, a particular machine or a particular day. This information can then be used to filter by the same variables across collections to see if the error exists elsewhere. The result of this search can facilitate retraining, recalibrating of capture devices and also identify groups of images that need to be rescanned without having to comb through an entire collection.
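To make the filtering step concrete, here is a sketch of the kind of query involved, with a plain list of dicts standing in for DWR rows (the field names and values here are hypothetical):

```python
# Hypothetical Daily Work Report rows.
records = [
    {"task": "Scanning", "operator": "Student A", "scanner": "Zeutschel", "date": "2019-05-02"},
    {"task": "Scanning", "operator": "Student B", "scanner": "Phase One", "date": "2019-05-02"},
    {"task": "Scanning", "operator": "Student A", "scanner": "Zeutschel", "date": "2019-05-03"},
]

def matching(rows, **criteria):
    """Return the rows that match every field=value criterion."""
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

# Suppose QC flagged files created on 2019-05-02 by Student A:
suspect = matching(records, operator="Student A", date="2019-05-02")
devices = {r["scanner"] for r in suspect}

# Widen the search: did the same scanner produce files on other days?
elsewhere = matching(records, scanner="Zeutschel")
```

The same filters, applied across collections, narrow an error down to a person, a machine, or a day without combing through every image.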
While I’ve only touched on the uses of the Daily Work Report, we have used this database in many different ways over the years. It has continued to answer those recurring questions that come up year after year. How many scans did we do last year? How many students worked on that multiyear project? How many patron requests did we complete last quarter? This database has helped us do our estimates, isolate problems and provide accurate updates over the years. For such a simple tool it sure does come in handy.
As one of the largest research libraries in the U.S., we have a whole lot of content on the web to consider.
Our website alone comprises over a thousand pages with more than fifty staff contributors. The library catalog interface displays records for over 13 million items at Duke and partner libraries. Our various digital repositories and digital exhibits platforms host hundreds of thousands of interactive digital objects of different types, including images, A/V, documents, datasets, and more. The list goes on.
Any attempt to take a full inventory of the library’s digital content reveals potentially several million web pages under the library’s purview, and all that content is managed and rendered via a dizzying array of technology platforms. We have upwards of a hundred web applications with public-facing interfaces. We built some of these ourselves, some are community-developed (with local customizations), and others we have licensed from vendors. Some interfaces are new, some are old. And some are really old, dating all the way back to the mid-90s.
Ensuring that this content is equally accessible to everyone is important, and it is indeed a significant undertaking. We must also be vigilant to ensure that it stays accessible over time.
With that as our context, I’d like to highlight a few recent efforts in the library to improve the accessibility of our digital resources.
Style Guide With Color Contrast Checks
In January 2019, we launched a new catalog, replacing a decade-old platform and its outdated interface. As we began developing the front-end, we knew we wanted to be consistent, constrained, and intentional in how we styled elements of the interface. We were especially focused on ensuring that any text in the UI had sufficient contrast with its background to be accessible to users with low vision or color-blindness.
To support this, we built a living style guide: a real-time, up-to-date reflection of how elements of the UI will appear when using particular color variable names and CSS classes. It helps guide developers and other project team members to make good decisions about colors from our palette and to stay in compliance with accessibility guidelines.
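The math behind the contrast checks is the WCAG relative-luminance formula. A minimal sketch of the calculation a style guide might run against its palette (not our actual implementation):

```python
def linearize(channel):
    """Linearize one 0-255 sRGB channel per the WCAG definition."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """Relative luminance of an (r, g, b) color, 0.0 (black) to 1.0 (white)."""
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# WCAG AA requires at least 4.5:1 for normal-size text.
print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))  # 21.0
```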
In the course of an accessibility assessment, we were able to identify (and then fix!) several accessibility issues in DukeSpace. I'll share two strategies in particular that proved to be really effective, and I highly recommend using them frequently.
The Keyboard Test
How easy is it to navigate your site using only your keyboard? Can you get where you want to go using TAB, ENTER, SPACE, UP, and DOWN? Is it clear which element of the page currently has the focus?
If you’re a developer like me, chances are you already spend a lot of time using your browser’s Developer Tools pane to look under the hood of web pages, reverse-engineer UIs, mess with styles and markup, or troubleshoot problems.
The Deque Systems aXe Chrome Extension (also available for Firefox) integrates seamlessly into existing Dev Tools. It’s a remarkably useful tool to have in your toolset to help quickly find and fix accessibility issues. Its interface is clear and easy to understand. It finds and succinctly describes accessibility problems, and even tells you how to fix them in your code.
With aXe testing, we quickly learned we had some major issues to fix. The biggest problems revealed were missing form labels and page landmarks, and low contrast on color pairings. Again, these were not hard to fix since the tool explained what to do, and where.
Turning away from DSpace for a moment, see this example article published on a popular academic journal’s website. Note how it fares with an automated aXe accessibility test (197 violations of various types found). And if you were using a keyboard, you’d have to press Tab over 100 times in order to download a PDF of the article.
Libraries are increasingly becoming champions for open access to scholarly research. The overlap in aims between the open access movement and web accessibility in general is quite striking. It all boils down to removing barriers and making access to information as inclusive as possible.
Our open access repository UIs may never be able to match all the feature-rich bells and whistles present in many academic journal websites. But accessibility, well, that’s right up our alley. We can and should do better. It’s all about being true to our values, collaborating with our community of peers, and being vigilant in prioritizing the work.
Look for many more accessibility improvements throughout many of the library’s digital resources as the year progresses.
Hello! This is my first blog as the new Digital Production Service Manager, and I’d like to take this opportunity to take you, the reader, through my journey of discovering the treasures that the Duke Digital Collections program offers. To personalize this task, I explored the materials related to my family’s journey to the United States. First, I should contextualize. After migrating from south China in the mid-1800s, my family fled Vietnam in the late 1970s and we left with the bare necessities – mainly food, clothes, and essential documents. All I have now are a few family pictures from that era and vividly told stories from my parents to help me connect the dots of my family’s history.
When I started delving into Duke's Digital Collections, it was heartening to find materials from China and Vietnam, and even anti-war materials from the U.S. The following are some materials and collections that I'd like to highlight.
The Sidney D. Gamble Photographs offer over 5,000 photographs of China in the early 20th century, including images of everyday life and landscapes. The above image from the Gamble collection is of a junk, or houseboat, photographed in the early 1900s. When my family fled Vietnam, fifty people crammed into a similar vessel and sailed in the dead of night along the Gulf of Tonkin. My parents spoke of how they were guided by the moonlight and how fearful they were of the junk catching fire from cooking rice.
The African American Soldier's Vietnam War photograph album collection offers these gorgeous images of Vietnam. This is the country that was home to my family for multiple generations, and up until the war, it was a good life. I am astounded and grateful that these postcards were collected by an American soldier in the middle of a war. Having grown up in Los Angeles, California, I have no sense of the world that my parents inhabited, and these images help me appreciate their stories even more. On the other side of the planet, there were efforts to stop the war, and it was intriguing to see a variety of digital collections depicting these perspectives through art and documentary photography. The image below is of a poster from the Italian Cultural Posters collection depicting Uncle Sam and the Viet Cong.
In addition to capturing street scenes in London, the Ronald Reis Collection includes images of Vietnam during the war and of the anti-war effort in the United States. The image below is of a demonstration in Bryant Park in New York City. I recognize that the conflict was fought on multiple fronts and am grateful for these demonstrations, as they ultimately contributed to the end of the war. Lastly, the James Karales Photos collection depicts Vietnam during the war. The image below, titled "Soldiers leaving on helicopter," reminds me of my uncle, who left with the American soldiers and started a new life in the United States. In 1980, thanks to the Family Reunification Act, the aid of the American Red Cross, and my uncle's sponsorship, we started a new chapter in America.
Perhaps this is typical of the immigrant experience, but it still is important to put into words. Not every community has the resources and the privilege to be remembered, and where there are materials to help piece those stories together, they are absolutely valued and appreciated. Thank you, Duke University Libraries, for making these materials available.
About four and a half years ago I wrote a blog post here on Bitstreams titled: “Digitization Details: Before We Push the “Scan” Button” in which I wrote about how we use color calibration, device profiling and modified viewing environments to produce “consistent results of a measurable quality” in our digital images. About two and a half years ago, I wrote a blog post adjacent to that subject titled “The FADGI Still Image standard: It isn’t just about file specs” about the details of the FADGI standard and how its guidelines go beyond ppi and bit depth to include information about UV light, vacuum tables, translucent material, oversized material and more. I’m surprised that I have never shared the actual process of digitizing a collection because that is what we do in the Digital Production Center.
Building digital collections is a complex endeavor that requires a cross-departmental team to analyze project proposals, perform feasibility assessments, gather project requirements, develop project plans, and document workflows and guidelines in order to produce a consistent and scalable outcome in an efficient manner. We call our cross-departmental team the Digital Collections Implementation Team (DCIT); it includes representatives from Conservation, Technical Services, Digital Production, metadata architects, and digital collections UI developers, among others. By having representatives from each department participate, we are able to consider all perspectives, including each department's sticking points, technical limitations, and time constraints. Over time, our understanding of each other's workflows has enabled us to refine our approach and efficiently hand off a project between departments.
I will not be going into the details of all the work other departments contribute to building digital collections (you can read just about any post on the blog for that). I will just dip my toe into what goes on in the Digital Production Center to digitize a collection.
Once the specifics of a project are nailed down, the scope finalized, the material organized by Technical Services and prepared for digitization by Conservation, the material transferred to the Digital Production Center, and an Assessment Checklist filled out describing the type, condition, size, and number of items in a collection, we are ready to begin the digitization process.
Digitization Guide
A starter digitization guide is created using output from ArchivesSpace, and the DPC adds 16-20 fields to capture technical metadata during the digitization process. The digitization guide is an itemized list representing each item in a collection and is centrally stored for ease of access.
Calibration
Cameras and monitors are calibrated with a spectrophotometer, and a color profile is built for each capture device along with job settings in the capture software. This produces consistent results from each capture device and an accurate representation of the items captured, which in turn removes subjective evaluation from the scanning process.
Documentation and Training
Instructions are developed describing the scanning, quality control, and handling procedures for the project, and students are trained.
Digitization
Following the instructions developed for each collection, the scanner operator uses the appropriate equipment, settings, and digitization guide to digitize the collection. Benchmark tests are performed and evaluated periodically during the project. During capture, the images are monitored for color fidelity and file-naming errors. The images are saved in a structured way on the local drive, and the digitization guide is updated to reflect the completion of each item. At the end of each shift the files are moved to a production server.
Quality Control 1
The Quality Control process is different depending on the device with which an item was captured and the nature of the material. All images are inspected for: correct file name, skew, clipping, banding, blocking, color fidelity, uniform crop, and color profile. The digitization guide is updated to reflect the completion of an item.
Quality Control 2
Images are cropped (leaving no background) and saved as JPEGs for online display. During the second pass of quality control each image is inspected for: image consistency from operator to operator and image to image, skew and other anomalies.
During this phase we compare the digitization guide against the item and file counts of the archival and derivative images on our production server. Discrepancies such as missing files, misnamed files, and missing line items in the digitization guide are resolved.
Create Checksums and dark storage
We then create a SHA1 checksum for each image file in the collection and push the collection into a staging area for ingest into the repository.
Sometimes this process is referred to simply as “scanning”.
Not only is this process in active motion for multiple projects at the same time, but the Digital Production Center also participates in remediation of legacy projects for ingest into the Duke Digital Repository, multispectral imaging, and audio and video digitization for preservation, patron, and staff requests… it is quite a juggling act with lots of little details, but we love our work!
Time to get back to it so I can get to a comfortable stopping point before the Thanksgiving break!
In anticipation of next Tuesday’s midterm elections, here is a photo gallery of voting-related images from Duke Digital Collections. Click on a photo to view more images from our collections dealing with political movements, voting rights, propaganda, activism, and more!
If you haven’t already taken advantage of early voting, we at Bitstreams encourage you to exercise your right on November 6!
When Duke professor and botanist Henry J. Oosting agreed to take part in an expedition to Greenland in the summer of 1937, his mission was to collect botanical samples and document the region's native flora. The expedition, organized and led by noted polar explorer Louise Arner Boyd, included several other accomplished scientists of the day, and its principal achievement was the discovery and charting of a submarine ridge off Greenland's eastern coast.
In a diary he kept during his trip titled “To Greenland in 105 Days, or Why did I ever leave home,” Oosting focuses little on the expedition’s scientific exploits. Instead, he offers a more intimate look into the mundane and, at times, amusing aspects of early polar exploration. Supplementing the diary in the recently published Henry J. Oosting papers digital collection are a handful of digitized nitrate negatives that add visual interest to his arctic (mis)adventures.
Oosting’s journey got off to an inauspicious start when he wrote in his opening entry on June 9, 1937: “Frankly, I’m not particularly anxious to go now that the time has come–adventure of any sort has never been my line–and the thought of the rolling sea gives me no great cheer.” What follows over the next 200 pages or so, by his own account, are the “inane mental ramblings of a simple-minded botanist,” complete with dozens of equally inane marginal doodles.
The Veslekari, the ship chartered by Louise Boyd for the expedition, first encountered sea ice on July 12 just off the east coast of Greenland. As the ship slowed to a crawl and boredom set in among the crew the following day, Oosting wrote in his diary that “Miss Boyd’s story of the polar bear is worth recording.” He then relayed a joke Boyd told the crew: “If you keep a private school and I keep a private school then why does a polar bear sit on a cake of ice…? To keep its privates cool, of course.” For clarification, Oosting added: “She says she has been trying for a long time to get just the right picture to illustrate the story but it’s either the wrong kind of bear or it won’t hold its position.”
When the expedition finally reached the Greenland coast at the end of July, Oosting spent several days exploring the Tyrolerfjord glacier, gathering plant specimens and drying them on racks in the ship’s engine room. On the glacier, Oosting observed an arctic hare and an ermine, and noted that “my plants are accumulating in such quantity.”
As the expedition wore on Oosting grew increasingly frustrated with the daily tedium and with Boyd’s unfailing enthusiasm for the enterprise. “In spite of everything…we are stopping at more or less regular intervals to see what B thinks is interesting,” Oosting wrote on August 19. “I didn’t go ashore this A.M. for a 15 min. stop even after she suggested it–have heard about it 10 times since…I’ll be obliged to go in every time now regardless or there will be no living with this woman. I am thankful, sincerely thankful, there are only 5 more days before we sail for I am thoroughly fed-up with this whole business.”
By late August, the Veslekari and crew headed back east towards Bergen, Norway and eventually Newcastle, England, where Oosting boarded a train for London on September 12. “This sleeping car is the silliest arrangement imaginable,” Oosting wrote, “my opinion of the English has gone down–at least my opinion of their ideas of comfort.” After a brief stint sightseeing around London, Oosting boarded another ship in Southampton headed for New York and eventually home to Durham. “It will be heaven to get back to the peace and quiet of Durham,” Oosting pined on September 14, “I’m developing a soft spot for the lousy old town.”
Oosting arrived home on September 21, where his diary ends. Despite his curmudgeonly tone throughout and his obsession with recording every inconvenience and impediment encountered along the way, it’s clear from other sources that Oosting’s work on the voyage made important contributions to our understanding of arctic plant life.
In The Coast of Northeast Greenland (1948), edited by Louise Boyd and published by the American Geographic Society, Oosting authored a chapter titled “Ecological Notes on the Flora,” in which he meticulously documented the specimens he collected in the arctic. The onset of World War II and concerns over national security delayed publication of Oosting’s findings, but when released, they provided valuable new information about plant communities in the region. While Oosting’s diary reveals a man with little appetite for adventure, his work endures. As the foreword to Boyd’s 1948 volume attests: “When travelers can include significant contributions to science, then adventure becomes a notable achievement.”
In 2016, after we launched the first iteration of the Duke Chapel Recordings Digital Collection in the Duke Digital Repository (DDR), we began a collaborative project between Digital Collections and Curation Services, University Archives, and the Duke Divinity School to enhance the metadata. The original metadata was fairly basic and allowed users to identify individual written, audio, and video sermons based on speaker, date, title, and format. All good stuff, but it didn’t allow for discovery based on the intellectual content of the sermons themselves. So, it was decided that, at the same time Divinity School staff listened to and corrected machine-generated transcripts for each sermon, they would also capture information that is useful from a homiletic perspective.
At the very beginning of the project, the Divinity School convened two focus groups of preachers from a variety of denominations and backgrounds to ask them how they would like to be able to discover and use a digital collection of sermons. These groups developed a set of terms and categories by which they would like to be able to identify sermons. From there, I worked with the project team to think through what kinds of fields they would want to capture, and to determine whether and how those fields could map to the existing metadata application profile that we use in the DDR.
It quickly became clear that this project was going to require the creation of new metadata fields in the DDR application. I try to be really judicious about creating new fields (because otherwise, you end up doing this), but in this case, I felt that the need was justified: homiletic metadata is fairly specialized, and given Duke’s commitment to this collecting area, making adjustments to accommodate it seemed more than reasonable. Since I always like to work with best practices, I attempted to identify any extant metadata schemas for working with biblical metadata. I felt pretty confident that I would find one, considering that the Bible is actually one of the oldest books out there. While I did find some resources, they were pretty old (think last-updated-in-2006), and all of them were oriented towards marking up actual Biblical texts, rather than the encoding of metadata about those texts.
With no established standards to work with, we set about determining what the fields should be, using the practice of homiletics itself as a guide. We also developed a workflow for capturing this metadata, using a Google spreadsheet with conditional formatting and pre-developed drop-down lists to control and facilitate data entry. And starting from the set of terms and categories developed during the focus groups, we came up with a normalized set of Library of Congress Subject Headings (LCSH) for staff to choose from or add to, as needs arose.
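To give a sense of what a controlled pick-list buys you, here is a minimal sketch of the kind of validation a drop-down list enforces. The field names and subject terms below are invented for illustration; they are not the project’s actual vocabulary or data model.

```python
# Hypothetical validation of one spreadsheet row against a controlled
# vocabulary, mimicking what a pre-built drop-down list enforces.
# All field names and terms here are illustrative, not the real ones.

APPROVED_SUBJECTS = {
    "Grace (Theology)",
    "Forgiveness",
    "Hope",
}

def validate_row(row):
    """Return a list of problems found in one data-entry row."""
    problems = []
    if not row.get("title"):
        problems.append("missing title")
    for subject in row.get("subjects", []):
        if subject not in APPROVED_SUBJECTS:
            problems.append(f"uncontrolled subject: {subject!r}")
    return problems

row = {"title": "Sermon on Hope", "subjects": ["Hope", "Optimism"]}
print(validate_row(row))  # flags "Optimism" as uncontrolled
```

A drop-down list prevents these errors at entry time; a check like this catches anything pasted in around the list, which is how controlled vocabularies usually drift in shared spreadsheets.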
Working with LCSH was in itself a challenge, as it required us to navigate the tension between using a standardized set of headings and including concepts that weren’t well represented in the vocabulary. In some cases we diverged from LCSH in the interest of using terms that would be familiar, expected, and recognizable to practitioners of homiletics. One example is the term ‘Community’, which has a particular meaning in a Biblical context that the LCSH term ‘Communities’ fails to convey.
We rolled out the new metadata properties and values in early August so they could be available for use by attendees at the international homiletics conference, Societas Homiletica, which was held at Duke University August 3-8, 2018. Now, users of the digital collection can facet and browse by: Liturgical Calendar, Biblical Book, Chapter and Verse, and Subject. We’ve also added curated abstracts, and key quotations from the sermons, which are free-text searchable.
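To show what faceted browsing over fields like these involves, here is a small sketch that groups records under each value of a facet field. The records, field names, and values are invented; they merely echo the facets named above and are not the DDR’s actual data model.

```python
from collections import defaultdict

# Hypothetical sermon records; field names mirror the new facets
# (liturgical calendar, biblical book) but all values are invented.
records = [
    {"title": "A", "liturgical_calendar": "Advent", "biblical_book": "Isaiah"},
    {"title": "B", "liturgical_calendar": "Lent",   "biblical_book": "Luke"},
    {"title": "C", "liturgical_calendar": "Advent", "biblical_book": "Luke"},
]

def facet(records, field):
    """Group record titles under each value of one facet field."""
    buckets = defaultdict(list)
    for r in records:
        buckets[r[field]].append(r["title"])
    return dict(buckets)

print(facet(records, "liturgical_calendar"))
# {'Advent': ['A', 'C'], 'Lent': ['B']}
```

In a real repository the grouping and counting happen in the search index rather than in application code, but the principle is the same: each new metadata field becomes another axis along which the collection can be sliced.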
The enhanced metadata makes for a much more meaningful experience using the Duke Chapel Recordings, and future plans involve the inclusion of sermon transcripts, as well as the development of a complementary website, maintained by the Duke Divinity School, to provide even more information about the speakers and their sermons. With these enrichments, we are well on our way to having an unparalleled free and open resource for the study of homiletics, and hopefully, in so doing, we will facilitate the discovery and study of preachers whose voices have traditionally been underheard.
Notes from the Duke University Libraries Digital Projects Team