We’ve written many posts on this blog that describe (in detail) how we build our digital collections at Duke, how we describe them, and how we make them accessible to researchers.
At a Rubenstein Library staff meeting this morning, one of my colleagues, Sarah Carrier, gave an interesting report on how some of our researchers are actually using our digital collections. Sarah’s report focused specifically on permission-to-publish requests: cases where researchers asked the library for permission to publish reproductions of materials from our collections in scholarly monographs, journal articles, exhibits, websites, documentaries, and any number of other creative works. To be clear, Sarah examined all of these requests, not just those involving digital collections. Below is a chart showing the distribution of the types of publication uses.
What I found especially interesting about Sarah’s report, though, is that nearly 76% of permission-to-publish requests involved materials from the Rubenstein that have been digitized and are available in Duke Digital Collections. The chart below shows the Rubenstein collections that generate the highest percentage of requests. Notice that just three collections available in Duke Digital Collections accounted for 40% of all permission-to-publish requests:
So, even though we’ve digitized only a small fraction of the Rubenstein’s holdings (probably less than 1%), it is this small fraction that generates the great majority of permission-to-publish requests.
I find this statistic encouraging and discouraging at the same time. On one hand, it’s great to see that folks are finding our digital collections and using them in their publications or other creative output. On the other hand, it’s frightening to think that the remainder of our amazing but yet-to-be-digitized collections are rarely, if ever, used in publications, exhibits, and websites.
I’m not suggesting that researchers aren’t using un-digitized materials. They certainly are, in record numbers: more patrons are visiting our reading room than ever before. So how do we explain these numbers? Perhaps research and publication are really two separate processes. Imagine you’ve just written a 400-page monograph on the evolution of popular song in America. You probably just want to sit down at your computer, fire up your web browser, and do a Google Image Search for “historic sheet music” to find some cool images to illustrate your book. Maybe I’m wrong, but if I’m not, we’ve got you covered. After it’s published, send us a hard copy. We’ll add it to the collection, and maybe we’ll even digitize it someday.
[Data analysis and charts provided by Sarah Carrier – thanks Sarah!]
Everyone knows that Twitter limits each post to 140 characters. Early criticism has since cooled and most people agree it’s a helpful constraint, circumvented through clever (some might say better) writing, hyperlinks, and URL-shorteners. But as a reader of tweets, how do you know what lies at the other end of a shortened link? What entices you to click? The tweet author can rarely spare the characters to attribute the source site or provide a snippet of content, and can’t be expected to attach a representative image or screenshot.
Our webpages are much more than just mystery destinations for shortened URLs. Twitter agrees: its developers want help understanding what the share-worthy content from a webpage actually is in order to present it in a compelling way alongside the 140 characters or less. Enter two library hallmarks: vocabularies and metadata.
This week, we added Twitter Card metadata in the <head> of all of our digital collections pages and in our library blogs. This data instantly made all tweets and retweets linking to our pages far more interesting. Check it out!
For the blogs, tweets now display the featured image, post title, opening snippet, site attribution, and a link to the original post. Links to items from digital collections now show the image itself (along with some item info), while links to collections, categories, or search results now display a grid of four images with a description underneath. See these examples:
Why This Matters
In 2013-14, social media platforms accounted for 10.1% of traffic to our blogs (~28,000 visits, 11,300 of them via Twitter) and 4.3% of visits to our digital collections (~17,000 visits, 1,000 via Twitter). That seems low, but perhaps it’s because of the mystery-link phenomenon. These new media-rich tweets have the potential to increase our traffic through these channels by being more interesting to look at and more compelling to click. We’re looking forward to finding out whether they do.
And regardless of driving clicks, there are two other benefits of Twitter Cards that we really care about in the library: context and attribution. We love it when our collections and blog posts are shared on Twitter. These tweets now automatically give some additional information and helpfully cite the source.
How to Get Your Own Twitter Cards
The Manual Way
If you’re manually adding tags like we’ve done in our Digital Collections templates, you can “View Source” on any of our pages to see what <meta> tags make the magic happen. Moz also has some useful code snippets to copy, with links to validator tools so you can make sure you’re doing it correctly.
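To make the manual route concrete, here is a small sketch (in Python, purely for illustration) of generating the kind of <meta> tags involved. The property names (twitter:card, twitter:site, twitter:title, twitter:description, twitter:image) are standard Twitter Card fields, but the page values below are invented examples, not our actual templates.

```python
from html import escape

def twitter_card_tags(title, description, image_url, site_handle):
    """Build the <meta> tags for a basic 'summary' Twitter Card."""
    fields = {
        "twitter:card": "summary",
        "twitter:site": site_handle,       # the site's Twitter handle
        "twitter:title": title,
        "twitter:description": description,
        "twitter:image": image_url,
    }
    return "\n".join(
        '<meta name="{}" content="{}">'.format(name, escape(val, quote=True))
        for name, val in fields.items()
    )

# Invented example values for a hypothetical item page:
print(twitter_card_tags(
    "Some Sheet Music Item",
    "A digitized item from the collections.",
    "https://example.org/thumb.jpg",
    "@hypothetical_handle",
))
```

These tags belong inside the page’s <head>; the validator tools linked above will confirm whether Twitter can read them.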
The WordPress Way
Since our blogs run on WordPress, we were able to use the excellent WordPress SEO plugin by Yoast. It’s helpful for a lot of things related to search engine optimization, and it makes this social media optimization easy, too.
Once your tags are in place, you just need to validate an example from your domain using the Twitter Card Validator before Twitter will turn on the media-rich tweets. It doesn’t take long at all: ours began appearing within a couple hours. The cards apply retroactively to previous tweets, too.
Our addition of Twitter Card data follows similar work we have done using semantic markup in our Digital Collections site using the Open Graph and Schema.org vocabularies. Open Graph is a standard developed by Facebook. Similar to Twitter Card metadata, OG tags inform Facebook what content to highlight from a linked webpage. Schema.org is a vocabulary for describing the contents of web pages in a way that is helpful for retrieval and representation in Google and other search engines.
All of these tools use RDFa syntax, a cornerstone of Linked Data on the web that supports the description of resources using whichever vocabularies you choose. Google, Twitter, Facebook, and other major players in our information ecosystem are now actively using this data, providing a clear incentive for web authors to provide it. We should keep striving to play along.
Back in February 2014, we wrapped up the CCC project, a collaborative three year IMLS-funded digitization initiative with our partners in the Triangle Research Libraries Network (TRLN). The full title of the project is a mouthful, but it captures its essence: “Content, Context, and Capacity: A Collaborative Large-Scale Digitization Project on the Long Civil Rights Movement in North Carolina.”
So how large is “large-scale”? By comparison, when the project kicked off in summer 2011, we had a grand total of 57,000 digitized objects available online (“published”), collectively accumulated through sixteen years of digitization projects. That number was 69,000 by the time we began publishing CCC manuscripts in June 2012. Putting just as many documents online in three years as we’d been able to do in the previous sixteen naturally requires a much different approach to creating digital collections.
Traditional approach: individual items identified during scanning; descriptive metadata applied to each item.
Large-scale approach: no item-level identification (entire folders scanned); archival description only (e.g., at the folder level).
CCC staff completed qualitative and quantitative evaluations of this large-scale digitization approach during the course of the project, ranging from conducting user focus groups and surveys to analyzing the impact on materials prep time and image quality control. Researcher assessments targeted three distinct user groups: 1) Faculty & History Scholars; 2) Undergraduate Students (in research courses at UNC & NC State); 3) NC Secondary Educators.
Ease of Use. Faculty and scholars, for the most part, found it easy to use digitized content presented this way. Undergraduates were more ambivalent, and secondary educators had the most difficulty.
To Embed or Not to Embed. In 2012, Duke was the only partner library presenting image thumbnails embedded directly within finding aids, paired with a lightbox-style image navigator. Undergrads who used Duke’s interface found it easier to use than UNC’s or NC Central’s, and Duke’s collections had a higher rate of images viewed per folder than the other partners’. UNC’s and NC Central’s interfaces now use a similar convention.
Potential for Use. Most users surveyed said they could indeed imagine themselves using digitized collections presented in this way in the course of their research. However, the approach falls short in meeting key needs for secondary educators’ use of primary sources in their classes.
Desired Enhancements. The top two most desired features by faculty/scholars and undergrads alike were 1) the ability to search the text of the documents (OCR), and 2) the ability to explore by topic, date, document type (i.e., things enabled by item-level metadata). PDF download was also a popular pick.
Impact on Duke Digitization Projects
Since the moment we began putting our CCC manuscripts online (June 2012), we’ve completed the eight CCC collections using this large-scale strategy, and an additional eight manuscript collections outside of CCC using the same approach. We have now cumulatively put more digital objects online using the large-scale method (96,000) than we have via traditional means (75,000). But in that time, we have also completed eleven digitization projects with traditional item-level identification and description.
We see the large-scale model for digitization as complementary to our existing practices: a technique we can use to meet the publication needs of some projects.
Do people actually use the collections when presented in this way? Some interesting figures:
Views / item in 2013-14 (traditional digital object; item-level description): 13.2
Views / item in 2013-14 (digitized image within finding aid; folder-level description): 1.0
Views / folder in 2013-14 (digitized folder view in finding aid): 8.5
It’s hard to attribute the usage disparity entirely to the publication method (they’re different collections, for one). But it’s reasonable to deduce (and unsurprising) that bypassing item-level description generally results in less traffic per item.
The takeaway: sometimes having interesting, important, and timely content available for use online matters more than the features enabled or the process by which it all gets there.
We’ll keep pushing ahead with evolving our practices for putting digitized materials online. We’ve introduced many recent enhancements, like fulltext searching, a document viewer, and embedded HTML5 video. Inspired by the CCC project, we’ll continue to enhance our finding aids to provide access to digitized objects inline for context (e.g., The Jazz Loft Project Records). Our TRLN partners have also made excellent upgrades to the interfaces to their CCC collections (e.g., at UNC, at NC State) and we plan, as usual, to learn from them as we go.
We try to keep our posts pretty focused on the important work at hand here at Bitstreams central, but sometimes even we get distracted (speaking of, did you know that you can listen to the Go-Go’s for hours and hours on Spotify?). With most of our colleagues in the library leaving for or returning from vacation, it can be difficult to think about anything but exotic locations and what to do with all the time we are not spending in meetings. So this week, dear reader, we give you a few snapshots of vacation adventures told through Duke Digital Collections.
Many of Duke’s librarians (myself included) head directly east for a few days of R&R at one of the many beautiful North Carolina beaches. Who can blame them? It seems like everyone loves the beach, including William Gedney, Deena Stryker, Paul Kwilecki, and even Sidney Gamble. Lucky for North Carolina, the beach is only a short trip away, but of course there are essentials that you must not forget even on such a short journey.
Of course, many colleagues have ventured even farther afield, to West Virginia, Minnesota, Oregon, Maine, and even Africa! Wherever our colleagues are, we hope they are enjoying some well-deserved time off. For those of us who have already had our time away or are looking forward to next time, we will just have to live vicariously through our colleagues’ and our collections’ adventures.
The audio tapes in the recently acquired Radio Haiti collection posed a number of digitization challenges. Some of these were discussed in this video produced by Duke’s Rubenstein Library:
In this post, I will use a short audio clip from the collection to illustrate some of the issues that we face in working with this particular type of analog media.
First, I present the raw digitized audio, taken from a tape labelled “Tambour Vaudou”:
As you can hear, there are a number of confusing and disorienting things going on there. I’ll attempt to break these down into a series of discrete issues that we can diagnose and fix if necessary.
Tape Speed
Analog tape machines typically offer more than one speed for recording, meaning that you can change the rate at which the reels turn and the tape moves across the record or playback head. The faster the speed, the higher the fidelity of the result. On the other hand, faster speeds use more tape (which is expensive). Tape speed is measured in ips (inches per second). The tapes we work with were usually recorded at 3.75 or 7.5 ips, and our playback deck is set up to handle either speed. We preview each tape before digitizing to determine the proper setting.
In the audio example above, you can hear that the tape speed was changed at around 10 seconds into the recording. This accounts for the “spawn of Satan” voice you hear at the beginning. Shifting the speed in the opposite direction would have resulted in a “chipmunk voice” effect. This issue is usually easy to detect by ear. The solution in this case would be to digitize the first 10 seconds at the faster speed (7.5 ips), and then switch back to the slower playback speed (3.75 ips) for the remainder of the tape.
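Because 7.5 ips is exactly twice 3.75 ips, a section captured at the wrong setting comes out at half (or double) speed and pitch. Re-digitizing at the correct speed is the real fix; but as a toy illustration of why the factor of two matters, here is a naive Python sketch of speeding half-speed audio back up by decimation (real tools would low-pass filter first, so this is illustrative rather than archival practice):

```python
# A half-speed capture has twice as many samples as it should. Since the
# ratio between 3.75 and 7.5 ips is exactly 2, a naive correction simply
# keeps every second sample, halving duration and restoring pitch.
def speed_up_2x(samples):
    return samples[::2]

slow = list(range(8))       # stand-in for audio samples captured at half speed
print(speed_up_2x(slow))    # -> [0, 2, 4, 6]
```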
Volume Level and Background Noise
The tapes we work with come from many sources and locations and were recorded on a variety of equipment by people with varying levels of technical knowledge. As a result, the audio can be all over the place in terms of fidelity and volume. In the audio example above, the volume jumps dramatically when the drums come in at around 00:10. Then you hear that the person making the recording gradually brings the level down before raising it again slightly. There are similar fluctuations in volume level throughout the audio clip. Because we are digitizing for archival preservation, we don’t attempt to make any changes to smooth out the sometimes jarring volume discrepancies across the course of a tape. We simply find the loudest part of the content, and use that to set our levels for capture. The goal is to get as much signal as possible to our audio interface (which converts the analog signal to digital information that can be read by software) without overloading it. This requires previewing the tape, monitoring the input volume in our audio software, and adjusting accordingly.
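The level-setting step above boils down to a small calculation: find the loudest sample in a preview pass, express it in dBFS (decibels relative to digital full scale, where 0 dBFS is clipping), and set the capture gain so that peak lands safely below zero. A minimal sketch, assuming samples are floats in the range -1.0 to 1.0:

```python
import math

def peak_dbfs(samples):
    """Return the peak level of a block of samples in dBFS (0 = full scale)."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")    # digital silence
    return 20 * math.log10(peak)

preview = [0.1, -0.5, 0.25, 0.05]      # made-up preview samples
print(round(peak_dbfs(preview), 1))    # -> -6.0
```

A peak of -6 dBFS leaves comfortable headroom; a peak at or above 0 dBFS means the input gain must come down before capture.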
This recording happens to be fairly clean in terms of background noise, which is often not the case. Many of the oral histories that we work with were recorded in noisy public spaces or in homes with appliances running, people talking in the background, or the subject not in close enough proximity to the microphone. As a result, the content can be obscured by noise. Unfortunately there is little that can be done about this since the problem is in the recording itself, not the playback. There are a number of hum, hiss, and noise removal tools for digital audio on the market, but we typically don’t use these on our archival files. As mentioned above, we try to capture the source material as faithfully as possible, warts and all. After each transfer, we clean the tape heads and all other surfaces that the tape touches with a Q-tip and denatured alcohol. This ensures that we’re not introducing additional noise or signal loss on our end.
Splices
While cleaning the Radio Haiti tapes (as detailed in the video above), we discovered that many of the tapes were made up of multiple sections of tape spliced together. A splice is simply a place where two different pieces of audio tape are connected by a piece of sticky tape (much like the familiar Scotch tape that you find in any office). This may be done to edit together various content into a seamless whole, or to repair damaged tape. Unfortunately, the sticky tape used for splicing dries out over time, becomes brittle, and loses its adhesive qualities. In the course of cleaning and digitizing the Radio Haiti tapes, many of these splices came undone and had to be repaired before our transfers could be completed.
Our playback deck includes a handy splicing block that holds the tape in the correct position for this delicate operation. First, I use a razor blade to clean up any rough edges on both ends of the tape and cut it to the proper 45-degree angle. The splicing block includes a groove that helps to make a clean and accurate cut. Then I move the two pieces of tape end to end, so that they are just touching but not overlapping. Finally, I apply the sticky splicing tape (the blue piece in the photo below) and gently press on it to make sure it is evenly and fully attached to the audio tape. Now the reel is once again ready for playback and digitization. In the “Tambour Vaudou” audio clip above, you may notice three separate sections of content: the voice at the beginning, the drums in the middle, and the singing at the end. These were three pieces of tape that were spliced together on the original reel and that we repaired right here in the library’s Digital Production Center.
These are just a few of many issues that can arise in the course of digitizing a collection of analog open reel audio tapes. Fortunately, we can solve or mitigate most of these problems, get a clean transfer, and generate a high-quality archival digital file. Until next time…keep your heads clean, your splices intact, and your reels spinning!
The era in which libraries have digitized their collections and published them on the Internet is less than two decades old. As an observer and participant during this time, I’ve seen some great projects come online. For me, one stands out for its impact and importance: the Farm Security Administration/Office of War Information Black-and-White Negatives, the Library of Congress’s collection of 175,000 photographs taken by employees of the US government in the 1930s and ’40s.
The FSA photographers produced some of the most iconic images of the past century. In the decades following the program, the images became known through those who journeyed to D.C. to select and reproduce them for publication in monographs or display in exhibits. But the entire collection, funded by the federal government, was as public as public domain gets. When the LoC took on the digitization of the collection, it became available en masse. All those years, it had been waiting for the Internet.
The FSA photographers covered the US. This wonderful site built by a team from Yale can help you determine whether they passed through your hometown. Between 1939 and 1940, Dorothea Lange, Marion Post Wolcott, and Jack Delano traveled through the town and the county where I live, and some 73 of their photos are now online. I’ve studied them, and also witnessed the wonderment of my friends and neighbors when they happen upon the pictures. The director of the FSA program, Roy Stryker, was one of the visionaries of the Twentieth Century, but it took the digital collection to make the scope and reach of his vision apparent.
Photography has been an emphasis of our own digital collections program over the years. At the same time that the FSA traveled to rural Chatham County on their mission of “introducing America to Americans,” anonymous photographers employed by the RC Maxwell Company shot their outdoor advertising installations in places like Atlantic City, New Jersey and Richmond, Virginia. Maybe they were merely “introducing advertising to advertisers,” but I like to think of them as our own mini-Langes and mini-Wolcotts, freezing scenes that others cruised past in their Studebakers.
Certainly the most important traveling photographer we’ve published has been Sidney Gamble, an American who visited Asia, particularly China, on four occasions between 1908 and 1932. As with the FSA photos, I’ve spent time studying the scenes of places known to me. I’ve never been to China or Siberia, but I did live in Japan for a while some years ago, and come back to photos of a few places I visited – or maybe didn’t – while I was there.
The first place is the Great Buddha at Kamakura. It’s a popular tourist site south of Tokyo; I visited with some friends in 1990. Our collection has four photographs by Gamble of the Daibutsu. I don’t find anything particular of interest in Gamble’s shots, just the unmistakable calm and grandeur of the same scene I saw 60+ years later.
More intriguing for me, however, is the photo that Gamble took of the YMCA in Yokohama, probably in 1917. For a while during my stay in Japan, I lived a few train stops from Yokohama and got involved in a weekly game of pickup basketball at the Y there. I don’t remember much about the exterior of the building, but I recall the interior as somewhat funky, with lots of polished wood and a sweet wooden court. It was very distinctive for Tokyo and environs – a city where most of the architecture is best described as transient and flimsy, designed to have minimum impact when flattened by massive forces like earthquakes or bombers. I’ve always wondered if the building in Gamble’s photo was the same one that I visited.
So I began to construct an answer to this question based entirely on my own fading memories, some superficial research, and a fractional comprehension of a series of YouTube videos on the history of the YMCA in Yokohama. To begin with, a screenshot of the Google Street View of the Yokohama YMCA in 2011 shows a building quite different from the original.
The YouTube video includes a photograph of a building, built in 1884, that is clearly the same as the one in Gamble’s photograph. There are shots of people playing basketball and table tennis, and the few details of the interior look a lot like the place I remember. Could it be the same?
But then we see the building damaged from the Great Kanto Earthquake of 1923. That the structure was standing at all would have been remarkable. You can easily search and find images of the astonishing devastation of that event, but I’ll let these harrowing words from a correspondent of The Atlantic convey the scale of it.
Yokohama, the city of almost half a million souls, had become a vast plain of fire, of red, devouring sheets of flame which played and flickered. Here and there a remnant of a building, a few shattered walls, stood up like rocks above the expanse of flame, unrecognizable. There seemed to be nothing left to burn. It was as if the very earth were now burning.
According to my understanding of the video, the YMCA moved into another building in 1926. Based on the photos of the interior, my guess is that it was the same building where I visited in the early 1990s. The shots of basketball and table tennis from earlier might have been taken inside this building, even if the members of the Y engaged in those activities in the original.
Still, I couldn’t help but ask – would the Japanese have played basketball in the original building, between the game’s invention in 1891 and the earthquake in 1923? It seemed anachronistic to me, until I looked into it a little further.
It’s not hard to imagine Ishikawa making a beeline from the ship when it docked at Yokohama to the YMCA. If so, it makes the building that Gamble shot one of the sanctified sites of the sport, like many shrines since ruined but replaced. Sure it was impressive to gaze up at a Giant Buddha cast in bronze some 800 years prior, but what I really like to think about is how that sweet court I played on in Yokohama bears a direct line of descent from the origins of the game.
So much work to do, so little time. But what keeps us focused as we work to make a wealth of resources available via the web? It often comes down to a willingness to collaborate and a commitment to a common vision.
Staying focused through vision and values
When Duke University Libraries embarked on our 2012-2013 website redesign, we created a vision and values statement that became a guidepost for our decision making. It worked so well for that single project that we later decided to apply it to current and future web projects. You can read the full statement on our website, but here are just a few of the key ideas:
Put users first.
Verify data and information, perpetually remove outdated or inaccurate data and content, & present relevant content at the point of need.
Strengthen our role as essential partners in research, teaching, and scholarly communication: be a center of intellectual life at Duke.
Maintain flexibility in the site to foster experimentation, risk-taking, and future innovation.
As we decide which projects to undertake, what our priorities should be, and how we should implement these projects, we often consider what aligns well with our vision and values. And when something doesn’t fit well, it’s often time to reconsider.
Team work, supporting and balancing one another
Vision counts, but having people who collaborate well is what really enables us to maintain focus and to take a coherent approach to our work.
A number of cross-departmental teams within Duke University Libraries consider which web-based projects we should undertake, who should implement them, when, and how. By ensuring that multiple voices are at the table, each bringing different expertise, we make use of the collective wisdom from within our staff.
The Web Experience Team (WebX) is responsible for the overall visual consistency and functional integrity of our web interfaces. It not only provides vision for our website, but actively leads or contributes to the implementation of numerous projects. Sample projects include:
The introduction of a new eBook service called Overdrive
The development of a new, Bento-style version of our search portal to be released in August
Members of WebX are Aaron Welborn, Emily Daly, Heidi Madden, Jacquie Samples, Kate Collins, Michael Peper, Sean Aery, and Thomas Crichlow.
While we love to see the research community using our collections within our reading rooms, we understand the value in making these collections available online. The Advisory Committee for Digital Collections (ACDC) decides which collections of rare material will be published online. Members of ACDC are Andy Armacost, David Pavelich, Jeff Kosokoff, Kat Stefko, Liz Milewicz, Molly Bragg, Naomi Nelson, Valerie Gillispie, and Will Sexton.
The Digital Collections Implementation Team (DCIT) both guides and undertakes much of the work needed to digitize and publish our unique online collections. Popular collections DCIT has published include:
Members of DCIT are Erin Hammeke, Mike Adamo, Molly Bragg, Noah Huffman, Sean Aery, and Will Sexton.
These groups have their individual responsibilities, but they also work well together. The teamwork extends beyond these groups as each relies on individuals and departments throughout Duke Libraries and beyond to ensure the success of our projects.
Most importantly, it helps that we like to work together, we value each other’s viewpoints, and we remain connected to a common vision.
A unified search results page, commonly referred to as the “Bento Box” approach, has been an increasingly popular method to display search results on library websites. This method helps users gain quick access to a limited result set across a variety of information scopes while providing links to the various silos for the full results. NCSU’s QuickSearch implementation has been in place since 2005 and has been extremely influential on the approach taken by other institutions.
Way back in December of 2012, the DUL began investigating and planning for implementing a Bento search results layout on our website. Extensive testing revealed that users favor searching from a single box, as is their typical experience conducting web searches via Google and the like. Like many libraries, we’ve been using Summon as a unified discovery layer for articles, books, and other resources for a few years, providing an ‘All’ tab on our homepage as the entry point. Summon aggregates these various sources into a common index, presented in a single stream on search results pages. Our users often find this presentation overwhelming or confusing and prefer other search tools. As such, we’ve demoted our ‘All’ search on our homepage — although users can set it as the default thanks to the very slick Default Scope search tool built by Sean Aery (with inspiration from the University of Notre Dame’s Hesburgh Libraries website):
The library’s Web Experience Team (WebX) proposed the Bento project in September of 2013. Some justifications for the proposal were as follows:
Bento boxing helps solve these problems:
We won’t have to choose which silo should be our default search scope (in our homepage or masthead)
Synthesizing relevance ranking across very different resources is extremely challenging, e.g., articles get in the way of books if you’re just looking for books (and vice-versa).
We need to move from “full collection discovery to full library discovery” – in the same search, users discover expertise, guides/experts, other library provisions alongside items from the collections. 1
“A single search box communicates confidence to users that our search tools can meet their information needs from a single point of entry.” 2
Sean also developed this mockup of what Bento results could look like on our website and we’ve been using it as the model for our project going forward:
For the past month our Bento project team has been actively developing our own implementation. We have had the great luxury of building upon work that was already done by brilliant developers at our sister institutions (NCSU and UNC) — and particular thanks goes out to Tim Shearer at UNC Libraries who provided us with the code that they are using on their Bento results page, which in turn was heavily influenced by the work done at NCSU Libraries.
Our approach includes using results from Summon, Endeca, Springshare, and Google. We’re building this as a Drupal module which will make it easy to integrate into our site. We’re also hosting the code on GitHub so others can gain from what we’ve learned — and to help make our future enhancements to the module even easier to implement.
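The module itself is Drupal (PHP) code, but the bento pattern it implements is easy to sketch: fan the user’s query out to each silo in parallel, keep only the top few hits from each, and record the total so the page can link to the silo for the full result set. Here is a hedged Python illustration; the fetch functions are stand-ins, not the real Summon, Endeca, Springshare, or Google APIs:

```python
from concurrent.futures import ThreadPoolExecutor

def bento_search(query, sources, per_box=3):
    """Query every source concurrently; keep the top hits and a total count."""
    def run(item):
        name, fetch = item
        hits = fetch(query)
        return name, {"top": hits[:per_box], "total": len(hits)}
    with ThreadPoolExecutor() as pool:
        return dict(pool.map(run, sources.items()))

# Stand-in fetchers returning canned hit lists (real ones would call APIs):
sources = {
    "articles": lambda q: [f"article {i} about {q}" for i in range(10)],
    "books":    lambda q: [f"book {i} about {q}" for i in range(5)],
}
results = bento_search("civil rights", sources)
print(results["articles"]["total"])   # -> 10
```

Each box on the results page then renders its "top" list with a "see all N results" link back to the silo, which is the heart of the bento approach.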
Our plan is to roll out Bento search in August, so stay tuned for the official launch announcement!
PS — as the 4th of July holiday is right around the corner, here are some interesting items from our digital collections related to Independence Day:
Fifty years ago, hundreds of student volunteers headed south to join the Student Nonviolent Coordinating Committee’s (SNCC) field staff and local people in their fight against white supremacy in Mississippi. This week, veterans of Freedom Summer are gathering at Tougaloo College, just north of Jackson, Mississippi, to commemorate their efforts to remake American democracy.
The 50th anniversary events, however, aren’t only for movement veterans. Students, young organizers, educators, historians, archivists, and local Mississippians make up the nearly one thousand people flocking to Tougaloo’s campus this Wednesday through Saturday. We here at Duke Libraries, as well as members of the SNCC Legacy Project Editorial Board, are in the mix, making connections with both activists and archivists about our forthcoming website, One Person, One Vote: The Legacy of SNCC and the Fight for Voting Rights.
This site will bring together material created in and around SNCC’s struggle for voting rights in the 1960s and pair it with new interpretations of that history by the movement veterans themselves. To pull this off, we’ll be drawing on Duke’s own collection of SNCC-related material, as well as incorporating the wealth of material already digitized by institutions like the University of Southern Mississippi, the Wisconsin Historical Society’s Freedom Summer Collection, the Mississippi Department of Archives and History, as well as others.
What becomes clear while circling through the panels, films, and hallway conversations at Freedom Summer 50th events is how the fight for voting rights is really a story of thousands of local people. The One Person, One Vote site will feature these everyday people – Mississippians like Peggy Jean Connor, Fannie Lou Hamer, and Vernon Dahmer, and SNCC workers like Hollis Watkins, Bob Moses, and Charlie Cobb. And the list goes on. It’s not every day that so many of these people come together under one roof, and we’re doing our share of listening to and connecting with the people whose stories will make up the One Person, One Vote site.
Many of us here at Duke have been excited about the Digital Public Library of America (DPLA) since their launch in April of 2013. DPLA’s mission is to bring together America’s cultural riches into one portal. Additionally, they provide a platform for accessing and sharing library data in technologically innovative and impactful ways via the DPLA API. If you are not familiar with DPLA, be sure to take a look at their website and watch their introductory video.
The North Carolina Digital Heritage Center (NCDHC) is our local service hub for DPLA, and we met with them to understand the requirements for contributing metadata as well as the nuts and bolts of exposing our records for harvesting. They have a system in place that is really easy for contributing libraries around the state, and we are very thankful for their efforts. On our side, we chose our first collection to share, updated rights statements for the items in that collection, and contacted NCDHC to let them know where to find our metadata (admittedly, these tasks involved a bit more nitty-gritty work than I am describing here, but it was still a relatively simple process).
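Service hubs typically harvest this kind of metadata over OAI-PMH, with records exposed as Dublin Core. As a hedged illustration of what the harvester sees, here is a Python sketch that parses titles out of a tiny hand-made ListRecords response; the namespaces are the real OAI-PMH and Dublin Core ones, but the record content is invented and the details of NCDHC’s actual setup may differ:

```python
import xml.etree.ElementTree as ET

# A hand-made fragment imitating an OAI-PMH ListRecords response:
SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Broadside advertising a public meeting</dc:title>
          <dc:rights>No known copyright restrictions</dc:rights>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
      "dc": "http://purl.org/dc/elements/1.1/"}

def harvested_titles(xml_text):
    """Pull every Dublin Core title out of a ListRecords response."""
    root = ET.fromstring(xml_text)
    return [t.text for t in root.findall(".//dc:title", NS)]

print(harvested_titles(SAMPLE))   # -> ['Broadside advertising a public meeting']
```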
In mid-June, NCDHC harvested metadata from our Broadsides and Ephemera digital collection and, shortly thereafter, voilà: the records were available through DPLA!
We plan to continue making more collections available to DPLA, but are still selecting materials. What collections do you think we should share? Let us know in the comments below or through Twitter or Facebook.
Thanks again to NCDHC for the wonderful work they do in helping us and other libraries across North Carolina participate in the ambitious mission of the Digital Public Library of America!
Notes from the Duke University Libraries Digital Projects Team