My recent posts have touched on endangered analog audio formats (open reel tape and compact cassette) and the challenges involved in digitizing and preserving them. For this installment, we’ll enter the dawn of the digital and Internet age and take a look at the first widely available consumer digital audio recording format: the DAT (Digital Audio Tape).
The DAT was developed by consumer electronics juggernaut Sony and introduced to the public in 1987. While similar in appearance to the familiar cassette and also utilizing magnetic tape, the DAT was slightly smaller and only recorded on one “side.” It boasted lossless digital encoding at 16 bits and variable sampling rates maxing out at 48 kHz, better than the 44.1 kHz offered by Compact Discs. During the window of time before affordable hard disk recording (roughly, the 1990s), the DAT ruled the world of digital audio.
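Those specs translate into a surprisingly modest data rate by today’s standards. As a back-of-the-envelope sketch (assuming DAT’s standard two-channel stereo mode, which is not stated explicitly above):

```python
# Data rate for DAT's highest-quality mode: 16-bit samples at 48 kHz.
# Two channels (stereo) is an assumption based on DAT's standard mode.
bits_per_sample = 16
sample_rate = 48_000   # Hz
channels = 2

bits_per_second = bits_per_sample * sample_rate * channels
print(f"{bits_per_second / 1e6:.3f} Mbit/s")       # 1.536 Mbit/s
megabytes_per_hour = bits_per_second / 8 * 3600 / 1e6
print(f"~{megabytes_per_hour:.0f} MB per hour")    # ~691 MB per hour
```

Roughly 691 MB per hour of stereo audio: trivial for a modern hard drive, but enormous in an era when a whole hard disk might hold a few hundred megabytes, which is part of why tape made sense.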
The format was quickly adopted by the music recording industry, allowing for a fully digital signal path through the recording, mixing, and mastering stages of CD production. Due to its portability and sound quality, DAT was also enthusiastically embraced by field recordists, oral historians & interviewers, and live music recordists (AKA “tapers”):
[Conway, Michael A., “Deadheads in the Taper’s section at an outside venue,” Grateful Dead Archive Online, accessed October 10, 2014, http://www.gdao.org/items/show/834556.]
However, the format never caught on with the public at large, partially due to the cost of the players and the fact that few albums of commercial music were issued on DAT [bonus trivia question: what was the first popular music recording to be released on DAT? see below for answer]. In fact, the recording industry actively sought to suppress public availability of the format, believing that the ability to make perfect digital copies of CDs would lead to widespread piracy and bootlegging of their product. The Recording Industry Association of America (RIAA) lobbied against the DAT format and attempted to impose restrictions and copyright detection technology on the players. Once again (much like the earlier brouhaha over cassette tapes and the subsequent battle over MP3s and file sharing), “home taping” was supposedly killing music.
By the turn of the millennium, CD burning technology had become fairly ubiquitous and hard disk recording was becoming more affordable and portable. The DAT format slowly faded into obscurity, and in 2005, Sony discontinued production of DAT players.
In 2014, we are left with a decade’s worth of primary source audio tape (oral histories, interviews, concert and event recordings) that is quickly degrading and may soon be unsalvageable. The playback decks (and parts for them) are no longer produced and there are few technicians with the knowledge or will to repair and maintain them. The short-term answer to these problems is to begin stockpiling working DAT machines and doing the slow work of digitizing and archiving the tapes one by one. For example, the Libraries’ Jazz Loft Project Records collection consisted mainly of DAT tapes, and now exists as digital files accessible through the online finding aid: http://library.duke.edu/rubenstein/findingaids/jazzloftproject/. A long-term approach calls for a survey of library collections to identify the number and condition of DAT tapes, and then for prioritization of these items as it may be logistically impossible to digitize them all.
And now, the answer to our trivia question: in May 1988, post-punk icons Wire released The Ideal Copy on DAT, making it the first popular music recording to be issued on the new format.
Back in February 2014, we wrapped up the CCC project, a collaborative three-year IMLS-funded digitization initiative with our partners in the Triangle Research Libraries Network (TRLN). The full title of the project is a mouthful, but it captures its essence: “Content, Context, and Capacity: A Collaborative Large-Scale Digitization Project on the Long Civil Rights Movement in North Carolina.”
So how large is “large-scale”? By comparison, when the project kicked off in summer 2011, we had a grand total of 57,000 digitized objects available online (“published”), collectively accumulated through sixteen years of digitization projects. That number was 69,000 by the time we began publishing CCC manuscripts in June 2012. Putting just as many documents online in three years as we’d been able to do in the previous sixteen naturally requires a much different approach to creating digital collections.
Traditional approach: individual items identified during scanning; descriptive metadata applied to each item.
Large-scale approach: no item-level identification (entire folders scanned); archival description only (e.g., at the folder level).
CCC staff completed qualitative and quantitative evaluations of this large-scale digitization approach during the course of the project, ranging from conducting user focus groups and surveys to analyzing the impact on materials prep time and image quality control. Researcher assessments targeted three distinct user groups: 1) Faculty & History Scholars; 2) Undergraduate Students (in research courses at UNC & NC State); 3) NC Secondary Educators.
Ease of Use. Faculty and scholars, for the most part, found it easy to use digitized content presented this way. Undergraduates were more ambivalent, and secondary educators had the most difficulty.
To Embed or Not to Embed. In 2012, Duke was the only library presenting the image thumbnails embedded directly within finding aids along with a lightbox-style image navigator. Undergrads who used Duke’s interface found it easier to use than UNC’s or NC Central’s, and Duke’s collections had a higher rate of images viewed per folder than the other partners. UNC’s & NC Central’s interfaces now use a similar convention.
Potential for Use. Most users surveyed said they could indeed imagine themselves using digitized collections presented in this way in the course of their research. However, the approach falls short in meeting key needs for secondary educators’ use of primary sources in their classes.
Desired Enhancements. The top two most desired features by faculty/scholars and undergrads alike were 1) the ability to search the text of the documents (OCR), and 2) the ability to explore by topic, date, document type (i.e., things enabled by item-level metadata). PDF download was also a popular pick.
Impact on Duke Digitization Projects
Since the moment we began putting our CCC manuscripts online (June 2012), we’ve completed the eight CCC collections using this large-scale strategy, and an additional eight manuscript collections outside of CCC using the same approach. We have now cumulatively put more digital objects online using the large-scale method (96,000) than we have via traditional means (75,000). But in that time, we have also completed eleven digitization projects with traditional item-level identification and description.
We see the large-scale model for digitization as complementary to our existing practices: a technique we can use to meet the publication needs of some projects.
Do people actually use the collections when presented in this way? Some interesting figures:
Views / item in 2013-14 (traditional digital object; item-level description): 13.2
Views / item in 2013-14 (digitized image within finding aid; folder-level description): 1.0
Views / folder in 2013-14 (digitized folder view in finding aid): 8.5
It’s hard to attribute the usage disparity entirely to the publication method (they’re different collections, for one). But it’s reasonable to deduce (and unsurprising) that bypassing item-level description generally results in less traffic per item.
The takeaway: sometimes having interesting, important, and timely content available for use online matters more than the features enabled or the process by which it all gets there.
We’ll keep pushing ahead with evolving our practices for putting digitized materials online. We’ve introduced many recent enhancements, like fulltext searching, a document viewer, and embedded HTML5 video. Inspired by the CCC project, we’ll continue to enhance our finding aids to provide access to digitized objects inline for context (e.g., The Jazz Loft Project Records). Our TRLN partners have also made excellent upgrades to the interfaces to their CCC collections (e.g., at UNC, at NC State) and we plan, as usual, to learn from them as we go.
The audio tapes in the recently acquired Radio Haiti collection posed a number of digitization challenges. Some of these were discussed in this video produced by Duke’s Rubenstein Library:
In this post, I will use a short audio clip from the collection to illustrate some of the issues that we face in working with this particular type of analog media.
First, I present the raw digitized audio, taken from a tape labelled “Tambour Vaudou”:
As you can hear, there are a number of confusing and disorienting things going on there. I’ll attempt to break these down into a series of discrete issues that we can diagnose and fix if necessary.
Analog tape machines typically offer more than one speed for recording, meaning that you can change the rate at which the reels turn and the tape moves across the record or playback head. The faster the speed, the higher the fidelity of the result. On the other hand, faster speeds use more tape (which is expensive). Tape speed is measured in “ips” (inches per second). The tapes we work with were usually recorded at speeds of 3.75 or 7.5 ips, and our playback deck is set up to handle either of these. We preview each tape before digitizing to determine what the proper setting is.
In the audio example above, you can hear that the tape speed was changed at around 10 seconds into the recording. This accounts for the “spawn of Satan” voice you hear at the beginning. Shifting the speed in the opposite direction would have produced a “chipmunk voice” effect instead. This issue is usually easy to detect by ear. The solution in this case is to digitize the first 10 seconds at the faster speed (7.5 ips), then switch back to the slower playback speed (3.75 ips) for the remainder of the tape.
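Re-digitizing at the correct deck speed is the right fix, but it’s worth noting that a wrong-speed capture can also be corrected after the fact by resampling, since halving the playback speed exactly doubles the duration. Here’s a minimal sketch (assuming NumPy and a mono signal; a production tool would use a proper band-limited resampler rather than linear interpolation):

```python
import numpy as np

def correct_speed(audio: np.ndarray, speed_factor: float) -> np.ndarray:
    """Resample audio captured at the wrong tape speed.

    speed_factor > 1 speeds playback up, fixing a too-slow, deep-voiced
    capture (e.g. 7.5 ips material played back at 3.75 ips -> factor 2.0).
    Uses simple linear interpolation for illustration only.
    """
    n_out = int(len(audio) / speed_factor)
    old_idx = np.arange(len(audio))
    new_idx = np.linspace(0, len(audio) - 1, n_out)
    return np.interp(new_idx, old_idx, audio)

# A 2x speed-up halves the sample count (i.e., the duration) at a fixed rate.
tone = np.sin(np.linspace(0, 2 * np.pi * 440, 48_000))
fixed = correct_speed(tone, 2.0)
print(len(tone), len(fixed))  # 48000 24000
```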
Volume Level and Background Noise
The tapes we work with come from many sources and locations and were recorded on a variety of equipment by people with varying levels of technical knowledge. As a result, the audio can be all over the place in terms of fidelity and volume. In the audio example above, the volume jumps dramatically when the drums come in at around 00:10. Then you hear that the person making the recording gradually brings the level down before raising it again slightly. There are similar fluctuations in volume level throughout the audio clip. Because we are digitizing for archival preservation, we don’t attempt to make any changes to smooth out the sometimes jarring volume discrepancies across the course of a tape. We simply find the loudest part of the content, and use that to set our levels for capture. The goal is to get as much signal as possible to our audio interface (which converts the analog signal to digital information that can be read by software) without overloading it. This requires previewing the tape, monitoring the input volume in our audio software, and adjusting accordingly.
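The level-setting step described above amounts to finding the peak of the loudest passage and making sure it stays safely below the clipping point (0 dBFS). A rough sketch of that measurement on a digitized preview, assuming NumPy and samples normalized to the range -1.0 to 1.0:

```python
import numpy as np

def peak_dbfs(samples: np.ndarray) -> float:
    """Peak level in dB relative to full scale (0 dBFS = clipping point)."""
    peak = np.max(np.abs(samples))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

# Simulated capture preview: a tone peaking at half of full scale
preview = 0.5 * np.sin(np.linspace(0, 2 * np.pi * 100, 10_000))
level = peak_dbfs(preview)
print(f"peak: {level:.1f} dBFS")  # about -6.0 dBFS
if level > -3.0:
    print("reduce input gain to leave some headroom")
```

The -3 dBFS threshold here is just an illustrative safety margin, not a house standard.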
This recording happens to be fairly clean in terms of background noise, which is often not the case. Many of the oral histories that we work with were recorded in noisy public spaces or in homes with appliances running, people talking in the background, or the subject not in close enough proximity to the microphone. As a result, the content can be obscured by noise. Unfortunately there is little that can be done about this since the problem is in the recording itself, not the playback. There are a number of hum, hiss, and noise removal tools for digital audio on the market, but we typically don’t use these on our archival files. As mentioned above, we try to capture the source material as faithfully as possible, warts and all. After each transfer, we clean the tape heads and all other surfaces that the tape touches with a Q-tip and denatured alcohol. This ensures that we’re not introducing additional noise or signal loss on our end.
While cleaning the Radio Haiti tapes (as detailed in the video above), we discovered that many of the tapes were composed of multiple sections of tape spliced together. A splice is simply a place where two different pieces of audio tape are connected by a piece of sticky tape (much like the familiar Scotch tape that you find in any office). This may be done to edit together various content into a seamless whole, or to repair damaged tape. Unfortunately, the sticky tape used for splicing dries out over time, becomes brittle, and loses its adhesive qualities. In the course of cleaning and digitizing the Radio Haiti tapes, many of these splices came undone and had to be repaired before our transfers could be completed.
Our playback deck includes a handy splicing block that holds the tape in the correct position for this delicate operation. First I use a razor blade to clean up any rough edges on both ends of the tape and cut it to the proper 45-degree angle. The splicing block includes a groove that helps to make a clean and accurate cut. Then I move the two pieces of tape end to end, so that they are just touching but not overlapping. Finally I apply the sticky splicing tape (the blue piece in the photo below) and gently press on it to make sure it is evenly and fully attached to the audio tape. Now the reel is once again ready for playback and digitization. In the “Tambour Vaudou” audio clip above, you may notice three separate sections of content: the voice at the beginning, the drums in the middle, and the singing at the end. These were three pieces of tape that were spliced together on the original reel and that we repaired right here in the library’s Digital Production Center.
These are just a few of many issues that can arise in the course of digitizing a collection of analog open reel audio tapes. Fortunately, we can solve or mitigate most of these problems, get a clean transfer, and generate a high-quality archival digital file. Until next time…keep your heads clean, your splices intact, and your reels spinning!
The technology for digitizing analog videotape is continually evolving. Thanks to increases in data-transfer rates and hard-drive write speeds, as well as the availability of more powerful computer processors at lower price points, the Digital Production Center recently decided to upgrade its video digitization system. Funding for the improved technology was procured by Winston Atkins, Duke Libraries Preservation Officer. Of all the materials we work with in the Digital Production Center, analog videotape has one of the shortest lifespans. Thus, it is high on the list of the Library’s priorities for long-term digital preservation. Thanks, Winston!
Due to their innovative design, ease of use, and dominance within the video and filmmaking communities, we decided to go with a combination of products designed by Apple Inc. and Blackmagic Design. A new computer hardware interface recently adopted by Apple and Blackmagic, called Thunderbolt, allows the two companies’ products to work seamlessly together at an unprecedented data-transfer speed of 10 Gigabits per second, per channel. This is much faster than previously available interfaces such as Firewire and USB. Because video content incorporates an enormous amount of data, the improved data-transfer speed allows the computer to capture the video signal in real time, without interruption or dropped frames.
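Just how enormous is that amount of data? Here’s some rough arithmetic for a 10-bit uncompressed standard-definition capture. The frame size, chroma subsampling, and frame rate below are typical NTSC values assumed for illustration; the actual capture settings may differ:

```python
# Rough data rate for 10-bit uncompressed standard-definition video.
# 720x486, 4:2:2 chroma subsampling, and ~29.97 fps are assumed
# NTSC-typical values, not confirmed capture settings.
width, height = 720, 486
bits_per_pixel = 20          # 4:2:2 sampling at 10 bits per component
fps = 30000 / 1001           # NTSC frame rate, ~29.97 frames per second

bits_per_second = width * height * bits_per_pixel * fps
print(f"~{bits_per_second / 1e6:.0f} Mbit/s")        # ~210 Mbit/s
gb_per_hour = bits_per_second / 8 * 3600 / 1e9
print(f"~{gb_per_hour:.0f} GB per hour of tape")     # ~94 GB per hour
```

At roughly 210 Mbit/s sustained, it’s easy to see why older interfaces like Firewire struggled and why Thunderbolt’s bandwidth and fast external storage matter for dropped-frame-free capture.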
Our new data stream works as follows. Once a tape is playing on an analog videotape deck, the output signal travels through an Analog to SDI (serial digital interface) converter. This converts the content from analog to digital. Next, the digital signal travels via SDI cable through a Blackmagic SmartScope monitor, which allows for monitoring via waveform and vectorscope readouts. A veteran television engineer I know will talk to you for days regarding the physics of this, but, in layperson terms, these readouts let you verify the integrity of the color signal, and make sure your video levels are not too high (blown-out highlights) or too low (crushed shadows). If there is a problem, adjustments can be made via analog video signal processor or time-base corrector to bring the video signal within acceptable limits.
Next, the video content travels via SDI cable to a Blackmagic Ultrastudio interface, which converts the signal from SDI to Thunderbolt, so it can now be recognized by a computer. The content then travels via Thunderbolt cable to a 27″ Apple iMac utilizing a 3.5 GHz Quad-core processor and NVIDIA GeForce graphics processor. Blackmagic’s Media Express software writes the data, via Thunderbolt cable, to a G-Drive Pro external storage system as a 10-bit, uncompressed preservation master file. After capture, editing can be done using Apple’s Final Cut Pro or QuickTime Pro. Compressed MP4 access derivatives are then batch-processed using Apple’s Compressor software, or other utilities such as MPEG-Streamclip. Finally, the preservation master files are uploaded to Duke’s servers for long-term storage. Unless there are copyright restrictions, the access derivatives will be published online.
This past week, we were excited to be able to publish a rare 1804 manuscript copy of the Haitian Declaration of Independence in our digital collections website. We used the project as a catalyst for improving our document-viewing user experience, since we knew our existing platforms just wouldn’t cut it for this particular treasure from the Rubenstein Library collection. In order to present the declaration online, we decided to implement the open-source Diva.js viewer. We’re happy with the results so far and look forward to making more strides in our ability to represent documents in our site as the year progresses.
Challenges to Address
We have had two glaring limitations in providing access to digitized collections to date: 1) a less-than-stellar zoom & pan feature for images and 2) a suboptimal experience for navigating documents with multiple pages. For zooming and panning (see example), we use software called OpenLayers, which is primarily a mapping application. And for paginated items we’ve used two plugins designed to showcase image galleries, Galleria (example) and Colorbox (example). These tools are all pretty good at what they do, but we’ve been using them more as stopgap solutions for things they weren’t really created to do in the first place. As the old saying goes, when all you have is a hammer, everything looks like a nail.
Big (OR Zoom-Dependent) Things
Traditionally as we digitize images, whether freestanding or components of a multi-page object, at the end of the process we generate three JPG derivatives per page. We make a thumbnail (helpful in search results or other item sets), medium image (what you see on an item’s webpage), and large image (same dimensions as the preservation master, viewed via the ‘all sizes’ link). That’s a common approach, but there are several places where that doesn’t always work so well. Some things we’ve digitized are big, as in “shoot them in sections with a camera and stitch the images together” big. And we’ve got several more materials like this waiting in the wings to make available. A medium image doesn’t always do these things justice, but good luck downloading and navigating a giant 28MB JPG when all you want to do is zoom in a little bit.
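The thumbnail/medium/large derivative sizes described above all come down to the same operation: scaling a master image to fit inside a bounding box while preserving its aspect ratio. A small sketch (the function name, master dimensions, and size tiers are hypothetical, for illustration only):

```python
def fit_within(width: int, height: int, max_side: int) -> tuple[int, int]:
    """Scale (width, height) to fit inside a max_side square,
    preserving aspect ratio. Never upscales a smaller image."""
    scale = min(1.0, max_side / max(width, height))
    return round(width * scale), round(height * scale)

# Hypothetical derivative sizes for one large digitized page
master = (5600, 7900)
for name, max_side in [("thumbnail", 150), ("medium", 800)]:
    print(name, fit_within(*master, max_side))
```

The catch the paragraph above points out is the top tier: when the “large” derivative matches the preservation master’s dimensions, the file can be tens of megabytes, which is exactly the case where tiled, zoomable delivery beats a single giant JPG.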
Likewise, an object doesn’t have to be large to really need easy zooming to be part of the viewing experience. You might want to read the fine print on that newspaper ad, see the surgeon general’s warning on that billboard, or inspect the brushstrokes in that beautiful hand-painted glass lantern slide.
And finally, it’s not easy to anticipate the exact dimensions at which all our images will be useful to a person or program using them. Using our data to power an interactive display for a media wall? A mobile app? A slideshow on the web? You’ll probably want images that are different dimensions than what we’ve stored online. But to date, we haven’t been able to provide ways to specify different parameters (like height, width, and rotation angle) in the image URLs to help people use our images in environments beyond our website.
We do love our documentary photography collections, but a lot of our digitized objects are represented by more than just a single image. Take an 11-page piece of sheet music or a 127-page diary, for example. Those aren’t just sequences or collections of images. Their paginated orientation is pretty essential to their representation online, but a lot of what characterizes those materials is unfortunately lost in translation when we use gallery tools to display them.
The Intersection of (Big OR Zoom-Dependent) AND Paginated
Here’s where things get interesting and quite a bit more complicated: when zooming, panning, page navigation, and system performance are all essential to interacting with a digital object. There are several tools out there that support these various aspects, but very few that do them all AND do them well. We knew we needed something that did.
Our Solution: Diva.js
Setting up Diva.js required us to add a few new pieces to our infrastructure. The most significant was an image server (in our case, IIPImage) that could 1) deliver parts of a digital image upon request, and 2) deliver complete images at whatever size is requested via URL parameters.
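Both of those delivery modes work through IIPImage’s CGI-style query interface: `FIF` selects the source image, `WID` requests a server-side resize, `CVT` sets the output format, and `JTL` fetches a single tile by resolution level and tile index. The parameters below follow the IIP protocol, but the server and image paths are hypothetical:

```python
# Hypothetical server endpoint and image path, for illustration
BASE = "https://example.org/fcgi-bin/iipsrv.fcgi"
IMAGE = "/images/declaration/page_0001.tif"

# 1) Whole image, resized server-side to 800px wide, converted to JPEG
full = f"{BASE}?FIF={IMAGE}&WID=800&CVT=jpeg"

# 2) A single 256x256 tile: JTL takes a resolution level and tile index
tile = f"{BASE}?FIF={IMAGE}&JTL=4,17"

print(full)
print(tile)
```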
Our Interface: How it Works
By default, we present a document in our usual item page template that provides branding, context, and metadata. You can scroll up and down to navigate pages, use Page Up or Page Down keys, or enter a page number to jump to a page directly. There’s a slider to zoom in or out, or alternatively you can double-click to zoom in / Ctrl-double-click to zoom out. You can toggle to a grid view of all pages and adjust how many pages to view at once in the grid. There’s a really handy full-screen option, too.
It’s optimized for performance via AJAX-driven “lazy loading”: only the page of the document that you’re currently viewing has to load in your browser, and likewise only the visible part of that page image in the viewer must load (via square tiles). You can also download a complete JPG for a page at the current resolution by clicking the grey arrow.
We extended Diva.js by building a synchronized fulltext pane that displays the transcript of the current page alongside the image (and beneath it in full-screen view). That doesn’t come out-of-the-box, but Diva.js provides some useful hooks into its various functions to enable developing this sort of thing. We also slightly modified the styles.
Behind the scenes, we have pyramid TIFF images (one for each page), served up as JPGs by IIPImage server. These files comprise arrays of 256×256 JPG tiles for each available zoom level for the image. Let’s take page 1 of the declaration for example. At zoom level 0 (all the way zoomed out), there’s only one image tile: it’s under 256×256 pixels; level 1 is 4 tiles, level 2 is 12, level 3 is 48, level 4 is 176. The page image at level 5 (all the way zoomed in) includes 682 tiles (example of one), which sounds like a lot, but then again the server only has to deliver the parts that you’re currently viewing.
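Those tile counts fall out of simple arithmetic: each zoom level halves the previous level’s pixel dimensions, and each level is carved into 256×256 tiles. A sketch of that calculation (the master dimensions below are hypothetical values chosen to be consistent with the counts quoted above):

```python
import math

def tiles_per_level(width: int, height: int, tile: int = 256) -> list[int]:
    """Tile counts at each zoom level of an image pyramid, where level 0
    is fully zoomed out and each level doubles the previous dimensions."""
    max_level = math.ceil(math.log2(max(width, height) / tile))
    counts = []
    for level in range(max_level + 1):
        scale = 2 ** (max_level - level)
        w = math.ceil(width / scale)
        h = math.ceil(height / scale)
        counts.append(math.ceil(w / tile) * math.ceil(h / tile))
    return counts

# Hypothetical master dimensions that reproduce the quoted tile counts
print(tiles_per_level(5600, 7900))  # [1, 4, 12, 48, 176, 682]
```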
Every item using Diva.js also needs to load a JSON stream including the dimensions for each page within the document, so we had to generate that data. If there’s a transcript present, we store it as a single HTML file, then use AJAX to dynamically pull in the part of that file that corresponds to the currently-viewed page in the document.
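Generating that per-page dimensions data is straightforward once the page images have been measured. A simplified sketch follows; note that the field names and layout here are illustrative, not Diva.js’s actual measurement schema, and the dimensions are made up:

```python
import json

# Hypothetical page dimensions (width, height) for a short document;
# in practice these would be read from the image files themselves.
pages = [(5600, 7900), (5590, 7880), (5610, 7905)]

# Simplified manifest structure, not the real Diva.js JSON format
manifest = {
    "item_title": "Haitian Declaration of Independence (1804 copy)",
    "pgs": [{"w": w, "h": h} for (w, h) in pages],
}
print(json.dumps(manifest, indent=2))
```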
Diva.js & IIPImage Limitations
It’s a good interface, and is the best document representation we’ve been able to provide to date. Yet it’s far from perfect. There are several areas that are limiting or that we want to explore more as we look to make more documents available in the future.
Out of the box, Diva.js doesn’t support page metadata, transcriptions, or search & retrieval within a document. We do display a synchronized transcript, but there’s currently no mapping between the text and the location within each page where each word appears, nor can you perform a search and discover which pages contain a given keyword. Other folks using Diva.js are working on robust applications that handle these kinds of interactions, but the degree to which they must customize the application is high. See, for example, the Salzinnes Antiphonal, a 485-page liturgical manuscript with text and music, or a prototype for the Liber Usualis, a 2,000+ page manuscript using optical music recognition to encode melodic fragments.
Diva.js also has discrete zooming, which can feel a little jarring when you jump between zoom levels. It’s not the smooth, continuous zoom experience that is becoming more commonplace in other viewers.
With the IIPImage server, we’ll likely re-evaluate using Pyramid TIFFs vs. JPEG2000s to see which file format works best for our digitization and publication workflow. In either case, there are several compression and caching variables to tinker with to find an ideal balance between image quality, storage space required, and system performance. We also discovered that the IIP server unfortunately strips out the images’ ICC color profiles when it delivers JPGs, so users may not be getting a true-to-form representation of the image colors we captured during digitization.
Launching our first project using Diva.js gives us a solid jumping-off point for expanding our ability to provide useful, compelling representations of our digitized documents online. We’ll assess how well this same approach would scale to other potential projects and in the meantime keep an eye on the landscape to see how things evolve. We’re better equipped now than ever to investigate alternative approaches and complementary tools for doing this work.
We’ll also engage more closely with our esteemed colleagues in the Duke Collaboratory for Classics Computing (DC3), who are at the forefront of building tools and services in support of digital scholarship. Well beyond supporting discovery and access to documents, their work enables a community of scholars to collaboratively transcribe and annotate items (an incredible–and incredibly useful–feat!). There’s a lot we’re eager to learn as we look ahead.
The Digital Production Center at the Perkins Library has a clearly stated mission to “create digital captures of unique, valuable, or compelling primary resources for the purpose of preservation, access, and publication.” Our mission statement goes on to say, “Our operating principle is to achieve consistent results of a measurable quality. We plan and perform our work in a structured and scalable way, so that our results are predictable and repeatable, and our digital collections are uniform.”
That’s a mouthful!
What it means is that the images have to be consistent, not only from image to image within a collection but also from collection to collection over time. And if that isn’t complex enough, this has to be done using many different capture devices, each of which has its own characteristics and records and reproduces color in its own way.
How do we produce consistent images?
There are many variables to consider when solving the puzzle of “consistent results of a measurable quality.” First, we start with the viewing environment, then move to monitor calibration and profiling, and end with capture device profiling. All of these variables play a part in producing consistent results.
Full spectrum lighting is used in the Digital Production Center to create a neutral environment for viewing the original material. Lighting that is not full spectrum often has a blue, magenta, green or yellow color shift, which we often don’t notice because our eyes are able to adjust effortlessly. In the image below you can see the difference between tungsten lighting and neutral lighting.
Our walls are also painted 18 percent gray, a neutral tone, so that no color is reflected from the walls onto the original material while we compare it to the digital image.
Now that we have a neutral viewing environment, the next variable to consider is the computer monitors used to view our digitized images. We use a spectrophotometer (straight out of the Jetsons, right?) made by X-Rite to measure the color accuracy, luminance, and contrast of the monitor. Attached to the computer screen, the spectrophotometer reads the monitor’s brightness (luminance), contrast, white point, and gamma, and the software makes adjustments for optimal viewing. This is called monitor calibration. The software then displays a series of color patches with known RGB values, which the spectrophotometer measures, recording the difference from each known value. The result is an ICC display profile. This profile is saved to the operating system and is used to translate colors from what the monitor natively produces to a more accurate color representation.
Now our environment is neutral and our monitor is calibrated and profiled. The next step in the process is to profile your capture device, whether it is a high-end digital scan back like the Phase One or BetterLight or an overhead scanner like a Zeutschel. From Epson flatbed scanners to Nikon slide scanners, all of these devices can be calibrated in the same way. With all of the auto settings on your scanner turned off, a color target is digitized on the device you wish to calibrate. The swatches on the color target are known values, similar to the series of color patches used for profiling the monitor. The digitized target is fed to the profiling software. Each patch is measured and compared against its known value. The differences are recorded and the result is an ICC device profile.
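The “difference” the profiling software records between a measured patch and its known value is usually expressed as delta-E. The simplest version, CIE76, is just the Euclidean distance between two colors in CIELAB space. A sketch with hypothetical patch readings:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space.
    A delta-E around 1 is roughly the threshold of a visible difference."""
    return math.dist(lab1, lab2)

# Hypothetical patch readings: (L*, a*, b*) reference vs. measured
reference = (50.0, 0.0, 0.0)     # a mid-gray patch on the target
measured = (50.5, 1.0, -0.5)     # what the device actually captured
print(f"delta-E: {delta_e_cie76(reference, measured):.2f}")  # delta-E: 1.22
```

Later formulas (CIE94, CIEDE2000) weight the terms to better match human perception, but the principle is the same: small numbers mean the device is reproducing the target faithfully.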
Now that we have a neutral viewing environment for viewing the original material, our eyes don’t need to compensate for any color shift from the overhead lights or reflection from the walls. Our monitors are calibrated/profiled so that the digitized images display correctly and our devices are profiled so they are able to produce consistent images regardless of what brand or type of capture device we use.
During our daily workflow, we use a GretagMacbeth ColorChecker to measure the output of the capture devices each day before we begin digitizing material, verifying that the device is still working properly.
All of this work is done before we push the “scan” button to ensure that our results are predictable and repeatable, measurable and scalable. Amen.
A group from Duke Libraries recently visited Dr. Maurizio Forte’s Digital Archaeology Initiative (a.k.a. “Dig@Lab”) to learn more about digital imaging of three-dimensional objects and to explore opportunities for collaboration between the lab and the library.
Dr. Forte (a Professor of Classical Studies, Art, and Visual Studies) and his colleagues were kind enough to demonstrate how they are using 3D imaging technology to “dig for information” in simulated archaeological sites and objects. Their lab is a fascinating blend of cutting-edge software and display interfaces, such as the Unity 3D software being used in the photo above, and consumer video gaming equipment (recognize that joystick?).
Using the goggles and joystick above, we took turns exploring the streets and buildings of the ancient city of Regium Lepidi in Northern Italy. The experience was extremely immersive and somewhat disorienting, from getting lost in narrow alleys to climbing winding staircases for an overhead view of the entire landscape. The feeling of vertigo from the roof was visceral. None of us took the challenge to jump off of the roof, which apparently you can do (and which is also very scary according to the lab researchers). After taking the goggles off, I felt a heaviness and solidity return to my body as I readjusted to the “real world” around me, similar to the sensation of gravity after stepping off a trampoline.
The Libraries and Digital Projects team look forward to working more with Dr. Forte and bringing 3D imaging into our digital collections.
More information about the lab’s work can be found at:
Recently, I digitized 123 videotapes from the Duke University Living History Program. Beginning in the early 1970’s, Duke University faculty members conducted interviews with prominent world leaders, politicians and activists. The first interviews were videotaped in Perkins Library at a time when video was groundbreaking technology, almost a decade before consumer-grade VCRs started showing up in people’s living rooms. Some of the interviews begin with a visionary introduction by Jay Rutherfurd, who championed the program:
“At the W. R. Perkins library, in Duke University, we now commit this exciting experiment in electronic journalism into your future. May it illuminate well, educate wisely, and relate meaningfully, for future generations.”
Clearly, the “future” that Mr. Rutherfurd envisioned has arrived. Thanks to modern technology, we can now create digital surrogates of these videotaped interviews for long-term preservation and access. The subjects featured in this collection span a variety of generations, nationalities, occupations and political leanings. Interviewees include Les Aspin, Ellsworth Bunker, Dr. Samuel DuBois Cook, Joseph Banks Rhine, Jesse Jackson, Robert McNamara, Dean Rusk, King Mihai of Romania, Terry Sanford, Judy Woodruff, Angier Biddle Duke and many more. The collection also includes videotapes of speeches given on the Duke campus by Ronald Reagan, Abbie Hoffman, Bob Dole, Julian Bond and Elie Wiesel.
Many of the interviews were recorded on 3/4″ videotape, also called “U-matic.” Invented by Sony in 1969, the U-matic format was the first videotape to be housed inside a plastic cassette for portability, and it would soon replace film as the primary television news-gathering format. Unfortunately, most U-matic tapes have not aged well. After decades in storage, many of the videotapes in our collection now have sticky-shed syndrome, a condition in which the oxide that holds the visual content becomes gummy in texture and literally flakes off the polyester tape base. When a videotape has sticky-shed, not only will it fail to play correctly, but the residue can also clog the tape heads in the U-matic playback deck, which then transfers the contaminant to other tapes played afterwards in the same deck. A U-matic videotape player in good working order is now an obsolete collector’s item, and our tapes are fragile, so we came up with a solution: throw those tapes in the oven!
At first that may sound reckless, but baking audio and videotapes at relatively low temperatures for an extended period of time is a well-tested method for minimizing the effects of sticky-shed syndrome. The Digital Production Center recently acquired a scientific oven, and after initial testing, we baked each Duke Living History U-matic videotape at 52° Celsius (about 125° Fahrenheit) for about 10 hours. Baking the videotapes temporarily removed the moisture that had accumulated in the binder and made them playable for digitization. About 90% of our U-matic tapes played well after baking; many of them were unplayable beforehand.
After giving the videotapes time to cool down, we digitize each tape, in real time, as an uncompressed file (.mov) for long-term preservation. Afterwards, we make a smaller, compressed version (.mp4) of the same recording, which is our access copy. Our U-matic decks are housed in an efficiently-designed rack system, which also includes other obsolete videotape formats like VHS, Betacam and Hi8. Centralized audio and video routers allow us to quickly switch between formats while ensuring a clean, balanced and accurate conversion from analog to digital. Combining the art of analog tape baking with modern video digitization, the Digital Production Center is able to rescue the content from the videotapes, before the magnetic tape ages and degrades any further. While the U-matic tapes are nearing the end of their life-span, the digital surrogates will potentially last for centuries to come. We are able to benefit from Mr. Rutherfurd’s exciting experiment into our future, and carry it forward… into your future. May it illuminate well, educate wisely, and relate meaningfully, for future generations.
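The master-to-access transcoding step described above could be scripted, for example with ffmpeg. Here is a minimal Python sketch, assuming ffmpeg is installed; the filenames are illustrative, and the H.264/AAC encoding settings are a common choice for .mp4 access copies rather than the post’s documented settings:

```python
# Build an ffmpeg command that transcodes an uncompressed .mov
# preservation master into a smaller .mp4 access copy.
# Codec choices (H.264 video, AAC audio) are assumptions, not
# necessarily the Digital Production Center's actual settings.
import subprocess

def access_copy_cmd(master, access):
    """Return an ffmpeg command list for making an access copy."""
    return [
        "ffmpeg",
        "-i", master,       # uncompressed preservation master (.mov)
        "-c:v", "libx264",  # H.264 video for broad playback support
        "-crf", "23",       # constant-quality encoding
        "-c:a", "aac",      # AAC audio
        "-b:a", "192k",
        access,             # compressed access copy (.mp4)
    ]

cmd = access_copy_cmd("interview_master.mov", "interview_access.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually transcode
```

Because the preservation master is captured in real time, a batch script like this lets the compressed access copies be generated unattended afterwards.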
I have worked in the Digital Production Center since March of 2005 and I’ve seen a lot of digital collections published in my time here. I have seen so many images that sometimes it is difficult to say which collection is my favorite but the Sidney D. Gamble Photographs have always been near the top.
The Sidney D. Gamble Photographs are an amazing collection of black and white photographs of daily life in China taken between 1908 and 1932. These documentary-style images of urban and rural life, public events, architecture, religious statuary, and the countryside really resonate with me for their unposed, moment-in-time feel. Recently the Digital Collections Implementation Team was tasked with digitizing a subset of lantern slides from this collection. What is a lantern slide, you might ask?
A lantern slide is a photographic transparency which is glass-mounted and often hand-colored for projection by a “magic lantern.” The magic lantern was the earliest form of slide projector which, in its earliest incarnation, used candles to project painted slides onto a wall or cloth screen. The projectionist was often hidden from the audience, making the show seem all the more magical. By the time the 1840s rolled around, photographic processes had been developed by William and Frederick Langenheim that enabled a glass plate negative to be printed onto another glass plate by a contact method, creating a positive. These positives were then painted in the same fashion that the earlier slides were painted (think Kodachrome). The magic lantern predates the school slate and the chalkboard for use in a classroom.
After working with and enjoying the digitization of the nitrate negatives from the Sidney D. Gamble Photographs, it has been icing on the cake to work with the lantern slides from the same collection so many years later. While the original black and white images resonate with me, the lantern slides have added a whole new dimension to the experience. On one hand, the black and white images lend a sense of history and times past; on the other, the vivid colors of the lantern slides draw me into the scene as if it were the present.
I am in awe of the amount of work and the variety of skill sets required to create a collection such as this. Sidney D. Gamble, an amateur photographer, trekked across China over four trips spanning 24 years, photographing and processing nitrate negatives in the field without a traditional darkroom, all the while taking notes and labeling the negatives. He then came home, created the glass plate positives, and hand-colored over 500 of them. For an “amateur photographer,” Gamble’s images are striking. The type of camera he used takes skill and knowledge to produce a reasonably correct exposure. Processing the film is technically challenging in a traditional darkroom and much more difficult in the field. Taking enough notes while shooting, processing and traveling so that they make sense as a collection is a feat in itself. The transfer from negative film to positive glass plates on such a scale is a tedious and technical venture. Hand-painting all of the slides takes additional skill and tools. All of this makes digitization of the material look like child’s play.
An inventory of the hand-colored slides was created before digitization began. Any hand-colored slides with existing black and white negatives were identified so the two could be displayed together online. A color-balanced light box was used to illuminate the lantern slides, and a Phase One P65 reprographic camera was used in conjunction with a precision Kaiser copy stand to capture them. All of the equipment used in the Digital Production Center is color-calibrated and profiled so that consistent results can be achieved from capture to capture, removing the majority of the subjective decision-making from the digitization process. Sidney D. Gamble had many variables to contend with to produce the lantern slides, much like the Digital Collections Implementation Team deals with many variables when publishing a digital collection. From conservation of the physical material, digitization, metadata, and interface design to the technology used to deliver the images online and the servers and network that connect everything to make it happen, there are plenty of variables. They are just different variables.
Nowadays we photograph and share the minutiae of our lives. When Sidney Gamble took his photographs, he had to be much more deliberate. I appreciate his deliberateness as much as I appreciate all the people involved in publishing collections. I look forward to the publication of the Sidney D. Gamble lantern slides in the near future and hope you will enjoy this collection as much as I have over the years.
The 310 oral histories that comprise the newly published additions to the Behind the Veil digital collection were originally recorded in the 1990’s to the now (nearly) obsolete compact cassette format—what were commonly called “tapes”. The beauty of the compact cassette format was that it was small and portable (especially compared to the earlier reel-to-reel tape format), relatively durable due to its hard plastic outer shell, and most of all—could easily be recorded to at home by non-professional users. This made it perfect for oral historians who needed to be able to record interviews in the field at low cost with minimal hassle.
Unfortunately, the compact cassette format hasn’t aged particularly well. Due to cheap materials, poor storage conditions, and normal mechanical wear and tear, many of these tapes are already borderline unplayable, a short 40 years after the format’s introduction. This introduces a number of challenges to our process of converting the audio information on the tapes into a digital file format that can easily be accessed online by patrons. I won’t exhaustively detail our digitization process here, but only touch on a few issues and how we dealt with them.
Physical degradation and damage to tapes: We visually inspected each tape prior to digitization. Any that were visibly broken or had twisted or jammed tape were rehoused in new outer shells. At least with this collection, rehousing allowed us to successfully play back all of the tapes.
Poor quality of original recordings: We also did a brief audio inspection of each tape before digitization. This allowed us to identify issues with audio quality. We found that the interviews were done in a wide variety of locations, often with background traffic, television, appliance and conversation noise bleeding into the recording. There was no easy fix for this, as these issues are inherent in the recording. Our solution was to provide the best possible playback on a high-quality cassette deck, a direct and balanced signal path, and high-quality analog-to-digital conversion at the preservation standard of 24 bits, 96 kHz. This ensured that the digital copy faithfully reproduced the audio material on the cassette, warts and all.
Other errors in original recordings: There were some issues in the original recordings that we opted to fix via digital editing or processing in our files for patron use (while retaining the unaltered preservation files).
In cases where there was a significant gap of silence in the middle of a tape, we edited out the silence for continuity’s sake.
In cases where there were loud and abrasive clicks, pops, or microphone noise at the beginning or end of a tape side, we edited out these noises.
Several tapes were apparently recorded at the wrong speed, resulting in a “chipmunk voice” effect. I used a Speed/Pitch function in our audio capture software to electronically slow these files down so that they play back intelligibly and as intended.
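The speed correction in that last item can be illustrated with a small sketch. This is not our capture software’s actual Speed/Pitch function, just a minimal stand-in using Python’s standard-library `wave` module: rewriting a WAV file’s declared sample rate (here, halving an assumed doubled rate) slows both speed and pitch without altering the samples themselves, which is exactly the effect needed when a tape was recorded at half speed:

```python
# Minimal sketch of a speed/pitch fix for a "chipmunk voice" capture.
# Scaling the declared sample rate slows playback and lowers pitch
# together, mimicking a varispeed correction. Hypothetical example,
# not the actual function used in our capture software.
import wave

def restore_speed(src, dst, factor=0.5):
    """Copy a WAV file, scaling its declared sample rate by `factor`."""
    with wave.open(src, "rb") as reader:
        params = reader.getparams()
        frames = reader.readframes(params.nframes)
    with wave.open(dst, "wb") as writer:
        writer.setnchannels(params.nchannels)
        writer.setsampwidth(params.sampwidth)
        # factor=0.5 halves the rate: a file captured at double speed
        # now plays back at the intended speed and pitch.
        writer.setframerate(int(params.framerate * factor))
        writer.writeframes(frames)
```

The audio data is copied untouched; only the header’s sample rate changes, so the fix is lossless and reversible.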
Another challenge, common to all time-based analog media, is the cassette tape’s “real-time” nature. Unlike a digital file that can be copied nearly instantaneously, a 90-minute cassette tape actually takes 90 minutes to make a digital copy. Currently we run two cassette decks simultaneously, allowing us to double our throughput.
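For a sense of scale, the real-time constraint and the preservation format imply some simple arithmetic. A back-of-the-envelope sketch, assuming a hypothetical 90-minute average tape length (the 310-tape count comes from the collection described above):

```python
# Back-of-the-envelope math for real-time cassette digitization.
tapes = 310
minutes_per_tape = 90   # assumed average length (hypothetical)
decks = 2               # two decks running simultaneously

total_hours = tapes * minutes_per_tape / 60 / decks  # 232.5 hours

# Storage for one 24-bit / 96 kHz stereo preservation master:
# 96,000 samples/s x 3 bytes/sample x 2 channels
bytes_per_second = 96_000 * 3 * 2
gb_per_tape = bytes_per_second * minutes_per_tape * 60 / 1e9  # ~3.1 GB
```

Even with two decks running, a collection this size keeps an operator at the machines for weeks, which is why throughput matters so much for time-based media.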
As you can see, audio cassette digitization is more than just a matter of pressing “play”!