Category Archives: Digitization Expertise

Multispectral Imaging Through Collaboration

I am sure you have all been following the Library’s exploration into Multispectral Imaging (MSI) here on Bitstreams, Preservation Underground and the News & Observer.  Previous posts have detailed our collaboration with R.B. Toth Associates and the Duke Eye Center, the basic process and equipment, and the wide range of departments that could benefit from MSI.  In early December of last year (that sounds like it was so long ago!), we finished readying the room for MSI capture, installed the equipment, and went to MSI boot camp.

Obligatory before and after shot. In the bottom image, the new MSI system is in the background on the left, with the full spectrum system that we have been using for years on the right. Other additions to the room are blackout curtains, neutral gray walls and black ceiling tiles, all to control light spill between the two camera systems. Full spectrum overhead lighting and a new tile floor, both standard for an imaging lab in the Library, were also installed.

Well, boot camp came to us. Meghan Wilson, an independent contractor who has worked with R.B. Toth Associates for many years, started our training with an overview of the equipment and the basic science behind it. She covered the different lighting schemes and when they should be used. She explained MSI applications for identifying resins, adhesives and pigments, and how to use UV lighting and filters to expose obscured text. We quickly went from talking to doing. As with any training session worth its salt, things went awry right off the bat (not Meghan’s fault). We had powered up the equipment, but the camera would not communicate with the software and the lights would not fire when the shutter was triggered. This was actually a good experience, because we had to troubleshoot on the spot and figure out what was going on together as a team. It turns out that there are six different pieces of equipment that have to be powered up in a specific sequence in order for the system to communicate properly (tee up Apollo 13 soundtrack). Once we got the system up and running, we took turns driving the software and hardware to capture a number of items that we had pre-selected. This is an involved process that produces a large set of files, which are eventually assembled into an image stack that can be manipulated using specialized software. When all is said and done, files have been converted, cleaned, flattened and manipulated, and variations produced, leaving somewhere in the neighborhood of 300 files. Whoa!

This is not your parents’ point and shoot—not the room, the lights, the curtains, the hardware, the software, the price tag, none of it. But it is different in another, more important way too. This process is team-driven and interdisciplinary. Our R&D working group is diverse and includes representatives from the following library departments.

  • The Digital Production Center (DPC) has expertise in high-end, full spectrum imaging for cultural heritage institutions, along with deep knowledge of the camera and lighting systems involved in MSI and of the storage, naming and management of large sets of files with complex relationships.
  • The Duke Collaboratory for Classics Computing (DC3) offers a scholarly and research perspective on papyri, manuscripts, etc., as well as experience with MSI and other imaging modalities.
  • The Conservation Lab brings expertise in the Libraries’ collections and a deep understanding of the materiality and history of the objects we are imaging.
  • Duke Libraries’ Data Visualization Services (DVS) has expertise in the processing and display of complex data.
  • The Rubenstein Library’s Collection Development brings a deep understanding of the collections, provenance and history of materials, and valuable contacts with researchers near and far.

To get the most out of MSI we need all of those skills and perspectives. What MSI really offers is the ability to ask—and we hope answer—strings of good questions. Is there ink beneath that paste-down or paint? Is this a palimpsest? What text is obscured by that stain or fire-damage or water damage? Can we recover it without having to intervene physically? What does the ‘invisible’ text say and what if anything does this tell us about the object’s history? Is the reflectance signature of the ink compatible with the proposed date or provenance of the object? That’s just for starters. But you can see how even framing the right question requires a range of perspectives; we have to understand what kinds of properties MSI is likely to illuminate, what kinds of questions the material objects themselves suggest or demand, what the historical and scholarly stakes are, what the wider implications for our and others’ collections are, and how best to facilitate human interface with the data that we collect. No single person on the team commands all of this.

Working in any large group can be a challenge. But when it all comes together, it is worth it. Below are two renderings of a page from Jantz 723: one processed as a black and white image, and the other a Principal Component Analysis produced from the MSI capture, processed using ImageJ and a set of tools created by Bill Christens-Barry of R.B. Toth Associates, with false color applied in Photoshop. Using MSI we were able to better reveal this watermark, which had previously been obscured.

Jantz 723
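If you are curious what the PCA step looks like in practice, here is a minimal Python sketch of principal component analysis on a registered image stack. This is not the team’s actual ImageJ/Photoshop pipeline, just the general technique; the file names, band count, and use of scikit-learn are my own assumptions.

```python
# Illustrative sketch: PCA on a multispectral image stack, loosely
# analogous to the ImageJ-based workflow described above.
import numpy as np
from PIL import Image
from sklearn.decomposition import PCA

# Load a stack of registered monochrome captures, one per wavelength.
paths = [f"band_{i:02d}.tif" for i in range(12)]  # hypothetical files
stack = np.stack([np.asarray(Image.open(p), dtype=np.float64) for p in paths])

bands, height, width = stack.shape
pixels = stack.reshape(bands, -1).T  # one row per pixel, one column per band

# Each principal component is a weighted combination of the bands; faint
# features (e.g. a watermark) often separate out into one component.
pca = PCA(n_components=3)
components = pca.fit_transform(pixels).T.reshape(3, height, width)

# Rescale each component to 0-255 for viewing as a grayscale image.
for i, comp in enumerate(components):
    rng = comp.max() - comp.min()
    norm = (comp - comp.min()) / (rng if rng else 1.0) * 255.0
    Image.fromarray(norm.astype(np.uint8)).save(f"pca_component_{i}.tif")
```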

I think we feel like 16-year-old kids with newly minted drivers’ licenses who have never driven a car on the highway or out of town. A whole new world has just opened up to us, and we are really excited and a little apprehensive!

What now?

Practice, experiment, document, refine. Over the next 12 (16? 18?) months we will work together to hone our collective skills: driving the system, deepening our understanding of the scholarly, conservation, and curatorial use cases for the technology, optimizing workflows, documenting best practices, and getting a firm grip on the scale, pace, and cost of what we can do. The team will assemble monthly, practice what we have learned, and lean on each other’s expertise to develop a solid workflow that includes the right expertise at the right time. We will select a wide variety of materials so that we can develop a feel for how far we can push the system and what we can expect day to day. Through all of this practice, workflows, guidelines, policies and expectations will come into sharper focus.

As you can tell from the above, we are going to learn a lot over the coming months. We plan to share what we learn via regular posts here and elsewhere. Although we are not yet prepared to offer MSI as a standard library service, we are interested in hearing your suggestions for Duke Library collection items that may benefit from MSI imaging. We have a long queue of items that we would like to shoot, and are excited to add more research questions, use cases, and new opportunities to push our skills forward. To suggest materials, contact Molly Bragg, Digital Collections Program Manager (molly.bragg at Duke.edu); Joshua Sosin, Associate Professor in Classical Studies & History (jds15 at Duke.edu); or Andrew Armacost, Curator of Collections (andrew.armacost at Duke.edu).

Want to learn even more about MSI at DUL?

Cutting Through the Noise

Noise is an inescapable part of our sonic environment. As I sit at my quiet library desk writing this, I can hear the undercurrent of the building’s pipes and HVAC systems, the click-clack of the Scribe overhead book scanner, footsteps from the floor above, doors opening and closing in the hallway, and the various rustlings of my own fidgeting. In our daily lives, our brains tune out much of this extraneous noise to help us focus on the task at hand and stay alert to sounds conveying immediately useful information: a colleague’s voice, a cell-phone buzz, a fire alarm.

When sound is recorded electronically, however, this tuned-out noise is often pushed to the foreground.  This may be due to the recording conditions (e.g. a field recording done on budget equipment in someone’s home or outdoors) or inherent in the recording technology itself (electrical interference, mechanical surface noise).  Noise is always present in the audio materials we digitize and archive, many of which are interviews, oral histories, and events recorded to cassette or open reel tape by amateurs in the field.  Our first goal is to make the cleanest and most direct analog-to-digital transfer possible, and then save this as our archival master .wav file with no alterations.  Once this is accomplished, we have some leeway to work with the digital audio and try to create a more easily listenable and intelligible access copy.


I recently started experimenting with Steinberg WaveLab software to clean up digitized recordings from the Larry Rubin Papers.  This collection contains some amazing documentation of Rubin’s work as a civil rights organizer in the 1960s, but the ever-present hum & hiss often threaten to obscure the content.  I worked with two plug-ins in WaveLab to try to mitigate the noise while leaving the bulk of the audio information intact.


Even if you don’t know it by name, if you have used electronic audio equipment you have probably heard the dreaded 60 Cycle Hum. This is a fixed low-frequency tone related to the 60 Hz alternating current of the main electric power grid in the United States. Due to improper grounding and electromagnetic interference from nearby wires and appliances, this current can leak into our audio signals and appear as the ubiquitous 60 Hz hum (disclaimer: you may not be able to hear this as well on tiny laptop speakers or earbuds). WaveLab’s De-Buzzer plug-in allowed me to isolate this troublesome frequency and reduce its volume level drastically in relation to the interview material. Starting from a recommended preset, I adjusted the sensitivity of the noise reduction by ear to cut unwanted hum without introducing any obvious digital artifacts in the sound.
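De-Buzzer is proprietary, but the basic idea, carving a narrow notch out of the spectrum at 60 Hz, can be sketched in a few lines of Python. This is a generic illustration of the technique, not the plug-in’s algorithm; the filenames and filter settings are hypothetical.

```python
# Illustrative sketch of hum removal with a notch filter; the general
# technique, not WaveLab's De-Buzzer. Filenames are hypothetical.
import soundfile as sf
from scipy.signal import iirnotch, filtfilt

audio, rate = sf.read("interview_master.wav")  # working copy, not the master

# Narrow notch centered on 60 Hz; higher Q means a narrower cut and
# fewer audible artifacts. Real hum often needs notches at the
# harmonics (120 Hz, 180 Hz) as well.
b, a = iirnotch(w0=60.0, Q=30.0, fs=rate)
cleaned = filtfilt(b, a, audio, axis=0)  # zero-phase, so no timing shift

sf.write("interview_dehummed.wav", cleaned, rate)
```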


Similarly omnipresent in analog audio is High-Frequency Hiss.  This wash of noise is native to any electrical system (see Noise Floor) and is especially problematic in tape-based media where the contact of the recording and playback heads against the tape introduces another level of “surface noise.”  I used the De-Noiser plug-in to reduce hiss while being careful not to cut into the high-frequency content too much.  Applying this effect too heavily could make the voices in the recording sound dull and muddy, which would be counterproductive to improving overall intelligibility.
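This is not what De-Noiser actually does under the hood (dedicated de-noisers typically work spectrally), but a crude version of the same trade-off can be illustrated by blending a low-passed copy of the signal back into the original; the filenames and settings here are hypothetical.

```python
# Illustrative sketch of gentle hiss reduction by blending in a
# low-passed copy of the signal; a stand-in for dedicated de-noising,
# not WaveLab's De-Noiser. Filenames and settings are hypothetical.
import soundfile as sf
from scipy.signal import butter, filtfilt

audio, rate = sf.read("interview_dehummed.wav")

# Low-pass at 6 kHz; speech intelligibility lives mostly below this.
b, a = butter(N=2, Wn=6000.0, btype="low", fs=rate)
darker = filtfilt(b, a, audio, axis=0)

# Blend rather than replace: keep 30% of the original top end so voices
# don't turn dull and muddy (the trade-off described above).
mix = 0.7 * darker + 0.3 * audio

sf.write("interview_denoised.wav", mix, rate)
```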

Listen to the before & after audio snippets below.  While the audio is still far from perfect due to the original recording conditions, conservative application of the noise reduction tools has significantly cleaned up the sound.  It’s possible to cut the noise even further with more aggressive use of the effects, but I felt that would do more harm than good to the overall sound quality.

BEFORE:

AFTER:

 

I was fairly pleased with these results and plan to keep working with these and other software tools in the future to create digital audio files that meet the needs of archivists and researchers.  We can’t eliminate all of the noise from our media-saturated lives, but we can always keep striving to keep the signal-to-noise ratio at manageable and healthy levels.

 


Color Bars & Test Patterns

In the Digital Production Center, many of the videotapes we digitize have “bars and tone” at the beginning of the tape. These are officially called “SMPTE color bars.” SMPTE stands for the Society of Motion Picture and Television Engineers, the organization that established the color bars as the North American video standard, beginning in the 1970s. In addition to the color bars presented visually, an audio tone is emitted from the videotape at the same time, hence the phrase “bars and tone.”

SMPTE color bars

The purpose of bars and tone is to serve as a reference or target for the calibration of color and audio levels coming from the videotape during transmission. The color bars are presented at 75% intensity. The audio tone is a 1kHz sine wave. In the DPC, we can make adjustments to the incoming signal, in order to bring the target values into specification. This is done by monitoring the vectorscope output, and the audio levels. Below, you can see the color bars are in proper alignment on the DPC’s vectorscope readout, after initial adjustment.

Color bars in proper alignment with the Digital Production Center’s vectorscope readout. Each letter stands for a color: red, magenta, blue, cyan, green and yellow.
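For the curious, generating the 1 kHz reference tone mentioned above is straightforward; here is a minimal Python sketch. The -20 dBFS level, sample rate, and filename are my assumptions, not a statement of any particular broadcast spec.

```python
# Illustrative sketch: generating a 1 kHz reference tone like the one
# that accompanies SMPTE color bars, for checking audio levels in a
# capture chain. Level, rate, and filename are assumptions.
import numpy as np
import soundfile as sf

rate = 48000                       # common video-audio sample rate
seconds = 30
t = np.arange(rate * seconds) / rate

# 1 kHz sine wave at -20 dBFS, a common digital video reference level.
amplitude = 10 ** (-20 / 20)
tone = amplitude * np.sin(2 * np.pi * 1000.0 * t)

sf.write("reference_tone_1khz.wav", tone.astype(np.float32), rate)
```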

We use Blackmagic Design’s SmartView monitors to check the vectorscope, as well as waveform and audio levels. The SmartView is an updated, more compact and lightweight version of the older analog equipment traditionally used in television studios. The SmartView monitors are integrated into our video rack system, along with other video digitization equipment and numerous videotape decks.

The Digital Production Center’s videotape digitization system.

If you are old enough to have grown up in the black and white television era, you may recognize this old TV test pattern, commonly referred to as the “Indian-head test pattern.” This often appeared just before a TV station began broadcasting in the morning, and again right after the station signed off at night. The design was introduced in 1939 by RCA. The “Indian-head” image was integrated into a pattern of lines and shapes that television engineers used to calibrate broadcast equipment. Because the illustration of the Native American chief contained identifiable shades of gray, and had fine detail in the feathers of the headdress, it was ideal for adjusting brightness and contrast.

The Indian-head test pattern was introduced by RCA in 1939.

When color television debuted in the 1960s, the “Indian-head test pattern” was replaced with a test card showing color bars, a precursor to the SMPTE color bars. Today, the “Indian-head test pattern” is remembered nostalgically, as a symbol of the advent of television and as a unique piece of Americana. The master art for the test pattern was discovered in an RCA dumpster in 1970, and has since been sold to a private collector. In 2009, when all U.S. television stations were required to end analog signal transmission, many of them used the Indian-head test pattern as their final analog broadcast image.

The FADGI Still Image standard: It isn’t just about file specs

In previous posts I have referred to the FADGI standard for still image capture when describing still image creation in the Digital Production Center in support of our Digital Collections Program. We follow this standard in order to create archival files for preservation, long-term retention and online access to our materials. These guidelines help us create digital content in a consistent, scalable and efficient way. The most commonly cited part of the standard is the PPI guidelines for capturing various types of material: a collection of charts listing material types, physical dimensions and recommended capture specifications. The charts are very useful and relatively easy to read and understand. But the standard includes 93 “exciting” pages covering all things still image capture, including file specifications, color encoding, data storage, physical environment, backup strategies, metadata and workflows. Below I will boil down the first 50 or so pages.

The FADGI standard was built on the NARA Technical Guideline for Digitizing Archival Materials for Electronic Access: Creation of Production Master Files – Raster Images, which was established in 2004. The FADGI standard for still image capture is meant to be a set of best practices for cultural heritage institutions; it has recently been updated to include new advances in the field of still image capture and uses more approachable language than its predecessor.

Full disclosure: Perkins Library and our digitization program didn’t start with any part of these guidelines in place. In fact, these guidelines didn’t exist at the time of our first attempt at in-house digitization in 1993. We didn’t even have an official digitization lab until early 2005. We started with one Epson flatbed scanner and one high-end CRT monitor. As our Digital Collections Program has matured, we have been able to add equipment and implement more of the standard, starting with scanner and monitor calibration and benchmark testing of capture equipment before purchase. We then established more consistent workflows and technical metadata capture, and developed a more robust file naming scheme along with file movement and data storage strategies. We now work hard to synchronize our efforts across all of the departments involved in our Digital Collections Program. We are always refining our workflows and processes to become more efficient at publishing and preserving Digital Collections.

Dive Deep. For those of you who would like to take a deep dive into image capture for cultural heritage institutions, here is the full standard. For those of you who don’t fall into this category, I’ve boiled down the standard below. I believe it’s necessary to follow the whole standard in order for a program to become stable and mature, but, as we did, you can implement it over time.

Boil It Down. The FADGI standard provides a tiered approach to still image capture, from 1 to 4 stars, with four stars being the highest. The 1 and 2 star tiers are used when imaging for access; tiers 3 and 4 are used for archival imaging, where preservation is the focus.

The physical environment: The environment should be color neutral. Walls should be painted a neutral gray to minimize color shifts and flare that might come from a wall color that is not neutral. Monitors should be positioned to avoid glare on the screens (this is why most professional monitors have hoods). Overhead lighting should be around 5000K (tungsten, fluorescent and other bulbs can have yellow, magenta and green color shifts, which can affect the perception of the color of an image). Each capture device should be separated so that light spillover from one doesn’t affect another.

Monitors, light boxes and viewing of originals: Overhead light or a viewing booth set up for viewing originals should be a neutral 5000K. A light box used for viewing transmissive material should also be 5000K.

Digital images should be viewed in the colorspace they were captured in, and the monitor should be able to display that colorspace. Most monitors display in the sRGB colorspace; professional monitors, however, can display the AdobeRGB colorspace, which is commonly used in cultural heritage image capture. The color temperature of your monitor should be set to the Kelvin temperature that most closely matches the viewing environment. If the overhead lights are 5000K, then the monitor’s color temperature should also be set to 5000K.

A calibration package, consisting of hardware and software that read and evaluate color, is an essential piece of equipment. These packages normalize the luminosity, color temperature and color balance of a monitor and create an ICC display profile that is used by the computer’s operating system to display colors correctly, so that accurate color assessments can be made.

Capture Devices: The market is flooded with capture devices of varying quality. It is important to research any new capture device. I recommend skipping the marketing schemes that tout all the bells and whistles and instead talking to institutions that have established digital collections programs. This will help focus your research on the few contenders that will produce the files you need. They will help you slog through how many megapixels are necessary, which lenses are best for which application, and which scanner driver is easiest to use while still getting the best color out of your scanner. Beyond the capture device itself, other things that come into play are effective scanner drivers that produce accurate and consistent results, upgrade paths for your equipment, and service packages that help maintain it.

Capture Specifications: I’ll keep this part short because there are a wide variety of charts covering many formats, capture specifications and their corresponding tiers. Below I have simplified the information from the charts. These specifications hover between tiers 3 and 4, mostly leaning toward 4.

Always use a FADGI-compliant reference target at the beginning of a session to ensure the capture device is within acceptable deviation. The target values differ depending on which reference targets are used. Most targets come with a chart giving the numerical value of each swatch in the target. Our lab uses a classic GretagMacbeth target, and our acceptable color deviation is +/- 5 units of color.
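As a concrete illustration, here is a minimal sketch of what that deviation check amounts to; the swatch names and RGB values below are hypothetical stand-ins for a real target’s published chart.

```python
# Illustrative sketch: checking captured target swatches against
# published reference values, flagging any channel that drifts more
# than +/- 5 units (our lab's tolerance). Values are hypothetical.
TOLERANCE = 5

# (name, reference RGB from the target's chart, mean RGB measured in capture)
swatches = [
    ("dark skin", (115,  82,  68), (117,  80,  70)),
    ("blue sky",  ( 98, 122, 157), ( 99, 121, 155)),
    ("neutral 5", (122, 122, 121), (130, 122, 121)),  # red channel drifted
]

for name, ref, measured in swatches:
    deltas = [m - r for r, m in zip(ref, measured)]
    status = "OK" if all(abs(d) <= TOLERANCE for d in deltas) else "OUT OF SPEC"
    print(f"{name:12s} deltas R/G/B = {deltas} -> {status}")
```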

Our general technical specs for reflective material including books, documents, photographs and maps are:

  • Master File Format: TIFF
  • Resolution: 300 ppi
  • Bit Depth: 8
  • Color Depth: 24 bit RGB
  • Color Space: Adobe 1998

These specifications generally follow the standard.  If the materials being scanned are smaller than 5×7 inches we increase the PPI to 400 or 600 depending on the font size and dimensions of the object.
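A short script can sanity-check finished masters against specs like these. The sketch below uses Pillow and is my own illustration; the filename and helper function are hypothetical, not part of the FADGI standard.

```python
# Illustrative sketch: verifying that a master TIFF matches our
# reflective capture specs (300 ppi minimum, 8 bits per channel, RGB).
# The filename and helper are hypothetical.
from PIL import Image

def check_master(path, min_ppi=300):
    img = Image.open(path)
    ppi = img.info.get("dpi", (0, 0))     # read from the TIFF resolution tags
    problems = []
    if img.format != "TIFF":
        problems.append(f"format is {img.format}, expected TIFF")
    if img.mode != "RGB":                 # 8 bits/channel, 24-bit RGB color
        problems.append(f"mode is {img.mode}, expected RGB")
    if min(ppi) < min_ppi:
        problems.append(f"resolution {ppi} ppi is below {min_ppi} ppi")
    return problems or ["meets spec"]

for msg in check_master("master_0001.tif"):
    print(msg)
```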

Our general technical specs for transmissive material including acetate, nitrate and glass plate negatives, slides and other positive transmissive material are:

  • Master File Format: TIFF
  • Resolution: 3000 – 4000 ppi
  • Bit Depth: 16
  • Color Depth: 24 bit RGB
  • Color Space: Adobe 1998

These specifications generally follow the standard.  If the transmissive materials being scanned are larger than 4×5 inches, we decrease the PPI to 1500 or 2000, depending on negative size and condition.

Recommended capture devices: The standard goes into detail on which capture devices to use, and not to use, when digitizing different types of material. It describes when to use manually operated planetary scanners as opposed to a digital scan back, when to use a digital scan back instead of a flatbed scanner, and when and when not to use a sheet-fed scanner. Not every device can capture every type of material. In our lab we have six different devices to capture a wide variety of material in different states of fragility. We work with our Conservation Department when deciding which capture device to use.

General Guidelines for still image capture

  • Do not apply pressure with a glass platen or otherwise unless approved by a paper conservator.
  • Do not use vacuum boards or high UV light sources unless approved by a paper conservator.
  • Do not use auto page turning devices unless approved by a paper conservator.
  • For master files, pages, documents and photographs should be imaged to include the entire area of the page, document or photograph.
  • For bound items the digital image should capture as far into the gutter as practical but must include all of the content that is visible to the eye.
  • If a backing sheet is used on a translucent piece of paper to increase contrast and readability, it must extend beyond the edge of the page to the end of the image on all open sides of the page.
  • For master files, documents should be imaged to include the entire area and a small amount beyond to define the area.
  • Do not use lighting systems that raise the surface temperature of the original more than 6 degrees F (3 degrees C) over the total imaging process.
  • When capturing oversized material, if the sections of a multiple scan item are compiled into a single image, the separate images should be retained for archival and printing purposes.
  • The use of glass or other materials to hold photographic images flat during capture is allowed, but only when the original will not be harmed by doing so. Care must be taken to assure that flattening a photograph will not result in emulsion cracking, or the base material being damaged.  Tightly curled materials must not be forced to lay flat.
  • For original color transparencies, the tonal scale and color balance of the digital image should match the original transparency being scanned to provide accurate representation of the image.
  • When scanning negatives, for master files the tonal orientation may be inverted to produce a positive image. The resulting image will need to be adjusted to produce a visually pleasing representation. Digitizing negatives is very analogous to printing negatives in a darkroom, and it is very dependent on the photographer’s/technician’s skill and visual literacy to produce a good image. There are few objective metrics for evaluating the overall representation of digital images produced from negatives.
  • The lack of dynamic range in a film scanning system will result in poor highlight and shadow detail and poor color reproduction.
  • No image retouching is permitted to master files.

These details were pulled directly from the standard.  They cover a lot of ground but there are always decisions to be made that are uniquely related to the material to be digitized.  There are 50 or so more pages of this standard related to workflow, color management, data storage, file naming and technical metadata.  I’ll have to cover that in my next blog post.

The FADGI standard for still image capture is very thorough but also leaves room to adapt.  While we don’t follow everything outlined in the standard we do follow the majority.  This standard, years of experience and a lot of trial and error have helped make our program more sound, consistent and scalable.

Digitizing for Exhibits

While most of my Bitstreams posts have focused on my work preserving and archiving audio collections, my job responsibilities also include digitizing materials for display in Duke University Libraries Exhibits.  The recent renovation and expansion of the Perkins Library entrance and the Rubenstein Library have opened up significantly more gallery space, meaning more exhibits being rotated through at a faster pace.


Just in the past year, I’ve created digital images for exhibits on Vesalius’s study of human anatomy, William Gedney’s photographs, Duke Chapel’s stained glass windows, and the 1793 Yellow Fever epidemic.  I also worked with a wide range of materials spanning “books, manuscripts, photographs, recordings and artifacts that document human aspirations” for the Dreamers and Dissenters exhibit celebrating the reopening of the newly renovated David M. Rubenstein Rare Book & Manuscript Library.  The digital images are used to create enlargements and facsimiles for the physical exhibits and are also used in the online “virtual exhibits.”

 

Working with such a variety of media spanning different library collections presents a number of challenges and necessitates working closely with our Exhibits and Conservation departments.  First, we have to make sure that we have all of the items listed in the inventory provided by the exhibit curator.  Second, we have to make sure we have all of the relevant information about how each item should be digitally captured (What image resolution and file specifications?  Which pages from a larger volume?  What section of a larger map or print?).  Next, we have to consider handling for items that are in fragile condition and need special attention.  Finally, we use all of this information to determine which scanner, camera, or A/V deck is appropriate for each item, and the most efficient order in which to capture them.

All of this planning and preliminary work helps to ensure that the digitization process goes smoothly and that most questions and irregularities have already been addressed.  Even so, there are always issues that come up forcing us to improvise creative solutions.  For instance:  how to level and stabilize a large, fragile folded map that is tipped into a volume with tight binding?  How to assemble a seamless composite image of an extremely large poster that has to be photographed in multiple sections?  How to minimize glare and reflection from glossy photos that are cupped from age?  I won’t give away all of our secrets here, but I’ll provide a couple examples from the Duke Chapel exhibit that is currently on display in the Jerry and Bruce Chappell Family gallery.


This facsimile of a drawing for one of the Chapel’s carved angels was reproduced from an original architectural blueprint.  It came to us as a large and tightly rolled blueprint–so large, in fact, that we had to add a piece of plywood to our usual camera work surface to accommodate it.  We then strategically placed weights around the blueprint to keep it flattened while not obscuring the section with the drawing.  The paper was still slightly wrinkled and buckled in places (which can lead to uneven color and lighting in the resulting digital image) but fortunately the already mottled complexion of the blueprint material made it impossible to notice these imperfections.


These projected images of the Chapel’s stained glass were reproduced from slides taken by a student in 1983 and currently housed in the University Archives.  After the first run through our slide scanner, the digital images looked okay on screen, but were noticeably blurry when enlarged.  Further investigation of the slides revealed an additional clear plastic protective housing which we were able to carefully remove.  Without this extra refractive layer, the digital images were noticeably sharper and more vibrant.

Despite the digitization challenges, it is satisfying to see these otherwise hidden treasures being displayed and enjoyed in places that students, staff, and visitors pass through every day–and knowing that we played a small part in contributing to the finished product!

 

The Attics of Your Life

If you happen to be rummaging through your parents’ or grandparents’ attic, basement or garage, and stumble upon some old reel-to-reel audiotape, or perhaps some dust-covered videotape reels that seem absurdly large & clunky, they are most likely worthless, except for perhaps sentimental value. Even if these artifacts did, at one time, have some unique historic content, you may never know, because there’s a strong chance that decades of temperature extremes have made the media unplayable. The machines that were once used to play the media are often no longer manufactured, hard to find, and only a handful of retired engineers know how to repair them. That is, if they can find the right spare parts, which no one sells anymore.

Quarterback Bart Starr led the Green Bay Packers to a 35-10 victory over the Kansas City Chiefs in Super Bowl 1.
Martin Haupt likely recorded Super Bowl 1 using an RCA Quadruplex 2″ color videotape recorder, common at television studios in the late 1960s.

However, once in a while, something that is one of a kind miraculously survives. That was the case for Troy Haupt, a resident of North Carolina’s Outer Banks, who discovered that his father, Martin Haupt, had recorded the very first Super Bowl onto 2” Quadruplex color videotape directly from the 1967 live television broadcast. After Martin passed away, the tapes ended up in Troy’s mother’s attic, yet somehow survived the elements.

What makes this so unique is that, in 1967, videotape was very expensive and archiving at television networks was not a priority. So the networks that aired the first Super Bowl, CBS and NBC, did not save any of the broadcast.

But Martin Haupt happened to work for a company that repaired professional videotape recorders, which were, in 1967, cutting edge technology. Taping television broadcasts was part of Martin’s job, a way to test the machines he was rebuilding. Fortunately, Martin went to work the day Super Bowl 1 aired live. The two Quadruplex videotapes that Martin Haupt used to record Super Bowl 1 cost $200 each in 1967. In today’s dollars, that’s almost $3000 total for the two tapes. Buying a “VCR” at your local department store was unfathomable then, and would not be possible for at least another decade. Somehow, Martin missed recording halftime, and part of the third quarter, but it turns out that Martin’s son Troy now owns the most complete known video recording of Super Bowl 1, in which the quarterback Bart Starr led the Green Bay Packers to a 35-10 victory over the Kansas City Chiefs.

Betty Cantor-Jackson recorded many of the Grateful Dead’s landmark concerts using a Nagra IV-S reel-to-reel audiotape recorder. The Dead’s magnum opus, “Dark Star,” could easily fill an entire reel.

For music fans, another treasure was uncovered in a storage locker in Marin County, CA, in 1986. Betty Cantor-Jackson worked for the Grateful Dead’s road crew and made professional multi-track recordings of many of their best concerts, between 1971 and 1980, on reel-to-reel audiotape. The Dead were known for marathon concerts in which some extended songs, like “Dark Star,” could easily fill an entire audio reel. The band gave Betty permission to record, but she purchased her own gear and blank tape, tapping into the band’s mixing console to capture high-quality soundboard recordings of the band’s epic concerts during their prime era. Betty held onto her tapes until she fell on hard times in the 1980s, lost her home, and had to move the tapes to a storage locker. She couldn’t pay the storage fees, so the locker contents went up for auction.

Betty Cantor-Jackson recorded the Grateful Dead’s show at Barton Hall in 1977, considered by many fans to be one of their best concerts.

Some 1000 audio reels ended up in the hands of three different buyers, none of whom knew what the tapes contained. Once the music was discovered, copies of the recordings began to leak to hardcore tape-traders within the Deadhead community, and they became affectionately referred to as “The Betty Boards.” It turns out the tapes include some legendary performances, such as the 1971 Capitol Theatre run, and the May 1977 tour, including “Barton Hall, May 8, 1977,” considered by many Deadheads as one of the best Grateful Dead concerts of all time.

You would think the current owners of Super Bowl 1 and Barton Hall, May 8, 1977 would be sitting on gold. But that’s where the lawyers come in. Legally, the people who possess these tapes own the physical tapes, but not the content on those tapes. So, Troy Haupt owns the 2″ Quadruplex reels of Super Bowl 1, but the NFL owns what you can see on those reels: the NFL owns the copyright of the broadcast. Likewise, the Grateful Dead own the music on the audio reels, regardless of who owns the physical tape that contains the music. Unfortunately for NFL fans and Deadheads, this makes the content somewhat inaccessible for now. Troy Haupt has offered to sell his videotapes to the NFL, but they have mostly ignored him. If Troy tries to sell the tapes to a third party instead, the NFL says they will sue him for unauthorized distribution of their content. The owners of the Grateful Dead tapes face a similar dilemma. The band’s management isn’t willing to pay for the physical tapes, but if the owners, or any third party the owners sell the tapes to, try to distribute the music, they will get sued. However, if it weren’t for Martin Haupt and Betty Cantor-Jackson, who had the foresight to record these events in the first place, the content would not exist at all.

Multispectral Imaging in the Library

Bill Christens-Barry and Mike Adamo test the MSI system

 

Over the past six months or so, the Digital Production Center has been collaborating with the Duke Collaboratory for Classics Computing (DC3) and the Conservation Services Department to investigate multispectral imaging capabilities for the Library. Multispectral imaging (MSI) is a mode of image capture that uses a series of narrow-band lights of specific frequencies, along with a series of filters, to illuminate an object. Highly tailored hardware and software are used in a controlled environment to capture artifacts, with the goal of revealing information not visible to the human eye. This type of capture system in the Library would benefit many departments and researchers alike. Our primary focus for this collaboration is the needs of the papyri community and Conservation Services, along with additional capacity for the Digital Production Center.

Josh Sosin of DC3 was already in contact with Mike Toth of R. B. Toth Associates, a company at the leading edge of MSI for the cultural heritage and research communities, on a joint effort between DC3, Conservation Services and the Duke Eye Center to use Optical Coherence Tomography (OCT) to, we hoped, reveal hidden layers of mummy masks made of papyri. The DPC has a long-standing relationship with Digital Transitions, a reseller of the Phase One digital back, which happens to be the same digital back used in the Toth MSI system. And the Conservation Lab was already involved in the OCT collaboration, so it was only natural to invite R. B. Toth Associates to the Library to show us their MSI system.

After observing the OCT work done at the Eye Center, we made our way to the Library to set up the MSI system. Bill Christens-Barry of R. B. Toth Associates walked me through some very high-level physics related to MSI, we set up the system, and we got ready to capture selected material, which included Ashkar-Gilson manuscripts, various papyri and other material that might benefit from MSI. By the time we started capturing images we had a full house. Crammed into the room were members of DC3, DPC, Conservation, Digital Transitions and Toth Associates, all of whom had a stake in this collaboration. After long hours of sitting in the dark (necessary for MSI image capture), we emerged from the room bleary-eyed and full of hope that something previously unseen would be revealed.

The text of this manuscript was revealed primarily with the IR narrowband light at 940 nm, which Bill enhanced.

The resulting captures are a ‘stack’ or ‘block’ of monochromatic images captured using different wavelengths of light and ultraviolet and infrared filters. Software developed by Bill Christens-Barry is then used to process and manipulate the images, revealing information, if it is there, by combining, removing or enhancing images in the stack. One of the first items we processed was Ashkar-Gilson MS14, Deuteronomy 4.2-4.23, seen below. This really blew us away.

This item went from nearly unreadable to almost entirely readable! Bill assured me that he had done only minimal processing and that he should be able to uncover more of the text in the darker areas with some fine-tuning. The text of this manuscript was revealed primarily through the use of the IR filter, rather than being the direct product of exposing the manuscript to individual bands of light, but the result is no less spectacular. Because the capture process is so time-consuming and time was limited, no other Ashkar-Gilson manuscript was digitized at this time.
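To give a flavor of the simplest kind of enhancement involved, here is a Python sketch of a percentile contrast stretch on a single infrared band, the sort of adjustment that can lift faded ink from a darkened surface. Bill’s actual processing is far more sophisticated; the filename and parameters here are hypothetical.

```python
# Illustrative sketch: a percentile contrast stretch on one IR band.
# Not Bill Christens-Barry's toolset; filename is hypothetical.
import numpy as np
from PIL import Image

band = np.asarray(Image.open("ir_940nm.tif"), dtype=np.float64)

# Clip the darkest and brightest 2% of pixels, then stretch what
# remains across the full 0-255 range.
low, high = np.percentile(band, (2, 98))
stretched = np.clip((band - low) / max(high - low, 1e-9), 0, 1) * 255

Image.fromarray(stretched.astype(np.uint8)).save("ir_940nm_enhanced.tif")
```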

We digitized the image on the left in 2010, and ever since then, when asked, “What is the most exciting thing you have digitized?” I often answer, “The Ashkar-Gilson manuscripts. Manuscripts from ca. 7th to 8th century C.E. Some of them still have fur on the back and a number of them are unreadable… but you can feel the history.” Now my admiration for these manuscripts is renewed, and maybe Josh can tell me what it says.

It is our hope that we can bring this technology to Duke University so we can explore our material in greater depth and reveal information that has not been seen for a very, very long time.

Beth Doyle, Head of Conservation Services, wrote a blog post for Preservation Underground about her experience with MSI. Check it out!

Mike Toth, Mike Adamo, Bill Christens-Barry, Beth Doyle, Josh Sosin and Michael Chan

Also, check out this article from the News & Observer.

________

Want to learn even more about MSI at DUL?

Digital Projects and Production Services’ “Best Of” List, 2015

It’s that time of year when all the year-end “best of” lists come out: best music, movies, books, etc.  Well, we could not resist following suit this year, so… ladies and gentlemen, I give you – in no particular order – the 2015 best-of list for the Digital Projects and Production Services department (DPPS).

Metadata!

Metadata Architect
In 2015, DPPS welcomed a new staff member to our team; Maggie Dickson came on board as our metadata architect! She is already leading a team to whip our digital collections metadata into shape, and is actively consulting with the digital repository team and others around the library.  Bringing metadata expertise into the DPPS portfolio ensures that collections are as discoverable, shareable, and re-purposable as possible.

An issue of the Chronicle from 1988

King Intern for Digital Collections
DPPS started the year with two large University Archives projects on our plates: the ongoing Duke University Chronicle digitization and a grant to digitize hundreds of Chapel recordings.  Thankfully, University Archives allocated funding for us to hire an intern, and what a fabulous intern we found in Jessica Serrao (the proof is in her wonderful blog posts).  The internship has been an unqualified success, and we hope to be able to repeat such a collaboration with other units around the library.

 

Tripod 3
Our digital project developers have spent much of the year developing the new Tripod3 interface for the Duke Digital Repository. This process has been an excellent opportunity for cross-departmental collaborative application development and for implementing Agile methodology, with sprints, scrums, and stand-up meetings galore!  We launched our first collection on the new platform in October, and we will have a second one out the door before the end of this year.  We plan on building on this success in 2016 as we migrate existing collections over to Tripod3.

Repository ingest planning
Speaking of Tripod3 and the Duke Digital Repository, we have been ingesting digital collections into the Duke Digital Repository since 2014.  However, we have a plan to kick ingests up a notch (or 5).  Although the real work will happen in 2016, the planning has been a long time coming and we are all very excited to be at this phase of the Tripod3 / repository process (even if it will be a lot of work).  Stay tuned!

Digital Collections Promotional Card
This is admittedly a small achievement, but it is one that has been on my to-do list for two years, so it actually feels like a pretty big deal.  In 2015, we designed a 5 x 7 postcard to hand out during Digital Production Center (DPC) tours, at conferences, and to any visitors to the library.  Also, I just really love to see my UNC fan colleagues cringe every time they turn the card over and see Coach K’s face.  It’s really the little things that make our work fun.

New Exhibits Website
In anticipation of the opening of new exhibit spaces in the renovated Rubenstein Library, DPPS collaborated with the exhibits coordinator to create a brand new library exhibits webpage.  This is your one-stop shop for all library exhibits information, in all its well-designed glory.

Aggressive cassette rehousing procedures

Audio and Video Preservation
In 2014, the Digital Production Center bolstered workflows for preservation-based digitization.  Unlike our digital collections projects, these preservation digitization efforts do not have a publication outcome, so they often go unnoticed.  Over the past year, we have quietly digitized around 400 audio cassettes in house (this doesn’t count the outsourced Chapel Recordings digitization), some of which needed to be dramatically re-housed.

On the video side, efforts have been sidelined by digital preservation storage costs.  However, some behind-the-scenes planning is in the works, which means we should be able to do more next year.  Also, we were able to purchase a U-matic tape cleaner this year, which, while it doesn’t sound very glamorous to the rest of the world, thrills us to no end.

Revisiting the William Gedney Digital Collection
Fans of Duke Digital Collections are familiar with the current Gedney digital collection. Both the physical and digital collections have long needed an update.  So in recent years the physical collection has been reprocessed, and this fall we started an effort to digitize more materials in the collection, and to higher standards than were practical in the late 1990s.

DPC’s new work room

Expanding DPC
When the Rubenstein Library re-opened, our neighbor moved into the new building, and the DPC got to expand into his office!   The extra breathing room means more space for our specialists and our equipment, which is not only more comfortable but also better for our digitization practices.  The two spaces are separate for now, but we are hoping to be able to combine them in the next year or two.

 

2015 was a great year in DPPS, and there are many more accomplishments we could add to this list.  One of our team mottos is: “great productivity and collaboration, business as usual”.  We look forward to more of the same in 2016!

William Gedney Wants Me To Build A Darkroom

The initial thought I had for this blog post was to describe a slice of my day that revolved around the work of William Gedney.  I was going to spin a tale about being on the hunt for a light meter to take lux (illuminance) readings used to help calibrate the capture environment of one of our scanners.  On my search for the light meter I bumped into the new exhibit of William Gedney’s handmade books displayed in the Chappell Family Gallery in the Perkins Library.  I had digitized a number of these books a few months ago and enjoyed pretty much every image in them.  One of the books on display was opened to a particular photograph.  To my surprise, I had just digitized a finished print of the same image that very morning, while working on a larger project to digitize all of Gedney’s finished prints, proof prints, contact sheets and other material.  Once the project is complete (a year or so from now), I will have personally seen, handled and digitized over 20,000 of Gedney’s photographs. Whoa!  Would I be able to recognize a Gedney image whenever one presented itself, just like the book in the gallery?  Maybe.

Once the collection is digitized and published through Duke Digital Collections the whole world will be able to see this amazing body of work.  Instead of boring you with the details of that story I thought I would just leave you with a few images from the collection.  For me, many of Gedney’s photographs have a kinetic energy to them.  It seems as if I can almost feel the air.  My imagination may be working overtime to achieve this and the reality of what was happening when the photograph was taken may be wholly different but the fact is these photographs spin up my imagination and transport me to the moments he has captured.  These photographs inspire me to dust off my enlarger and set up a darkroom.

It may take some time to complete this particular project but there are other William Gedney related projects, materials and events available at Duke.


Lichens, Bryophytes and Climate Change

As 2015 winds down, the Digital Production Center is wrapping up a four-year collaboration with the Duke Herbarium to digitize their lichen and bryophyte specimens. The project is funded by the National Science Foundation, and the ultimate goal is to digitize over 2 million specimens from more than 60 collections across the nation. Lichens and bryophytes (mosses and their relatives) are important indicators of climate change. After the images from the participating institutions are uploaded to one central portal, called iDigBio, large-scale distribution mapping will be used to identify regions where environmental changes are taking place, allowing scientists to study the patterns and effects of these changes.


The specimens are first transported from the Duke Herbarium to Perkins Library on a scheduled timeline. Then, we photograph the specimen labels using our Phase One overhead camera. Some of the specimens are very bulky, but our camera’s depth of field is broad enough to keep them in focus. To be clear, what the project is utilizing is not photos of the actual plant specimens themselves, but rather images of the typed and hand-written scientific metadata adorning the envelopes which house the specimens. After we photograph them, the images are uploaded to the national database, where they are available for online research, along with other specimen labels uploaded from universities across the United States. Optical character recognition is used to digest and organize the scientific metadata in the images.
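As an illustration of that OCR step, here is a minimal Python sketch using Tesseract. This is a generic approach, not necessarily the iDigBio portal’s actual pipeline, and the filename is hypothetical.

```python
# Illustrative sketch of the OCR step: extracting typed label text from
# a specimen label photograph with Tesseract. Generic approach, not the
# portal's pipeline; filename is hypothetical.
from PIL import Image
import pytesseract

label = Image.open("0233518_label.tif").convert("L")  # grayscale helps OCR
text = pytesseract.image_to_string(label)

print(text)  # e.g. collector, date, locality, scientific name
```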


Over the past four years, the Digital Production Center has digitized approximately 100,000 lichen and bryophyte specimens. Many are from the Duke Herbarium, but some other institutions have also asked us to digitize some of their specimens, such as UNC-Chapel Hill, SUNY-Binghamton, Towson University and the University of Richmond. The Duke Herbarium is the second-largest herbarium of all U.S. private universities, next to Harvard. It was started in 1921, and it contains more than 800,000 specimens of vascular plants, bryophytes, algae, lichens, and fungi, some of which were collected as far back as the 1800s. Several specimens have unintentionally humorous names, like the following, which wants to be funky, but isn’t fooling anyone. Ok, maybe only I find that funny.


The project has been extensive, but enjoyable, thanks to the leadership of Duke Herbarium Data Manager Blanka Shaw. Dr. Shaw has personally collected bryophytes on many continents, and has brought a wealth of knowledge, energy and good humor to the collaboration with the Digital Production Center. The Duke Herbarium is open for visitors, and citizen scientists are also needed to volunteer for transcription and georeferencing of the extensive metadata collected in the national database.