Category Archives: Equipment

How Duke Chronicle Goes Digital

Today we will take a detailed look at how the Duke Chronicle, the university’s beloved newspaper for over 100 years, is digitized. Because the scope of digitization spans nine decades (1905-1989), it is an ongoing project that the Digital Production Center (DPC), part of Digital Projects and Production Services (DPPS) and Duke University Libraries’ Digital Collections Program, has been chipping away at. Scanning and digitizing may seem straightforward – place an item on a scanner and press scan, for goodness sake! – but much more goes into it than that, and we at the DPC want to shed light on our processes to give you a sense of what happens behind the scenes. Digitizing a large collection of newspapers is not always a fun-filled endeavor, and the physical act of scanning thousands of news pages is done by many dedicated (and patient!) student workers, staff members, and me, the King Intern for Digital Collections.

Pre-Scanning Procedures

Large format 1940s Chronicles in over-sized archival box

Many steps in the digitization process do not actually occur in the DPC, but in other teams or departments within the library. Though I focus mainly on the DPC’s responsibilities, I will briefly explain the steps others perform in this digital projects tango…or maybe it’s a waltz?

Each proposed project must first be approved by the Advisory Council for Digital Collections (ACDC), a team that reviews each project for its strategic value. Then it is passed on to the Digital Collections Implementation Team (DCIT) to perform a feasibility study that examines the project’s strengths and weaknesses (see Thomas Crichlow’s post for an overview of these teams). The DCIT then helps guide the project to fruition. After clearing these hoops back in 2013, the Duke Chronicle project started its journey toward digital glory.

We pull one decade’s worth of newspapers at a time from the University Archives in the Rubenstein Library, which keeps the 80+ years of Chronicle publications manageable. The first stop is Conservation. To make sure the materials are stable enough to withstand digitizing, Conservation must inspect the condition of the paper before giving the DPC the go-ahead. Because newspapers since the mid-19th century were printed on cheap, very acidic wood pulp paper, the pages can become brittle over time and may warrant extensive repairs. Senior Conservator Erin Hammeke has done great work mending tears and brittle edges on many Chronicle pages since the start of this project. As we embark on digitizing the older decades, from the 1940s and earlier, Erin’s expertise will be indispensable. We rely on her not only to repair brittle pages but to guide the DPC’s strategy when deciding the best and safest way to digitize such fragile materials. Also, several volumes of the Chronicle have been bound, and to capture the best digital image these must be removed from their bindings. Erin to the rescue!

Conservation repair on a 1940s Chronicle page
Conservation repair to a torn 1940s Chronicle ad

 

1950s Duke Chronicle digitization guide

Now that Conservation has assessed the condition and given the DPC the green light, preliminary prep work must still be done before the scanner comes into play. A digitization guide is created in Microsoft Excel to list each Chronicle issue along with its descriptive metadata (more information about this process can be found in my metadata blog post). This spreadsheet acts as a guide in the digitization process (hence its name, digitization guide!) to keep track of each analog newspaper issue and, once scanned, its corresponding digital image. In this process, each Chronicle issue is inspected to collect the necessary metadata. At this time, a unique identifier is assigned to every issue based on the DPC’s naming conventions. This identifier stays with each item for the duration of its digital life and allows for easy identification of one among thousands of Chronicle issues. At the completion of the digitization guide, the Chronicle is now ready for the scanner.

 

The DPC’s Zeutschel OS 14000 A2

The Scanning Process

For loose, unbound issues, the Zeutschel is our go-to scanner because it allows large format items to be imaged on a flat surface. This is less invasive and less damaging to the pages, and is quicker than other scanning methods. The Zeutschel can handle items up to 25 x 18 inches, which accommodates the larger formats the Chronicle used in the 1940s and 1950s. If bound issues must be digitized, because no loose copy exists or a volume cannot safely be disbound, the Phase One digital camera system is used instead, as it can better capture large bound pages that may not lie flat.

Folders each containing multiple page images of one Chronicle issue

For every scanning session, we need the digitization guide handy, as it tells us what to name the image files using the previously assigned unique identifier. Each issue of the newspaper is scanned as a separate folder of images, with one image representing one page of the newspaper. This system of organization allows each issue to become its own compound object – multiple files bound together with an XML structure – once published to the website. The Zeutschel’s scanning software helps organize these image files into properly named folders. Of course, no digitization session would be complete without the initial target scan that checks for color calibration (see Mike Adamo’s post for a color calibration crash course).
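
To make that folder-and-file bookkeeping concrete, here is a minimal sketch (in Python) of the kind of naming logic the digitization guide drives. The identifier pattern and directory layout below are hypothetical stand-ins, not the DPC’s actual naming convention.

```python
from pathlib import Path

def issue_scan_paths(identifier: str, page_count: int, root: Path) -> list[Path]:
    """Build one folder per newspaper issue, with one numbered image file per page.

    `identifier` is the issue's unique ID from the digitization guide; the
    pattern used here (title + date) is only an illustration.
    """
    folder = root / identifier
    # Zero-pad the page number so the files sort in page order.
    return [folder / f"{identifier}_{page:03d}.tif" for page in range(1, page_count + 1)]

# Example: a 12-page issue becomes a folder holding 12 sequentially named scans,
# which later get bound together as one compound object.
for path in issue_scan_paths("dukechronicle_19590918", 12, Path("/scans/chronicle_1950s")):
    print(path)
```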

The Zeutschel’s control panel of buttons
The Zeutschel’s optional foot pedals

The scanner’s plate glass can now be raised with the push of a button (or the tap of a foot pedal) and the Chronicle issue is placed on the flatbed. Lowering the plate glass flattens the pages for a better scan result. Now comes the excitement… we can finally press SCAN. For each page, the plate glass is raised, lowered, and the scan button is pressed. Chronicle issues can have anywhere from 2 to 30 or more pages, so you can imagine this process can become monotonous – or even mesmerizing – at times. Luckily, with the smaller format decades, like the 1970s and 1980s, the inner pages can be scanned two at a time and the Zeutschel software separates them into two images, which cuts down on the scan time. As for the larger formats, the pages are so big that only one fits on the flatbed. That means each page is a separate scan, but older years tended to publish fewer issues, so it’s a trade-off. To put the volume of this work into perspective, the 1,408 issues of the 1980s Chronicle took 28,089 scans to complete, while the roughly 482 issues of the 1950s Chronicle took around 3,700 scans.

 

A 1940s Chronicle page is placed on the flatbed for scanning

 

Scanning in progress of the 1940s Chronicle page
Target image opened in Adobe Photoshop for color calibration

Every scanned image that pops up on the screen is also checked for alignment and cropping errors that may require a re-scan. Once all the pages in an issue are digitized and checked for errors, clicking the software’s Finalize button compiles the images in the designated folder. We then return to our digitization guide to enter metadata about the scanning of that issue, including capture person, capture date, capture device, and which target image relates to the session (subsequent issues do not need a new target scan, as long as the scanning takes place in the same session).

Now, with the next issue, rinse and repeat: set the software settings and name the folder, scan the issue, finalize, and fill out the digitization guide. You get the gist.

 

Post-Scanning Procedures

Rotating an image in Adobe Photoshop

We now find ourselves with a slew of folders filled with digitized Chronicle images. The next phase of the process is quality control (QC). Once every issue from the decade is scanned, the first round of QC checks all images for excess borders to be cropped, crooked images to be squared, and any other small discrepancy that may have resulted from the scanning process, such as missing images, pages out of order, or images scanned upside down. This stage of QC is often performed by student workers who diligently inspect image after image using Adobe Photoshop. The second round of QC is performed by our Digital Production Specialist, Zeke Graves, who gives every item a final pass.

At this stage, derivatives of the original preservation-quality images are created. The originals are archived in dark storage, while the smaller-sized derivatives are used in the CONTENTdm ingest process. CONTENTdm is the digital collection management software we use; it collates the digital images with their descriptive metadata from our digitization guide and creates one compound object for each Chronicle issue. It also generates the layer of Optical Character Recognition (OCR) data that makes the Chronicle text searchable, and provides an online interface for users to discover the collection once published on the website. The images and metadata are ingested into CONTENTdm’s Project Client in small batches (1 to 3 years of Chronicle issues) to reduce the chance of upload errors. Once ingested into CONTENTdm, the items are spot-checked to make sure the metadata is paired with the correct images. During this step, other metadata is added that is specific to CONTENTdm fields, including the ingest person’s initials. Then another ingest must be run to push the files and data from the Project Client to the CONTENTdm server. A third step after this ingest finishes is to approve the items in the CONTENTdm administrative interface. This gives the go-ahead to publish the material online.
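
Derivative creation is the sort of step that lends itself to scripting. As a rough illustration (not the DPC’s actual tooling or specs), here is a small Python sketch using the Pillow library that turns preservation TIFF masters into smaller JPEG derivatives ready for ingest; the paths, target size, and quality setting are assumptions.

```python
from pathlib import Path
from PIL import Image

MASTERS = Path("/archive/chronicle/masters")          # preservation TIFFs (dark storage)
DERIVATIVES = Path("/ingest/chronicle/derivatives")   # smaller files destined for CONTENTdm

def make_derivative(master: Path, long_edge: int = 2000, quality: int = 85) -> Path:
    """Downscale one preservation master into an access-sized JPEG derivative."""
    out = DERIVATIVES / master.with_suffix(".jpg").name
    out.parent.mkdir(parents=True, exist_ok=True)
    with Image.open(master) as img:
        img.thumbnail((long_edge, long_edge))          # keeps aspect ratio, never upscales
        img.convert("RGB").save(out, "JPEG", quality=quality)
    return out

for tiff in sorted(MASTERS.rglob("*.tif")):
    make_derivative(tiff)
```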

Hold on, we aren’t done yet. The project is now passed along to our developers in DPPS who must add this material to our digital collections platform for online discovery and access (they are currently developing Tripod3 to replace the previous Tripod2 platform, which is more eloquently described in Will Sexton’s post back in April). Not only does this improve discoverability, but it makes all of the library’s digital collections look more uniform in their online presentation.

Then, FINALLY, the collection goes live on the web. Now, just repeat the process for every decade of the Duke Chronicle, and you can see how this can become a rather time-heavy and laborious process. A labor of love, that is.

I could have narrowly stuck with describing to you the scanning process and the wonders of the Zeutschel, but I felt that I’d be shortchanging you. Active scanning is only a part of the whole digitization process which warrants a much broader narrative than just “push scan.” Along this journey to digitize the Duke Chronicle, we’ve collectively learned many things. The quirks and trials of each decade inform our process for the next, giving us the chance to improve along the way (to learn how we reflect upon each digital project after completion, go to Molly Bragg’s blog post on post-mortem reports).

If your curiosity is piqued as to how the Duke Chronicle looks online, the Fall 1959-Spring 1970 and January 1980-February 1989 issues are already available to view in our digital collections. The 1970s Chronicle is the next decade slated for publication, followed by the 1950s. Though this isn’t a comprehensive detailed account of the digitization process, I hope it provides you with a clearer picture of how we bring a collection, like the Duke Chronicle, into digital existence.

The Beauty of Auto Crop

One of the most tedious and time-consuming tasks we do in the Digital Production Center is cropping and straightening still image files. Our student workers spend hours at our computers, meticulously straightening and cropping extraneous background space out of hundreds of thousands of photographed images using Adobe Photoshop. This process is necessary to present a clean, concise image for our digital collections, but it delays the completion of our projects and requires a lot of student labor. Auto cropping software has long been sought after in digital imaging, but few developers have been able to make it work efficiently for all materials. The Digital Production Center’s Zeutschel overhead scanner utilizes auto cropping software, but the scanner can only be used with completely flat media, due to its limited depth of field. Thicker and more fragile materials must be photographed using our Phase One digital camera system, shown below.

Capture One’s Cultural Heritage software includes the auto crop feature.

Recently, Digital Transitions, the supplier of Phase One equipment and its accompanying Capture One software, announced a software update that includes an auto crop and straightening feature. The new software is called Capture One Cultural Heritage and is specifically designed for use in libraries and archival institutions. The auto crop feature, previously unavailable in Capture One, is a real breakthrough, and there are several options for how to use it.

First of all, the user can choose to auto crop “On Capture” or “On Crop.” That is, the software can auto crop instantly, right after a photograph has been taken (On Capture), or it can be applied to an image, or batch of images, at a later time (On Crop). You can also choose between auto cropping at a fixed size or by the edge of the material. For instance, if you are photographing a collection of posters that are all 18″ x 24″, you would choose “Fixed Size” and set the primary crop to 18 x 24, or slightly larger if you want your images to have an outer border. The software recognizes the rectangular shape and applies the crop. If you are photographing a collection of materials in a variety of different sizes, you would choose “Generic,” which tells the software to crop wherever it sees a difference between the edge of the material and the background. “Padding” can be used to give those images a border.
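
Capture One’s auto crop is proprietary, so we can’t show its internals, but the idea behind the “Generic” mode (find where the object ends and the background begins, then add padding) can be illustrated with a short Python sketch using Pillow. This is a conceptual stand-in, assuming a dark background behind a lighter object, not what the software actually does.

```python
from PIL import Image

def generic_auto_crop(path: str, padding: int = 20, threshold: int = 40) -> Image.Image:
    """Crop to the object's bounding box, assuming a dark background and a lighter object."""
    img = Image.open(path)
    # Threshold a grayscale copy: background pixels go black, object pixels go white.
    mask = img.convert("L").point(lambda p: 255 if p > threshold else 0)
    bbox = mask.getbbox()              # smallest box containing all non-black pixels
    if bbox is None:                   # nothing detected; leave the image untouched
        return img
    left, top, right, bottom = bbox
    # Expand the box by the requested padding, clamped to the image edges.
    padded = (max(left - padding, 0), max(top - padding, 0),
              min(right + padding, img.width), min(bottom + padding, img.height))
    return img.crop(padded)
```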

The Digital Production Center’s Phase One camera system.

Because Capture One utilizes raw files, the auto crops are non-destructive edits. One benefit of this is that if your background color is close to the color of your material, you can temporarily adjust the contrast of the photograph to darken the edges of the object, enhancing the delineation between object and background. Next, apply the auto crop, which will be more successful due to its ability to recognize the newly defined edges of the material. After the crops are applied, you can reverse the contrast adjustment, returning the images to their original state while still keeping the newly generated crops.
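
That contrast trick can be sketched the same way: compute the crop box on a temporarily contrast-boosted copy, then apply it to the untouched original, which mimics the non-destructive workflow. Again, this is only an illustration, using Pillow’s autocontrast in place of a manual levels adjustment.

```python
from PIL import Image, ImageOps

def crop_via_boosted_copy(path: str, threshold: int = 40) -> Image.Image:
    """Find the object's edges on a high-contrast copy, but crop the original pixels."""
    original = Image.open(path)
    # The boosted copy exists only to sharpen the object/background boundary; it is never saved.
    boosted = ImageOps.autocontrast(original.convert("L"), cutoff=2)
    bbox = boosted.point(lambda p: 255 if p > threshold else 0).getbbox()
    return original.crop(bbox) if bbox else original
```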

Temporarily increasing the contrast of your images can help the auto crop feature find the edges of the object.

Like a lot of technological advances, reliable auto cropping seemed like a fantasy just a few years ago, but it is now a reality. It doesn’t work perfectly every time, and quality control is still necessary to uncover errors, but it’s a big step forward. The only disconcerting thing is the larger question facing our society: how long will it be before our work is completely automated, and humans are left behind?

…and We’re Putting it on Wax (The Frank Clyde Brown Collection)

My last several posts have focused on endangered–some would say obsolete–audio formats: open reel tape, compact cassette, DAT, and MiniDisc. In this installment, we travel back to the dawn of recorded sound, at the turn of the 20th century, to investigate some of the earliest commercial recording media. Unlike the formats above, which rely on post-WW2 magnetic and optical technology, these systems carved sound waves into stone (or, more accurately, wax) using strictly acousto-mechanical principles.

Thomas Edison is credited with inventing the first phonograph (“sound writer”) on July 18, 1877. It consisted of tinfoil wrapped around a hand-cranked metal cylinder. Sound waves were funneled through a horn, causing a stylus to vibrate and indent a groove around the outside of the cylinder. The cylinder could be played by reversing the procedure: by retracing the groove with the stylus, the sound was amplified back through the horn and heard as a rough approximation of the original.


Alexander Graham Bell quickly improved on the innovation by introducing wax as a superior material for the cylinders and using a needle to scratch the sound waves into their surface. He called his device the “Graphophone.” By 1888, Edison had also adopted wax as the preferred medium for recorded cylinders, and a patent-sharing agreement was signed. In 1889, the wax cylinder became the first commercially marketed audio medium.


Initially, the cylinders were installed in the ancestors of jukeboxes in public places. Drop a coin into the slot, and the machine would magically dispense a song, monologue, or comedy routine. The technology was soon adapted for home use. Consumers could purchase prerecorded cylinders to play on their machines. Perhaps more amazingly, they could buy a home recording attachment and cut their own content onto the wax.

[PAUSE—shift from PLAY to RECORD mode]


Biographical and Historical Note

Frank Clyde Brown (1870-1943) served as a Professor of English at Trinity College, Duke University, from 1909 until his death. A native of Virginia, he received his Ph.D. at the University of Chicago in 1908. While at Duke University he served in many capacities, including being chairman of his department, University Marshal, and Comptroller of the University during its initial construction. These aspects of his life are chronicled in his papers held by the Duke University Archives.

This collection of materials, however, is concerned with activities to which he devoted equal time and energy: the organization of the North Carolina Folklore Society in 1913 and his personal effort to gather and record the nuances and culture of the “folk” of North Carolina and its near neighbors, which occupied him from 1912 until his death. Under the impetus of a 1912 mailing from John A. Lomax, then President of the American Folklore Society, Brown, as well as other faculty members and citizens in North Carolina, became interested in folklore and organized the North Carolina Folklore Society in 1913, with Brown as secretary-treasurer. As secretary-treasurer of this organization from its inception until his death, he provided the organizational impetus behind the Society. Through his course in folklore at Duke, he also sent class after class out to gather the folklore of their locales, both during their studies and afterward. And virtually every summer he could be found in the most remote parts of the state, with notebook and recorder — first a dictaphone employing cylinders, and later a machine employing aluminum discs provided for his use by the University. The result, by 1943, was a collection of about 38,000 written notes on lore, 650 musical scores, 1,400 songs vocally recorded, and numerous magazine articles, student theses, books, lists, and other items related to this study. The material originated in at least 84 North Carolina counties, with about 5 percent originating in 20 other states and Canada, and came from the efforts of 650 other contributors besides Brown himself.

[STOP]



[POWER OFF]

Thanks to our Audiovisual Archivist, Craig Breaden, for the excellent photos and unused title suggestion (“The Needle and the Damage Done”). Future posts will include updates on work with the Frank C. Brown Collection, other audio collections at Duke, and the history of sound recording and reproduction.

 

Sports Information negatives sneak preview

We all probably remember having to pose for an annual class photograph in primary school. If you made the mistake of telling your mother about the looming photograph beforehand, you probably had to wear something “nice” and had your hair plastered to your head while she informed you of the trouble you’d be in if you made a funny face. Everyone looks a little awkward in these photographs, and only a few of us wanted to have the picture taken in the first place. Frankly, I’m amazed that they got us all to sit still long enough to take the photograph. Some of us also had similar photographs taken while participating in team sports, which led to some equally interesting shots.

These are some of the memories that have been popping up this past month as I digitize nitrate negatives from the Sports Information Office: Photographic Negatives collection, circa 1924-1992, 1995 and undated. The collection contains photographic negatives related to sports at Duke. I’ve digitized about half of the negatives and seen images mostly from football, basketball, baseball, and boxing. The majority of these photographs are of individuals, but there are also team shots, group shots, and coaches. While you may have to wait a bit for the publication of these negatives through the Digital Collections website, I had to share some of these gems with you.

Some of the images strike me as funny for the expressions, some for the pose, and others for the totally out-of-context background. It makes me wonder what the photographer’s intention or instruction was.

To capture these wonderful images we are using a recently purchased Hasselblad FlexTight X5. The Hasselblad is a dedicated high-end film scanner that uses glassless drum scanning technology. Glassless drum scanning takes advantage of the benefits of a classic drum scanner (high resolution, sharpness, better D-max/D-min) without the disadvantages (wet-mounting mess, Newton rings, long scan times, price). This device produces reproductions so sharp that the film grain can be seen in the digital image. A few more important features of this scanner: it can digitize a wide variety of standard film sizes along with custom sizes, and it captures in a raw file format.

This matters because negatives contain a great deal of tonal information that printed photographs do not. Once this information is captured, we have to adjust each digital image as if we were printing the negative in a traditional darkroom. When image editing software adjusts an image, an algorithm makes decisions about compressing, expanding, keeping, or discarding tonal information in the digital image. This type of adjustment causes data loss. Because we are following archival imaging standards, retaining the largest amount of data is important. Sometimes the data loss is not visible to the naked eye, but making adjustments renders the image data “thin”: the more adjustments to an image, the less data there is to work with.

A histogram is a visual representation of tonal data in an image. This is a histogram of an image before and after an adjustment.

It kind of reminds me of the scene in The Shawshank Redemption (spoiler alert) where the warden is in Andy Dufresne’s (Tim Robbins) cell after discovering he has escaped. The warden throws a rock at a poster on the wall in anger, only to find there is a hole in the wall behind the poster. An adjusted digital image is similar in that the image looks normal and solid, but there is no depth to it. This becomes a problem if anyone, after digitization, wants to reuse the image in some other context where they will need to make adjustments to suit their purposes. They won’t have a whole lot of latitude to make adjustments before digital artifacts start appearing. By using the Hasselblad raw file format and capturing in 16-bit RGB, we are able to make adjustments to the raw file without data loss. This enables us to create a robust file that will be more useful in the future.
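
The “thin” histogram effect is easy to demonstrate with a toy example: apply the same levels stretch to an 8-bit and a 16-bit version of the same data and count how many tonal values survive. This is a synthetic illustration of the principle, not our actual workflow (the real adjustments happen on the raw scans).

```python
import numpy as np

rng = np.random.default_rng(0)
# A stand-in "negative scan": smooth mid-tone data, stored at two bit depths.
scene = rng.normal(0.5, 0.1, size=1_000_000).clip(0, 1)
img8 = (scene * 255).astype(np.uint8)       # 8-bit capture
img16 = (scene * 65535).astype(np.uint16)   # 16-bit capture

def stretch(pixels, lo, hi, maxval):
    """A simple levels adjustment: remap the range [lo, hi] to the full tonal range."""
    out = (pixels.astype(np.float64) - lo) / (hi - lo) * maxval
    return out.clip(0, maxval).astype(pixels.dtype)

adj8 = stretch(img8, 64, 192, 255)
adj16 = stretch(img16, 16384, 49152, 65535)

# Count how many distinct tonal values survive the same adjustment at each bit depth.
print("8-bit values remaining: ", np.unique(adj8).size)        # gaps appear in the histogram
print("16-bit values remaining:", np.unique(adj16).size)       # still tens of thousands
print("16-bit reduced to 8-bit:", np.unique(adj16 >> 8).size)  # nearly the full 256 levels
```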

I’m sure there will be many uses for the negatives in this collection. Who wouldn’t want a picture of a former Duke athlete in an odd pose in an out of context environment with a funny look on their face? Right?

The Pros and Cons of FFV1

One of the greatest challenges to digitizing moving image content isn’t the actual digitization. It’s the enormous file sizes that result, and the high costs associated with storing and maintaining those files for long-term preservation. Most cultural heritage institutions consider 10-bit uncompressed to be the preservation standard for moving image content. 10-bit uncompressed uses no file compression, as the name states, and is considered the safest, and most reliable format for moving image preservation at this time. It delivers the highest image resolution, color quality, and sharpness, while avoiding motion compensation and compression artifacts.

Unfortunately, one hour of 10-bit uncompressed video can produce a 100 gigabyte file. That’s at least 50 times larger than an audio preservation file of the same duration, and about 1000 times larger than most still image preservation files. In physical media terms, it would take 21 DVDs, or 142 CDs, to store one hour of 10-bit uncompressed video. That’s a lot of data!
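
That 100 gigabyte figure is roughly what the arithmetic predicts. Here is the back-of-the-envelope version, assuming standard-definition NTSC video (720 x 486 pixels at 29.97 frames per second) with 4:2:2 chroma subsampling and 10 bits per sample; high-definition sources scale up from there.

```python
# Back-of-the-envelope size of one hour of 10-bit uncompressed 4:2:2 SD video.
width, height = 720, 486      # NTSC standard-definition frame (assumption)
fps = 30000 / 1001            # 29.97 frames per second
bits_per_pixel = 20           # 4:2:2 sampling = 2 samples per pixel x 10 bits per sample

bits_per_frame = width * height * bits_per_pixel
bytes_per_hour = bits_per_frame * fps * 3600 / 8
print(f"{bytes_per_hour / 1e9:.0f} GB per hour")   # ~94 GB, before audio and container overhead
```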

Recently, the FFV1 codec has gained in popularity as an alternative to 10-bit uncompressed. FFV1 uses lossless compression to store digitized moving image content at reduced file sizes, without data loss. FFV1 is part of the free, open-source FFmpeg project, and has been in existence since 2003. FFV1 uses entropy encoding to deliver mathematically lossless intra-frame compression, which produces substantially smaller file sizes when compared to uncompressed 10-bit moving image digitization.

Because commercial video digitization hardware does not natively support the FFV1 codec, operation must be conducted via command-line prompts in a terminal.

 

Testing in the Digital Production Center showed that files encoded with the FFV1 codec were almost one-third the size of their 10-bit uncompressed counterparts. Both formats can be carried in a variety of wrappers, or container files, such as AVI (Microsoft), MOV (Apple), or MKV (open source). The encoded video and audio streams are wrapped together in the container with other data streams that include technical metadata. The type and variety of data that a container can hold are specific to that container format.
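
For readers curious what the terminal side of this looks like, the sketch below wraps a typical FFmpeg invocation for FFV1 (version 3) in an MKV container. The flags are standard FFmpeg options, but the exact parameters used in any given production workflow, including ours, may differ.

```python
import subprocess

def encode_ffv1(source: str, destination: str) -> None:
    """Re-encode a captured video as lossless FFV1 (version 3) in a Matroska wrapper."""
    subprocess.run([
        "ffmpeg", "-i", source,
        "-c:v", "ffv1", "-level", "3",   # FFV1 version 3
        "-g", "1",                       # intra-frame only: every frame stands on its own
        "-slicecrc", "1",                # per-slice checksums aid later integrity checking
        "-c:a", "copy",                  # pass the audio stream through untouched
        destination,
    ], check=True)

encode_ffv1("capture_10bit_uncompressed.mov", "preservation_copy.mkv")
```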

Within the terminal command line window, incoming video image and waveform readouts are displayed, while the content is compressed to FFV1.

 

The reduced file sizes produced via FFV1 are exciting, but there are some downsides. Although FFV1 is open source, the files will not play using standard video software on Mac and Windows, nor can FFV1 be utilized within commercially available digitization hardware and software (only via terminal command). This is because no major company (Apple, Microsoft, Adobe, Blackmagic, etc.) has adopted the codec or announced plans to do so. Any file format that does not eventually achieve widespread adoption and universal playback capability within the broadcasting and filmmaking communities has a higher risk of long-term obsolescence and lack of engineering support.

The concept of “lossless compression” is mysterious, and seemingly a paradox. How can it make a file smaller without eliminating or irreversibly altering any data? In testing, it is difficult to verify that a file converted (compressed) to FFV1 and then converted back (decompressed) is identical to its original state. Although the specs may be the same, the before and after file sizes are not identical, in part because each container adds its own overhead, so “lossless” and “reversible” may not be synonymous, although ideally they should be. In addition to the software and hardware compatibility issues of FFV1, it is challenging to accurately validate the integrity of a file that incorporates lossless compression.
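
One practical way to approach that verification problem is to compare checksums of the decoded frames rather than the files themselves, since the container contributes its own bytes. FFmpeg’s framemd5 muxer supports this; the sketch below is one possible way to wire it up, not a complete validation workflow.

```python
import subprocess

def frame_checksums(video: str) -> list[str]:
    """Decode a video and return an MD5 checksum for each raw video frame."""
    result = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", video, "-map", "0:v", "-f", "framemd5", "-"],
        capture_output=True, check=True, text=True,
    )
    # Each data line ends with the frame's checksum; lines starting with '#' are headers.
    return [line.rsplit(",", 1)[-1].strip()
            for line in result.stdout.splitlines()
            if line and not line.startswith("#")]

# If the FFV1 encode was truly lossless, every decoded frame hashes identically,
# even though the two container files differ in size on disk.
assert frame_checksums("capture_10bit_uncompressed.mov") == frame_checksums("preservation_copy.mkv")
```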

When MiniDiscs Recorded the Earth

My last several posts have focused on endangered audio formats: open reel tape, compact cassette, and DAT. Each of these media types boasted some advantages over their predecessors, as well as disadvantages that ultimately led to them falling out of favor with most consumers. Whether entirely relegated to our growing tech graveyard or moving into niche and specialty markets, each of the above formats has seen its brightest days and now slowly fades into extinction.

This week, we turn to the MiniDisc, a strange species that arose from Sony Electronics in 1992 and was already well on its way to being no more than a forgotten layer in the technological record by the time its production was discontinued in 2013.


The MiniDisc was a magneto-optical disc-based system that offered 74 minutes of high-quality digital audio per disc (up to 320 minutes in long-play mode). It utilized a psychoacoustic lossy compression scheme (known as ATRAC) that allowed for significant data compression with little perceptible effect on audio fidelity. This meant you could record near-perfect digital copies of CDs, tapes, or records—a revolutionary feat before the rise of writable CDs and hard disk recording. The MiniDisc platform was also popular in broadcasting and field recording. It was extremely light and portable, had excellent battery life, and possessed a number of sophisticated file editing and naming functions.

Despite these advantages, the format never gained a strong foothold in the market, for several reasons. The players were expensive, retailing at $750 at launch in December 1992, and even the smaller portable MiniDisc “Walkman” never dropped into the low consumer price range. As a result, relatively few music albums were commercially released on the format. Once affordable CD-Rs and then MP3 players came onto the scene, the MiniDisc was all but obsolete without ever truly breaking through to the mainstream.


I recently unearthed a box containing my first and only MiniDisc player, probably purchased used on eBay sometime in the early 2000s. It filled several needs for me: a field recorder (for capturing ambient sound to be used in audio art and music), a playback device for environmental sounds and backing tracks in performance situations, and a “Walkman” that was smaller, held more music, and skipped less than my clunky portable CD player.

While it was long ago superseded by other electronic tools in my kit, the gaudy metallic yellow player still evokes nostalgia. I remember the house I lived in at the time, walks to town with headphones on, excursions into the woods to record birds and creeks and escape the omnipresent hum of traffic and the electrical grid. The handwritten labels on the discs provide clues to personal interests and obsessions of the time: “Circuit Bends,” “Recess – Musique Concrete Master,” “Field Recordings 2/28/04,” “PIL – Second Edition, Keith Hudson – Pick A Dub, Sonic Youth – Sister, Velvet Underground – White Light White Heat.” The sounds and voices of family, friends, and creative collaborators populate these discs as they inhabit the recesses of my memory.


While some may look at old technology as supplanted and obsolete, I refrain from this kind of Darwinism. The current renaissance of the supposedly extinct vinyl LP has demonstrated that markets and tastes change, and that ancient audio formats can be resurrected and have vital second lives. Opto-magnetic ghosts still walk the earth, and I hear them calling. I’m keeping my Minidisc player.

Man to Fight Computers!

Duke Engineers Show in March 1965, DukEngineers

Fifty years ago this week, Duke students faced off against computers in model car races and tic-tac-toe matches at the annual Engineers’ Show.  In stark contrast to the up-and-coming computers, a Duke Chronicle article dubbed these human competitors old-fashioned and obsolete.  Five decades later, although we humans haven’t completely lost our foothold to computers, they have become a much bigger part of our daily lives than in 1965.  Yes, there are those of you out there who fear a robot coup is imminent, but we have mostly found a way to live alongside this technology we have created.  Perhaps we could call it a peaceful coexistence.

 

Zeutschel Overhead Scanner

At least, that’s how I would describe our relationship to technology here at the Digital Production Center (DPC) where I began my internship six weeks ago.  We may not have the entertaining gadgets of the Engineers’ Show, like a mechanical swimming shark or mechanical monkey climbing a pole, but we do have exciting high-tech scanners like the Zeutschel, which made such instant internet access to articles like “Man To Fight Computers” possible.  The university’s student newspaper has been digitized from fall 1959 to spring 1970, and it is an ongoing project here at the DPC to digitize the rest of the collection spanning from 1905 to 1989.

 

My first scanning project has been the 1970s Duke Chronicle issues.  While standing at the Zeutschel as it works its digitization magic, it is fascinating to read the news headlines and learn university history through pages written by and for the student population.  The Duke Chronicle has been covering campus activities since 1905 when Duke was still Trinity College.  Over the years it has captured the evolution of student life as well as the world beyond East and West Campus.  The Chronicle is like a time capsule in its own right, each issue freezing and preserving moments in time for future generations to enjoy.  This is a wonderful resource for researchers, history nerds (like me!), and Duke enthusiasts alike, and I invite you to explore the digitized collection to see what interesting articles you may find.  And don’t forget to keep checking back with BitStreams to hear about the latest access to other decades of the Duke Chronicle.

 

DukEngineer, The College of Engineering magazine, covered this particular Engineers’ Show in their April 1965 issue.

The year 1965 doesn’t seem that distant in time, yet in terms of technological advancement it might as well be eons away from where we are now.  Playing tic-tac-toe against a computer seems quaint compared to today’s game consoles and online gaming communities, but it does put things into perspective.  Since that March day in 1965, it is my hope that man and computer both have put down their boxing gloves.

Winter Cross-Training in the DPC

The Digital Production Center engages with various departments within the Libraries and across campus to preserve endangered media and create unique digital collections. We work especially closely with The Rubenstein Rare Book, Manuscript, & Special Collections Library, as they hold many of the materials that we digitize and archive on a daily basis. This collaboration requires a shared understanding of numerous media types and their special characteristics; awareness of potential conservation and preservation issues; and a working knowledge of digitization processes, logistics, and limitations.

In order to facilitate this ongoing collaboration, we recently did a semester-long cross-training course with The Rubenstein’s Reproductions Manager, Megan O’Connell. Megan is one of our main points of contact for weekly patron requests, and we felt that this training would strengthen our ability to navigate tricky and time-sensitive digitization jobs heading into the future. The plan was for Megan to work with all three of our digitization specialists (audio, video, & still image) to get a combination of hands-on and observational learning opportunities.


Still image comprises the bulk of our workload, so we decided to spend most of the training on these materials. “Still image” includes anything that we digitize via photographic or scanning technology, e.g. manuscripts, maps, bound periodicals, posters, photographs, slides, etc. We identified a group of uniquely challenging materials of this type and digitized one of each for hands-on training, including:

  • Bound manuscript – Most of these items cannot be opened more than 90 degrees. We stabilize them in a custom-built book cradle, capture the recto sides of the pages, then flip the book and capture the verso sides. The resulting files then have to be interleaved into the correct sequence (see the sketch after this list).
  • Map, or other oversize item – These types of materials are often too large to capture in one single camera shot. Our setup allows us to take multiple shots (with the help of the camera being mounted on a sliding track) which we then stitch together into a seamless whole.
  • Item with texture or different item depths, e.g. a folded map, tipped into a book – It is often challenging to properly support these items and level the map so that it is all in focus within the camera’s depth of field.
  • ANR volume – These are large, heavy volumes that typically contain older newspapers and periodicals. The paper can be very fragile and they have to be handled and supported carefully so as not to damage or tear the material.
  • Item with a tight binding w/ text that goes into the gutter – We do our best to capture all of the text, but it will sometimes appear to curve or disappear into the gutter in the resulting digital image.
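
The interleaving step mentioned in the bound-manuscript bullet above is simple but easy to get wrong by hand. Here is a hedged sketch of the idea in Python, assuming rectos were shot front-to-back and versos back-to-front after the flip; actual capture order and file names vary from project to project.

```python
def interleave(recto_files: list[str], verso_files: list[str]) -> list[str]:
    """Merge recto and verso captures into reading order.

    Assumes rectos were captured front-to-back and versos back-to-front
    (the natural order after flipping the volume), so the verso list is
    reversed before pairing. Adjust to match the actual capture order.
    """
    versos_in_order = list(reversed(verso_files))
    pages = []
    for recto, verso in zip(recto_files, versos_in_order):
        pages.extend([recto, verso])
    return pages

# A four-leaf example: capture order in, reading order out.
rectos = ["shot_001.tif", "shot_002.tif", "shot_003.tif", "shot_004.tif"]  # leaves 1-4, front sides
versos = ["shot_005.tif", "shot_006.tif", "shot_007.tif", "shot_008.tif"]  # shot after the flip, last leaf first
print(interleave(rectos, versos))
# ['shot_001.tif', 'shot_008.tif', 'shot_002.tif', 'shot_007.tif', ...]
```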


Working through this list with Megan, I was struck by the diversity of materials that we collect and digitize. The training process also highlighted the variety of tricks, techniques, and hacks that we employ to get the best possible digital transfers, given the limitations of the available technology and the materials’ condition. I came out of the experience with a renewed appreciation of the complexity of the digitization work we do in the DPC, the significance of the rare materials in the collection, and the excellent service that we are able to provide to researchers through the Rubenstein Library.


Check out Megan’s blog post on the Devil’s Tale for more on the other media formats I wasn’t able to cover in the scope of this post.

Here’s to more collaboration across boundaries in the New Year!


Digital Transitions Roundtable

In late October of this year, the Digital Production Center (along with many others in the Library) was busy developing budgets for FY 2015. We were asked to think about the needs of the department, where the bottlenecks were, and possible new growth areas. We were asked to think big. The idea was to develop a grand list and work backwards to identify what we could reasonably ask for. While the DPC is able to digitize many types of materials and formats, such as audio and video, my focus is specifically still image digitization. So that’s what I focused on.

We serve many different parts of the Library, and in order to accommodate a wide variety of requests, we use many different types of capture devices in the DPC: high-speed scanners, film scanners, overhead scanners, and high-end cameras. The most heavily used capture device is the Phase One camera system. This camera system uses a P65 60 MP digital back with a 72mm Schneider flat field lens, which enables us to capture high quality images at archival standards. The majority of material we digitize using this camera is bound volumes (most of them rare books from the David M. Rubenstein Library), but we also use it to digitize patron requests (which have increased significantly over the years; everything is expected to be digital, it seems), oversized items, glass plate negatives, high-end photography collections, and much more. It is no surprise that this camera is a bottleneck for still image production. In researching cameras to include in the budget, I was hard pressed to find another camera system that can compete with the Phase One. For over five years we have used Digital Transitions, a New York-based provider of high-end digital solutions, for our Phase One purchases and support. We have been very happy with the service, support, and equipment we have purchased from them over the years, so I contacted them to inquire about new equipment on the horizon and pricing for upgrading our current system.

One piece of new equipment they pointed me to is the BC100 book scanner. This scanner uses a 100° glass platen and two reprographic cameras to capture two facing pages at the same time. While there are other camera systems that use a similar two-camera setup (most notably the Scribe, Kirtas, and Atiz), the cameras and digital backs used with the BC100, as well as the Capture One software that drives the cameras, are better suited for cultural heritage reproduction. Along with the new BC100, Capture One is now offered in a package specifically geared toward the cultural heritage community for use with this new camera system. While inquiring about the new system, I was invited to attend a Cultural Heritage Round Table event that Digital Transitions was hosting.

This roundtable was focused on the new Capture One software for use with the BC100 and the specific needs of the cultural heritage community. I have always found the folks at Digital Transitions to be very professional, knowledgeable, and helpful. The event they put together included Jacob Frost, Application Software R&D Manager for Phase One; Doug Peterson, Technical Support, Training, and R&D at Digital Transitions; and Don Williams of Image Science Associates, Imaging Scientist. Don is also on the Still Image Digitization Advisory Board with the Federal Agencies Digitization Guidelines Initiative (FADGI), a collaborative effort by federal agencies to define common guidelines, methods, and practices for digitizing historical content. They talked about the new features of the software, the science behind the software and its color technology, and new information about the FADGI still image standard that we currently follow at the Library.

I was impressed by the information provided and the knowledge shared, but what impressed me most was that the main reason Digital Transitions pulled this particular group of users and developers together was to ask us what the cultural heritage community needed from the new software. WHAT!? What we need from the software? I’ve been doing this work for about 15 years now, and I think that’s the first time any software developer from any digital imaging company has asked our community specifically what we need. Don’t get me wrong, there is a lot of good software out there, but usually the software comes “as is.” While it is fully functional, there are usually some work-arounds to get the software to do what I need it to do. We, as a community, spent about an hour drumming up ideas for software improvements and features.

While we still need to see follow-through, I am hopeful that some of the features we discussed will show up in the software. It still needs some work to be truly beneficial (especially in post-production), but Phase One and Digital Transitions are definitely on to something.

Midnight in the Garden of Film and Video

A few weeks ago, archivists, engineers, students and vendors from across the globe arrived in the historic city of Savannah, GA for AMIA 2014. The annual conference for The Association of Moving Image Archivists is a gathering of professionals who deal with the challenge of preserving motion picture film and videotape content for future generations. Since today is Halloween, I must also point out that Savannah is a really funky city that is haunted! The downtown area is filled with weeping willow trees, well-preserved 19th century architecture and creepy cemeteries dating back to the U.S. Civil and Revolutionary wars. Savannah is almost as scary as a library budget meeting.

The bad moon rises over Savannah City Hall.

Since many different cultural heritage institutions are digitizing their collections for preservation and online access, it’s beneficial to develop universal file standards and best practices. For example, organizations like NARA and FADGI have contributed to the universal adoption of the 8-bit uncompressed TIFF file format for (non-transmissive) still image preservation. Likewise, for audio digitization, 24-bit uncompressed WAV has been universally adopted as the preservation standard. In other words, when it comes to still image and audio digitization, everyone is driving down the same highway. However, at AMIA 2014, it was apparent there are still many different roads being taken in regards to moving image preservation, with some potential traffic jams ahead. Are you frightened yet? You should be!

The smallest known film gauge: 3mm. Was it built by ancient druids?

Up until now, two file formats have been competing for dominance for moving image preservation: 10-bit uncompressed (.mov or .avi wrapper) vs. Motion JPEG2000 (MXF wrapper). The disadvantage of uncompressed has always been its enormous file size. Motion JPEG2000 incorporates lossless compression, which can reduce file sizes by 50%, but it’s expensive to implement, and has limited interoperability with most video software and players. At AMIA 2014, some were championing the use of a newer format, FFV1, a lossless codec that has compression ratios similar to JPEG2000, but is open source, and thus more widely adoptable. It is part of the FFmpeg software project. Adoption of FFV1 is growing, but many institutions are still heavily invested in 10-bit uncompressed or Motion JPEG2000. Which format will become the preservation standard, and which will become ghosts that haunt us forever?!?

Another emerging need is for content management systems that can store and provide public access to digitized video. The Hydra repository solution is being adopted by many institutions for managing preservation video files. In conjunction with Hydra, many are also adopting Avalon to provide public access for online viewing of video content. Like FFmpeg, both Hydra and Avalon are open source, which is part of their appeal. Others are building their own systems, catered specifically to their own needs, like The Museum of Modern Art. There are also competing metadata standards. For example, PBCore has been adopted by many public television stations, but is generally disliked by libraries. In fact, they find it really creepy!

A new print of Peter Pan was shown at AMIA 2014. That movie gave me nightmares as a child.

Finally, there is the thorny issue of copyright. Once file formats are chosen and delivery systems are in place, methods must be implemented to restrict access to only those intended, to protect copyright and hinder piracy. The Avalon Media System enables rights and access control to video content via guest passwords. The Library of Congress works around some of these issues another way, by setting up remote viewing rooms in Washington, DC, which are connected via fiber-optic cable to its Audio-Visual Conservation Center in Culpeper, VA. Others with more limited budgets, like Dino Everett at USC Cinematic Arts, watermark their video, upload it to sites like Vimeo, and implement temporary password protection, canceling the passwords manually after a few weeks. I mean, is there anything more frightening than a copyright lawsuit? Happy Halloween!