Category Archives: Digitization Expertise

Digitization Details: The Process of Digitizing a Collection

About four and a half years ago I wrote a blog post here on Bitstreams titled “Digitization Details: Before We Push the ‘Scan’ Button,” which described how we use color calibration, device profiling and modified viewing environments to produce “consistent results of a measurable quality” in our digital images.  About two and a half years ago, I wrote a related post, “The FADGI Still Image standard: It isn’t just about file specs,” about how the FADGI guidelines go beyond ppi and bit depth to cover UV light, vacuum tables, translucent material, oversized material and more.  I’m surprised that I have never shared the actual process of digitizing a collection, because that is what we do in the Digital Production Center.

Building digital collections is a complex endeavor that requires a cross-departmental team to analyze project proposals, perform feasibility assessments, gather project requirements, develop project plans, and document workflows and guidelines in order to produce a consistent, scalable outcome in an efficient manner.  We call our cross-departmental team the Digital Collections Implementation Team (DCIT), which includes representatives from Conservation, Technical Services, Digital Production, Metadata Architects and Digital Collections UI developers, among others.  By having representatives from each department participate, we are able to consider all perspectives, including the sticking points, technical limitations and time constraints of each department. Over time, our understanding of each other’s workflows and constraints has enabled us to refine our approach and hand projects off efficiently between departments.

I will not be going into the details of all the work other departments contribute to building digital collections (you can read just about any post on the blog for that). I will just dip my toe into what goes on in the Digital Production Center to digitize a collection.

Digitization

Once the specifics and scope of a project have been finalized, the material has been organized by Technical Services and prepared for digitization by Conservation, the material has been transferred to the Digital Production Center, and an Assessment Checklist describing the type, condition, size and number of items in the collection has been filled out, we are ready to begin the digitization process.

Digitization Guide
A starter digitization guide is created using output from ArchivesSpace, and the DPC adds 16-20 fields to capture technical metadata during the digitization process.  The digitization guide is an itemized list representing each item in a collection and is centrally stored for ease of access.
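As a rough illustration only, the guide can be pictured as a spreadsheet whose columns combine the ArchivesSpace export with the DPC's technical fields; every column name below is a hypothetical stand-in, not our actual schema.

```python
# Hypothetical sketch of a digitization guide's columns. The real guide is an
# ArchivesSpace export plus 16-20 DPC-added technical-metadata fields; the
# field names here are illustrative assumptions.
archivesspace_fields = ["box", "folder", "item_title", "date", "identifier"]
dpc_fields = ["file_name", "capture_device", "operator", "capture_date",
              "color_profile", "bit_depth", "resolution_ppi",
              "qc1_status", "qc2_status"]
digitization_guide_columns = archivesspace_fields + dpc_fields
```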

Setup
Cameras and monitors are calibrated with a spectrometer.  A color profile is built for each capture device, along with job settings in the capture software.  This produces consistent results from each capture device and an accurate representation of the items captured, which in turn removes subjective evaluation from the scanning process.

Training
Instructions are developed describing the scanning, quality control, and handling procedures for the project and students are trained.

Scanning
Following instructions developed for each collection, the scanner operator will use the appropriate equipment, settings and digitization guide to digitize the collection.  Benchmark tests are performed and evaluated periodically during the project. During the capture process the images are monitored for color fidelity and file naming errors. The images are saved in a structured way on the local drive and the digitization guide is updated to reflect the completion of an item.   At the end of each shift the files are moved to a production server.

Quality Control 1
The quality control process differs depending on the device used to capture an item and the nature of the material.  All images are inspected for correct file name, skew, clipping, banding, blocking, color fidelity, uniform crop, and color profile.  The digitization guide is updated to reflect the completion of an item.

Quality Control 2
Images are cropped (leaving no background) and saved as JPEGs for online display.  During the second pass of quality control, each image is inspected for consistency from operator to operator and image to image, skew, and other anomalies.
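For a sense of what derivative creation involves, here is a minimal sketch using Pillow; the directory names and JPEG quality setting are assumptions for illustration, not the DPC's actual tooling or specs.

```python
from pathlib import Path
from PIL import Image

src_dir = Path("archival_masters")   # assumed location of the TIFF masters
out_dir = Path("derivatives")        # assumed output location for JPEGs
out_dir.mkdir(exist_ok=True)

for tiff in sorted(src_dir.glob("*.tif")):
    with Image.open(tiff) as im:
        # Convert to RGB (JPEG has no alpha channel) and save an access copy.
        im.convert("RGB").save(out_dir / f"{tiff.stem}.jpg", "JPEG", quality=90)
```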

Finalize
During this phase we compare the digitization guide against the item and file counts of the archival and derivative images on our production server.  Discrepancies such as missing files, misnamed files and missing line items in the digitization guide are resolved.
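A minimal sketch of that reconciliation, assuming the digitization guide is exported as a CSV with an "identifier" column and that file names are based on those identifiers (both assumptions for illustration):

```python
import csv
from pathlib import Path

guide_ids = set()
with open("digitization_guide.csv", newline="") as guide:  # assumed export
    for row in csv.DictReader(guide):
        guide_ids.add(row["identifier"])

archival_ids = {p.stem for p in Path("archival_masters").glob("*.tif")}
derivative_ids = {p.stem for p in Path("derivatives").glob("*.jpg")}

print("In guide but not digitized:", sorted(guide_ids - archival_ids))
print("Digitized but not in guide:", sorted(archival_ids - guide_ids))
print("Masters missing a derivative:", sorted(archival_ids - derivative_ids))
```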

Create Checksums and Dark Storage
We then create a SHA1 checksum for each image file in the collection and push the collection into a staging area for ingest into the repository.
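In script form, this step can be as simple as hashing each file and writing a manifest; the paths and manifest layout below are illustrative, not our repository's ingest spec.

```python
import hashlib
from pathlib import Path

collection_dir = Path("staging/collection_001")   # hypothetical staging area

with open("manifest-sha1.txt", "w") as manifest:
    for image in sorted(collection_dir.rglob("*.tif")):
        sha1 = hashlib.sha1()
        with open(image, "rb") as f:
            # Read in chunks so large TIFFs don't have to fit in memory.
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                sha1.update(chunk)
        manifest.write(f"{sha1.hexdigest()}  {image.name}\n")
```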

Sometimes this process is referred to simply as “scanning”.

Not only is this process in motion for multiple projects at the same time, but the Digital Production Center also takes on remediation of legacy projects for ingest into the Duke Digital Repository, multispectral imaging, and audio and video digitization for preservation, patron and staff requests… it is quite a juggling act with lots of little details, but we love our work!

Time to get back to it so I can get to a comfortable stopping point before the Thanksgiving break!

We are Hiring!

Duke University Libraries is recruiting a Digital Production Services Manager to direct the operations of our Digital Production Center, its staff (3 FTE plus student assistants), and associated digitization services. We are seeking someone experienced in leading digitization projects who is excited to partner with colleagues around the library to reformat and preserve unique library collections and provide access to them online. This is an excellent opportunity for someone who likes working with people, projects, and primary sources!

This newly created position combines people and project management responsibilities with hands-on digitization duties. Previous supervisory experience is not required; however, the ability to direct the work of others is essential to this position, as is a service-oriented attitude. Strong organizational and project management skills are also a must. Some form of digitization experience in a library or other cultural heritage setting is required for this role as well. The successful candidate will join the highly collaborative Digital Collections and Curation Services department and work under the direct supervision of the department head.

The Digital Production Center (DPC) is a specialized unit that creates digital surrogates of primary resources from Duke University Libraries collections for the purposes of preservation and access. Learn more about the DPC on our web page, or through the Digital Strategies and Technology division’s blog, Bitstreams. To see some of the materials we have digitized, check out Duke Digital Collections online.

Duke is a diverse community committed to the principles of excellence, fairness, and respect for all people. As part of this commitment, we actively value diversity in our workplace and learning environments as we seek to take advantage of the rich backgrounds and abilities of everyone. We believe that when we understand, celebrate, and tap into our uniqueness to creatively solve problems and address shared goals, our possibilities are limitless. Duke University Libraries value diversity of thought, perspective, experience, and background and are actively committed to a culture of inclusion and respect.

Duke offers a comprehensive benefits package, which includes traditional benefits such as health insurance, leave time and retirement, as well as wide-ranging work/life and cultural benefits. Details can be found at: http://www.hr.duke.edu/benefits/index.php.

For a full job description please see https://library.duke.edu/about/jobs/dpsmanager. To apply, submit an electronic resume, cover letter, and list of 3 references: https://hr.duke.edu/careers/apply – refer to requisition #401463554. Review of applications will begin immediately and will continue until the position is filled.

Multispectral Imaging Summer Snapshots

If you are a regular Bitstreams reader, you know we just love talking about Multispectral Imaging.  Seriously, we can go on and on about it, and we are not the only ones.  This week, however, we are keeping it short and sweet and sharing a couple of before-and-after images from one of our most recent imaging sessions.

Below are two stacked images of Ashkar MS 16 (from the Rubenstein Library).  The top half of each image shows the manuscript under natural light, and the bottom half shows the results of multispectral imaging and processing.  We tend to post black and white MSI images most often, as they are generally the most legible; however, our MSI software can produce a lot of wild color variations!  The orange one below seemed the most appropriate for a hot NC July afternoon like today.  More processing details are included in the image captions below – enjoy!

The text of the manuscript above was revealed primarily with IR narrowband light at 780 nm.
This image was created using Teem, a tool used to process and visualize scientific raster data. This specific image is the result of flatfielding each wavelength image and arranging them in wavelength order to produce a vector for each pixel. The infinity norm is computed for each vector to produce a scalar value for each pixel which is then histogram-equalized and assigned a color by a color-mapping function.

A History of Videotape, Part 1

As a Digital Production Specialist at Duke Libraries, I work with a variety of obsolete videotape formats, digitizing them for long-term preservation and access. Videotape is a form of magnetic tape, consisting of a magnetized coating, usually iron oxide, on one side of a strip of plastic film; the film is there to support the coating. Magnetic tape was invented in 1928 for recording sound, but it would be several decades before it could be used for moving images, due to the increased bandwidth required to capture visual content.

Bing Crosby was the first major entertainer to push for audiotape recordings of his radio broadcasts. In 1951, his company, Bing Crosby Enterprises (BCE), debuted the first videotape technology to the public.

Television was live in the beginning, because there was no way to pre-record the broadcast other than with traditional film, which was expensive and time-consuming. In 1951, Bing Crosby Enterprises (BCE), owned by actor and singer Bing Crosby, demonstrated the first videotape recording. Crosby had previously incorporated audiotape recording into the production of his radio broadcasts, so that he would have more time for other commitments, like golf! Instead of having to do a live radio broadcast once a week for a month, he could record four broadcasts in one week, then have the next three weeks off. The 1951 demonstration ran quarter-inch audiotape at 360 inches per second, using a modified Ampex 200 tape recorder, but the images were reportedly blurry and not broadcast quality.

Ampex introduced 2” quadruplex videotape at the National Association of Broadcasters convention in 1956. Shown here is a Bosch 2″ quadruplex machine.

More companies experimented with the emerging technology in the early 1950s, until Ampex introduced 2” black and white quadruplex videotape at the National Association of Broadcasters convention in 1956. This was the first videotape format of broadcast quality. Soon, television networks were broadcasting pre-recorded shows on quadruplex and were able to present them at different times in all four U.S. time zones. Some of the earliest videotape broadcasts were CBS’s “The Edsel Show,” CBS’s “Douglas Edwards with the News,” and NBC’s “Truth or Consequences.” In 1958, Ampex debuted a color quadruplex videotape recorder. NBC’s “An Evening with Fred Astaire” was the first major TV show to be videotaped in color, also in 1958.

Virtually all the videotapes of the first ten years (1962-1972) of “The Tonight Show with Johnny Carson” were taped over by NBC to save money, so no one has seen these episodes since broadcast, nor will they… ever.

One of the downsides to quadruplex is that the videotapes could only be played back using the same tape heads that originally recorded the content. Those tape heads wore out very quickly, which meant that many tapes could not be reliably played back using the new tape heads that replaced the exhausted ones. Quadruplex videotapes were also expensive, about $300 per hour of tape. So, many TV stations stretched that investment by continually erasing tapes and recording the next broadcast on the same tape. Unfortunately, because of this, many classic TV shows are lost forever, like the vast majority of the first ten years (1962-1972) of “The Tonight Show with Johnny Carson,” and Super Bowl II (1968).

Quadruplex was the industry standard until the introduction of 1” Type C in 1976. Type C video recorders required less maintenance, were more compact and enabled new functions like still frame, shuttle and slow motion, and 1” Type C did not require time base correction the way 2” quadruplex did. Type C is a composite videotape format, with quality that matches later component formats like Betacam. Composite video merges the color channels so that the signal is consistent with a broadcast signal. Type C remained popular for several decades, until videocassettes gained in popularity. We will explore those in a future blog post.

Let’s Get Small: a tribute to the mighty microcassette

In past posts, I’ve paid homage to the audio ancestors with riffs on such endangered–some might say extinct–formats as DAT and Minidisc.  This week we turn our attention to the smallest (and perhaps the cutest) tape format of them all:  the Microcassette.

Introduced by the Olympus Corporation in 1969, the Microcassette used the same width of tape (3.81 mm) as the more common Philips Compact Cassette but housed it in a much smaller and less robust plastic shell.  The Microcassette also spooled from right to left (opposite of the compact cassette) and used slower recording speeds of 2.4 and 1.2 cm/s.  The speed setting, which allowed for longer uninterrupted recording times, could be toggled on the recorder itself.  For instance, the original MC60 Microcassette allowed for 30 minutes of recorded content per “side” at standard speed and 60 minutes per side at low speed.

The microcassette was mostly used for recording voice–e.g. lectures, interviews, and memos.  The thin tape (prone to stretching) and slow recording speeds made for a low-fidelity result that was perfectly adequate for the aforementioned applications, but not up to the task of capturing the wide dynamic and frequency range of music.  As a result, the microcassette was the go-to format for cheap, portable, hand-held recording in the days before the smartphone and digital recording.  It was standard to see a cluster of these around the lectern in a college classroom as late as the mid-1990s.  Many of the recorders featured voice-activated recording (to prevent capturing “dead air”) and continuously variable playback speed to make transcription easier.

The tiny tapes were also commonly used in telephone answering machines and dictation machines.

As you may have guessed, the rise of digital recording, handheld devices, and cheap data storage quickly relegated the microcassette to a museum piece by the early 21st century.  While the compact cassette has enjoyed a resurgence as a hip medium for underground music, the poor audio quality and durability of the microcassette have largely doomed it to oblivion except among the most willful obscurantists.  Still, many Rubenstein Library collections contain these little guys as carriers of valuable primary source material.  That means we’re holding onto our Microcassette player for the long haul in all of its atavistic glory.

Image by the author. Other images in this post taken from Wikimedia Commons (https://commons.wikimedia.org/wiki/Category:Microcassette).

The Outer Limits of Aspect Ratios

“There is nothing wrong with your television set. Do not attempt to adjust the picture. We are controlling transmission. We will control the horizontal. We will control the vertical. We repeat: there is nothing wrong with your television set.”

That was part of the cold open of one of the best science fiction shows of the 1960s, “The Outer Limits.” The implication was that, by controlling everything you saw and heard in the next hour, the show’s producers were about to blow your mind and take you to the outer limits of human thought and fantasy, which the show often did.

In regard to controlling the horizontal and the vertical, one of the more mysterious parts of my job is dealing with aspect ratios when it comes to digitizing videotape. The aspect ratio of any shape is the proportion of its dimensions. For example, the aspect ratio of a square is always 1 : 1 (width : height). That means that in any square, the width is always equal to the height, regardless of whether the square is 1 inch wide or 10 feet wide. Traditionally, television sets displayed images in a 4 : 3 ratio. So, if you owned a 20” CRT (cathode ray tube) TV back in the olden days, like say 1980, the broadcast image on the screen was 16” wide by 12” high, so the height was 3/4 the size of the width, or 4 : 3. The 20” dimension was determined by measuring the rectangle diagonally, and was mainly used to categorize and advertise the TV.
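The arithmetic checks out: a 16 x 12 inch screen is a 3-4-5 right triangle scaled up by four, so its diagonal measures exactly 20 inches.

```python
import math

width_in, height_in = 16, 12
print(height_in / width_in)             # 0.75, i.e. 3/4 -> a 4:3 frame
print(math.hypot(width_in, height_in))  # 20.0, the advertised diagonal
```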


Almost all standard-definition analog videotapes, like U-matic, Beta and VHS, have a 4 : 3 aspect ratio. But when digitizing the content, things get more complicated. Analog video monitors display pixels that are tall and thin in shape: the height of these pixels is greater than their width, whereas modern computer displays use pixels that are square. On an analog video monitor, NTSC video displays roughly 720 of these tall, skinny pixels per horizontal line, and there are 486 visible horizontal lines. If you do the math, 720 x 486 is not 4 : 3. But because the pixels are narrower than they are tall, you need more of them across each line to fill up a 4 : 3 video monitor frame.
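Working through the numbers: a square-pixel 4 : 3 frame with 486 lines would need only 648 pixels per line, so each of the 720 NTSC pixels must be roughly 0.9 times as wide as it is tall.

```python
frame_aspect = 4 / 3
lines = 486
square_pixels_per_line = frame_aspect * lines   # 648
ntsc_pixels_per_line = 720

# Approximate pixel aspect ratio implied by these figures (width / height).
pixel_aspect = square_pixels_per_line / ntsc_pixels_per_line
print(round(pixel_aspect, 2))                          # 0.9
print(ntsc_pixels_per_line * pixel_aspect / lines)     # ~1.333, i.e. 4:3
```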


When Duke Libraries digitizes analog video, we create a master file that is 720 x 486 pixels, so that if someone from the broadcast television world later wants to use the file, it will be native to that traditional standard-definition broadcast specification. However, in order to display the digitized video on Duke’s website, we make a new file, called a derivative, with the dimensions changed to 640 x 480 pixels, because it will ultimately be viewed on computer monitors, laptops and smart phones, which use square pixels. Because the pixels are square, 640 x 480 is mathematically a 4 : 3 aspect ratio, and the video will display properly. The derivative video file is also compressed, so that it will stream smoothly regardless of internet bandwidth limits.
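As a sketch of that conversion (not the Libraries' actual recipe), the derivative could be produced by calling ffmpeg from Python; the file names, codec, and quality settings below are assumptions for illustration.

```python
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "master_720x486.mov",          # hypothetical archival master
    "-vf", "scale=640:480,setsar=1",     # resample to square pixels, 4:3 frame
    "-c:v", "libx264", "-crf", "23",     # compress so it streams smoothly
    "-c:a", "aac", "-b:a", "128k",
    "derivative_640x480.mp4",            # hypothetical access derivative
], check=True)
```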

“We now return control of your television set to you. Until next week at the same time, when the control voice will take you to – The Outer Limits.”

Multispectral Imaging Through Collaboration

I am sure you have all been following the Library’s exploration into Multispectral Imaging (MSI) here on Bitstreams, Preservation Underground and the News & Observer.  Previous posts have detailed our collaboration with R.B. Toth Associates and the Duke Eye Center, the basic process and equipment, and the wide range of departments that could benefit from MSI.  In early December of last year (that sounds like it was so long ago!), we finished readying the room for MSI capture, installed the equipment, and went to MSI boot camp.

Obligatory before and after shot. In the bottom image, the new MSI system is in the background on the left, with the full spectrum system that we have been using for years on the right. Other additions to the room are blackout curtains, neutral gray walls and black ceiling tiles, all to control light spill between the two camera systems. Full spectrum overhead lighting and a new tile floor were also installed, as is standard for an imaging lab in the Library.

Well, boot camp came to us. Meghan Wilson, an independent contractor who has worked with R.B. Toth Associates for many years, started our training with an overview of the equipment and the basic science behind it. She covered the different lighting schemes and when they should be used.  She explained MSI applications for identifying resins, adhesives and pigments and how to use UV lighting and filters to expose obscured text.  We quickly went from talking to doing.  As with any training session worth its salt, things went awry right off the bat (not Meghan’s fault).  We had powered up the equipment, but the camera would not communicate with the software and the lights would not fire when the shutter was triggered.  This was actually a good experience, because we had to troubleshoot on the spot and figure out what was going on together as a team.  It turns out that six different pieces of equipment have to be powered up in a specific sequence in order for the system to communicate properly (tee up the Apollo 13 soundtrack). Once we got the system up and running, we took turns driving the software and hardware to capture a number of items that we had pre-selected.  This is an involved process that produces a large set of files which are eventually combined into an image stack that can be manipulated using specialized software.  When it’s all said and done, the files have been converted, cleaned, flattened and manipulated, and the variations produced number somewhere in the neighborhood of 300 files. Whoa!

This is not your parents’ point and shoot—not the room, the lights, the curtains, the hardware, the software, the pricetag, none of it. But it is different in another more important way too. This process is team-driven and interdisciplinary. Our R&D working group is diverse and includes representatives from the following library departments.

  • The Digital Production Center (DPC) has expertise in high-end, full spectrum imaging for cultural heritage institutions along with a deep knowledge of the camera and lighting systems involved in MSI, file storage, naming and management of large sets of files with complex relationships.
  • The Duke Collaboratory for Classics Computing (DC3) offers a scholarly and research perspective on papyri, manuscripts, etc., as well as experience with MSI and other imaging modalities.
  • The Conservation Lab brings expertise in the Libraries’ collections and a deep understanding of the materiality and history of the objects we are imaging.
  • Duke Libraries’ Data Visualization Services (DVS) has expertise in the processing and display of complex data.
  • The Rubenstein Library’s Collection Development brings a deep understanding of the collections, provenance and history of materials, and valuable contacts with researchers near and far.

To get the most out of MSI we need all of those skills and perspectives. What MSI really offers is the ability to ask—and we hope answer—strings of good questions. Is there ink beneath that paste-down or paint? Is this a palimpsest? What text is obscured by that stain or fire-damage or water damage? Can we recover it without having to intervene physically? What does the ‘invisible’ text say and what if anything does this tell us about the object’s history? Is the reflectance signature of the ink compatible with the proposed date or provenance of the object? That’s just for starters. But you can see how even framing the right question requires a range of perspectives; we have to understand what kinds of properties MSI is likely to illuminate, what kinds of questions the material objects themselves suggest or demand, what the historical and scholarly stakes are, what the wider implications for our and others’ collections are, and how best to facilitate human interface with the data that we collect. No single person on the team commands all of this.

Working in any large group can be a challenge. But when it all comes together, it is worth it. Below are two renderings of a page from Jantz 723: one processed as a black and white image, and the other a Principal Component Analysis produced from the MSI capture and processed using ImageJ and a set of tools created by Bill Christens-Barry of R.B. Toth Associates, with false color applied using Photoshop. Using MSI we were able to better reveal this watermark, which had previously been obscured.

Jantz 723

I think we feel like 16-year-old kids with newly minted drivers’ licenses who have never driven a car on the highway or out of town. A whole new world has just opened up to us, and we are really excited and a little apprehensive!

What now?

Practice, experiment, document, refine. Over the next 12 (16? 18) months we will work together to hone our collective skills, driving the system, deepening our understanding of the scholarly, conservation, and curatorial use-cases for the technology, optimizing workflow, documenting best practices, getting a firm grip on scale, pace, and cost of what we can do. The team will assemble monthly, practice what we have learned, and lean on each other’s expertise to develop a solid workflow that includes the right expertise at the right time.  We will select a wide variety of materials so that we can develop a feel for how far we can push the system and what we can expect day to day. During all of this practice, workflows, guidelines, policies and expectations will come into sharper focus.

As you can tell from the above, we are going to learn a lot over the coming months.  We plan to share what we learn via regular posts here and elsewhere.  Although we are not yet prepared to offer MSI as a standard library service, we are interested to hear your suggestions for Duke Library collection items that may benefit from MSI imaging.  We have a long queue of items that we would like to shoot, and we are excited to add more research questions, use cases, and new opportunities to push our skills forward.  To suggest materials, contact Molly Bragg, Digital Collections Program Manager (molly.bragg at Duke.edu); Joshua Sosin, Associate Professor in Classical Studies & History (jds15 at Duke.edu); or Andrew Armacost, Curator of Collections (andrew.armacost at Duke.edu).

Want to learn even more about MSI at DUL?

Cutting Through the Noise

Noise is an inescapable part of our sonic environment.  As I sit at my quiet library desk writing this, I can hear the undercurrent of the building’s pipes and HVAC systems, the click-clack of the Scribe overhead book scanner, footsteps from the floor above, doors opening and closing in the hallway, and the various rustlings of my own fidgeting.  In our daily lives, our brains tune out much of this extraneous noise to help us focus on the task at hand and be alert to sounds conveying immediately useful information: a colleague’s voice, a cell-phone buzz, a fire alarm.

When sound is recorded electronically, however, this tuned-out noise is often pushed to the foreground.  This may be due to the recording conditions (e.g. a field recording done on budget equipment in someone’s home or outdoors) or inherent in the recording technology itself (electrical interference, mechanical surface noise).  Noise is always present in the audio materials we digitize and archive, many of which are interviews, oral histories, and events recorded to cassette or open reel tape by amateurs in the field.  Our first goal is to make the cleanest and most direct analog-to-digital transfer possible, and then save this as our archival master .wav file with no alterations.  Once this is accomplished, we have some leeway to work with the digital audio and try to create a more easily listenable and intelligible access copy.


I recently started experimenting with Steinberg WaveLab software to clean up digitized recordings from the Larry Rubin Papers.  This collection contains some amazing documentation of Rubin’s work as a civil rights organizer in the 1960s, but the ever-present hum & hiss often threaten to obscure the content.  I worked with two plug-ins in WaveLab to try to mitigate the noise while leaving the bulk of the audio information intact.


Even if you don’t know it by name, anyone who has used electronic audio equipment has probably heard the dreaded 60 Cycle Hum.  This is a fixed low-frequency tone that comes from the North American electric power grid, which delivers alternating current at 60 Hz (120 volts in the United States).  Due to improper grounding and electromagnetic interference from nearby wires and appliances, this current can leak into our audio signals and appear as the ubiquitous 60 Hz hum (disclaimer–you may not be able to hear this as well on tiny laptop speakers or earbuds).  WaveLab’s De-Buzzer plug-in allowed me to isolate this troublesome frequency and reduce its volume drastically in relation to the interview material.  Starting from a recommended preset, I adjusted the sensitivity of the noise reduction by ear to cut unwanted hum without introducing any obvious digital artifacts into the sound.
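The same idea can be illustrated outside of WaveLab: the SciPy sketch below notches out 60 Hz and its first two harmonics from a hypothetical mono transfer. It is a stand-in for the concept, not the De-Buzzer plug-in, and the file names and filter Q are assumptions.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import iirnotch, filtfilt

rate, audio = wavfile.read("interview_transfer.wav")   # assumed mono WAV file
cleaned = audio.astype(np.float64)

# Apply a narrow notch at the hum fundamental and its first two harmonics.
for hum_hz in (60, 120, 180):
    b, a = iirnotch(w0=hum_hz, Q=30.0, fs=rate)
    cleaned = filtfilt(b, a, cleaned)

wavfile.write("interview_dehummed.wav", rate, cleaned.astype(np.int16))
```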


Similarly omnipresent in analog audio is High-Frequency Hiss.  This wash of noise is native to any electrical system (see Noise Floor) and is especially problematic in tape-based media where the contact of the recording and playback heads against the tape introduces another level of “surface noise.”  I used the De-Noiser plug-in to reduce hiss while being careful not to cut into the high-frequency content too much.  Applying this effect too heavily could make the voices in the recording sound dull and muddy, which would be counterproductive to improving overall intelligibility.
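Broadband hiss calls for a different approach than a notch: de-noisers generally estimate a noise spectrum and attenuate the parts of the signal that fall below it. The rough spectral-gating sketch below illustrates that idea in SciPy; it is not WaveLab's De-Noiser, and the file name, the assumption that the first half second is noise-only, and the threshold and floor values are all illustrative.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

rate, audio = wavfile.read("interview_dehummed.wav")    # assumed mono WAV file
audio = audio.astype(np.float64)

# Estimate the noise spectrum from a stretch assumed to contain only noise.
f, t, Z = stft(audio, fs=rate, nperseg=2048)
noise_profile = np.abs(Z[:, t < 0.5]).mean(axis=1, keepdims=True)

# Keep bins well above the noise estimate; gently attenuate the rest rather
# than zeroing them, which would sound dull and artificial.
gain = np.where(np.abs(Z) > 2.0 * noise_profile, 1.0, 0.2)
_, cleaned = istft(Z * gain, fs=rate, nperseg=2048)

wavfile.write("interview_denoised.wav", rate, cleaned.astype(np.int16))
```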

Listen to the before & after audio snippets below.  While the audio is still far from perfect due to the original recording conditions, conservative application of the noise reduction tools has significantly cleaned up the sound.  It’s possible to cut the noise even further with more aggressive use of the effects, but I felt that would do more harm than good to the overall sound quality.

BEFORE:

AFTER:

I was fairly pleased with these results and plan to keep working with these and other software tools in the future to create digital audio files that meet the needs of archivists and researchers.  We can’t eliminate all of the noise from our media-saturated lives, but we can always keep striving to keep the signal-to-noise ratio at manageable and healthy levels.


Presto! The Magic of Instantaneous Discs

This week’s post is inspired by one of the more fun aspects of digitization work:  the unexpected, unique, and strange audio objects that find their way to my desk from time to time.  These are usually items that have been located in our catalog via Internet search by patrons, faculty, or library staff.  Once the item has been identified as having potential research value and a listening copy is requested, it comes to us for evaluation and digital transfer.  More often than not it’s just your typical cassette or VHS tape, but sometimes something special rises to the surface…


The first thing that struck me about this disc from the James Cannon III Papers was the dreamy contrast of complementary colors.  An enigmatic azure label sits atop a translucent yellow grooved disc.  The yellow has darkened over time in places, almost resembling a finely aged wheel of cheese.  Once the initial mesmerization wore off,  I began to consider several questions.  What materials is it made out of?  How can I play it back?  What is recorded on it?

A bit of research confirmed my suspicion that this was an “instantaneous disc,” a one-of-a-kind record cut on a lathe in real time as a musical performance or speech is happening.  Instantaneous discs are a subset of what are typically known as “lacquers” or “acetates” (the former being the technically correct term used by recording engineers, and the latter referring to the earliest substance they were manufactured with).  These discs consist of a hard substrate coated with a material soft enough to cut grooves into, but durable enough to withstand being played back on a turntable.  This particular disc seems to be made of a fibre-based material with a waxy coating.  The Silvertone label was owned by Sears, who had their own line of discs and recorders.  Further research suggested that I could probably safely play the disc a couple of times on a standard record player without damaging it, providing I used light stylus pressure.

Playback revealed (in scratchy lo-fi form) an account of a visit to New York City, which was backed up by adjacent materials in the Cannon collection:


I wasn’t able to play this second disc due to surface damage, but it’s clear from the text that it was recorded in New York and intended as a sort of audio “letter” to Cannon.  These two discs illustrate the novelty of recording media in the early 20th Century, and we can imagine the thrill of receiving one of these in the mail and hearing a friend’s voice emerge from the speaker.  The instantaneous disc would mostly be replaced by tape-based media by the 1950s and ’60s, but the concept of a “voice message” has persisted to this day.

If you are interested in learning more about instantaneous discs, you may want to look into the history of the Presto Recording Company.  They were one of the main producers of discs and players, and there are a number of websites out there documenting the history and including images of original advertisements and labels.


Color Bars & Test Patterns

In the Digital Production Center, many of the videotapes we digitize have “bars and tone” at the beginning of the tape. These are officially called “SMPTE color bars.” SMPTE stands for The Society of Motion Picture and Television Engineers, the organization that established the color bars as the North American video standard, beginning in the 1970s. In addition to the color bars presented visually, there is an audio tone that is emitted from the videotape at the same time, thus the phrase “bars and tone.”

SMPTE color bars

The purpose of bars and tone is to serve as a reference or target for calibrating the color and audio levels coming from the videotape during transmission. The color bars are presented at 75% intensity. The audio tone is a 1 kHz sine wave. In the DPC, we can make adjustments to the incoming signal in order to bring the target values into specification. This is done by monitoring the vectorscope output and the audio levels. Below, you can see the color bars are in proper alignment on the DPC’s vectorscope readout, after initial adjustment.
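If you ever need a matching reference tone of your own, generating a 1 kHz sine is straightforward; the sample rate, duration, and level below are arbitrary choices for illustration, not a broadcast specification.

```python
import numpy as np
from scipy.io import wavfile

rate, seconds = 48000, 5
t = np.arange(rate * seconds) / rate
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)    # 1 kHz sine at half of full scale
wavfile.write("reference_tone_1khz.wav", rate, (tone * 32767).astype(np.int16))
```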

Color bars in proper alignment with the Digital Production Center’s vectorscope readout. Each letter stands for a color: red, magenta, blue, cyan, green and yellow.

We use Blackmagic Design’s SmartView monitors to check the vectorscope, as well as waveform and audio levels. The SmartView is an updated, more compact and lightweight version of the older, analog equipment traditionally used in television studios. The SmartView monitors are integrated into our video rack system, along with other video digitization equipment, and numerous videotape decks.

The Digital Production Center’s videotape digitization system.

If you are old enough to have grown up in the black and white television era, you may recognize this old TV test pattern, commonly referred to as the “Indian-head test pattern.” This often appeared just before a TV station began broadcasting in the morning, and again right after the station signed off at night. The design was introduced in 1939 by RCA. The “Indian-head” image was integrated into a pattern of lines and shapes that television engineers used to calibrate broadcast equipment. Because the illustration of the Native American chief contained identifiable shades of gray and had fine detail in the feathers of the headdress, it was ideal for adjusting brightness and contrast.

The Indian-head test pattern was introduced by RCA in 1939.

When color television debuted in the 1960’s, the “Indian-head test pattern” was replaced with a test card showing color bars, a precursor to the SMPTE color bars. Today, the “Indian-head test pattern” is remembered nostalgically, as a symbol of the advent of television, and as a unique piece of Americana. The master art for the test pattern was discovered in an RCA dumpster in 1970, and has since been sold to a private collector.  In 2009, when all U.S. television stations were required to end their analog signal transmission, many of the stations used the Indian-head test pattern as their final analog broadcast image.