New and Recently Migrated Digital Collections

In the past 3 months, we have launched a number of exciting digital collections!  Our brand new offerings are either available now or will be very soon.  They are:

  • Duke Property Plats: https://repository.duke.edu/dc/uapropplat
  • Early Arabic Manuscripts (included in the recently migrated Early Greek Manuscripts): https://repository.duke.edu/dc/earlymss
  • International Broadsides (added to migrated Broadsides and Ephemera collection): https://repository.duke.edu/dc/broadsides
  • Orange County Tax List Ledger, 1875: https://repository.duke.edu/dc/orangecountytaxlist
  • Radio Haiti Archive, second batch of recordings: https://repository.duke.edu/dc/radiohaiti
  • William Gedney Finished Prints and Contact Sheets (newly re-digitized with new and improved metadata): https://repository.duke.edu/dc/gedney
A selection from the William Gedney Photographs digital collection

In addition to the brand new items, the digital collections team is constantly chipping away at the digital collections migration.  Here are the latest collections to move from Tripod 2 to the Duke Digital Repository (these are either available now or will be very soon):

One of the Greek items in the Early Manuscripts Collection.

Regular readers of Bitstreams are familiar with our digital collections migrations project; we first started writing about it almost 2 years ago when we announced the first collection to be launched in the new Duke Digital Repository interface.  Since then we have posted about various aspects of the migration with some regularity.

What we hoped would be a speedy transition is still a work in progress 2 years later. This is due to a variety of factors, one of which is that the work itself is very complex. Before we can move a collection into the digital repository, it has to be reviewed, all digital objects fully accounted for, and all metadata remediated and crosswalked into the DDR metadata profile. Sometimes this process requires little effort. Other times, however, especially with older collections, we have items with no metadata, metadata with no items, or numbers in our various systems that simply do not match. Tracking down the answers can require some major detective work on the part of my amazing colleagues.

Despite these challenges, we eagerly press on.  As each collection moves we get a little closer to having all of our digital collections under preservation control and providing access to all of them from a single platform.  Onward!

And Then There’s The Other Stuff… Meet FileTracker

The Duke Digital Repository is a pretty nice place if you’re a file in need of preservation and perhaps some access. Provided you’re well-described and your organizational relationship to other files and collections is well understood, you could hardly hope for a better home. But what if you’re not? What if you’re an important digitized file with only collection-level description? Or what if you’re a digital reproduction of an 18th-century encyclopedia created by a conservator to supplement traditional conservation methods? It takes time to prepare materials for the repository. We try our best to preserve the materials in the repository, but we also have to think about the other stuff.

We may apply different levels of preservation to materials depending on their source, uniqueness, cost to reproduce or reacquire, and other factors, but the baseline is knowing the objects we’re maintaining are the same objects we were given.  For that, we rely on fixity and checksums.  Unfortunately, it’s not easy to keep track of a couple of hundred terabytes of files from different collections, with different organizational schemes, different owners, and sometimes active intentional change.  The hard part isn’t only knowing what has changed, but providing that information to the owners and curators of the data so they can determine if those changes are intentional and desirable.  Seems like a lot, right?
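To make that baseline concrete, here is a minimal fixity-checking sketch in Python (a sketch only, not FileTracker’s actual code; the paths and stored checksums are hypothetical): each file is hashed in chunks and compared against the checksum recorded when the file was received.

import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1024 * 1024) -> str:
    """Hash the file in chunks so very large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_fixity(recorded: dict[str, str], root: Path) -> dict[str, list[str]]:
    """Compare current checksums to those recorded at receipt.

    `recorded` maps relative file paths to their original checksums.
    Returns paths grouped by status: unchanged, changed, or missing.
    """
    report = {"unchanged": [], "changed": [], "missing": []}
    for rel_path, expected in recorded.items():
        target = root / rel_path
        if not target.exists():
            report["missing"].append(rel_path)
        elif sha256_of(target) == expected:
            report["unchanged"].append(rel_path)
        else:
            report["changed"].append(rel_path)
    return report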

We’ve used some great tools from our colleagues, notably ACE (Audit Control Environment), for scheduled fixity reporting. We really wanted, though, to provide reporting to data owners that was tailored to the way they think about their data, to help reduce noise (with hundreds of terabytes there can be a lot of it!) and make it easier for them to identify unintentional changes. So, we got to work.

That work is named FileTracker.  FileTracker is a Rails application for tracking files and their fixity information.  It’s got a nice dashboard, too.

 

 

What we really needed, though, was a way to disentangle the work of the monitoring application from the work of stakeholder reporting.  The database that FileTracker generates makes it much easier to generate reports that contain the information that stakeholders want.  For instance, one stakeholder may want to know the number of files in each directory and the difference between the present number of files and the number of files at last audit.  We can also determine when files have been moved or renamed and not report those as missing files.
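As a rough illustration of that kind of reporting (this is not FileTracker’s schema or code, just a sketch that treats each audit as a mapping of file path to checksum), per-directory counts, the change since the last audit, and moved-or-renamed files can all be derived from two snapshots:

from collections import Counter
from pathlib import PurePosixPath

def files_per_directory(audit: dict[str, str]) -> Counter:
    """Count files under each immediate parent directory."""
    return Counter(str(PurePosixPath(path).parent) for path in audit)

def directory_deltas(previous: dict[str, str], current: dict[str, str]) -> dict[str, int]:
    """Change in file count per directory since the last audit."""
    before, after = files_per_directory(previous), files_per_directory(current)
    return {d: after[d] - before[d]
            for d in set(before) | set(after)
            if after[d] != before[d]}

def moved_files(previous: dict[str, str], current: dict[str, str]) -> dict[str, str]:
    """Paths that disappeared but whose checksums turned up elsewhere:
    likely moved or renamed, so they should not be reported as missing."""
    by_checksum = {checksum: path for path, checksum in current.items()}
    return {old_path: by_checksum[checksum]
            for old_path, checksum in previous.items()
            if old_path not in current and checksum in by_checksum}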

If you’d like to know more, see https://github.com/duke-libraries/file-tracker.

September scale-up: promoting the DDR and associated services to faculty and students

It’s September, and Duke students aren’t the only folks on campus in back-to-school mode. On the contrary, we here at the Duke Digital Repository are gearing up to begin promoting our research data curation services in real earnest. Over the last eight months, our four new research data staff have been busy getting to know the campus and the libraries, getting to know the repository itself and the tools we’re working with, and establishing a workflow. Now we’re ready to begin actively recruiting research data depositors!

As our colleagues in Data and Visualization Services noted in a presentation just last week, we’re aiming to scale up our data services in a big way by engaging researchers at all stages of the research lifecycle, not just at the very end of a research project. We hope to make this a two-front effort. Through a series of ongoing workshops and consultations, the Research Data Management Consultants aspire to help researchers develop better data management habits and take the long-term preservation and re-use of their data into account when designing a project or applying for grants. On the back-end of things, the Content Analysts will carry out many of the manual tasks that facilitate that long-term preservation and re-use, and they are beginning to think about ways to tweak our existing software to better accommodate the needs of capital-D Data.

This past spring, the Data Management Consultants carried out a series of workshops intended to help researchers navigate the often muddy waters of data management and data sharing; topics ranged from available and useful tools to the occasionally thorny process of obtaining consent for, and enabling the re-use of, data from human subjects.

Looking forward to the fall, the RDM consultants are planning another series of workshops to expand on the sessions given in the spring, covering new tools and strategies for managing research output. One of the tools we’re most excited to share is the Open Science Framework (OSF) for Institutions, which Duke joined just this spring. OSF is a powerful project management tool that helps promote transparency in research and allows scholars to associate their work and projects with Duke.

On the back-end of things, much work has been done to shore up our existing workflows, and a number of policies, both internal and external, have been approved by the Repository Program Committee. The Content Analysts continue to become more familiar with the available repository tools while weighing in on ways we can make the software work better. The better part of the summer was devoted to collecting and analyzing requirements from research data stakeholders (among others), and we hope to put those needs in the development spotlight later this fall.

All of this is to say: we’re ready for it, so bring us your data!

Squirlicorn, spirit guide of the digital repository: Four things you should know

One thing I’ve learned on my life’s journey is the importance of knowing your spirit guide.

That’s why, by far, the most important point I made in a talk at the TRLN Annual Meeting in July is that the spirit guide of the digital repository movement is the squirlicorn.


Voices from the Movement

This past year the SNCC Digital Gateway has brought a number of activists to Duke’s campus to discuss lesser-known aspects of the Student Nonviolent Coordinating Committee (SNCC)’s history and how their approach to organizing shifted over time. These sessions have ranged from the development of the Black Panther symbol for the Lowndes County Freedom Party, to the strength of local people in the Movement in Southwest Georgia, to the global network supporting SNCC’s fight for Black empowerment in the U.S. and across the African Diaspora. Next month, there will be a session focused on music in the Movement, with a public panel the evening of September 19th.

Screenshot of “Born into the Movement,” from the “Our Voices” section on the SNCC Digital Gateway.

These visiting activist sessions, often spanning the course of a few days, produce hours of audio and video material as SNCC veterans reengage with the history through conversation with their comrades. And this material is rich, as memories are dusted off and those involved explore how and why they did what they did. However, given the structure of the SNCC Digital Gateway and our desire to make these 10-hour collections of A/V material digestible and accessible, we’ve had to develop a means of breaking them down.

Step One: Transcription

As is true for many projects, you begin by putting pen to paper (or by typing furiously). With the amount of transcribing that we do for this project, we’re certainly interested in making the process as seamless as possible. We depend on ExpressScribe, which allows you to set hot keys to start, stop, rewind, and fast forward audio material. Another feature is that you can easily adjust the speed at which the recording is being played, which is helpful for keeping your typing flow steady and uninterrupted. For those who really want to dive in, there is a foot pedal extension (yes, one did temporarily live in our project room) that allows you to control the recording with your feet – keeping your fingers even more free to type at lightning speed. After transcribing, it is always good practice to review the transcription, which you can do efficiently while listening to a high speed playback.

Step Two: Selecting Clips

Once these have been transcribed (each session results in a transcript of approximately 130 single-spaced pages), it is time to select clips. For the parameters of this project, we keep the clips roughly between 30 seconds and 8 minutes long and intentionally try to pull out the most prominent themes from the conversation. We then try to fit our selections into a larger narrative that tells a story. This process takes multiple reviews of the material and a significant amount of back and forth to ensure that the narrative stays true to the sentiments of the entire conversation.

The back-end of one of our pages.

Step Three: Writing the Narrative

We want users to listen to all of the A/V material, but sometimes details need to be laid out so that the clips themselves make sense. This is where the written narrative comes in. Without detracting from the wealth of newly-created audio and video material, we try to fill in some of the gaps and contextualize the clips for those who might be less familiar with the history. In addition to the written narrative, we embed relevant documents and photographs that complement the A/V material and give greater depth to the user’s experience.

Step Four: Creating the Audio Files

With all of the chosen clips pulled from the transcript, it’s time to actually make the audio files. For each of these sessions, we have multiple recorders in the room, in order to ensure everyone can be heard on the tape and that none of the conversation is lost due to recorder malfunction. These recorders are set to record in .WAV files, an uncompressed audio format for maximum audio quality.

One complication with having multiple mics in the room, however, is that the timestamps on the files are not always one-to-one. In order to easily pull the clips from the best recording we have, we have to sync the files. Our process involves first creating a folder system on an external hard drive. We then create a project in Adobe Premiere and import the files. It’s important that these files be on the same hard drive as the project file so that Premiere can easily find them. Then, we make sequences of the recordings and match the waveform from each of the mics. With a combination of using the timestamps on the transcriptions and scrubbing through the material, it’s easy to find the clips we need. From there, we can make any post-production edits that are necessary in Adobe Audition and export them as .mp3 files with Adobe Media Encoder.

Step Five: Uploading & Populating

Due to the SNCC Digital Gateway’s sustainability requirements, we host the files in a Duke Digital Collections folder and then embed them in the website, which is built on a WordPress platform. The clips are then interwoven with text, documents, and images to tell a story.

The Inaugural TRLN Institute – an Experiment in Consortial Collaboration

In June of this year I was fortunate to participate in the inaugural TRLN Institute. Modeled as a sort of Scholarly Communication Institute for TRLN (Triangle Research Libraries Network, a consortium located in the Triangle region of North Carolina), the Institute provided space (the magnificent Hunt Library on North Carolina State University’s campus), time (three full days), and food (Breakfast! Lunch! Coffee!) for groups of 4-6 people from member libraries to get together and focus exclusively on developing innovative solutions to shared problems. Not only was it productive, it was truly delightful to spend time with colleagues from member institutions who, although we are geographically close, don’t get together often enough.

Six projects were chosen from a pool of applicants who proposed topics around this year’s theme of Scholarly Communication:

  • Supporting Scholarly Communications in Libraries through Project Management Best Practices
  • Locating Research Data in an Age of Open Access
  • Clarifying Rights and Maximizing Reuse with RightsStatements.org
  • Building a Research Data Community of Practice in NC
  • Building the 21st Century Researcher Brand
  • Scholarship in the Sandbox: Showcasing Student Works

You can read descriptions of the projects as well as group membership here.

The 2017 TRLN Institute participants and organizers, a happy bunch.

Having this much dedicated and unencumbered time to thoughtfully and intentionally address a problem area with colleagues was invaluable. And the open schedule allowed groups to be flexible as their ideas and expectations changed throughout the course of the three-day program. My own group – Clarifying Rights and Maximizing Reuse with RightsStatements.org – was originally focused on developing practices for the application and representation of RightsStatements.org statements for TRLN libraries’ online digitized collections. Through talking as a group, however, we realized early on that some of the stickiest issues in implementing a new rights management strategy involve the work an institution has to do to identify appropriate staff, allocate resources, plan, and document the process.

So, we pivoted! Instead of developing a decision matrix for applying the RS.org statements in digital collections (which is what we originally thought our output would be), we instead spent our time drafting a report – a roadmap of sorts – that describes the following important components when implementing RightsStatements.org:

  • roles and responsibilities (including questions that a person in a role would need to ask)
  • necessary planning and documentation
  • technical decisions
  • example implementations (including steps taken and staff involved – perhaps the most useful section of the report)

This week, we put the finishing touches on our report: TRLN Rights Statements Report – A Roadmap for Implementing RightsStatements.org Statements (yep, yet another Google Doc).  We’re excited to get feedback from the community, as well as to hear how other institutions are handling rights management metadata, especially as it relates to upstream archival information management. This is an area ripe for future exploration!

I’d say that the first TRLN Institute was a success. I can’t imagine my group having self-organized and produced a document in just over a month without having first had three days to work together in the same space, unencumbered by other responsibilities. I think other groups have found valuable traction via the Institute as well, which will result in more collaborative efforts. I look forward to seeing what future TRLN Institutes produce – this is definitely a model to continue!

Pink Squirrel: It really is the nuts

During the last 8 months that I’ve worked at Duke, I’ve noticed a lot of squirrels. They seem to be everywhere on this campus, and, not only that, they come closer than any squirrels that I’ve ever seen. In fact, while we were working outside yesterday, a squirrel hopped onto our table and tried to take an apple from us. It’s become a bit of a joke in my department, actually. We take every opportunity we can to make a squirrel reference.

Anyhow, since we talk about squirrels so often, I decided I’d run a search in our digital collections to see what I’d get. The only image returned was the billboard above, but I was pretty happy with it. In fact, I was so happy with it that I used this very image in my last blog post. At the time, though, I was writing about what my colleagues and I had been doing on the new research data initiative since the beginning of 2017, so I simply used it as a visual to make my coworkers laugh. However, I reminded myself to revisit and investigate. Plus, although I bartended for many years during grad school, I’d never made (much less heard of) a Pink Squirrel cocktail. Drawing inspiration from our friends in the Rubenstein Library who write for “The Devil’s Tales” in the “Rubenstein Library Test Kitchen” category, I thought I’d not only write about what I learned, but also try to recreate the drink.

This item comes from the “Outdoor Advertising Association of America (OAAA) Archives, 1885-1990s” digital collection, which includes over 16,000 images of outdoor advertisements and other scenes. It is one of several digital outdoor advertising collections that we have, which we have written about here before.

This digital collection houses six Glenmore Distilleries Company billboard images in total. Two are for liquors (a bourbon and a gin), and four are for “ready-to-pour” Glenmore cocktails.

These signs indicate that Glenmore Distilleries Company created a total of 14 ready-to-pour cocktails. I found a New York Times article from August 19, 1965 in our catalog stating that Glenmore Distilleries Co. had expanded its line to 18 drinks, which means that the billboards in our collection have to pre-date 1965. Its president, Frank Thompson Jr., was quoted as saying that he expected “exotic drinks” to account for any future surge in sales of bottled cocktails.

OK, so I learned that Glenmore Distilleries had bottled a drink called a Pink Squirrel sometime before 1965. Next, I needed to research to figure out about the Pink Squirrel. Had Glenmore created it? What was in it? Why was it PINK?

It appears the Pink Squirrel was quite popular in its day and has risen and fallen in the decades since. I couldn’t find a definitive academic source, but if one trusts Wikipedia, the Pink Squirrel was first created at Bryant’s Cocktail Lounge in Milwaukee, Wisconsin. The establishment still exists, and its website states the original bartender, Bryant Sharp, is credited with inventing the Pink Squirrel (also the Blue Tail Fly and the Banshee, if you’re interested in cocktails). Wikipedia lists 15 popular culture references for the drink, many from 90s sitcoms (I’m a child of the 80s but don’t remember this) and other more current references. I also found an online source saying it was popular on the New York cocktail scene in the late 70s and early 80s (maybe?). Our Duke catalog returns some results, as well, including articles from Saveur (2014), New York Times Magazine (2006), Restaurant Hospitality (1990), and Cosmopolitan (1981). These are mostly variations on the recipe, including cocktails made with cream, a cocktail made with ice cream (Saveur says “blender drinks” are a cherished tradition in Wisconsin), a pie(!), and a cheesecake(!!).

Armed with recipes for the cream-based and the ice cream-based cocktails, I figured I was all set to shop for ingredients and make the drinks. However, I quickly discovered that one of the three ingredients, crème de noyaux, is a liqueur that is not made in large quantities by many companies anymore, and proved impossible to find around the Triangle. However, it’s an important ingredient in this drink, not only for its nutty flavor, but also because it’s what gives it its pink hue (and obviously its name!). Determined to make this work, I decided to search to see if I could come up with a good enough alternative. I started with the Duke catalog, as all good library folk do, but with very little luck, I turned back to Google. This led me to another Wikipedia article for crème de noyaux, which suggested substituting Amaretto and some red food coloring. It also directed me to an interesting blog about none other than crème de noyaux, the Pink Squirrel, Bryant’s Cocktail Lounge, and a recipe from 1910 on how to make crème de noyaux. However, with time against me, I chose to sub Amaretto and red food coloring instead of making the 1910 homemade version.

First up was the cream-based cocktail. The drink contains 1.5 ounces of heavy cream, 0.75 ounces of white crème de cacao, and 0.75 ounces of crème de noyaux (or Amaretto with a drop of red food coloring), and is served up in a martini glass.

The result was a creamy, chocolatey flavor with a slight nuttiness, and just enough sweetness without being overbearing. The ice cream version substitutes half a cup of vanilla ice cream for the heavy cream and is blended rather than shaken. It had a thicker consistency and was much sweeter. My fellow taster and I definitely preferred the cream version. In fact, don’t be surprised if you see me around with a pink martini in hand sometime in the near future.

Nested Folders of Files in the Duke Digital Repository

Born-digital archival materials present unique challenges to representation, access, and discovery in the DDR. A hard drive arrives at the archives, and we want to preserve and provide access to the files. In addition to the content of the files, it’s often important to preserve, to some degree, the organization of the material on the hard drive in nested directories.

One challenge to representing complex inter-object relationships in the repository is the repository’s relatively simple object model. A collection contains one or more items. An item contains one or more components. And a component has one or more data streams. There’s no accommodation in this model for complex groups and hierarchies of items. We tend to talk about this as a limitation, but it also makes it possible to provide search and discovery of a wide range of kinds and arrangements of materials in a single repository and forces us to make decisions about how to model collections in sustainable and consistent ways. But we still need to preserve and provide access to the original structure of the material.

One approach is to ingest the disk image or a zip archive of the directories and files and store the content as a single file in the repository. This approach is straightforward, but makes it impossible to search for individual files in the repository or to understand much about the content without first downloading and unarchiving it.

As a first pass at solving this problem of how to preserve and represent files in nested directories in the DDR, we’ve taken a two-pronged approach. We use a simple approach to modeling disk image and directory content in the repository: every file is modeled as an item with a single component that contains the data stream of the file. This provides convenient discovery and access to each individual file from the collection in the DDR, but does not represent any folder hierarchies; the files are just a flat list of objects contained by a collection.
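A simplified sketch of that model in Python (the repository’s actual classes live in its Rails applications; these dataclasses are only an illustration, and the field names are hypothetical) might look like this, with each ingested file becoming an item holding exactly one component:

from dataclasses import dataclass, field

@dataclass
class Component:
    datastream_path: str                 # where the file's bytes live

@dataclass
class Item:
    ark: str                             # repository identifier, e.g. "ark:/99999/..."
    original_filepath: str               # full path as it appeared on the source media
    components: list[Component] = field(default_factory=list)

@dataclass
class Collection:
    title: str
    items: list[Item] = field(default_factory=list)   # a flat list, no nesting

def item_for_file(ark: str, filepath: str) -> Item:
    """Model a single file as an item with one component."""
    return Item(ark=ark, original_filepath=filepath,
                components=[Component(datastream_path=filepath)])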

To preserve and store information about the structure of the files, we add an XML METS structMap as metadata on the collection. In addition, we store on each item a metadata field that contains the complete original file path of the file.

Below is a small sample of the kind of structural metadata that encodes the nested folder information on the collection. It encodes the structure and nesting, directory names (in the LABEL attribute), the order of files and directories, as well as the identifiers for each of the files/items in the collection.

<?xml version="1.0"?>
<mets xmlns="http://www.loc.gov/METS/" xmlns:xlink="http://www.w3.org/1999/xlink">
  <metsHdr>
    <agent ROLE="CREATOR">
      <name>REPOSITORY DEFAULT</name>
    </agent>
  </metsHdr>
  <structMap TYPE="default">
    <div LABEL="2017-0040" ORDER="1" TYPE="Directory">
      <div ORDER="1">
        <mptr LOCTYPE="ARK" xlink:href="ark:/99999/fk42j6qc37"/>
      </div>
      <div LABEL="RL11405-LFF-0001_Programs" ORDER="2" TYPE="Directory">
        <div ORDER="1">
          <mptr LOCTYPE="ARK" xlink:href="ark:/99999/fk4j67r45s"/>
        </div>
        <div ORDER="2">
          <mptr LOCTYPE="ARK" xlink:href="ark:/99999/fk4d50x529"/>
        </div>
        <div ORDER="3">
          <mptr LOCTYPE="ARK" xlink:href="ark:/99999/fk4086jd3r"/>
        </div>
      </div>
      <div LABEL="RL11405-LFF-0002_H1_Early-Records-of-Decentralization-Conference" ORDER="3" TYPE="Directory">
        <div ORDER="1">
          <mptr LOCTYPE="ARK" xlink:href="ark:/99999/fk4697f56f"/>
        </div>
        <div ORDER="2">
          <mptr LOCTYPE="ARK" xlink:href="ark:/99999/fk45h7t22s"/>
        </div>
      </div>
    </div>
  </structMap>
</mets>

Combining the 1:1 (item:component) object model with structural metadata that preserves the original directory structure of the files on the file system enables us to display a user interface that reflects the original structure of the content even though the structure of the items in the repository is flat.

There’s more to it, of course. We had to develop a new ingest process that could take as its starting point a file path and then crawl it and its subdirectories to ingest files and construct the necessary structural metadata.
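The crawling idea itself is simple enough to sketch (this is not our actual ingest code, and the starting path is hypothetical): walk the directory tree, record each file for ingest, and build a nested structure that mirrors the folder hierarchy and can later be serialized as a METS structMap.

from pathlib import Path

def crawl(directory: Path) -> dict:
    """Return a nested dict describing one directory and everything below it."""
    node = {"label": directory.name, "type": "Directory", "children": []}
    for order, entry in enumerate(sorted(directory.iterdir()), start=1):
        if entry.is_dir():
            child = crawl(entry)
            child["order"] = order
            node["children"].append(child)
        else:
            # In the real workflow this is where the file would be ingested
            # and assigned an ARK; here we just record its path and position.
            node["children"].append({"type": "File", "order": order, "path": str(entry)})
    return node

tree = crawl(Path("/path/to/accession"))   # hypothetical starting point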

On the UI end of things, a nifty JavaScript plugin called jsTree powers the interactive directory structure display on the collection page.

Because some of the collections are very large and loading a directory tree structure of 100,000 or more items would be very slow, we implemented a small web service in the application that loads the jsTree data only when someone clicks to open a directory in the interface.
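To illustrate the shape of that service (the real one is part of the repository’s Rails application; the endpoint, lookup function, and node ids below are hypothetical, and the sketch uses Python and Flask purely for illustration): jsTree can be configured to ask for a node’s children only when the node is opened, and a returned node with "children": true is drawn as an expandable, not-yet-loaded folder.

from flask import Flask, jsonify, request

app = Flask(__name__)

def children_of(node_id: str) -> list[dict]:
    """Hypothetical lookup of one directory level from the stored structural
    metadata; a real implementation would query the collection's structMap."""
    return []

@app.route("/collections/<collection_id>/tree")
def tree(collection_id: str):
    node_id = request.args.get("id", "#")      # "#" is jsTree's id for the root
    nodes = []
    for child in children_of(node_id):
        nodes.append({
            "id": child["id"],
            "text": child["label"],
            # Directories advertise unloaded children; files are leaf nodes.
            "children": child["type"] == "Directory",
        })
    return jsonify(nodes)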

The file paths are also keyword searchable from within the public interface. So if a file’s path is “kitchen/fruits/bananas/this-banana.txt,” you would be able to find the file this-banana.txt by searching for “kitchen” or “fruit” or “banana.”

This new functionality to ingest, preserve, and represent files in nested folder structures in the Duke Digital Repository will be included in the September release of the Duke Digital Repository.

A History of Videotape, Part 1

As a Digital Production Specialist at Duke Libraries, I work with a variety of obsolete videotape formats, digitizing them for long-term preservation and access. Videotape is a form of magnetic tape, consisting of a magnetized coating, usually iron oxide, on one side of a strip of plastic film. The film is there to support the magnetized coating. Magnetic tape was invented in 1928 for recording sound, but it would be several decades before it could be used for moving images, due to the increased bandwidth required to capture visual content.

Bing Crosby was the first major entertainer to push for audiotape recordings of his radio broadcasts. In 1951, his company, Bing Crosby Enterprises (BCE), debuted the first videotape technology to the public.

Television was live in the beginning, because there was no way to pre-record the broadcast other than with traditional film, which was expensive and time-consuming. In 1951, Bing Crosby Enterprises (BCE), owned by actor and singer Bing Crosby, demonstrated the first videotape recording. Crosby had previously incorporated audiotape recording into the production of his radio broadcasts, so that he would have more time for other commitments, like golf! Instead of having to do a live radio broadcast once a week for a month, he could record four broadcasts in one week, then have the next three weeks off. The 1951 demonstration ran quarter-inch audiotape at 360 inches per second, using a modified Ampex 200 tape recorder, but the images were reportedly blurry and not broadcast quality.

Ampex introduced 2″ quadruplex videotape at the National Association of Broadcasters convention in 1956. Shown here is a Bosch 2″ quadruplex machine.

More companies experimented with the emerging technology in the early 1950s, until Ampex introduced 2″ black-and-white quadruplex videotape at the National Association of Broadcasters convention in 1956. This was the first videotape that was broadcast quality. Soon, television networks were broadcasting pre-recorded shows on quadruplex and were able to present them at different times in all four U.S. time zones. Some of the earliest videotape broadcasts were CBS’s “The Edsel Show,” CBS’s “Douglas Edwards with the News,” and NBC’s “Truth or Consequences.” In 1958, Ampex debuted a color quadruplex videotape recorder. NBC’s “An Evening with Fred Astaire” was the first major TV show to be videotaped in color, also in 1958.

Virtually all the videotapes of the first ten years (1962-1972) of “The Tonight Show with Johnny Carson” were taped over by NBC to save money, so no one has seen these episodes since broadcast, nor will they… ever.

 

One of the downsides to quadruplex is that the videotapes could only be played back using the same tape heads that originally recorded the content. Those tape heads wore out very quickly, which meant that many tapes could not be reliably played back using the new tape heads that replaced the exhausted ones. Quadruplex videotapes were also expensive, about $300 per hour of tape. So, many TV stations got the most out of that expense by continually erasing tapes and recording the next broadcast over the previous one. Unfortunately, because of this, many classic TV shows are lost forever, like the vast majority of the first ten years (1962-1972) of “The Tonight Show with Johnny Carson,” and Super Bowl II (1968).

Quadruplex was the industry standard until the introduction of 1″ Type C in 1976. Type C video recorders required less maintenance, were more compact, and enabled new functions like still frame, shuttle, and slow motion; 1″ Type C also did not require time base correction, as 2″ quadruplex did. Type C is a composite videotape format, with quality that matches later component formats like Betacam. Composite video merges the color channels so that it’s consistent with a broadcast signal. Type C remained popular for several decades, until videocassettes gained in popularity. We will explore that in a future blog post.
