Voices from the Movement

This past year the SNCC Digital Gateway has brought a number of activists to Duke’s campus to discuss lesser-known aspects of the Student Nonviolent Coordinating Committee (SNCC)’s history and how their approach to organizing shifted over time. These sessions have ranged from the development of the Black Panther symbol for the Lowndes County Freedom Party, to the strength of local people in the Movement in Southwest Georgia, to the global network supporting SNCC’s fight for Black empowerment in the U.S. and across the African Diaspora. Next month, there will be a session focused on music in the Movement, with a public panel the evening of September 19th.

Screenshot of “Born into the Movement,” from the “Our Voices” section on the SNCC Digital Gateway.

These visiting activist sessions, often spanning the course of a few days, produce hours of audio and video material as SNCC veterans reengage with the history through conversation with their comrades. And this material is rich, as memories are dusted off and those involved explore how and why they did what they did. However, given the structure of the SNCC Digital Gateway and our goal of making these ten-hour collections of A/V material digestible and accessible, we’ve had to develop a means of breaking them down.

Step One: Transcription

As is true for many projects, you begin by putting pen to paper (or by typing furiously). With the amount of transcribing that we do for this project, we’re certainly interested in making the process as seamless as possible. We depend on ExpressScribe, which allows you to set hot keys to start, stop, rewind, and fast forward audio material. Another feature is that you can easily adjust the playback speed of the recording, which is helpful for keeping your typing flow steady and uninterrupted. For those who really want to dive in, there is a foot pedal extension (yes, one did temporarily live in our project room) that allows you to control the recording with your feet, leaving your fingers even freer to type at lightning speed. After transcribing, it is always good practice to review the transcription, which you can do efficiently while listening to a high-speed playback.

Step Two: Selecting Clips

Once these have been transcribed (each session results in approximately 130 single-spaced pages of transcript), it is time to select clips. For the parameters of this project, we keep the clips roughly between 30 seconds and 8 minutes and intentionally try to pull out the most prominent themes from the conversation. We then try to fit our selections into a larger narrative that tells a story. This process takes multiple reviews of the material and a significant amount of back and forth to ensure that the narrative stays true to the sentiments of the entire conversation.

The back-end of one of our pages.

Step Three: Writing the Narrative

We want users to listen to all of the A/V material, but sometimes details need to be laid out so that the clips themselves make sense. This is where the written narrative comes in. Without detracting from the wealth of newly-created audio and video material, we try to fill in some of the gaps and contextualize the clips for those who might be less familiar with the history. In addition to the written narrative, we embed relevant documents and photographs that complement the A/V material and give greater depth to the user’s experience.

Step Four: Creating the Audio Files

With all of the chosen clips pulled from the transcript, it’s time to actually make the audio files. For each of these sessions, we have multiple recorders in the room, in order to ensure everyone can be heard on the tape and that none of the conversation is lost due to recorder malfunction. These recorders are set to record in .WAV files, an uncompressed audio format for maximum audio quality.

One complication with having multiple mics in the room, however, is that the timestamps on the files are not always one-to-one. In order to easily pull the clips from the best recording we have, we have to sync the files. Our process involves first creating a folder system on an external hard drive. We then create a project in Adobe Premiere and import the files. It’s important that these files be on the same hard drive as the project file so that Premiere can easily find them. Then, we make sequences of the recordings and match the waveform from each of the mics. With a combination of using the timestamps on the transcriptions and scrubbing through the material, it’s easy to find the clips we need. From there, we can make any post-production edits that are necessary in Adobe Audition and export them as .mp3 files with Adobe Media Encoder.
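
For readers who prefer the command line, the cut-and-encode step can also be scripted. The sketch below is purely illustrative (it is not our actual workflow, which uses the Adobe tools described above), and the file names and timestamps are made up. It calls ffmpeg from Node to pull one clip out of a .WAV master and encode it as an .mp3:

const { execFileSync } = require('child_process');

// Cut a clip between two timestamps from the chosen .WAV master and encode it
// as an .mp3. File names and timestamps here are hypothetical.
execFileSync('ffmpeg', [
  '-i', 'session-recording.wav', // source .WAV from the best recorder
  '-ss', '00:42:10',             // clip start time, taken from the transcript
  '-to', '00:47:35',             // clip end time
  '-codec:a', 'libmp3lame',      // encode the audio as mp3
  '-qscale:a', '2',              // high-quality variable bitrate setting
  'clip-01.mp3'
]);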

Step Five: Uploading & Populating

Due to the SNCC Digital Gateway’s sustainability requirements, we host the files in a Duke Digital Collections folder and then embed them in the website, which is built on a WordPress platform. These files are then woven together with text, documents, and images to tell a story.

The Inaugural TRLN Institute – an Experiment in Consortial Collaboration

In June of this year I was fortunate to participate in the inaugural TRLN Institute. Modeled as a sort of Scholarly Communication Institute for TRLN (Triangle Research Libraries Network, a consortium located in the Triangle region of North Carolina), the Institute provided space (the magnificent Hunt Library on North Carolina State University’s campus), time (three full days), and food (Breakfast! Lunch! Coffee!) for groups of 4-6 people from member libraries to get together and focus exclusively on developing innovative solutions to shared problems. Not only was it productive, it was truly delightful to spend time with colleagues from member institutions who, although we are geographically close, don’t get together often enough.

Six projects were chosen from a pool of applicants who proposed topics around this year’s theme of Scholarly Communication:

  • Supporting Scholarly Communications in Libraries through Project Management Best Practices
  • Locating Research Data in an Age of Open Access
  • Clarifying Rights and Maximizing Reuse with RightsStatements.org
  • Building a Research Data Community of Practice in NC
  • Building the 21st Century Researcher Brand
  • Scholarship in the Sandbox: Showcasing Student Works

You can read descriptions of the projects as well as group membership here.

The 2017 TRLN Institute participants and organizers, a happy bunch.

Having this much dedicated and unencumbered time to thoughtfully and intentionally address a problem area with colleagues was invaluable. And the open schedule allowed groups to be flexible as their ideas and expectations changed throughout the course of the three-day program. My own group – Clarifying Rights and Maximizing Reuse with RightsStatements.org – was originally focused on developing practices for the application and representation of RightsStatements.org statements for TRLN libraries’ online digitized collections. Through talking as a group, however, we realized early on that some of the stickiest issues regarding the implementation of a new rights management strategy involve the work an institution has to do to identify appropriate staff, allocate resources, plan, and document the process.

So, we pivoted! Instead of developing a decision matrix for applying the RS.org statements in digital collections (which is what we originally thought our output would be), we spent our time drafting a report – a roadmap of sorts – that describes the following important components of implementing RightsStatements.org:

  • roles and responsibilities (including questions that a person in a role would need to ask)
  • necessary planning and documentation
  • technical decisions
  • example implementations (including steps taken and staff involved – perhaps the most useful section of the report)

This week, we put the finishing touches on our report: TRLN Rights Statements Report – A Roadmap for Implementing RightsStatements.org Statements (yep, yet another Google Doc). We’re excited to get feedback from the community, as well as hear about how other institutions are handling rights management metadata, especially as it relates to upstream archival information management. This is an area ripe for future exploration!

I’d say that the first TRLN Institute was a success. I can’t imagine my group having self-organized and produced a document in just over a month without having first had three days to work together in the same space, unencumbered by other responsibilities. I think other groups have found valuable traction via the Institute as well, which will result in more collaborative efforts. I look forward to seeing what future TRLN Institutes produce – this is definitely a model to continue!

Pink Squirrel: It really is the nuts

During the last 8 months that I’ve worked at Duke, I’ve noticed a lot of squirrels. They seem to be everywhere on this campus, and, not only that, they come closer than any squirrels I’ve ever seen. In fact, while we were working outside yesterday, a squirrel hopped onto our table and tried to take an apple from us. It’s become a bit of a joke in my department, actually. We take every opportunity we can to make a squirrel reference.

Anyhow, since we talk about squirrels so often, I decided I’d run a search in our digital collections to see what I’d get. The only image returned was the billboard above, but I was pretty happy with it. In fact, I was so happy with it that I used this very image in my last blog post. At the time, though, I was writing about what my colleagues and I had been doing on the new research data initiative since the beginning of 2017, so I simply used it as a visual to make my coworkers laugh. However, I reminded myself to revisit and investigate. Plus, although I bartended for many years during grad school, I’d never made (much less heard of) a Pink Squirrel cocktail. Drawing inspiration from our friends in the Rubenstein Library who write for “The Devil’s Tales” in the “Rubenstein Library Test Kitchen” category, I thought I’d not only write about what I learned, but also try to recreate it.

This item comes from the “Outdoor Advertising Association of America (OAAA) Archives, 1885-1990s” digital collection, which includes over 16,000 images of outdoor advertisements and other scenes. It is one of a few digital outdoor advertising collections that we have, which have been written about here before.

This digital collection houses six Glenmore Distilleries Company billboard images in total. Two are for liquors (a bourbon and a gin), and four are for “ready-to-pour” Glenmore cocktails.

These signs indicate that Glenmore Distilleries Company created a total of 14 ready-to-pour cocktails. I found a New York Times article from August 19, 1965 in our catalog stating that Glenmore Distilleries Co. had expanded its line to 18 drinks, which means that the billboards in our collection have to pre-date 1965. Its president, Frank Thompson Jr., was quoted as saying that he expected “exotic drinks” to account for any future surge in sales of bottled cocktails.

OK, so I learned that Glenmore Distilleries had bottled a drink called a Pink Squirrel sometime before 1965. Next, I needed to do some research to figure out more about the Pink Squirrel itself. Had Glenmore created it? What was in it? Why was it PINK?

It appears the Pink Squirrel was quite popular in its day and has risen and fallen in the decades since. I couldn’t find a definitive academic source, but if one trusts Wikipedia, the Pink Squirrel was first created at Bryant’s Cocktail Lounge in Milwaukee, Wisconsin. The establishment still exists, and its website states that the original bartender, Bryant Sharp, is credited with inventing the Pink Squirrel (also the Blue Tail Fly and the Banshee, if you’re interested in cocktails). Wikipedia lists 15 popular culture references for the drink, many from ’90s sitcoms (I’m a child of the ’80s but don’t remember this) along with other more current mentions. I also found an online source saying it was popular on the New York cocktail scene in the late ’70s and early ’80s (maybe?). Our Duke catalog returns some results, as well, including articles from Saveur (2014), New York Times Magazine (2006), Restaurant Hospitality (1990), and Cosmopolitan (1981). These are mostly variations on the recipe, including cocktails made with cream, a cocktail made with ice cream (Saveur says “blender drinks” are a cherished tradition in Wisconsin), a pie(!), and a cheesecake(!!).

Armed with recipes for the cream-based and the ice cream-based cocktails, I figured I was all set to shop for ingredients and make the drinks. However, I quickly discovered that one of the three ingredients, crème de noyaux, is a liqueur that few companies still make in any quantity, and it proved impossible to find around the Triangle. It’s an important ingredient in this drink, though, not only for its nutty flavor, but also because it’s what gives the cocktail its pink hue (and obviously its name!). Determined to make this work, I decided to see if I could come up with a good enough alternative. I started with the Duke catalog, as all good library folk do, but with very little luck, I turned back to Google. This led me to another Wikipedia article, for crème de noyaux, which suggested substituting Amaretto and some red food coloring. It also directed me to an interesting blog about none other than crème de noyaux, the Pink Squirrel, Bryant’s Cocktail Lounge, and a recipe from 1910 for making crème de noyaux. However, with time against me, I chose to sub Amaretto and red food coloring instead of making the 1910 homemade version.

First up was the cream-based cocktail. The drink contains 1.5 ounces of heavy cream, .75 ounces of white crème de cacao, and .75 ounces of crème de noyaux (or Amaretto with a drop of red food coloring), and is served up in a martini glass.

The result was a creamy, chocolatey flavor with a slight nuttiness and just enough sweetness without being overpowering. The ice cream version substitutes half a cup of vanilla ice cream for the heavy cream and is blended rather than shaken. It had a thicker consistency and was much sweeter. My fellow taster and I definitely preferred the cream version. In fact, don’t be surprised if you see me around with a pink martini in hand sometime in the near future.

Nested Folders of Files in the Duke Digital Repository

Born-digital archival materials present unique challenges to representation, access, and discovery in the DDR. A hard drive arrives at the archives and we want to preserve and provide access to the files. In addition to the content of the files, it’s often important to preserve, to some degree, the organization of the material on the hard drive in nested directories.

One challenge to representing complex inter-object relationships in the repository is the repository’s relatively simple object model. A collection contains one or more items. An item contains one or more components. And a component has one or more data streams. There’s no accommodation in this model for complex groups and hierarchies of items. We tend to talk about this as a limitation, but it also makes it possible to provide search and discovery of a wide range of kinds and arrangements of materials in a single repository and forces us to make decisions about how to model collections in sustainable and consistent ways. But we still need to preserve and provide access to the original structure of the material.

One approach is to ingest the disk image or a zip archive of the directories and files and store the content as a single file in the repository. This approach is straightforward, but makes it impossible to search for individual files in the repository or to understand much about the content without first downloading and unarchiving it.

As a first pass at solving this problem of how to preserve and represent files in nested directories in the DDR we’ve taken a two-pronged approach. We will use a simple approach to modeling disk image and directory content in the repository. Every file is modeled in the repository as an item with a single component that contains the data stream of the file. This provides convenient discovery and access to each individual file from the collection in the DDR, but does not represent any folder hierarchies. The files are just a flat list of objects contained by a collection.

To preserve information about the structure of the files, we add an XML METS structMap as metadata on the collection. In addition, each item carries a metadata field that stores the complete original file path of the file.

Below is a small sample of the kind of structural metadata that encodes the nested folder information on the collection. It encodes the structure and nesting, directory names (in the LABEL attribute), the order of files and directories, as well as the identifiers for each of the files/items in the collection.

<?xml version="1.0"?>
<mets xmlns="http://www.loc.gov/METS/" xmlns:xlink="http://www.w3.org/1999/xlink">
  <metsHdr>
    <agent ROLE="CREATOR">
      <name>REPOSITORY DEFAULT</name>
    </agent>
  </metsHdr>
  <structMap TYPE="default">
    <div LABEL="2017-0040" ORDER="1" TYPE="Directory">
      <div ORDER="1">
        <mptr LOCTYPE="ARK" xlink:href="ark:/99999/fk42j6qc37"/>
      </div>
      <div LABEL="RL11405-LFF-0001_Programs" ORDER="2" TYPE="Directory">
        <div ORDER="1">
          <mptr LOCTYPE="ARK" xlink:href="ark:/99999/fk4j67r45s"/>
        </div>
        <div ORDER="2">
          <mptr LOCTYPE="ARK" xlink:href="ark:/99999/fk4d50x529"/>
        </div>
        <div ORDER="3">
          <mptr LOCTYPE="ARK" xlink:href="ark:/99999/fk4086jd3r"/>
        </div>
      </div>
      <div LABEL="RL11405-LFF-0002_H1_Early-Records-of-Decentralization-Conference" ORDER="3" TYPE="Directory">
        <div ORDER="1">
          <mptr LOCTYPE="ARK" xlink:href="ark:/99999/fk4697f56f"/>
        </div>
        <div ORDER="2">
          <mptr LOCTYPE="ARK" xlink:href="ark:/99999/fk45h7t22s"/>
        </div>
      </div>
    </div>
  </structMap>
</mets>

Combining the 1:1 (item:component) object model with structural metadata that preserves the original directory structure of the files on the file system enables us to display a user interface that reflects the original structure of the content even though the structure of the items in the repository is flat.

There’s more to it of course. We had to develop a new ingest process that could take as its starting point a file path and then crawl it and its subdirectories to ingest files and construct the necessary structural metadata.
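
To make that idea concrete, here is a minimal sketch (written in Node.js with hypothetical names; it is not the repository’s actual ingest code) of how a crawl like that might assemble the nested structure before files are ingested and the structural metadata is written:

const fs = require('fs');
const path = require('path');

// Walk a starting path and build a nested structure of directories and files
// that could later be serialized into a METS structMap like the one above.
// The function name and output shape are illustrative only.
function buildTree(dir) {
  return fs.readdirSync(dir, { withFileTypes: true }).map((entry, index) => {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      // Directories become nested divs with a LABEL and an ORDER
      return { label: entry.name, order: index + 1, type: 'Directory', children: buildTree(fullPath) };
    }
    // Files become repository items; the complete original path is kept as item metadata
    return { label: entry.name, order: index + 1, type: 'File', originalFilePath: fullPath };
  });
}

console.log(JSON.stringify(buildTree('/path/to/accessioned-drive'), null, 2));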

On the UI end of things, a nifty JavaScript plugin called jsTree powers the interactive directory structure display on the collection page.

Because some of the collections are very large and loading a directory tree structure of 100,000 or more items would be very slow, we implemented a small web service in the application that loads the jsTree data only when someone clicks to open a directory in the interface.
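
The web service itself isn’t especially interesting, but for anyone curious what lazy loading looks like on the client side, here is a rough sketch of a jsTree configuration along those lines (the endpoint URL is made up). Directory nodes returned by the service with "children": true render as expandable, and jsTree only requests their contents when someone opens them:

$('#directory-tree').jstree({
  core: {
    data: {
      url: function (node) {
        // '#' is jsTree's id for the virtual root node
        return node.id === '#'
          ? '/collections/example/tree'
          : '/collections/example/tree?parent=' + node.id;
      },
      dataType: 'json'
    }
  }
});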

The file paths are also keyword searchable from within the public interface. So if a file is contained in a directory named “kitchen/fruits/bananas/this-banana.txt” you would be able to find the file this-banana.txt by searching for “kitchen” or “fruit” or “banana.”

This new functionality to ingest, preserve, and represent files in nested folder structures will be included in the September release of the Duke Digital Repository.

A History of Videotape, Part 1

As a Digital Production Specialist at Duke Libraries, I work with a variety of obsolete videotape formats, digitizing them for long-term preservation and access. Videotape is a form of magnetic tape, consisting of a magnetized coating on one side of a strip of plastic film. The film is there to support the magnetized coating, which usually consists of iron oxide. Magnetic tape was invented in 1928 for recording sound, but it would be several decades before it could be used for moving images, due to the increased bandwidth required to capture visual content.

Bing Crosby was the first major entertainer to push for audiotape recordings of his radio broadcasts. In 1951, his company, Bing Crosby Enterprises (BCE), debuted the first videotape technology to the public.

Television was live in the beginning, because there was no way to pre-record the broadcast other than with traditional film, which was expensive and time-consuming. In 1951, Bing Crosby Enterprises (BCE), owned by actor and singer Bing Crosby, demonstrated the first videotape recording. Crosby had previously incorporated audiotape recording into the production of his radio broadcasts, so that he would have more time for other commitments, like golf! Instead of having to do a live radio broadcast once a week for a month, he could record four broadcasts in one week, then have the next three weeks off. The 1951 demonstration ran quarter-inch audiotape at 360 inches per second, using a modified Ampex 200 tape recorder, but the images were reportedly blurry and not broadcast quality.

Ampex introduced 2” quadruplex videotape at the National Association of Broadcasters convention in 1956. Shown here is a Bosch 2″ Zoll Quadruplex Machine.

More companies experimented with the emerging technology in the early 1950s, until Ampex introduced 2” black and white quadruplex videotape at the National Association of Broadcasters convention in 1956. This was the first videotape that was broadcast quality. Soon, television networks were broadcasting pre-recorded shows on quadruplex and were able to present them at different times in all four U.S. time zones. Some of the earliest videotape broadcasts were CBS’s “The Edsel Show,” CBS’s “Douglas Edwards with the News,” and NBC’s “Truth or Consequences.” In 1958, Ampex debuted a color quadruplex videotape recorder. NBC’s “An Evening with Fred Astaire” was the first major TV show to be videotaped in color, also in 1958.

Virtually all the videotapes of the first ten years (1962-1972) of “The Tonight Show with Johnny Carson” were taped over by NBC to save money, so no one has seen these episodes since broadcast, nor will they… ever.

 

One of the downsides to quadruplex is that the videotapes could only be played back using the same tape heads that originally recorded the content. Those tape heads wore out very quickly, which meant that many tapes could not be reliably played back using the new tape heads that replaced the exhausted ones. Quadruplex videotapes were also expensive, about $300 per hour of tape. So, many TV stations stretched that expense by continually erasing tapes and recording the next broadcast over the previous one. Unfortunately, because of this, many classic TV shows are lost forever, like the vast majority of the first ten years (1962-1972) of “The Tonight Show with Johnny Carson,” and Super Bowl II (1968).

Quadruplex was the industry standard until the introduction of 1” Type C in 1976. Type C video recorders required less maintenance, were more compact, and enabled new functions like still frame, shuttle, and slow motion; 1” Type C also did not require time base correction, as 2” quadruplex did. Type C is a composite videotape format, with quality that matches later component formats like Betacam. Composite video merges the color channels so that it’s consistent with a broadcast signal. Type C remained in wide use for several decades, until videocassettes gained in popularity. We will explore that in a future blog post.

More MSI Fun: Experimenting with iron gall ink

Submitted on behalf of Erin Hammeke

For conservators, one of the aspects of having the MSI system that excites us most is being able to visualize and document the effects of the treatments that we perform. Although we are still learning the ropes with our new system, we had a recent opportunity to image some iron gall ink documents. Iron gall ink is a common historic ink that reacts with moisture in the environment to form acidic complexes that spread and sink into the paper, weakening it and, in some cases, leaving holes and losses. This iron gall ink degradation can be better visualized with MSI, since the beginning stage, haloing, is not always visible under normal illumination. Look here for more information on iron gall ink damage and here for using MSI to document iron gall ink condition and treatment. We also illustrated the haloing effect of iron gall ink damage using MSI on Jantz MS #124 in a previous post.

Recently, DUL conservators experimented with treating some discarded iron gall ink manuscripts with a chemical treatment that aims to arrest the ink’s degradation. This treatment requires submerging the manuscripts in a calcium phytate solution – a chemical that bonds with free iron (II) ions, stabilizing the ink and preventing it from corroding further. The document is then rinsed in a water bath and an alkaline reserve is applied. Resizing with gelatin is another common step, but we did not resize our test manuscripts.

Since these were discarded test materials, we were able to cut the manuscripts in two and treat only one half. Imaging the manuscripts with MSI revealed some notable findings.

Most of the treated papers now appear lighter and brighter under normal illumination because they have been washed. However, the untreated halves exhibited pronounced UV-induced visible fluorescence around the 488 nm range, and the treated halves did not. We believe this difference likely has to do with washing the paper substrate and rinsing out degradation products, or perhaps with paper sizing that may exhibit fluorescence at this wavelength. We were happy to see that, for a treatment that targets the ink, there was very little noticeable difference in the appearance of the inks between untreated and treated portions of the test manuscript. There was some reduction in the “ink sink” (ink visible from the opposite side of the manuscript) and a very slight softness to the edges of the ink in the treated sample, but these changes were very minimal. We look forward to imaging more of our test manuscripts in the future and seeing what else we can learn from them.

______

Want to learn even more about MSI at DUL?

SNCC Digital Gateway Homepage Updates

Earlier this summer I worked with the SNCC Digital Gateway team to launch a revised version of their homepage. The SNCC Digital Gateway site was originally launched in the fall of 2016, and much more content has been incorporated into the site since then. The team and their advisory board wanted to highlight some of this new content on the homepage (by making it scrollable) while also staying true to the original design.

The previous version of the homepage included two main features:

  • a large black and white photograph that would randomly load (based on five different options) every time a user visited the page
  • a ‘fixed’ primary navigation in the footer

Rotating Background Images

In my experience, the ‘go to’ approach for doing any kind of image rotation is to use a JavaScript library, probably one that relies on jQuery. My personal favorite for a long time has been jQuery Cycle 2, which I appreciated for its lightweight, flexible implementation and its price ($free!). With the new SNCC homepage, I wanted to figure out a way to both crossfade the background images and fade the caption text in and out elegantly. It was also critical that the captions match up perfectly with their associated images. I was worried that doing this with Cycle 2 was going to be overly complicated with respect to syncing the timing, as in some past projects I’d run into trouble keeping discrete carousels locked in sync after several iterations — for example, with leaving the page up and running for several minutes.

I decided to try to build the SNCC background rotation using CSS animations. In the past I’d shied away from using CSS animations for anything that was presented as a primary feature or that was complex, as browser support was spotty. However, the current state of browser support is better, even though it still has a ways to go. In my first attempt I tried crossfading the images as backgrounds in a wrapper div, as this was going to make resizing the page much easier by using the background-size: cover property. But I discovered that animating background images isn’t actually supported in the spec, even though it worked perfectly in Chrome and Opera. So instead I went with the approach where you stack the images on top of each other and change the opacity one at a time, like so:

<div class="bg-image-wrapper">
  <img src="image-1.jpg" alt="">
  <img src="image-2.jpg" alt="">
  <img src="image-3.jpg" alt="">
  <img src="image-4.jpg" alt="">
  <img src="image-5.jpg" alt="">
</div>

I set up the structure for the captions in a similar way:

<div id="home-caption">
  <li>caption 1</li>
  <li>caption 2</li>
  <li>caption 3</li>
  <li>caption 4</li>
  <li>caption 5</li>
</div>

I won’t bore you with the details of CSS animation, but in short, animations are based on keyframes that can be looped and applied to HTML elements. The one thing that proved to be a little tricky was the timing between the images and the captions, as the keyframes are represented as percentages of the entire animation. This was further complicated by the types of transitions I was using (crossfading the images and linearly fading the captions) and by the fact that I wanted to slightly stagger the caption animations so that they would come in after the crossfade completes and transition out just before the next crossfade starts, like so:

Crossfade illustration
As time moves from left to right, the images and captions have independent transitions

The SNCC team and I also discussed a few options for the overall timing of the transitions and settled on eight seconds per image. With five images in our rotation, the total time of the animation would be 40 seconds. The entire animation is applied to each image and offset with a delay based on its position in the .bg-image-wrapper stack. The CSS for the images looks like this:

.bg-image-wrapper img {
  animation-name: sncc-fader;
  animation-timing-function: ease-in-out;
  animation-iteration-count: infinite;
  animation-duration: 40s;
}


@keyframes sncc-fader {
  0% {
    opacity:1;
  }
  16% {
    opacity:1;
  }
  21% {
    opacity:0;
  }
  95% {
    opacity:0;
  }
  100% {
    opacity:1;
  }
}

.bg-image-wrapper img:nth-of-type(1) {
  animation-delay: 32s;
}
.bg-image-wrapper img:nth-of-type(2) {
  animation-delay: 24s;
}
.bg-image-wrapper img:nth-of-type(3) {
  animation-delay: 16s;
}
.bg-image-wrapper img:nth-of-type(4) {
  animation-delay: 8s;
}
.bg-image-wrapper img:nth-of-type(5) {
  animation-delay: 0s;
}
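
In case those delay values look arbitrary, they follow a simple pattern: each image is “current” for eight seconds, five images make a 40-second loop, and each image’s copy of the animation is offset so the images take turns. Here is a tiny sketch of the arithmetic (purely illustrative; the actual styles are written by hand as shown above):

const secondsPerImage = 8;
const imageCount = 5;

console.log(`animation-duration: ${secondsPerImage * imageCount}s`); // 40s

for (let n = 1; n <= imageCount; n++) {
  const delay = (imageCount - n) * secondsPerImage; // 32s, 24s, 16s, 8s, 0s
  console.log(`.bg-image-wrapper img:nth-of-type(${n}) { animation-delay: ${delay}s; }`);
}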

The resulting animation looks something like this:

SNCC example rotation

The other piece of the puzzle was emulating the behavior of background-size: cover, which resizes a background image to fill the entire div and positions the image vertically in a consistent way. In general I really like using this property. I struggled to get things working on my own, but eventually came across a great code example of how to do it. So I copied that implementation and it worked perfectly.
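
For the curious, the core of that technique boils down to a few lines of JavaScript. This is a rough sketch of the idea rather than the exact code I borrowed: compare the wrapper’s aspect ratio to each image’s and size the image so it always fills the wrapper, letting the overflow be cropped:

function coverResize() {
  var wrapper = document.querySelector('.bg-image-wrapper');
  var wrapperRatio = wrapper.offsetWidth / wrapper.offsetHeight;
  document.querySelectorAll('.bg-image-wrapper img').forEach(function (img) {
    var imageRatio = img.naturalWidth / img.naturalHeight;
    if (imageRatio > wrapperRatio) {
      // Image is proportionally wider than the wrapper: match heights and crop the sides
      img.style.height = '100%';
      img.style.width = 'auto';
    } else {
      // Image is proportionally taller than the wrapper: match widths and crop top/bottom
      img.style.width = '100%';
      img.style.height = 'auto';
    }
  });
}

window.addEventListener('load', coverResize);
window.addEventListener('resize', coverResize);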

Fixed Nav

I was worried that getting the navigation bar to stay consistently positioned at the bottom of the page and allowing for scrolling — while also working responsively — was going to be a bit of a challenge. But in the end the solution was relatively simple.

The navigation bar is structured in a very typical way — as an unordered list with each menu element represented as a list item, like so:

<div id="navigation">
    <ul>
      <li><a href="url1">menu item 1</a></li>
      <li><a href="url2">menu item 2</a></li>
      <li><a href="url3">menu item 3</a></li>
    </ul>
</div>

To get it to ‘stick’ to the bottom of the page, I just placed it using position: absolute, gave it a fixed height, and set the width to 100%. Surprisingly, it worked great just like that, and it also allowed the page to be scrolled to reveal the content further down the page.


You can view the updated homepage by visiting snccdigital.org.

Turning on the Rights in the Duke Digital Repository

As 2017 reaches its halfway point, we have concluded another busy quarter of development on the Duke Digital Repository (DDR). We have several new features to share, and one we’re particularly delighted to introduce is Rights display.

Back in March, my colleague Maggie Dickson shared our plans for rights management in the DDR, a strategy built upon using rights status URIs from RightsStatements.org and, in a similar fashion, licenses from Creative Commons. In some cases, we supplement the status with free text in a local Rights Note property. Our implementation goals here were two-fold: 1) use standard statuses that are machine-readable; 2) display them to users in an easily understood manner.

New rights display feature in action on a digital object.

What to Display

Getting and assigning machine-readable URIs for Rights is a significant milestone in its own right. Using that value to power a display that makes sense to users is the next logical step. So, how do we make it clear to a user what they can or can’t do with a resource they have discovered? While we could simply display the URI and link to its webpage (e.g., http://rightsstatements.org/vocab/InC-EDU/1.0/), the key info still remains a click away. Alternatively, we could display the rights statement or license title with the link, but some of them aren’t exactly intuitive or easy on the eyes. “Attribution-NonCommercial-NoDerivatives 4.0 International,” anyone?

Inspiration

Looking around to see how other cultural heritage institutions have solved this problem led us to very few examples. RightsStatements.org is still fairly new and it takes time for good design patterns to emerge. However, Europeana — co-champion of the RightsStatements.org initiative along with DPLA — has a stellar collections site, and, as it turns out, a wonderfully effective design for displaying rights statuses to users. Our solution ended up very much inspired by theirs; hats off to the Europeana team.

Image from Europeana site.
Europeana Collections UI.

Icons

Both Creative Commons and RightsStatements.org provide downloadable icons at their sites (here and here). We opted to store a local copy of the circular SVG versions for both to render in our UI. They’re easily styled, they don’t take up a lot of space, and used together, they have some nice visual unity.

Rights & Licenses Icons
Circular icons from Creative Commons & RightsStatements.org

Labels & Titles

We have a lightweight Rails app with an easy-to-use administrative UI for managing auxiliary content for the DDR, so that made a good home for our rights statuses and associated text. Statements are modeled to have a URI and Title, but can also have three additional optional fields: short title, re-use text, and an array of icon classes.

Editing rights info associated with each statement.
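
To give a concrete sense of the shape of these records, here is an illustrative sketch; the field names and example values are hypothetical, not our actual schema:

// One statement record, roughly as described above (all names and values invented)
const rightsStatement = {
  uri: 'http://rightsstatements.org/vocab/InC-EDU/1.0/',
  title: 'In Copyright - Educational Use Permitted',
  shortTitle: 'In Copyright (Educational Use)',        // optional
  reuseText: 'May be used for educational purposes.',  // optional; wording invented
  iconClasses: ['rights-icon', 'rights-icon-inc-edu']  // optional; class names invented
};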

Displaying the Info

We wanted to be sure to show the rights status in the flow of the rest of an object’s metadata. We also wanted to emphasize this information for anyone looking to download a digital object. So we decided to render the rights status prominently in the download menu, too.

Rights status in download menu
Rights status displays in the download menu.

 

Rights status also displays alongside other metadata.

What’s Next

Our focus in this area now shifts toward applying these newly available rights statuses to our existing digital objects in the repository, while ensuring that new ingests/deposits get assessed and assigned appropriate values. We’ll also have opportunities to refine where and how the statuses get displayed. We stand to learn a lot from our peer organizations implementing their own rights management strategies, and from our visitors as they use this new feature on our site. There’s a lot of work ahead, but we’re thrilled to have reached this noteworthy milestone.

Infrastructure and Multispectral Imaging in the Library

As we continue to work on our “standard” full color digitization projects such as Section A and the William Gedney Photograph Collection, both of which are multiyear projects, we are still hard at work on a variety of things related to Multispectral Imaging (MSI).  We have been writing documentation and posting it to our knowledge base, building tools to track MSI requests, and establishing a dedicated storage space for MSI image stacks.  Below are some high-level details about these things and the kinks we are ironing out of the MSI process.  As with any new venture, it can be messy in the beginning and tedious to put all the details in order, but in the end it’s worth it.

MSI Knowledge Base

We established a knowledge base for documents related to MSI that cover a wide variety of subjects: how-to articles, to-do lists, templates, notes taken during imaging sessions, technical support issues, and more.  These documents will help us develop sound guidelines and workflows, which in turn will make our work in this area more consistent, efficient, and productive.

Dedicated storage space

Working with other IT staff, we established a new server space specifically for MSI.  This is such a relief because, as we began testing the system in the early days, we didn’t have a dedicated space for storing the MSI image stacks, and most of our established spaces were permissions-restricted, preventing our large MSI group from using them.  On top of this, we didn’t have any file management strategies in place for MSI, which made for some messy file management.  From our first demo through initial testing and the eventual purchase of the system, we used a variety of storage spaces and a number of folder structures as we learned the system.  We used our shared Library server, the Digital Production Center’s production server, Box, and Google Drive.  Files were all over the place!  What a mess!  In our new dedicated space, we have established standard folder structures and file management strategies and now store all of our MSI image stacks in one place.  Whew!

The Request Queue

In the beginning, once the MSI system was up and running, our group had a brainstorming session to identify a variety of material that we could use to test with and hone our skills in using the new system.  Initially this queue was a bulleted list in Basecamp identifying each item.  As we worked through the list, it would sometimes be confusing as to what had already been done and which item was next.  This became more cumbersome because multiple people were working through the list at the same time, both on capture and processing, with no specific reporting mechanism to track who was doing what.  We have recently built an MSI Request Queue that tracks items to be captured in a more straightforward, clear manner.  We have included title, barcode, and item information, along with the research question to be answered, its priority level, due date, requester information, and internal contact information.  The MSI group will use this queue for a few weeks, then tweak it as necessary.  No more confusion.

The Processing Queue

As described in a previous post, capturing with MSI produces lots of image stacks that contain lots of files.  On average, capturing one page can produce 6 image stacks totaling 364 images.  There are 6 different stages of conversion/processing that an image stack goes through before it might be considered “done,” and the fact that everyone on the MSI team has other job responsibilities makes it difficult to carve out a large enough block of time to convert and process the image stacks through all of the stages.  This made it difficult to know which items had been completely processed.  We have recently built an MSI Processing Queue that tracks what stage of processing each item is in.  We have included root file names, flat field information, PPI, and a column for each phase of processing to indicate whether or not an image stack has passed through that phase.  As with the Request Queue, the MSI group will use this queue for a few weeks, then tweak it as necessary.  No more confusion.

Duke University East Campus Progress Picture #27

As with most blog posts, the progress described above has been boiled down and simplified so as not to bore you to death, but it represents a fair amount of work nonetheless.  Having dedicated storage and a standardized folder structure simplifies the management of lots of files and puts them in a predictable structure.  Streamlining the Request Queue establishes a clear path of work and provides enough information about each request to move forward with a clear goal in mind.  The Processing Queue provides a snapshot of the state of processing across multiple requests and enough information that any staff member familiar with our MSI process can complete a request.  Establishing a knowledge base to document our workflows and guidelines ties everything together in an organized and searchable manner, making it easier to find information about established procedures and troubleshoot technical problems.

It is important to put this infrastructure in place and build a strong foundation for Multispectral Imaging at the Library so it will scale in the future.  This is only the beginning!

_______________________

Want to learn even more about MSI at DUL?

A Summer Day in the Life of Digital Collections

A recent tweet from my colleague in the Rubenstein Library (pictured above) pretty much sums up the last few weeks at work.  Although I rarely work directly with students and classes, I am still impacted by the hustle and bustle in the library when classes are in session.  Throughout the busy Spring I found myself saying, oh I’ll have time to work on that over the Summer.  Now Summer is here, so it is time to make some progress on those delayed projects while keeping others moving forward.  With that in mind here is your late Spring and early Summer round-up of Digital Collections news and updates.

Radio Haiti

A preview of the soon to be live Radio Haiti Archive digital collection.

The long-anticipated launch of the Radio Haiti Archives is upon us.  After many meetings to review the metadata profile, discuss modeling relationships between recordings, and find a pragmatic approach to representing metadata in three languages in the Duke Digital Repository public interface, we are now in preview mode, and it is thrilling.  Behind the scenes, Radio Haiti represents a huge step forward in the Duke Digital Repository’s ability to store and play back audio and video files.

You can already listen to many recordings via the Radio Haiti collection guide, and we will share the digital collection with the world in late June or early July.  In the meantime, check out this teaser image of the homepage.

 

Section A

My colleague Meghan recently wrote about our ambitious Section A digitization project, which will result in creating finding aids for and digitizing 3000+ small manuscript collections from the Rubenstein Library.  This past week the 12 people involved in the project met to review our workflow.  Although we are trying to take a mass digitization and streamlined approach to this project, there are still a lot of people and steps.  For example, we spent about 20-30 minutes of our 90-minute meeting reviewing the various status codes we use on our giant Google spreadsheet and when to update them. I’ve also created a 6-page project plan that encompasses both a high- and medium-level view of the project. In addition to that document, each part of the process (appraisal, cataloging review, digitization, etc.) also has its own more detailed documentation.  This project is going to last at least a few years, so taking the time to document every step is essential, as is agreeing on status codes and how to use them.  It is a big process, but with every box the project gets a little easier.

Status codes for tracking our evaluation, remediation, and digitization workflow.
Section A Project Plan Summary

Diversity and Inclusion Digitization Initiative Proposals and Easy Projects

As Bitstreams readers and DUL colleagues know, this year we instituted two new processes for proposing digitization projects.  Our second digitization initiative deadline has just passed (it was June 15), and in late June and early July I will be working with the review committee to review new proposals as well as reevaluate two proposals from the first round.  I’m excited to say that we have already approved one project outright (the Emma Goldman papers), and we plan to announce more approved projects later this Summer.

We also codified “easy project” guidelines and have received several easy project proposals.  It is still too soon to really assess this approach, but so far the process is going well.

Transcription and Closed Captioning

Speaking of A/V developments, another large project planned for this Summer is to begin codifying our captioning and transcription practices.  Duke Libraries has had a mandate to create transcriptions and closed captions for newly digitized A/V for over a year. In that time we have been working with vendors on selected projects.  Our next steps will serve two fronts: on the programmatic side, we need to review the time and expense our captioning efforts have incurred so far and see how we can scale them to our backlog of publicly accessible A/V.  On the technology side, I’ve partnered with one of our amazing developers to sketch out a multi-phase plan for storing captions and time-coded transcriptions and making them accessible and searchable in our user interface.  The first phase goes into development this Summer.  All of these efforts will no doubt be the subject of a future blog post.
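
For anyone unfamiliar with the caption format we have been testing, WebVTT is just a plain-text file of time-coded cues that players like JWPlayer can display. A minimal (and entirely made-up) example looks like this:

WEBVTT

00:00:01.000 --> 00:00:04.500
Welcome to Duke Chapel.

00:00:04.500 --> 00:00:09.000
Please rise for the processional hymn.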

Testing VTT captions of Duke Chapel Recordings in JWPlayer

Summer of Documentation

My aspirational Summer project this year is to update digital collections project tracking documentation, review/consolidate/replace/trash existing digital collections documentation, and work with the Digital Production Center to create a DPC manual.  Admittedly, writing and reviewing documentation is not the most exciting Summer plan, but with so many projects and collaborators in the air, this documentation is essential to our productivity, our communication practices, and my personal sanity.

Late Spring Collection launches and Migrations

Over the past few months we launched several new digital collections as well as completed the migration of a number of collections from our old platform into the Duke Digital Repository.  

New Collections:

Migrated Collections:

…And so Much More!

In addition to the projects above, we continue to make slow and steady progress on our MSI system, are exploring the use of the FFV1 format for preserving selected moving image collections, planning the next phase of the Digital Collections migration into the Duke Digital Repository, thinking deeply about collection-level metadata and structured metadata, planning to launch newly digitized Gedney images, integrating digital objects into finding aids, and more.  No doubt some of these efforts will appear in subsequent Bitstreams posts.  In the meantime, let’s all try not to let this Summer fly by too quickly!

Enjoy Summer while you can!

Notes from the Duke University Libraries Digital Projects Team