Category Archives: Technology

Digital Transitions Roundtable

In late October of this year, the Digital Production Center (along with many others in the Library) was busy developing budgets for FY 2015. We were asked to think about the needs of the department, where the bottlenecks were, and possible new growth areas. We were asked to think big. The idea was to develop a grand list and work backwards to identify what we could reasonably ask for. While the DPC is able to digitize many types of materials and formats, such as audio and video, my focus is specifically still image digitization. So that’s what I focused on.

We serve many different parts of the Library, and in order to accommodate a wide variety of requests, we use many different types of capture devices in the DPC: high-speed scanners, film scanners, overhead scanners and high-end cameras. The most heavily used capture device is the Phase One camera system. This camera system uses a P65 60 MP digital back with a 72mm Schneider flat field lens, which enables us to capture high-quality images at archival standards. The majority of the material we digitize using this camera is bound volumes (most of them rare books from the David M. Rubenstein Library), but we also use it to digitize patron requests, which have increased significantly over the years (everything is expected to be digital, it seems), oversized items, glass plate negatives, high-end photography collections and much more. It is no surprise that this camera is a bottleneck for still image production. In researching cameras to include in the budget, I was hard pressed to find another camera system that can compete with the Phase One. For over 5 years we have used Digital Transitions, a New York-based provider of high-end digital solutions, for our Phase One purchases and support. We have been very happy with the service, support and equipment we have purchased from them over the years, so I contacted them to inquire about new equipment on the horizon and pricing for upgrading our current system.

One new piece of equipment they turned me on to is the BC100 book scanner. This scanner uses a 100° glass platen and two reprographic cameras to capture two facing pages at the same time. While there are other camera systems that use a similar two-camera setup (most notably the Scribe, Kirtas and Atiz), the cameras and digital backs used with the BC100, as well as the CaptureOne software that drives the cameras, are better suited for cultural heritage reproduction. Along with the new BC100, a new CaptureOne software package is now being offered, specifically geared toward the cultural heritage community, for use with this new camera system. While inquiring about the new system, I was invited to attend a Cultural Heritage Round Table event that Digital Transitions was hosting.

This roundtable was focused on the new CaptureOne software for use with the BC100 and the specific needs of the cultural heritage community. I have always found the folks at Digital Transitions to be very professional, knowledgeable and helpful. The event they put together included Jacob Frost, Application Software R&D Manager for Phase One; Doug Peterson, Technical Support, Training, R&D at Digital Transitions; and Don Williams, Imaging Scientist at Image Science Associates. Don is also on the Still Image Digitization Advisory Board with the Federal Agencies Digitization Guidelines Initiative (FADGI), a collaborative effort by federal agencies to define common guidelines, methods, and practices for digitizing historical content. They talked about the new features of the software, the science behind the software and its color technology, and new information about the FADGI Still Image standard that we currently follow at the Library. I was impressed by the information provided and the knowledge shared, but what impressed me the most was the fact that the main reason Digital Transitions pulled this particular group of users and developers together was to ask us what the cultural heritage community needed from the new software. WHAT!? What we need from the software? I’ve been doing this work for about 15 years now, and I think that’s the first time any software developer from any digital imaging company has asked our community specifically what we need. Don’t get me wrong, there is a lot of good software out there, but usually the software comes “as is.” While it is fully functional, there are usually some work-arounds needed to get the software to do what I need it to do. We, as a community, spent about an hour drumming up ideas for software improvements and features.

While we still need to see follow-through on what we talked about, I am hopeful that some of the features we talked about will show up in the software. The software still needs some work to be truly beneficial (especially in post-production), but Phase One and Digital Transitions are definitely on to something.

Assembling the Game of Stones

Back in October, Molly detailed DigEx’s work on creating an exhibit for the Link Media Wall. We’ve now finalized our content and hope to have the new exhibit published to the large display in the next week or two. I’d like to detail how this thing is actually put together.

HTML Code

In our planning meetings the super group talked about a few different approaches for how to start. We considered using a CMS like WordPress or Drupal, Four Winds (our institutional digital signage software), or potentially rolling our own system. In the end, though, I decided to build using super basic HTML / CSS / JavaScript. After the group was happy with the design, I built a simple page framework to match our desired output of 3840 x 1080 pixels. And when I say simple, I mean simple.
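
Under the hood, the whole framework is just one fixed-size canvas matching the wall’s resolution. A minimal sketch of the idea (the selector name here is hypothetical, not the exhibit’s actual markup):

    /* a fixed 3840 x 1080 canvas matching the media wall's resolution;
       the content regions are positioned inside it */
    #exhibit {
      position: relative;
      width: 3840px;
      height: 1080px;
      overflow: hidden;
    }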


I broke the content chunks into five main sections: the masthead (which holds the branding), the navigation (which highlights the current section and construction period), the map (which shows the location of the buildings), the thumbnail (which shows the completed building and adds some descriptive text), and the images (which houses a set of cross-fading historic photos illustrating the progression of construction). Working with a fixed-pixel layout feels strange in the modern world of web development, but it’s quick and satisfying to crank out. I’m using the jQuery Cycle plugin to transition the images, which is lightweight and offers lots of configurable options. I also created a transparent PNG file containing a gradient that fades to the background color which overlays the rotating images.
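
Getting the cross-fade going with jQuery Cycle only takes a few lines. A rough sketch, assuming the historic photos live in a container div (the class name here is invented for illustration):

    // cross-fade the set of historic construction photos
    $('.images').cycle({
        fx: 'fade',      // cross-fade transition
        speed: 1500,     // fade duration in milliseconds
        timeout: 6000    // how long each photo is displayed
    });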

Another part of the puzzle I wrestled with was how to transition from one section of the exhibit to another. I thought about housing all of the content on a single page and using some JS to move from one to the next, but I was a little worried about performance so I again opted for the super simple solution. Each page has a meta refresh in the header set to the number of seconds that it takes to cycle through the corresponding set of images and with a destination of the next section of the exhibit. It’s a little clunky in execution and I would probably try something more elegant next time, but it’s solid and it works.
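
In practice, that amounts to a single tag in each page’s head; the timing and filename below are invented for illustration:

    <meta http-equiv="refresh" content="90;url=section-2.html">

Here the page waits 90 seconds (long enough for its image set to cycle through) and then loads the next section of the exhibit.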

Here’s a preview of the exhibit cycling through all of the content. It’s been time compressed – the actual exhibit will take about ten minutes to play through.

In a lot of ways this exhibit is an experiment in both process and form, and I’m looking forward to seeing how our vision translates to the Media Wall space. Using such simple code means that if there are any problems, we can quickly make changes. I’m also looking forward to working on future exhibits and helping to highlight the amazing items in our collections.

New Angles & Avenues for Bitstreams

This week, we added a display of our most recent Bitstreams blog posts to our Digital Collections homepage (example), and likewise, a view of posts relevant to a given collection on the respective collection’s homepage (example).


Background

Our Digital Projects & Production team has been writing in Bitstreams at least weekly since February 2014. We’ve had some excellent guest contributors, too. Some posts share updates about new digital collections or additions, while others share insights, lessons learned, and behind-the-scenes looks at the projects we’re currently tackling.

Many of our posts have been featured on our library homepage and library news site. But until now, we haven’t been able to display any of them—not even the ones about new digital collections—alongside the collections themselves. So, if you visited the DukEngineer collection in the past, you likely missed out on Melanie’s excellent overview, which puts the magazine in context and highlights the best of what’s inside.

Past Solutions

Syndicating tagged blog posts for display elsewhere is a pretty common use case, and we’ve used a bunch of different solutions as our platforms have evolved. Each solution has naturally been painstakingly tailored to accommodate the inner workings of both the source and the destination. Seven years ago, we were writing custom XSLT to create and then consume our own RSS feeds in Cascade Server CMS. We have since hopped over to WordPress for managing news and blogs (whew!). An older version of our digital collections app used WordPress’ XML-RPC API to get tagged posts and parsed them with Python.

These days, our library website does blog syndication by using a combo of WordPress RSS, Drupal’s feed aggregator module, and occasionally Yahoo! Pipes for data mashing and munging. It works well in Drupal, but other platforms require other approaches.

Under the Hood: AngularJS and WordPress JSON API

Bret Davidson’s Code4Lib 2014 presentation, Towards Pasta Code Nirvana: Using JavaScript MVC to Fill Your Programming Ravioli (slides), made me hungry. Hungry for pasta, yes, but also for knowledge. I wanted to:

  1. Experiment with one of the JavaScript MVC frameworks to learn how they work, and in the process…
  2. Build something potentially useful for digital collections that could be ported over to a new application framework in the future (e.g., from our current Django app to a future Ruby on Rails app).

From the many possibilities, I chose AngularJS. It seemed well-documented and increasingly popular, and with Google’s backing, it seems like it’ll be around for a while.

WordPress JSON API

Among Angular’s virtues is that it really simplifies the process of getting and using JSON data from an API. I found WordPress’ JSON API plugin, which, interestingly, was developed by staff at MoMA so they could use WordPress as a back end for a site with a Rails front end. So we first had to enable that plugin for our Bitstreams blog.
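
With the plugin enabled, tagged posts are available as JSON via a simple query-string request. Something along these lines (the domain, tag slug, and counts are invented for illustration):

    http://example.com/blog/?json=get_tag_posts&tag_slug=my-collection&count=5

    {
      "status": "ok",
      "count": 5,
      "posts": [
        { "title": "...", "url": "...", "excerpt": "...", "date": "..." }
      ]
    }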

AngularJS

AngularJS definitely helps keep code clean, especially by abstracting the model (the blog posts and associated characteristics, as well as the page state) from the view (which indicates how to display the data) from the controller (which gets and refines the data into the model, and updates the model upon interactions with the view). I’ve done several projects in the past using jQuery and DOM manipulation to retrieve and display data. It usually works, but in the process I create a veritable rat’s nest of spaghetti code wherein /* no amount of commenting */ can truly help disentangle what’s happening.

Angular also supercharges HTML with more useful attributes to control a display. I’ve only just scratched the surface, but it’s clear that built-in directives like ng-repeat and filters like limitTo spare me from writing a ton of JavaScript, e.g., <li ng-repeat="post in blogposts | limitTo:pageSize">. After the initial learning curve, the markup is visually intuitive. And it’s nice that directives and filters are extensible, so you can make your own.
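
To give a sense of the moving parts, here’s a stripped-down sketch of the controller side; the real code is linked below, and the module, controller, and tag names here are placeholders:

    // a minimal sketch, not the production controller
    var app = angular.module('blogApp', []);

    app.controller('BlogPostsCtrl', function($scope, $http) {
        $scope.pageSize = 5;     // how many posts the view will display
        $scope.blogposts = [];

        // fetch tagged posts from the WordPress JSON API plugin;
        // a cross-domain setup may call for JSONP or CORS instead
        $http.get('http://example.com/blog/?json=get_tag_posts&tag_slug=my-collection')
            .success(function(data) {
                $scope.blogposts = data.posts;
            });
    });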

Source code: controller js, HTML (view source)

Initial Lessons Learned

  • AngularJS has a steeper learning curve than I’d expected; I assumed I could do this mini-project in a few hours, but it took a couple of days to really get a handle on the basic pieces I needed for this project.
  • Writing an Angular app within a Django app is tricky. Both use {{ variable }} template tags, so I had to change Angular to use [[ variable ]] instead (see the snippet below).
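
The fix is a one-time configuration block on the Angular side (assuming a module named app, as in the sketch above):

    // tell Angular to use [[ ]] so Django can keep {{ }}
    app.config(function($interpolateProvider) {
        $interpolateProvider.startSymbol('[[');
        $interpolateProvider.endSymbol(']]');
    });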

Looking Ahead

I consider this an encouraging proof of concept. While our own blog posts can be interesting, there are many other sources of valuable data out in the world that are relevant to our collections and that would add value for our researchers if we could easily get and display them. AngularJS won’t be the answer to all of these needs, but it’s nice to have in the toolset.

Dispatches from the Digital Library Federation Forum

On October 27-29, librarians, archivists, developers, project managers, and others met for the Digital Library Federation (DLF) Forum in Atlanta, GA. The program was packed to the gills with outstanding projects and presenters, and several of us from Duke University Libraries were fortunate enough to attend.  Below is a roundup of notes summarizing interesting sessions, software tools, projects and collections we learned about at the conference.

Please note that these notes were written by humans listening to presentations, so mistakes are inevitable.  Click the links to learn more about each tool/project or session straight from the source.

Tools and Technology

Spotlight is an open-source tool for featuring digitized resources and is being developed at Stanford University.  It appears to have fairly similar functionality to Omeka, but is integrated into Blacklight, a discovery interface used by a growing number of libraries.


The J. Willard Marriott Library at the University of Utah presented on their use of Pamco Imaging tools to capture 360-degree images of artifacts.  The library purchased a system from Pamco that includes an automated turntable, a lighting tent and software to both capture and display the 3-D objects.


There were two short presentations about media walls; one from our friends in Raleigh at the Hunt Library at N.C. State University, and the second from Georgia State.  Click the links to see just how much you can do with an amazing media wall.

Projects and Collections

The California Digital Library (CDL) is redesigning and reengineering their digital collections interface to create a kind of mini-Digital Public Library of America just for University of California digital collections.  They are designing the project using a platform called Nuxeo and storing their data through Amazon Web Services.  The new interface and platform development is highly informed by user studies done on the existing Calisphere digital collections interface.


Emblematica Online is a collection of digitized emblem books contributed by several global institutions, including Duke. The collection is hosted by the University of Illinois at Urbana-Champaign.  The project has been conducting user studies and hopes to publish the results in the coming year.


The Indiana University Media Digitization and Preservation Initiative started in 2009 with a survey of all the audio and visual materials on campus.  In 2011, the initiative proposed digitizing all rare and unique audio and video items within a 15-year period. However, in 2013, the president of the university committed the campus to completing the project in a 7-year period.  To accomplish this ambitious goal, the university formed a public-private partnership with Memnon Archiving Services of Brussels. The university estimates that the project will create over 9 petabytes of data. The initiative has been in the planning phases and should be ramping up in 2015.

Selected Session Notes

The Project Managers group within DLF organized a session on “Cultivating a Culture of Project Management,” followed by a working lunch. Representatives from Johns Hopkins and Brown talked about implementing Agile methodology for managing and developing technical projects.  Both libraries spoke positively about moving towards Agile, and about the benefits of clear communication lines and defined development cycles.  A speaker from Temple University discussed her methods for tracking and communicating the capacity of her development team; her spreadsheet for doing so took the session by storm (I’m not exaggerating – check out Twitter around the time of this session).  Two speakers from the University of Michigan shared their work in creating a project management special interest group within their library to share PM skills, tools and heartaches.

A session entitled “Beyond the Digital Surrogate” highlighted the work of several projects that are using digitized materials as a starting point for text mining and data visualization.  First, many of UNC’s Documenting the American South collections are available as a text download.  Second, a tool out of Georgia Tech supports interactive exploration and visualization of text-based archives.  Third, a team from the University of Nebraska-Lincoln is developing methods for using visual information to leverage discovery and analysis of digital collections.


Assessment

“Moving Forward with Digital Library Assessment.” This session was based around the need to strategically focus our assessment efforts in digital libraries and to better understand and measure the value, impact, and associated costs of what we do.

Community notes for this session

  • Joyce Chapman, Duke University
  • Jody DeRidder, University of Alabama
  • Nettie Lagace, National Information Standards Organization
  • Ho Jung Yoo, University of California, San Diego

Nettie Lagace: an update on NISO’s altmetrics initiative.

  • The first phase exposed areas for potential standardization. The community then collectively prioritized those potential projects, and the second phase is now developing best practices. A working group has been formed, with its recommendation due June 2016.
  • Alternative Metrics Initiative Phase 1 White Paper 

Joyce Chapman: a framework for estimating digitization costs

Jody DeRidder and Ho Jung Yoo: usability testing

  • What critical aspects need to be addressed by a community of practice?
  • What are next steps we can take as a community?

Midnight in the Garden of Film and Video

A few weeks ago, archivists, engineers, students and vendors from across the globe arrived in the historic city of Savannah, GA for AMIA 2014. The annual conference for The Association of Moving Image Archivists is a gathering of professionals who deal with the challenge of preserving motion picture film and videotape content for future generations. Since today is Halloween, I must also point out that Savannah is a really funky city that is haunted! The downtown area is filled with weeping willow trees, well-preserved 19th century architecture and creepy cemeteries dating back to the U.S. Civil and Revolutionary wars. Savannah is almost as scary as a library budget meeting.

The bad moon rises over Savannah City Hall.

Since many different cultural heritage institutions are digitizing their collections for preservation and online access, it’s beneficial to develop universal file standards and best practices. For example, organizations like NARA and FADGI have contributed to the universal adoption of the 8-bit uncompressed TIFF file format for (non-transmissive) still image preservation. Likewise, for audio digitization, 24-bit uncompressed WAV has been universally adopted as the preservation standard. In other words, when it comes to still image and audio digitization, everyone is driving down the same highway. However, at AMIA 2014, it was apparent there are still many different roads being taken with regard to moving image preservation, with some potential traffic jams ahead. Are you frightened yet? You should be!

The smallest known film gauge: 3mm. Was it designed by ancient druids?

Up until now, two file formats have been competing for dominance in moving image preservation: 10-bit uncompressed (.mov or .avi wrapper) vs. Motion JPEG2000 (MXF wrapper). The disadvantage of uncompressed has always been its enormous file size. Motion JPEG2000 incorporates lossless compression, which can reduce file sizes by 50%, but it’s expensive to implement and has limited interoperability with most video software and players. At AMIA 2014, some were championing the use of a newer format, FFV1, a lossless codec that has compression ratios similar to JPEG2000 but is open source, and thus more widely adoptable. It is part of the FFmpeg software project. Adoption of FFV1 is growing, but many institutions are still heavily invested in 10-bit uncompressed or Motion JPEG2000. Which format will become the preservation standard, and which will become ghosts that haunt us forever?!?
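
Since FFV1 ships with FFmpeg, experimenting with it costs nothing but CPU time and disk space. A transcode might look something like this (the filenames are placeholders, and production settings vary by institution):

    ffmpeg -i input-10bit-uncompressed.mov \
        -c:v ffv1 -level 3 -g 1 -slices 16 -slicecrc 1 \
        -c:a copy output.mkv

Here -level 3 selects version 3 of the codec, -g 1 keeps every frame independently decodable, and -slicecrc 1 embeds per-slice checksums, which is handy for long-term fixity checking.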

Another emerging need is for content management systems that can store and provide public access to digitized video. The Hydra repository solution is being adopted by many institutions for managing preservation video files. In conjunction with Hydra, many are also adopting Avalon to provide public access for online viewing of video content. Like FFmpeg, both Hydra and Avalon are open source, which is part of their appeal. Others are building their own systems, catered specifically to their own needs, like The Museum of Modern Art. There are also competing metadata standards. For example, PBCore has been adopted by many public television stations, but is generally disliked by libraries. In fact, they find it really creepy!

A new print of Peter Pan was shown at AMIA 2014. That movie gave me nightmares as a child.

Finally, there is the thorny issue of copyright. Once file formats are chosen and delivery systems are in place, methods must be implemented to limit access to those intended, to protect copyright and hinder piracy. The Avalon Media System enables rights and access control to video content via guest passwords. The Library of Congress works around some of these issues another way, by setting up remote viewing rooms in Washington, DC, which are connected via fiber-optic cable to their Audio-Visual Conservation Center in Culpeper, Va. Others with more limited budgets, like Dino Everett at USC Cinematic Arts, watermark their video, upload it to sites like Vimeo, and implement temporary password protection, canceling the passwords manually after a few weeks. I mean, is there anything more frightening than a copyright lawsuit? Happy Halloween!

A Digital Exhibits Epic Saga: Game of Stones

A screen from the Queering Duke History exhibit kiosk, just one of the ways DigEx supports library exhibits.

Just under a year ago, Duke University Libraries formed the Digital Exhibits Working Group (DigEx) to provide vision, consulting expertise, and hands-on support to the wide array of projects and initiatives related to gallery exhibits, web exhibits, data visualizations, digital collections, and digital signage.  Membership in the group is as cross-departmental as the projects it supports. With representatives from Data and Visualization, Digital Projects and Production Services, Digital Scholarship Services, Communications, Exhibits, Core Services and the Rubenstein Library, every meeting is a vibrant mix of people, ideas and agenda items.

The group has taken on a number of ambitious projects, one of which is to identify and understand digital exhibit publishing platforms in the library (we are talking about screens here).  Since April, a sub-committee – or “super committee,” as we like to call ourselves – of DigEx members has been meeting to curate a digital exhibit for the Link Media Wall.  DigEx members have anecdotal evidence that our colleagues want to program content for the wall but have not been able to successfully do so in the past.  DigEx super committee to the rescue!

The Link super committee started meeting in April, and at first we thought our goals were simple and clear.  In curating an exhibit for the Link wall, we wanted to create a process and template for other colleagues to follow.  We quickly chose an exhibit topic: the construction of West Campus in 1927-1932, told through the University Archives’ construction photography digital collection and Flickr feed.  The topic is relevant given all the West Campus construction happening currently, and it allows us to tell a visually compelling story with both digitized historic photographs and opportunities for visualizations (maps, timelines, etc.).

Test stone wall created by the University to select the stones for our Gothic campus (1925).

Our first challenge arose with the idea of templating.  Talking through ideas and our own experiences, we realized that creating a design template would hinder creative efforts and could potentially lead to an unattractive visual experience for our patrons.  Think Microsoft PowerPoint templates; do you really want to see something like that spread across 18 digital panels? So even though we had hoped that our exhibit could scale to other curators, we let go of the idea of a template.


We had logistical challenges too.  How do we design for a display as large as the media wall?  How do you create an exhibit that is eye-catching enough to draw attention, simple enough for someone to understand as they are walking by, yet moves through content slowly enough that someone could stop and really study the images?  How do we account for the lines between each separate display and avoid breaking up text or images?  How do we effectively lay out our content on our 13-15” laptops when the final product is going to be 9 FEET long?!!  You can imagine that our process became derailed at times.

Stone was carried from the quarry in Hillsborough to campus by way of a special railroad track.

But we didn’t earn the name super committee for nothing.  The Link media wall coordinator met with us early on to help solve some of our challenges. Meeting with him and bringing in our DigEx developer representative really jumpstarted the content creation process.  Using a scaled-down grid version of the media wall, we started creating simple storyboards in PowerPoint.  We worked together to pick a consistent layout each team member would follow, and then we divided the work of finding images and creating visualizations.  Our layout includes the exhibit title, a map, and a caption on every screen to ground viewers in what they are seeing no matter where they come into the slideshow. We also came up with guidelines for how quickly the images would change.


Mockup of DigEx Link Media Wall exhibit showing gridlines representing delineations between each display.

At this point, we have handed our storyboards to our digital projects developer, and he is creating the final exhibit using HTML and web socket technology to make it interactive (see design mockup above). We are also finishing up an intro slide for the exhibit.  Once the exhibit is finished, we will review our process and put together guidelines for other colleagues in DUL to follow.  In this way we hope to meet our goal of making visual technology in the library more available to our innovative staff and exhibits program.  We hope to premiere the digital exhibit on the Link wall before the end of the calendar year.  Stay tuned!!

Special shout out to the Link Media Wall Exhibit super committee within the Digital Exhibits Working Group (DigEx): Angela Zoss, Data Visualization Coordinator; Meg Brown, The E. Rhodes and Leona B. Carpenter Foundation Exhibits Coordinator; Michael Daul, Digital Projects Developer; Molly Bragg, Digital Collections Program Manager; and Valerie Gillispie, University Archivist.


What’s DAT Sound?

My recent posts have touched on endangered analog audio formats (open reel tape and compact cassette) and the challenges involved in digitizing and preserving them.  For this installment, we’ll enter the dawn of the digital and Internet age and take a look at the first widely available consumer digital audio format:  the DAT (Digital Audio Tape).


The DAT was developed by consumer electronics juggernaut Sony and introduced to the public in 1987.  While similar in appearance to the familiar cassette and also utilizing magnetic tape, the DAT was slightly smaller and only recorded on one “side.”  It boasted lossless digital encoding at 16 bits and variable sampling rates maxing out at 48 kHz–better than the 44.1 kHz offered by Compact Discs.  During the window of time before affordable hard disk recording (roughly, the 1990s), the DAT ruled the world of digital audio.

The format was quickly adopted by the music recording industry, allowing for a fully digital signal path through the recording, mixing, and mastering stages of CD production.  Due to its portability and sound quality, DAT was also enthusiastically embraced by field recordists, oral historians & interviewers, and live music recordists (AKA “tapers”):

[Conway, Michael A., “Deadheads in the Taper’s section at an outside venue,” Grateful Dead Archive Online, accessed October 10, 2014, http://www.gdao.org/items/show/834556.]


However, the format never caught on with the public at large, partially due to the cost of the players and the fact that few albums of commercial music were issued on DAT [bonus trivia question:  what was the first popular music recording to be released on DAT?  see below for the answer].  In fact, the recording industry actively sought to suppress public availability of the format, believing that the ability to make perfect digital copies of CDs would lead to widespread piracy and bootlegging of their product.  The Recording Industry Association of America (RIAA) lobbied against the DAT format and attempted to impose restrictions and copyright detection technology on the players.  Once again (much like the earlier brouhaha over cassette tapes and the subsequent battle over mp3s and file sharing), “home taping” was supposedly killing music.

By the turn of the millennium, CD burning technology had become fairly ubiquitous and hard disk recording was becoming more affordable and portable.  The DAT format slowly faded into obscurity, and in 2005, Sony discontinued production of DAT players.

In 2014, we are left with a decade’s worth of primary source audio tape (oral histories, interviews, concert and event recordings) that is quickly degrading and may soon be unsalvageable.  The playback decks (and parts for them) are no longer produced and there are few technicians with the knowledge or will to repair and maintain them.  The short-term answer to these problems is to begin stockpiling working DAT machines and doing the slow work of digitizing and archiving the tapes one by one.  For example, the Libraries’ Jazz Loft Project Records collection consisted mainly of DAT tapes, and now exists as digital files accessible through the online finding aid:  http://library.duke.edu/rubenstein/findingaids/jazzloftproject/.  A long-term approach calls for a survey of library collections to identify the number and condition of DAT tapes, and then for prioritization of these items as it may be logistically impossible to digitize them all.

And now, the answer to our trivia question:  in May 1988, post-punk icons Wire released The Ideal Copy on DAT, making it the first popular music recording to be issued on the new format.


Anatomy of an Exhibit Kiosk

I’ve had the pleasure of working on several exhibit kiosks during my time at the library. Most of them have been simple in their functionality, but we’re hoping to push some boundaries and get more creative in the future. Most recently, I’ve been working on building a kiosk for the Queering Duke History: Understanding the LGBTQ Experience at Duke and Beyond exhibit. It highlights oral history interviews with six former Duke students. This particular kiosk example isn’t very complicated, but I thought it would be fun to outline how it’s put together.

Screen shot of the ‘attract’ loop

Hardware

Most of our exhibits run on one of two late-2009 27″ iMacs that we have at our disposal. The displays are high-res (1920×1080) and vivid, the built-in speakers sound fine, and the processors are strong enough to display multimedia content without any trouble. Sometimes we use the kiosk machines to loop video content, so no user interaction is required. With this latest iteration, as users will be able to select audio files for playback, we’ll need to provide a mouse. We do our best to secure it to our kiosk stand, and in my tenure we’ve not had any problems, but I understand that in the past input devices have sometimes been damaged or gone missing. As we migrate to touch-screen machines in the future, these sorts of issues won’t be a problem.

Software

We tend to leave our kiosk machines out in the open in public spaces. If a machine isn’t sufficiently locked down, it can end up being used for purposes other than what we have in mind. Our approach is to set up a user account that has very narrow privileges and set it as the default login (so when the machine starts up, it boots into our ‘kiosk’ account). In OS X you can set up user permissions, startup programs, and other settings via ‘Users & Groups’ in System Preferences. We also configure power-saving settings so that the computer will sleep between midnight and 6:00am, using the Energy Saver scheduler.
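
The same sleep/wake schedule can also be set from the command line with OS X’s pmset utility, which is convenient when configuring more than one machine:

    sudo pmset repeat sleep MTWRFSU 00:00:00 wake MTWRFSU 06:00:00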

My general approach for interactive content is to build web pages, host them externally, and load them on the kiosk in a web browser. I think the biggest benefits of this approach are that we can make updates without having to take down the kiosk and that we can track user interactions using Google Analytics. However, there are drawbacks as well. We need to ensure that we have reliable network connectivity, which can be a challenge sometimes. By placing the machine online, we also add to the risk that it can be used for purposes other than what we intend. So in order to lock things down even more, we utilize xStand to display our interactive content. It allows for full-screen browsing without any GUI chrome, black-listing and/or white-listing sites, and, most importantly, it restarts automatically after a crash. In my experience it’s worked very well.

User Interface

This particular exhibit kiosk has only one real mission – to enable users to listen to a series of audio clips. As such, the UI is very simple. The first component is a looping ‘attract’ screen. The attract screen serves the dual purpose of drawing attention to the kiosk and keeping pixels from getting burned in on the display. For this kiosk I’m looping a short mp4 video file. The video container is wrapped in a link, and when it’s clicked, a JavaScript handler hides the video and displays the content div.
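
The handoff is just a few lines of jQuery; a simplified sketch, with hypothetical element IDs:

    // hide the looping attract video and reveal the content div
    $('#attract-link').on('click', function(e) {
        e.preventDefault();
        $('#attract').hide();
        $('#content').show();
    });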


The content area of the page is very simple – there is a group of images that can be clicked on. When they are, a lightbox window (I like Fancy Box) pops up that holds the relevant audio clips. I’m using simple HTML5 audio playback controls to stream the mp3 files.
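
Stripped way down, the pattern looks something like this (the names and files are invented for illustration):

    <!-- thumbnail that opens the hidden clips in a lightbox -->
    <a class="student" href="#clips-1"><img src="student-1.jpg" alt="Student portrait"></a>

    <div id="clips-1" style="display: none;">
      <audio controls>
        <source src="interview-1.mp3" type="audio/mpeg">
      </audio>
    </div>

    <script>
      $('.student').fancybox();  // Fancy Box displays the hidden div
    </script>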

Screen shot of the ‘home’ screen UI
Screen shot of the audio playback UI

Finally, there’s another JavaScript routine running in the background that detects user input. After 10 minutes of inactivity, the page reloads, which brings back the attract screen.
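
That background script boils down to a resettable timer; a sketch of the idea rather than the exact script in use:

    // reload the page (restoring the attract loop) after 10 idle minutes
    var idleTimer;
    function resetIdleTimer() {
        clearTimeout(idleTimer);
        idleTimer = setTimeout(function() {
            window.location.reload();
        }, 10 * 60 * 1000);
    }
    $(document).on('mousemove keydown click', resetIdleTimer);
    resetIdleTimer();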

The Exhibit

Queering Duke History runs through December 14, 2014 in the Perkins Library Gallery on West Campus. Stop by and check it out!

Digital Tools for Civil Rights History

The One Person, One Vote Project is trying to do history a different way. Fifty years ago, young activists in the Student Nonviolent Coordinating Committee broke open the segregationist South with the help of local leaders. Despite rerouting the trajectories of history, historical actors rarely get to have a say in how their stories are told. Duke and the SNCC Legacy Project are changing that. The documentary website we’re building (One Person, One Vote: The Legacy of SNCC and the Struggle for Voting Rights) puts SNCC veterans at the center of narrating their history.

SNCC field secretary and Editorial Board member Charlie Cobb. Courtesy of www.crmvet.org.

So how does that make the story we tell different? First and foremost, civil rights becomes about grassroots organizing and the hundreds of local individuals who built the movement from the bottom up. Our SNCC partners want to tell a story driven by the whys and hows of history. How did their experiences organizing in southwest Mississippi shape SNCC strategies in southwest Georgia and the Mississippi Delta? Why did SNCC turn to parallel politics in organizing the Mississippi Freedom Democratic Party? How did ideas drive the decisions they made and the actions they took?

For the One Person, One Vote site, we’ve been searching for tools that can help us tell this story of ideas, one focused on why SNCC turned to grassroots mobilization and how they organized. In a world where new tools for data visualization, mapping, and digital humanities appear each month, we’ve had plenty of possibilities to choose from. The tools we’ve gravitated towards have some common traits; they all let us tell multi-layered narratives and bring them to life with video clips, photographs, documents, and music. Here are a couple we’ve found:

This StoryMap traces how the idea of Manifest Destiny progressed through the years and across the geography of the United States.

StoryMap: Knightlab’s StoryMap tool is great for telling stories. But better yet, StoryMap lets us illustrate how stories unfold over time and space. Each slide in a StoryMap is grounded with a date and a place. Within the slides, creators can embed videos and images and explain the significance of a particular place with text. Unlike other mapping tools, StoryMaps progress linearly; one slide follows another in a sequence, and viewers click through a particular path. In terms of SNCC, StoryMaps give us the opportunity to trace how SNCC formed out of the Greensboro sit-ins, adopted a strategy of jail-no-bail in Rock Hill, SC, picked up the Freedom Rides down to Jackson, Mississippi, and then started organizing its first voter registration campaign in McComb, Mississippi.

Timeline.JS: We wanted timelines in the One Person, One Vote site to trace significant events in SNCC’s history, but also to illustrate how SNCC’s experiences on the ground transformed their thinking, organizing, and acting. Timeline.JS, another Knightlab tool, provides the flexibility to tell overlapping stories in a clean, understandable manner. Markers in Timeline.JS let us embed videos, maps, and photos, cite where they come from, and explain their significance. Different tracks on the timeline give us the option of categorizing events into geographic regions, modes of organizing, or evolving ideas.

The history of Duke University as displayed by Timeline.JS.

DH Press: Many of the mapping tools we checked out relied on number-heavy data sets, for example those comparing how many robberies took place on the corners of different city blocks. Data sets for One Person, One Vote come mostly in the form of people, places, and stories. We needed a tool that would let us bring together events, relevant multimedia material, and primary sources and represent them on a map. After checking out a variety of mapping tools, we found that DH Press served many of our needs.

DH Press project representing buildings and uses in Durham’s Hayti neighborhood.

Coming out of the University of North Carolina at Chapel Hill’s Digital Innovation Lab, DH Press is a WordPress plugin designed specifically with digital humanities projects in mind. While numerous tools can plot events on a map, DH Press markers provide depth. We can embed the video of an oral history interview and have a transcript running simultaneously as it plays. A marker might include a detailed story about an event and chronicle all of the people who were there. Additionally, we can customize the map legends to generate different spatial representations of our data.

Example of a marker in DH Press. Markers can be customized to include a range of information about a particular place or event.


These are some of the digital tools we’ve found that let us tell civil rights history through stories and ideas. And the search continues on.

Bodies of Knowledge: Seeking Design Contractors for Innovative Anatomical Digital Collection

The History of Medicine Collections, part of the Rubenstein Rare Book & Manuscript Library at Duke University, would like to create a digital collection of our ten anatomical fugitive sheets.

An Anatomical Fugitive Sheet complete with flap.

Anatomical fugitive sheets are single sheets, very similar to items such as broadsides [early printed advertisements], that date from the sixteenth and seventeenth centuries and are incredibly rare and fragile. Eight of the ten sheets in our collections have overlays or moveable parts, adding to the complexity of creating an online presence that allows a user to open or lift the flaps digitally.

The primary deliverable for the design contractor of this project will be an online surrogate of the fugitive sheets and any accompanying plugins. Skills needed include JavaScript and CSS.
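
To make the interaction concrete, one naive way to approximate a liftable flap is to layer the flap image over the base sheet and toggle it. This sketch is purely illustrative (the files and coordinates are made up), not a specification for the deliverable:

    <div class="sheet" style="position: relative;">
      <img src="fugitive-sheet.jpg" alt="Anatomical fugitive sheet">
      <img id="flap" src="flap-overlay.png" alt="Anatomical flap"
           style="position: absolute; top: 120px; left: 80px;">
    </div>
    <script>
      // clicking "lifts" the flap to reveal the layer beneath
      $('#flap').on('click', function() {
        $(this).fadeOut();
      });
    </script>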

We’re looking for a talented design team to help us connect the past to the present. See the prospectus for candidate contractors linked below.

Bodies of Knowledge: a prospectus for design contractors to create an innovative anatomical digital collection.