Category Archives: Technology

Chapel Exhibit

Over the past few weeks I’ve been working on content for a new exhibit in the library, An Iconic Identity: Stories and Voices of Duke University Chapel. I’d like to share what we created and how it was built.

Chapel Kiosk

The exhibit is installed in the Jerry and Bruce Chappell Family Gallery near the main entrance to the library. There are many exhibit cases filled with interesting items relating to the history of Duke Chapel. A touchscreen Lenovo all-in-one computer is installed in the corner and runs a fullscreen version of Chrome containing an interface built in HTML. The interface encourages users to view six different videos and to listen to recordings of sermons given by some famous people over the years (including Desmond Tutu, Dr. Martin Luther King Sr., and Billy Graham); these clips were pulled from our Duke Chapel Recordings digital collection. Here are some screenshots of the interface:

chapel-kiosk-1
Home screen
Detail of audio files interface
Playing audio clips
Video player interface
Playing a video

Carillon Video

One of the videos featured in the kiosk captures the University Carillonneur playing a short introduction, striking the bells to mark the time, and then another short piece. I was very fortunate to be able to go up into the bell tower and record J. Samuel Hammond playing this unique instrument. I had no idea how physical playing it is, and listening to the bells up close was really interesting. Here’s the final version of the video:

Chapel Windows

Another space in the physical exhibit features a projection of ten different stained glass windows from the chapel. Each window scrolls slowly up and down, then cycles to the next one. This was accomplished using CSS keyframes and my favorite image transition plugin, jQuery Cycle2 (a simplified sketch of the approach follows the animation below). Here’s a general idea of how it looks, only sped up for web consumption:

looping_window
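For the curious, here’s a simplified sketch of the general approach; the script paths, class names, file names, and timings are illustrative rather than the actual exhibit code. A CSS keyframe animation pans each tall window image up and down, while jQuery Cycle2 fades to the next window.

```html
<!-- Simplified sketch (not the exhibit's actual code); script paths,
     class names, file names, and timings are illustrative. -->
<script src="js/jquery.min.js"></script>
<script src="js/jquery.cycle2.min.js"></script>

<div class="cycle-slideshow">
  <img class="window" src="window-01.jpg" alt="Stained glass window 1">
  <img class="window" src="window-02.jpg" alt="Stained glass window 2">
</div>

<style>
  /* Slowly pan the tall window image up, then back down */
  @keyframes pan-window {
    0%   { transform: translateY(0); }
    50%  { transform: translateY(-60%); }
    100% { transform: translateY(0); }
  }
  .cycle-slideshow { height: 600px; overflow: hidden; }
  .window { animation: pan-window 60s ease-in-out infinite; }
</style>

<script>
  // Cross-fade to the next window after each full pan
  $('.cycle-slideshow').cycle({
    fx: 'fade',
    speed: 2000,      // fade duration in ms
    timeout: 60000,   // time each window stays on screen, in ms
    slides: '> img'
  });
</script>
```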

Here’s a grouping of three of my favorite windows from the bunch:
windows

The exhibit will be on display until June 19 – please swing by and check it out!

The Attics of Your Life

If you happen to be rummaging through your parents’ or grandparents’ attic, basement or garage, and stumble upon some old reel-to-reel audiotape, or perhaps some dust-covered videotape reels that seem absurdly large & clunky, they are most likely worthless, except for perhaps sentimental value. Even if these artifacts did, at one time, have some unique historic content, you may never know, because there’s a strong chance that decades of temperature extremes have made the media unplayable. The machines that were once used to play the media are often no longer manufactured, hard to find, and only a handful of retired engineers know how to repair them. That is, if they can find the right spare parts, which no one sells anymore.

Bart_starr_bw
Quarterback Bart Starr led the Green Bay Packers to a 35-10 victory over the Kansas City Chiefs in Super Bowl 1.
RCA Quadruplex 2"
Martin Haupt likely recorded Super Bowl 1 using an RCA Quadruplex 2″ color videotape recorder, common at television studios in the late 1960s.

However, once in a while, something that is one of a kind miraculously survives. That was the case for Troy Haupt, a resident of North Carolina’s Outer Banks, who discovered that his father, Martin Haupt, had recorded the very first Super Bowl onto 2” Quadruplex color videotape directly from the 1967 live television broadcast. After Martin passed away, the tapes ended up in Troy’s mother’s attic, yet somehow survived the elements.

What makes this so unique is that, in 1967, videotape was very expensive and archiving at television networks was not a priority. So the networks that aired the first Super Bowl, CBS and NBC, did not save any of the broadcast.

But Martin Haupt happened to work for a company that repaired professional videotape recorders, which were, in 1967, cutting edge technology. Taping television broadcasts was part of Martin’s job, a way to test the machines he was rebuilding. Fortunately, Martin went to work the day Super Bowl 1 aired live. The two Quadruplex videotapes that Martin Haupt used to record Super Bowl 1 cost $200 each in 1967. In today’s dollars, that’s almost $3000 total for the two tapes. Buying a “VCR” at your local department store was unfathomable then, and would not be possible for at least another decade. Somehow, Martin missed recording halftime, and part of the third quarter, but it turns out that Martin’s son Troy now owns the most complete known video recording of Super Bowl 1, in which the quarterback Bart Starr led the Green Bay Packers to a 35-10 victory over the Kansas City Chiefs.

Nagra IV-S
Betty Cantor-Jackson recorded many of the Grateful Dead’s landmark concerts using a Nagra IV-S Reel to Reel audiotape recorder. The Dead’s magnum opus, “Dark Star” could easily fill an entire reel.

For music fans, another treasure was uncovered in a storage locker in Marin County, CA, in 1986. Betty Cantor-Jackson worked for The Grateful Dead’s road crew, and made professional multi-track recordings of many of their best concerts, between 1971 and 1980, on reel-to-reel audiotape. The Dead were known for marathon concerts in which some extended songs, like “Dark Star,” could easily fill an entire audio reel. The band gave Betty permission to record, but she purchased her own gear and blank tape, tapping into the band’s mixing console to capture high-quality soundboard recordings of the band’s epic concerts during their prime era. Betty held onto her tapes until she fell on hard times in the 1980s, lost her home, and had to move the tapes to a storage locker. She couldn’t pay the storage fees, so the locker contents went up for auction.

barton
Betty Cantor-Jackson recorded the Grateful Dead’s show at Barton Hall in 1977, considered by many fans to be one of their best concerts.

Some 1,000 audio reels ended up in the hands of three different buyers, none of whom knew what the tapes contained. Once the music was discovered, copies of the recordings began to leak to hardcore tape-traders within the Deadhead community, and they became affectionately referred to as “The Betty Boards.” It turns out the tapes include some legendary performances, such as the 1971 Capitol Theatre run and the May 1977 tour, including “Barton Hall, May 8, 1977,” considered by many Deadheads to be one of the best Grateful Dead concerts of all time.

You would think the current owners of Super Bowl 1 and Barton Hall, May 8, 1977 would be sitting on gold. But that’s where the lawyers come in. Legally, the people who possess these tapes own the physical tapes, but not the content on those tapes. So, Troy Haupt owns the 2″ Quadruplex reels of Super Bowl 1, but the NFL owns what you can see on those reels. The NFL owns the copyright of the broadcast. Likewise, The Grateful Dead owns the music on the audio reels, regardless of who owns the physical tape that contains the music. Unfortunately for NFL fans and Deadheads, this makes the content somewhat inaccessible for now. Troy Haupt has offered to sell his videotapes to the NFL, but they have mostly ignored him. If Troy tries to sell the tapes to a third party instead, the NFL says they will sue him for unauthorized distribution of their content. The owners of the Grateful Dead tapes face a similar dilemma. The band’s management isn’t willing to pay money for the physical tapes, but if the owners, or any third party the owners sell the tapes to, try to distribute the music, they will get sued. However, if it weren’t for Martin Haupt and Betty Cantor-Jackson, who had the foresight to record these events in the first place, the content would not exist at all.

Perplexed by Context? Slick Sticky Titles Skip the Toll of the Scroll

We have a few new exciting enhancements within our digital collections and archival collection guide interfaces to share this week, all related to the challenge of presenting the proper archival context for materials represented online. This is an enormous problem we’ve previously outlined on this blog, both in terms of reconciling different descriptions of the same things (in multiple metadata formats/systems)  and in terms of providing researchers with a clear indication of how a digitized item’s physical counterpart is arranged and described within its source archival collection.

Here are the new features:

View Item in Context Link

Our new digital collections (the ones in the Duke Digital Repository) have included a prominent link (under the header “Source Collection”) from a digitized item to its source archival collection, with some snippets of info from the collection guide presented in a popover. This was an important step toward connecting the dots, but it still only gets someone to the top of the collection guide; from there, researchers are left on their own to figure out where in the collection an item resides.

Archival source collection info presented for an item in the W. Duke & Sons collection.

Beginning with this week’s newly-available Alex Harris Photographs Collection (and also the Benjamin & Julia Stockton Rush Papers Collection), we take it another step forward and present a deep link directly to the row in the collection guide that describes the item. For now, this link says “View Item in Context.”

A deep link to View Item in Context for an item in the Alex Harris Photographs Collection

This linkage is powered by indicating an ArchivesSpace ID in a digital object’s administrative metadata; it can be the ID for a series, subseries, folder, or item title, so we’re flexible in how granular the connection is between the digital object and its archival description.
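As a purely hypothetical sketch of the idea (the URL pattern, field names, and values below are invented for illustration, not our actual schema), the deep link can be derived from the ArchivesSpace ID stored in the item’s administrative metadata:

```javascript
// Hypothetical illustration only: field names and the URL pattern are made up.
// The stored ArchivesSpace ID can point at a series, subseries, folder, or
// item-level component, so the link can be as granular as the metadata allows.
function viewItemInContextUrl(item) {
  var guideBase = 'https://library.example.edu/findingaids/';
  return guideBase + item.collection_slug + '/#' + item.aspace_id;
}

// Example (made-up values):
viewItemInContextUrl({ collection_slug: 'harrisalexphotos', aspace_id: 'aspace_ref123_abc' });
// => "https://library.example.edu/findingaids/harrisalexphotos/#aspace_ref123_abc"
```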

Sticky Title & Series Info

Our archival collection guides are currently rendered as single webpages broken into sections. Larger collections make for long webpages. Sometimes they’re really super long. Where the contents of the collection are listed, there’s a visual hierarchy in place with nested descriptions of series, subseries, etc. but it’s still difficult to navigate around and simultaneously understand what it is you’re viewing. The physical tedium of scrolling and the cognitive load required to connect related descriptive information located far away on a page make for bad usability.

As of last week, we now keep the title of the collection “stuck” to the top of the screen once you’re no longer viewing the top of the page (it also functions as a link to get back to the top). Even more helpful is a new sticky series header that links to the beginning of the archival series within which the currently visible items are arranged; there’s usually an important description up there that helps contextualize the items listed below. This sticky header is context-aware, meaning it follows you around like a loyal companion, updating itself perpetually to reflect where you are as you navigate up or down.

Title & series information “stuck” to the top of a collection guide.

This feature is powered by the excellent Bootstrap Scrollspy JavaScript utility combined with some custom styling.
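For the curious, here’s a rough sketch of how this kind of sticky, context-aware header can be wired up with Scrollspy; the markup, selectors, and series title are illustrative, not our production code.

```html
<!-- Illustrative sketch: Scrollspy watches the scroll position against the
     series headings and fires an event whenever a new series becomes
     "active"; we update a fixed header accordingly. -->
<div id="sticky-header">
  <a href="#top" class="collection-title">Alex Harris Photographs</a>
  <a href="#series-1" class="current-series">Series 1</a>
</div>

<style>
  #sticky-header { position: fixed; top: 0; width: 100%; z-index: 100; }
</style>

<script>
  // Treat each series link in a (hidden) #series-nav list as a scrollspy target
  $('body').scrollspy({ target: '#series-nav', offset: 60 });

  // When a new series becomes active, update the sticky series link
  $('body').on('activate.bs.scrollspy', function () {
    var active = $('#series-nav li.active > a');
    $('#sticky-header .current-series')
      .text(active.text())
      .attr('href', active.attr('href'));
  });
</script>
```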

All Series Browser

To give researchers easier browsing between different archival series in a collection, we added a link in the sticky header to browse “All Series.” That link pops down a menu to jump directly to the start of each series within the collection guide.

All Series

Direct Links to Anything

Researchers can now easily get a link to any row in a collection guide where the contents are described. This can be anything: a series, subseries, folder, or item. It’s simple: just mouse over the row, click the arrow that appears at the left, and copy the URL from the address bar. The row in the collection guide that’s the target of that link gets highlighted in green.

Click the arrow to link directly to a row within the collection guide.
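For those wondering how this sort of behavior can be implemented, here’s one illustrative approach (not our actual markup): give each described row an id-based anchor and let the CSS :target selector handle the green highlight.

```html
<!-- Illustrative only: the id, row contents, and colors are placeholders. -->
<tr id="aspace_ref45_xyz" class="guide-row">
  <td><a class="row-link" href="#aspace_ref45_xyz">&#8599;</a> Folder 12: Correspondence</td>
</tr>

<style>
  .row-link { visibility: hidden; }                  /* arrow appears on mouseover */
  .guide-row:hover .row-link { visibility: visible; }
  .guide-row:target { background-color: #dff0d8; }   /* green highlight for the linked row */
</style>
```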

We would love to get feedback on these features to learn whether they’re helpful and see how we might enhance or adjust them going forward. Try them out and let us know what you think!

Special thanks to our metadata gurus Noah Huffman and Maggie Dickson for their contributions on these features.

Moving the Needle: Bring on Phase 2 of the Tripod3/Digital Collections Migration

Last time I wrote for Bitstreams, I said “Today is the New Future.” It was a day of optimism, as we published for the first time in our next-generation platform for digital collections. The debut of the W. Duke, Sons & Co. Advertising Materials, 1880-1910 was the first visible success of a major effort to migrate our digital collections into the Duke Digital Repository. “Our current plan,” I propounded, “is to have nearly all of the content of Duke Digital Collections available in the new platform by the end of March, 2016.”

Since then we’ve published a second collection – the Benjamin and Julia Stockton Rush Papers – in the new platform, but we’ve also done more extensive planning for the migration. We’ll divide the work into six-week phases or “supersprints” that overlay the shorter sprints of our software development cycle. The work will take longer than I suggested in October – we now project the bulk of it to be completed by the end of the fourth six-week phase, or toward the end of June of this year, with some work continuing deeper into the calendar year.

As it happens, today represents the rollover from Phase 1 to Phase 2 of our plan.  Phase 1 was relatively light in its payload. During the next phase – concluding in six weeks on March 28 – we plan to add 24 of the collections currently published in our older platform, as well as two new collections.

As team leader, I take upon myself the hugely important task of assigning mottos to each phase of the project. The motto for Phase 1 was “Plant the seeds in the bottle.” It derives from the story of David Latimer’s bottle garden, which he planted in 1960 and has not watered since Duke Law alum Richard Nixon was president.

This image from the Friedrich Carl Peetz Photographs, along with many other items from our photography and manuscript collections, will be among those re-published in the Duke Digital Repository during Phase 2 of our migration process.

Imagine, I said to the group, we are creating self-sustaining environments for our collections, that we can stash under the staircase next to the wine rack. Maybe we tend to them once or twice, but they thrive without our constant curation and intervention. Everyone sort of looked at me as if I had suggested using a guillotine as a bagel slicer for a staff breakfast event. But they’re all good sports. We hunkered down, and expect to publish one new collection, and re-publish two of the older collections, in the new platform this week.

The motto for Phase 2 is “Move the needle.” The object here is to lean on our work in Phase 1 to complete a much larger batch of materials. We’ll extend our work on photography collections in Phase 1 to include many of the existing photography collections. We’ll also re-publish many of the “manuscript collections,” which is our way of referring to the dozen or so collections that we previously published by embedding content in collection guides.

If we are successful in this approach, by the end of Phase 2, we’ll have completed a significant portion of the digital collections migrated to the Duke Digital Repository. Each collection, presumably, will flourish, sealed in a fertile, self-regulating environment, like bottle gardens, or wine.

Here’s a page where we’ll track progress.

As we’ve written previously, we’re in the process of re-digitizing the William Gedney Photographs, so they will not be migrated to the Duke Digital Repository in Phase 2, but will wait until we’ve completed that project.

OHMS-in’ with H. Lee Waters’ Movies of Local People

Q: How is a silent H. Lee Waters film like an oral history recording?
A: Neither is text searchable.

But, leave it to oral historians to construct solutions for access to audiovisual resources of all stripes. No mistake, they’ve been thinking about it for a long time. Purposefully, profoundly non-textual at their creation, oral histories have since their postwar genesis contended with a central irony: as research they are exploited almost exclusively via textual transcription. Oral histories that don’t get transcribed get, instead, infamously ignored. So as the online floodgates have opened and digital media recorders and players have kept pace, oral historians have seen an opportunity to grapple meaningfully with closing the gap between the text and its source, and perhaps at the same time free the interview from the expectation that it should be transcribed.

Enter OHMS (http://www.oralhistoryonline.org/). In 2013, Doug Boyd at the University of Kentucky debuted the results of an IMLS-funded project to create the Oral History Metadata Synchronizer. A free, open-source tool, OHMS empowers even the smallest oral history archive to encode its media with textual information. The OHMS editor enables the oral historian to easily create item level metadata for an oral history recording, including an index or subject list that can drop a researcher into an interview at that selected point. OHMS can also timestamp an existing transcript, so that researchers can track the audio via the text. In its short life, OHMS has demonstrated a way to bridge the great divide among oral history theorists, which reads something like this: Should our focus be the audio or the transcript?

While it springs from the minds of oral historians, OHMS might more accurately be termed the Media Metadata Synchronizer. When I saw Doug’s presentation on OHMS at the Oral History Association meeting in 2013, two alternative uses immediately came to mind: OHMS had the potential to help us provide bilingual entry to the 3,500+ recordings in our Radio Haiti Collection (currently being digitized), and it could dramatically enhance access to one of Duke’s great collections, the H. Lee Waters Films. Waters filmed his Movies of Local People in mostly smaller communities around North Carolina from 1936 to 1942, using silent reversal film stock. Waters’ effort to supplement his family’s income has over the intervening years become a major historical document of the state during the Great Depression. And yet as rich as the collection is, it is difficult for students, scholars, and filmmakers to find specific scenes or subjects among the thousands of two-second shots Waters put to film. Several years ago, an intern in the archive created shotlists for some of the films, but these existed independently of the films and were not terribly accurate in matching times, since they were created using VHS tapes (and VHS players are notorious for displaying incorrect times). OHMS would give us the opportunity to update the shotlists we had and create some new ones, linking description to precise points within the films.

Implementing OHMS at Duke Libraries was a pleasure, mostly because I had the opportunity to work with my colleagues in Digital Projects and Production Services, an outstanding team that can do amazing things with our equally amazing archival resources. Recognizing the open-source spirit of OHMS, Sean Aery, Will Sexton, and Molly Bragg immediately saw how the system could help us get deeper into the Waters films without having to build out a complex infrastructure (or lay out lots of cash). And so, when the H. Lee Waters website went live last year with 35 hours of mostly undescribed digital video (although we did post those older shotlists too, where we had them), it was generally agreed that a phase two would happen sooner rather than later and include a pilot for OHMS shotlists. Rubenstein Audiovisual Intern Olivia Carteaux worked diligently through the spring to normalize existing shotlists and create new ones where possible. This necessitated breaking down the descriptive data we had into spreadsheets, so we could then “crosswalk” the description into the OHMS XML file that is at the heart of the system.
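As a rough sketch of that crosswalk step (the element names, file names, and sample rows below are simplified approximations for illustration, not the exact OHMS schema), each row of a normalized shotlist spreadsheet becomes one timestamped index point:

```javascript
// Simplified illustration of a shotlist-to-XML crosswalk; element names,
// file names, and sample rows are placeholders, not the real OHMS schema.
const fs = require('fs');

function escapeXml(text) {
  return String(text).replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

// Each row: a timestamp (in seconds), a one-sentence description, optional location
function rowToPoint(row) {
  return [
    '  <point>',
    `    <time>${row.seconds}</time>`,
    `    <title>${escapeXml(row.description)}</title>`,
    row.location ? `    <gps>${escapeXml(row.location)}</gps>` : null,
    '  </point>'
  ].filter(Boolean).join('\n');
}

const rows = [
  { seconds: 95, description: 'Schoolyard play, then the pledge of allegiance' },
  { seconds: 240, description: 'Split-screen trick shot on main street' }
];

fs.writeFileSync('shotlist-index.xml', `<index>\n${rows.map(rowToPoint).join('\n')}\n</index>`);
```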

While the OHMS index viewer allows for metadata including title or description, partial transcript, segment synopsis, keywords, subjects, GPS coordinates and a link to a map, we concentrated on providing a descriptive sentence as the title and, where it was easy to find, the location of the action.

The OHMS interface in action

While on the face of it generating description for the H. Lee Waters films might seem fairly straightforward, we found a number of challenges in describing his silent moving images. For starters, given Waters’ quick edits, what would adequate frequency of description look like? A new descriptive entry at every cut would be extremely unwieldy. At the same time we recognized that without a spoken or textual counterpart to the image, every time we chose not to describe would deprive potential users of a “way in.” We settled on creating entries whenever the general scene or action changed; for instance, when Waters shifts from a scene on main street to one in front of a mill or school, or within the scene at a school when the action goes from schoolyard play to the pledge of allegiance. Sometimes the shifts are obvious; other times they are more subtle, so watching the action with a deep focus is necessary. We also created new entries whenever Waters created a trick shot, such as a split screen, a speed-up or slow-down of the action, a reverse shot, or a masking shot. Additionally, storefront signs, buildings, and landmarks became good places to create entries, depending on their prominence; for these, too, we attempted to create GPS coordinates where we could easily do so.

Our second challenge was how much to invest in each description. “A picture is worth a thousand words” and “every picture tells a story” sum up much of the Waters footage, but brevity was of value to the workflow. One sentence, which did not have to be properly complete (a sort of descriptive bullet point), was decided on as our rule of thumb. In the next phase of this process I hope to use the keywords field more effectively, but that requires a controlled vocabulary, which brings me to our third challenge: normalizing description was the most difficult single piece of describing the films. It turns out there’s not a lot of library-based methodology for describing moving images, although there are general recommended approaches for describing images for the visually impaired. Then, of course, there’s the difficulty in deciding how to represent nuanced factors such as race, ethnicity, class, and gender. It is clear that if we undertake to create shotlists for all the Waters films, the first order of business will be to create a thesaurus of terms, to provide consistent description across the films.

When we felt like we had enough transformed shotlists for a pilot OHMS project for the Waters website, the OHMS player was loaded onto a server and the playlists uploaded. Links to the 29 shotlists were then placed below the video windows on their respective pages. To access the video and synchronized description, simply click on the link that says “Synchronized Shot List.” In this initial run we’re hoping to upload about 20 more shotlists, and at that point take a breath and see how we can improve on what we’ve accomplished. Given the challenges of presenting audiovisual resources online, there’s never really a “done,” only steady improvement. OHMS has provided what I believe is a clear step forward on access to the Waters films, and has the potential to help us transform other audiovisual collections into deeply mined treasures of the archive.

Post contributed by Craig Breaden, Audiovisual Archivist, Rubenstein Library

Future Retro: New Frontiers in Portability

Duke Libraries’ Digital Collections offer a wealth of primary source material, opening unique windows to cultural moments both long past and quickly closing. In my work as an audio digitization specialist, I take a particular interest in current and historical audio technology and how it is depicted in other media. The digitized Duke Chronicle newspaper issues from the 1980s provide a look at how students of the time were consuming and using ever-smaller audio devices in the early days of portable technology.

walkman_ii

Sony introduced the Walkman in the U.S. in 1980.  Roughly pocket-sized (actually somewhere around the size of a sandwich or small brick), it allowed the user to take their music on the go, listening to cassette tapes on lightweight headphones while walking, jogging, or travelling.  The product was wildly successful and ubiquitous in its time, so much so that “walkman” became a generic term for any portable audio device.

walkman_blowout

The success of the Walkman was probably bolstered by the jogging/fitness craze that began in the late 1970s.  Health-conscious consumers could get in shape while listening to their favorite tunes.  This points to two of the main concepts that Sony highlighted in their marketing of the Walkman:  personalization and privatization.

portables1

Previously, the only widely available portable audio devices were transistor radios, meaning that the listener was at the mercy of the DJ or station manager’s musical tastes.  However, the Walkman user could choose from their own collection of commercially available albums, or take it a step further, and make custom mixtapes of their favorite songs.

lost walkman

The Walkman also allowed the user to “tune out” surrounding distractions and be immersed in their own private sonic environment.  In an increasingly noisy and urbanized world, the listener was able to carve out a small space in the cacophony and confusion.  Some models had two headphone jacks so you could even share this space with a friend.

walkman_smaller

One can see that these guiding concepts behind the Walkman and its successful marketing have only continued to proliferate and accelerate in the world today. We now expect unlimited on-demand media on our handheld devices 24 hours a day. Students of the 1980s had to make do with a boombox and a backpack full of cassette tapes.

boombox


508 Update

Web accessibility is something that I care a lot about. In the 15-odd years that I’ve been doing professional web work, it’s been really satisfying to see accessibility increasingly become an area of focus and importance. While we’re not there yet, I am more and more confident that accessibility and universal design will be embraced not just as an afterthought, but considered essential and integrated from the first steps of a project.

Accessibility has been making headlines this past year, such as with the lawsuit filed against edX (MIT and Harvard). Whereas the edX lawsuit focused on section 504 of the Rehabilitation Act of 1973, in the web world accessibility is usually synonymous with section 508. The current guidelines were enacted in 1998 and are badly in need of an update. In February of this year, the Access Board published a proposed update to the 508 standards. They are going to take a year or so to digest and evaluate all of the comments they have received. It’s expected that the new rule will be published in the Federal Register around October of next year. Institutions will then have six months to make sure they are compliant, which means everything needs to be ready to go around April of 2017.

I recently attended a webinar on the upcoming changes that was developed by the SSB Bart Group. Key areas of interest to me were as follows.

WCAG 2.0 will be the base standard

The Web Content Accessibility Guidelines (WCAG) are, in general, a simpler yet stricter set of guidelines for making content available to all users than the existing 508 guidelines. The WCAG standard is adopted around the world, so updating section 508 to use it brings an international focus to the standards.
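As a quick, non-exhaustive illustration of what WCAG-style conformance tends to look like in practice (the file names, text, and link below are made up), much of it comes down to basics like text alternatives, labeled form controls, a declared page language, and meaningful link text:

```html
<!-- Illustrative snippet only; alt text, labels, and URLs are placeholders. -->
<html lang="en">
  <body>
    <!-- Images get a text alternative -->
    <img src="chapel.jpg" alt="Duke Chapel bell tower at sunset">

    <!-- Form controls get programmatically associated labels -->
    <label for="search">Search the collections</label>
    <input id="search" type="text" name="q">

    <!-- Link text describes the destination instead of "click here" -->
    <a href="/digitalcollections/">Browse Duke Digital Collections</a>
  </body>
</html>
```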

Focus on functional use instead of product type(s)

The rules will focus less on ‘prescriptive’ fixes and more on general approaches to making content accessible. The current rules are very detailed in terms of what sorts of devices need to do what. The new rule tends to favor user preferences in order to give users control, with the goal of enabling the broadest range of users, including those with cognitive disabilities.

Non-web content is now covered

This applies to anything that will be publicly available from an institution, including things like PDFs, office documents, and so on. It also includes social media and email. One thing to note is that only the final document is covered, so working versions do not need to be accessible. Similarly, archival content is not covered unless it’s made available to the public.

Strengthened interoperability standards

These standards will apply to software and frameworks, as well as mobile and hybrid apps. However, it does not apply specifically to web apps, due to the WCAG safe harbor. But the end result should be that it’s easier for assistive technologies to communicate with other software.

Requirements for authoring tools to create accessible content

This means that editing tools like Microsoft Office and Adobe Acrobat will need to output content that is accessible by default. Currently it can take a great deal of effort after the fact to make a document accessible; content creators often either lack the knowledge of how to do it or can’t invest the time needed. I think this change should end up benefiting a lot of users.


In general, the intent of these changes is to help the 508 standards catch up to the modern world of technology. The hoped-for outcome is that accessibility gets baked into content from the start rather than included as an afterthought. I think the biggest motivator to consider is that making content accessible doesn’t just benefit disabled users; it makes that content easier to use and find for everyone.

Star Wars: The Fans Strike Back

At the recent Association of Moving Image Archivists conference in Portland, Oregon, I saw a lot of great presentations related to film and video preservation. As a Star Wars fan, I found one session particularly interesting. It was presented by Jimi Jones, a doctoral student at the University of Illinois at Urbana-Champaign, and is the result of his research into the world of fan edits.

This is a fairly modern phenomenon, whereby fans of a particular film, music recording or television show, often frustrated by the unavailability of that work on modern media, take it upon themselves to make it available, irrespective of copyright and/or the original creator’s wishes. Some fan edits appropriate the work, and alter it significantly, to make their own unique version. Neither Jimi Jones nor AMIA is advocating for fan edits, but merely exploring the sociological and technological implications they may have in the world of film and video digitization and preservation.

An example is the original 1977 theatrical release of “Star Wars” (later retitled Star Wars Episode IV: A New Hope), a movie I spent my entire 1977 summer allowance on as a child, because I was so awestruck that I went back to my local theater to see it again and again. The version that I saw then, free of more recently superimposed CGI elements like Jabba the Hutt, and in which Han Solo shoots Greedo in the Mos Eisley Cantina before Greedo can shoot Solo, is not commercially available today via any modern high-definition media such as Blu-ray or HD streaming.

The last time most fans saw the original, unaltered Star Wars Trilogy, it was likely on VHS tape (as shown above). George Lucas, the creator of Star Wars, insists that his more recent “Special Editions” of the Star Wars Trilogy, with the added CGI and the more politically-correct, less trigger-happy Han Solo, are the “definitive” versions. Thus Lucas has refused to allow any other version to be legally distributed for at least the past decade. Many Star Wars fans, however, find this unacceptable, and they are striking back.

Armed with sophisticated video digitization and editing software, a network of Star Wars fans has collaborated to create “Star Wars: Despecialized Edition,” a composite of assorted pre-existing elements that accurately presents the 1977-1983 theatrical versions of the original Star Wars Trilogy in high definition for the first time. The project is led by an English teacher in the Czech Republic, who goes by the name of “Harmy” online and is referred to as a “guerrilla restorationist.” Using BitTorrent and other peer-to-peer networks, fans can now download “Despecialized,” burn it to Blu-ray, print out high-quality cover art, and watch it on their modern widescreen TV sets in high definition.

The fans, rightly or wrongly, claim these are the versions of the films they grew up with, and they have a right to see them, regardless of what George Lucas thinks. Personally, I never liked the changes Lucas later made to the original trilogy, and I agree that “Han Shot First,” or to paraphrase Johnny Cash, “I shot a man named Greedo, just to watch him die.” We all know Greedo was a scumbag who was about to kill Solo anyway, so Han’s preemptive shot in the original Star Wars makes perfect sense. I’m not endorsing piracy, but, as a fan, I certainly understand the pent-up demand for “Star Wars: Despecialized Edition.”

tfa_vinyl

The psychology of nostalgia is interesting,  particularly when fans desire something so intensely, they will go to great lengths, technologically, and otherwise, to satiate that need. Absence makes the heart, or fan, grow stronger. This is not unique to Star Wars. For instance, Neil Young, one of the best songwriters of his generation, released a major-label record in 1973 called “Time Fades Away,” which, to this day, has never been released on compact disc.

The album, recorded on tour while his biggest hit single, “Heart of Gold,” was topping the charts, is an abrupt shift in mood and approach, and the beginning of a darker, more desolate string of albums that fans refer to as “The Ditch Trilogy.” Regarding this period, Neil said: “Heart of Gold put me in the middle of the road. Traveling there soon became a bore, so I headed for the ditch. A rougher ride but I saw more interesting people there.” Many fans, myself included, regard the three records that comprise the ditch trilogy as his greatest achievement, due to their brutal honesty, and Neil’s absolute refusal to play it safe by coasting on his recent mainstream success. But for Neil, Time Fades Away brings up so many bad memories, particularly regarding the death of his guitarist, Danny Whitten, that he has long refused to release it on CD.

In 2005, Neil Young fans gathered at least 14,000 petition signatures to get the album released on compact disc, but that yielded no results. So, many took it upon themselves, using modern technology, to meticulously transfer mint-condition vinyl copies of “Time Fades Away” from their turntables to their computers using widely available professional audio software, and then burn the album to CD. Fans also scanned the original cover art from the vinyl record, and made compact disc covers and labels that closely approximate what it would look like if the CD had been officially released.

Other fans, using peer-to-peer networks, were able to locate a digital “test pressing” of the audio for a future CD release that was nixed by Neil before it went into production. Combining that test-pressing audio, free of vinyl static, with professional artwork, the fans were essentially able to produce what Neil refused to allow: a pristine-sounding, professional-looking version of Time Fades Away on compact disc. Perhaps in response, Neil has, just in the last year, allowed Time Fades Away to be released in digital form via his high-resolution 192kHz/24-bit music service, Pono Music.

It’s clear that the main intent of the fans of Star Wars, Time Fades Away and other works of art is not to profit off their hybrid creations, or to anger the original creators. It’s merely to finally have access to what they are so nostalgic about. Ironically, if it weren’t for the unavailability of these works, a lot of this community, creativity, software mastery and “guerrilla restoration” would not be taking place. There’s something about the fact that certain works are missing from the marketplace that makes fans hunger for them, talk about them, obsess about them, and then find creative ways of acquiring or reproducing them.

This is the same impulse that fuels the fire of toy collectors, book collectors, garage-sale hunters and eBay bidders. It’s this feeling that you had something, or experienced something magical when you were younger, and no one has the right to alter it, or take access to it away from you, not even the person who created it. If you can just find it again, watch it, listen to it and hold it in your hands, you can recapture that youthful feeling, share it with others, and protect the work from oblivion. It seems like just yesterday that I was watching Han Solo shoot Greedo first on the big screen, but that was almost 40 years ago. “’Cause you know how time fades away.”

Zoomable Hi-Res Images: Hopping Aboard the OpenSeadragon Bandwagon

Our new W. Duke & Sons digital collection (released a month ago) stands as an important milestone for us: our first collection constructed in the (Hydra-based) Duke Digital Repository, which is built on a suite of community-built open source software. Among that software is a remarkable image viewer tool called OpenSeadragon. Its website describes it as:

“an open-source, web-based viewer for high-resolution zoomable images, implemented in pure Javascript, for desktop and mobile.”

OpenSeadragon viewer in action on W. Duke & Sons collection.
OpenSeadragon zoomed in, W. Duke & Sons collection.

In concert with tiled digital images (we use Pyramid TIFFs), an image server (IIPImage), and a standard image data model (IIIF: International Image Interoperability Framework), OpenSeadragon considerably elevates the experience of viewing our image collections online (a minimal setup sketch follows the list below). Its greatest virtues include:

  • smooth, continuous zooming and panning for high-resolution images
  • open source, built on web standards
  • extensible and well-documented
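To give a sense of how those pieces fit together, here’s a minimal sketch of a viewer setup; the element ID, icon path, and info.json URL are placeholders rather than our production configuration. The image server publishes each Pyramid TIFF via the IIIF Image API, and OpenSeadragon consumes the IIIF info.json as a tile source.

```javascript
// Minimal sketch (placeholder IDs and URLs): instantiate a viewer against
// a single IIIF tile source served by the image server.
var viewer = OpenSeadragon({
  id: 'viewer',                        // a <div id="viewer"> on the page
  prefixUrl: '/openseadragon/images/', // default navigation button icons
  tileSources: [
    'https://iiif.example.org/duke-card-0001/info.json'
  ]
});
```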

We can’t wait to share more of our image collections in the new platform.

OpenSeadragon Examples Elsewhere

Arthur C. Clarke’s Third Law states, “Any sufficiently advanced technology is indistinguishable from magic.” And looking at high-res images in OpenSeadragon feels pretty darn magical. Here are some of my favorite implementations from places that inspired us to use it:

  1. The Metropolitan Museum of Art. Zooming in close on this van Gogh self-portrait gives you a means to inspect the intense brushstrokes and texture of the canvas in a way that you couldn’t otherwise experience, even by visiting the museum in-person.

    Self-Portrait with a Straw Hat (obverse: The Potato Peeler). Vincent van Gogh, 1887.
  2. Chronicling America: Historic American Newspapers (Library of Congress). For instance, zoom to read in the July 21, 1871 issue of “The Sun” (New York City) about my great-great-grandfather George Aery’s conquest: being crowned the Schuetzen King, sharpshooting champion, at a popular annual festival of marksmen.
    The sun. (New York [N.Y.]), 21 July 1871. Chronicling America: Historic American Newspapers. Lib. of Congress.
  3. Other GLAMs. See these other nice examples from The National Gallery of Art, the Smithsonian, NYPL Digital Collections, and the Digital Public Library of America (DPLA).

OpenSeadragon’s Microsoft Origins

OpenSeadragon

The software began with a company called Sand Codex, founded in Princeton, NJ in 2003. By 2005, the company had moved to Seattle and changed its name to Seadragon Software. Microsoft acquired the company in 2006 and positioned Seadragon within Microsoft Live Labs.

In March 2007, Seadragon founder Blaise Agüera y Arcas gave a TED Talk where he showcased the power of continuous multi-resolution deep-zooming for applications built on Seadragon. In the months that followed, we held a well-attended staff event at Duke Libraries to watch the talk. There was a lot of ooh-ing and aah-ing. Indeed, it looked like magic. But while it did foretell a real future for our image collections, at the time it felt unattainable and impractical for our needs. It was a Microsoft thing. It required special software to view. It wasn’t going to happen here, not when we were making a commitment to move away from proprietary platforms and plugins.

Sometime in 2008, Microsoft developed a more open JavaScript-based version of Seadragon called Seadragon Ajax, and by 2009 had shared it as open-source software via a New BSD license. That removed many barriers to use; however, it still required a Microsoft server-side framework and the Microsoft AJAX library. So in the years since, the software has been re-engineered to be truly open and framework-agnostic, and has thus been rebranded as OpenSeadragon. Having a technology that’s this advanced, and so useful, be so open has been an incredible boon to cultural heritage institutions and, by extension, to the patrons we serve.

Setup

OpenSeadragon’s documentation is thorough, which helped us get up and running quickly with adding and customizing features. W. Duke & Sons cards were scanned front and back, and the albums are paginated, so we knew we had to support navigation within multi-image items.
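As a rough illustration of the kind of configuration that involves (not our exact code; the URLs and element ID are placeholders), OpenSeadragon’s sequence mode and reference strip handle paging through a multi-image item:

```javascript
// Illustrative only: paging through an ordered set of images (card fronts
// and backs, album pages) with placeholder tile source URLs.
var sources = [
  'https://iiif.example.org/card-0001-front/info.json',
  'https://iiif.example.org/card-0001-back/info.json'
];

var viewer = OpenSeadragon({
  id: 'viewer',
  prefixUrl: '/openseadragon/images/',
  sequenceMode: true,          // treat tileSources as an ordered sequence
  showReferenceStrip: true,    // thumbnail strip for moving between images
  tileSources: sources
});
```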

Customizations

Some aspects of the interface weren’t quite as we needed them to be out-of-the-box, so we added and customized a few features.

  • Custom Button Binding. Created our own navigation menu to match our site’s more modern aesthetic.
  • Page Indicator / Jump to Page. Developed a page indicator and direct-input page-jump box using the OpenSeadragon API (see the sketch below).
  • Styling. Revised the look & feel with additional CSS & JavaScript.
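Here’s a rough sketch of how a page indicator and jump box can be wired up through the OpenSeadragon API (the element IDs and markup are illustrative), assuming a viewer created in sequence mode with its tile sources kept in a sources array, as sketched above:

```javascript
// Illustrative sketch: keep an "Image X of Y" readout in sync with the viewer
// and let users type a page number to jump directly to it.
function updateIndicator(page) {
  document.getElementById('page-indicator').textContent =
    'Image ' + (page + 1) + ' of ' + sources.length;
}

// The 'page' event fires whenever the viewer moves to another image
viewer.addHandler('page', function (event) {
  updateIndicator(event.page);
});
updateIndicator(viewer.currentPage());

// Direct-input page jump box
document.getElementById('page-jump').addEventListener('change', function () {
  var n = parseInt(this.value, 10);
  if (n >= 1 && n <= sources.length) {
    viewer.goToPage(n - 1);    // OpenSeadragon pages are zero-based
  }
});
```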

Future Directions: Page-Turning & IIIF

OpenSeadragon does have some limitations where we think it alone won’t meet all our needs for image interfaces. When we have highly structured paginated items with associated transcriptions or annotations, we’ll need to implement something a bit more complex. Mirador (example) and Universal Viewer (example) are two open-source page-viewer tools built on top of OpenSeadragon. Both projects depend on “manifests” using the IIIF Presentation API to model this additional data.
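For a flavor of what such a manifest looks like (heavily trimmed, with placeholder URLs and labels), here’s an illustrative IIIF Presentation 2.x manifest with a single canvas; page-turner viewers read this structure to build their navigation:

```javascript
// Trimmed, illustrative IIIF Presentation 2.x manifest; all URLs and labels
// are placeholders. One canvas per page, each painted with an image served
// via the IIIF Image API.
var manifest = {
  "@context": "http://iiif.io/api/presentation/2/context.json",
  "@id": "https://iiif.example.org/album-01/manifest.json",
  "@type": "sc:Manifest",
  "label": "Example album",
  "sequences": [{
    "@type": "sc:Sequence",
    "canvases": [{
      "@id": "https://iiif.example.org/album-01/canvas/p1",
      "@type": "sc:Canvas",
      "label": "Page 1",
      "height": 4000,
      "width": 3000,
      "images": [{
        "@type": "oa:Annotation",
        "motivation": "sc:painting",
        "resource": {
          "@id": "https://iiif.example.org/album-01-p1/full/full/0/default.jpg",
          "@type": "dctypes:Image",
          "service": { "@id": "https://iiif.example.org/album-01-p1" }
        },
        "on": "https://iiif.example.org/album-01/canvas/p1"
      }]
    }]
  }]
};
```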

The Hydra Page Turner Interest Group recently produced a summary report that compares these page-viewer tools and features, and highlights strategies for creating the multi-image IIIF manifests they rely upon. Several Hydra partners are already off and running; at Duke we still have some additional research and development to do in this area.

We’ll be adding many more image collections in the coming months, including migrating all of our existing ones that predated our new platform. Exciting times lie ahead. Stay tuned.

Animated Demo

eye-ui-demo-4