Category Archives: Digital Collections

Moving the Needle: Bring on Phase 2 of the Tripod3/Digital Collections Migration

Last time I wrote for Bitstreams, I said “Today is the New Future.” It was a day of optimism, as we published for the first time in our next-generation platform for digital collections. The debut of the W. Duke, Sons & Co. Advertising Materials, 1880-1910 was the first visible success of a major effort to migrate our digital collections into the Duke Digital Repository. “Our current plan,” I propounded, “is to have nearly all of the content of Duke Digital Collections available in the new platform by the end of March, 2016.”

Since then we’ve published a second collection – the Benjamin and Julia Stockton Rush Papers – in the new platform, but we’ve also done more extensive planning for the migration. We’ll divide the work into six-week phases or “supersprints” that overlay the shorter sprints of our software development cycle. The work will take longer than I suggested in October – we now project the bulk of it to be completed by the end of the fourth six-week phase, or toward the end of June of this year, with some continuing until deeper in the calendar year.

As it happens, today represents the rollover from Phase 1 to Phase 2 of our plan.  Phase 1 was relatively light in its payload. During the next phase – concluding in six weeks on March 28 – we plan to add 24 of the collections currently published in our older platform, as well as two new collections.

As team leader, I take upon myself the hugely important task of assigning mottos to each phase of the project. The motto for Phase 1 was “Plant the seeds in the bottle.” It derives from the story of David Latimer’s bottle garden, which he planted in 1960 and has not watered since Duke Law alum Richard Nixon was president.

This image from the Friedrich Carl Peetz Photographs, along with many other items from our photography and manuscript collections, will be among those re-published in the Duke Digital Repository during Phase 2 of our migration process.

Imagine, I said to the group, we are creating self-sustaining environments for our collections, that we can stash under the staircase next to the wine rack. Maybe we tend to them once or twice, but they thrive without our constant curation and intervention. Everyone sort of looked at me as if I had suggested using a guillotine as a bagel slicer for a staff breakfast event. But they’re all good sports. We hunkered down, and expect to publish one new collection, and re-publish two of the older collections, in the new platform this week.

The motto for Phase 2 is “Move the needle.” The object here is to lean on our work in Phase 1 to complete a much larger batch of materials. We’ll extend our Phase 1 work on photography collections to cover many more of the existing photography collections. We’ll also re-publish many of the “manuscript collections,” which is our way of referring to the dozen or so collections that we previously published by embedding content in collection guides.

If we are successful in this approach, by the end of Phase 2 we’ll have migrated a significant portion of the digital collections to the Duke Digital Repository. Each collection, presumably, will flourish, sealed in a fertile, self-regulating environment, like bottle gardens, or wine.

Here’s a page where we’ll track progress.

As we’ve written previously, we’re in the process of re-digitizing the William Gedney Photographs, so they will not be migrated to the Duke Digital Repository in Phase 2, but will wait until we’ve completed that project.

Future Retro: New Frontiers in Portability

Duke Libraries’ Digital Collections offer a wealth of primary source material, opening unique windows to cultural moments both long past and quickly closing.  In my work as an audio digitization specialist, I take a particular interest in current and historical audio technology and also how it is depicted in other media.  The digitized Duke Chronicle newspaper issues from the 1980s provide a look at how students of the time were consuming and using ever-smaller audio devices in the early days of portable technology.

[Image: Walkman advertisement from the Duke Chronicle]

Sony introduced the Walkman in the U.S. in 1980.  Roughly pocket-sized (actually somewhere around the size of a sandwich or small brick), it allowed the user to take their music on the go, listening to cassette tapes on lightweight headphones while walking, jogging, or travelling.  The product was wildly successful and ubiquitous in its time, so much so that “walkman” became a generic term for any portable audio device.

[Image: Walkman sale advertisement]

The success of the Walkman was probably bolstered by the jogging/fitness craze that began in the late 1970s.  Health-conscious consumers could get in shape while listening to their favorite tunes.  This points to two of the main concepts that Sony highlighted in their marketing of the Walkman:  personalization and privatization.

[Image: portable audio advertisements]

Previously, the only widely available portable audio devices were transistor radios, meaning that the listener was at the mercy of the DJ or station manager’s musical tastes.  However, the Walkman user could choose from their own collection of commercially available albums, or take it a step further, and make custom mixtapes of their favorite songs.

[Image: “lost Walkman” classified ad]

The Walkman also allowed the user to “tune out” surrounding distractions and be immersed in their own private sonic environment.  In an increasingly noisy and urbanized world, the listener was able to carve out a small space in the cacophony and confusion.  Some models had two headphone jacks so you could even share this space with a friend.

[Image: Walkman advertisement]

One can see that these guiding concepts behind the Walkman and its successful marketing have only continued to proliferate and accelerate in the world today.  We now expect unlimited on-demand media on our handheld devices 24 hours a day.  Students of the 1980s had to make do with a boombox and backpack full of cassette tapes.

[Image: boombox advertisement]


Digital Projects and Production Services’ “Best Of” List, 2015

It’s that time of year when all the year-end “best of” lists come out: best music, movies, books, etc.  Well, we could not resist following suit this year, so… Ladies and gentlemen, I give you – in no particular order – the 2015 best of list for the Digital Projects and Production Services department (DPPS).

Metadata!

Metadata Architect
In 2015, DPPS welcomed a new staff member to our team: Maggie Dickson came on board as our metadata architect! She is already leading a team to whip our digital collections metadata into shape, and is actively consulting with the digital repository team and others around the library.  Bringing metadata expertise into the DPPS portfolio ensures that collections are as discoverable, shareable, and re-purposable as possible.

An issue of the Chronicle from 1988

King Intern for Digital Collections
DPPS started the year with two large University Archives projects on our plates: the ongoing Duke University Chronicle digitization and a grant to digitize hundreds of Chapel recordings.  Thankfully, University Archives allocated funding for us to hire an intern, and what a fabulous intern we found in Jessica Serrao (the proof is in her wonderful blog posts).  The internship has been an unqualified success, and we hope to be able to repeat such a collaboration with other units around the library.

 

Tripod3
Our digital project developers have spent much of the year developing the new Tripod3 interface for the Duke Digital Repository.  This process has been an excellent opportunity for cross-departmental collaborative application development and implementing Agile methodology, with sprints, scrums, and stand-up meetings galore!  We launched our first collection on the new platform in October, and we will have a second one out the door before the end of this year.  We plan on building on this success in 2016 as we migrate existing collections over to Tripod3.

Repository ingest planning
Speaking of Tripod3 and the Duke Digital Repository, we have been ingesting digital collections into the Duke Digital Repository since 2014.  However, we have a plan to kick ingests up a notch (or 5).  Although the real work will happen in 2016, the planning has been a long time coming and we are all very excited to be at this phase of the Tripod3/repository process (even if it will be a lot of work).  Stay tuned!

Digital Collections Promotional Card
This is admittedly a small achievement, but it is one that has been on my to-do list for 2 years, so it actually feels like a pretty big deal.  In 2015, we designed a 5 x 7 postcard to hand out during Digital Production Center (DPC) tours, at conferences, and to any visitors to the library.  Also, I just really love to see my UNC fan colleagues cringe every time they turn the card over and see Coach K’s face.  It’s really the little things that make our work fun.

New Exhibits Website
In anticipation of opening of new exhibit spaces in the renovated Rubenstein library, DPPS collaborated with the exhibits coordinator to create a brand new library exhibits webpage.  This is your one stop shop for all library exhibits information in all its well-designed glory.

Aggressive cassette rehousing procedures

Audio and Video Preservation
In 2014, the Digital Production Center bolstered workflows for preservation-based digitization.  Unlike our digital collections projects, these preservation digitization efforts do not have a publication outcome, so they often go unnoticed.  Over the past year, we have quietly digitized around 400 audio cassettes in house (this doesn’t count outsourced Chapel Recordings digitization), some of which needed to be dramatically re-housed.

On the video side, efforts have been sidelined by digital preservation storage costs.  However, some behind-the-scenes planning is in the works, which means we should be able to do more next year.  Also, we were able to purchase a U-matic tape cleaner this year, which, while it doesn’t sound very glamorous to the rest of the world, thrills us to no end.

Revisiting the William Gedney Digital Collection
Fans of Duke Digital Collections are familiar with the current Gedney Digital Collection.  Both the physical and digital collection have long needed an update.  So in recent years, the physical collection has been reprocessed, and this fall we started an effort to digitize more materials in the collection, and to higher standards than were practical in the late 1990s.

DPC’s new work room

Expanding DPC
When the Rubenstein Library re-opened, our neighbor moved into the new building, and the DPC got to expand into his office!   The extra breathing room means more space for our specialists and our equipment, which is not only more comfortable but also better for our digitization practices.  The two spaces are separate for now, but we are hoping to be able to combine them in the next year or two.


2015 was a great year in DPPS, and there are many more accomplishments we could add to this list.  One of our team mottos is “great productivity and collaboration, business as usual.”  We look forward to more of the same in 2016!

Zoomable Hi-Res Images: Hopping Aboard the OpenSeadragon Bandwagon

Our new W. Duke & Sons digital collection (released a month ago) stands as an important milestone for us: our first collection constructed in the (Hydra-based) Duke Digital Repository, which is built on a suite of community-built open source software. Among that software is a remarkable image viewer tool called OpenSeadragon. Its website describes it as:

“an open-source, web-based viewer for high-resolution zoomable images, implemented in pure Javascript, for desktop and mobile.”

OpenSeadragon viewer in action on W. Duke & Sons collection.
OpenSeadragon zoomed in, W. Duke & Sons collection.

In concert with tiled digital images (we use Pyramid TIFFs), an image server (IIPImage), and a standard image data model (IIIF: International Image Interoperability Framework), OpenSeadragon considerably elevates the experience of viewing our image collections online. Its greatest virtues include:

  • smooth, continuous zooming and panning for high-resolution images
  • open source, built on web standards
  • extensible and well-documented
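
Wiring those pieces together takes surprisingly little glue code. Here is a minimal sketch of instantiating the viewer against a IIIF image info object; the element ID, icon path, identifier, and dimensions are hypothetical placeholders, not our production values.

    var viewer = OpenSeadragon({
      id: "viewer",                         // a <div id="viewer"> in the page markup
      prefixUrl: "/openseadragon/images/",  // location of the default toolbar icons
      tileSources: [{
        "@context": "http://iiif.io/api/image/2/context.json",
        "@id": "https://iiif.example.org/dukesons-K0001",  // IIIF image endpoint (hypothetical)
        "protocol": "http://iiif.io/api/image",
        "profile": "http://iiif.io/api/image/2/level1.json",
        "width": 4000,   // full-resolution pixel dimensions
        "height": 6000
      }]
    });

OpenSeadragon accepts the contents of a IIIF info.json directly as a tile source; the image server (IIPImage, in our case) answers the resulting tile requests from the Pyramid TIFFs.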

We can’t wait to share more of our image collections in the new platform.

OpenSeadragon Examples Elsewhere

Arthur C. Clarke’s Third Law states, “Any sufficiently advanced technology is indistinguishable from magic.” And looking at high-res images in OpenSeadragon feels pretty darn magical. Here are some of my favorite implementations from places that inspired us to use it:

  1. The Metropolitan Museum of Art. Zooming in close on this van Gogh self-portrait gives you a means to inspect the intense brushstrokes and texture of the canvas in a way that you couldn’t otherwise experience, even by visiting the museum in person.

    Self-Portrait with a Straw Hat (obverse: The Potato Peeler). Vincent van Gogh, 1887.
  2. Chronicling America: Historic American Newspapers (Library of Congress). For instance, zoom to read in the July 21, 1871 issue of “The Sun” (New York City) about my great-great-grandfather George Aery being crowned the Schuetzen King, sharpshooting champion, at a popular annual festival of marksmen.
    The sun. (New York [N.Y.]), 21 July 1871. Chronicling America: Historic American Newspapers. Lib. of Congress.
  3. Other GLAMs. See these other nice examples from The National Gallery of Art, The Smithsonian National Museum of American History, NYPL Digital Collections, and Digital Public Library of America (DPLA).

OpenSeadragon’s Microsoft Origins


The software began with a company called Sand Codex, founded in Princeton, NJ in 2003. By 2005, the company had moved to Seattle and changed its name to Seadragon Software. Microsoft acquired the company in 2006 and positioned Seadragon within Microsoft Live Labs.

In March 2007, Seadragon founder Blaise Agüera y Arcas gave a TED Talk where he showcased the power of continuous multi-resolution deep-zooming for applications built on Seadragon. In the months that followed, we held a well-attended staff event at Duke Libraries to watch the talk. There was a lot of ooh-ing and aah-ing. Indeed, it looked like magic. But while it did foretell a real future for our image collections, at the time it felt unattainable and impractical for our needs. It was a Microsoft thing. It required special software to view. It wasn’t going to happen here, not when we were making a commitment to move away from proprietary platforms and plugins.

Sometime in 2008, Microsoft developed a more open Javascript-based version of Seadragon called Seadragon Ajax, and by 2009 had shared it as open-source software via a New BSD license.  That removed many barriers to use; however, it still required a Microsoft server-side framework and Microsoft AJAX library.  So in the years since, the software has been re-engineered to be truly open and framework-agnostic, and has thus been rebranded as OpenSeadragon.  Having a technology that’s this advanced–and so useful–be so open has been an incredible boon to cultural heritage institutions and, by extension, to the patrons we serve.

Setup

OpenSeadragon’s documentation is thorough, so that helped us get up and running quickly with adding and customizing features. W. Duke & Sons cards were scanned front & back, and the albums are paginated, so we knew we had to support navigation within multi-image items. The key feature involved here is OpenSeadragon’s sequence mode, sketched below.
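
In practice, the multi-image support comes down to a couple of constructor options. A minimal sketch, with hypothetical info.json URLs standing in for our real image identifiers:

    var viewer = OpenSeadragon({
      id: "viewer",
      prefixUrl: "/openseadragon/images/",
      sequenceMode: true,        // treat tileSources as an ordered sequence of pages
      showReferenceStrip: true,  // thumbnail strip for moving among the pages
      tileSources: [             // one IIIF info.json URL per scanned side
        "https://iiif.example.org/dukesons-K0001-front/info.json",
        "https://iiif.example.org/dukesons-K0001-back/info.json"
      ]
    });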

Customizations

Some aspects of the interface weren’t quite as we needed them to be out-of-the-box, so we added and customized a few features.

  • Custom Button Binding. Created our own navigation menu to match our site’s more modern aesthetic.
  • Page Indicator / Jump to Page. Developed a page indicator and direct-input page jump box using the OpenSeadragon API (see the sketch after this list).
  • Styling. Revised the look & feel with additional CSS & Javascript.
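
To give a flavor of those changes, here is a condensed sketch. The element IDs are hypothetical; the constructor options, the “page” event, and goToPage() come from the OpenSeadragon API.

    var tileSources = [  // one IIIF info.json URL per image (hypothetical)
      "https://iiif.example.org/dukesons-K0001-front/info.json",
      "https://iiif.example.org/dukesons-K0001-back/info.json"
    ];

    var viewer = OpenSeadragon({
      id: "viewer",
      prefixUrl: "/openseadragon/images/",
      sequenceMode: true,
      tileSources: tileSources,
      // Custom button binding: hand OpenSeadragon the IDs of our own toolbar elements.
      zoomInButton: "osd-zoom-in",
      zoomOutButton: "osd-zoom-out",
      homeButton: "osd-home",
      fullPageButton: "osd-full-page",
      previousButton: "osd-prev",
      nextButton: "osd-next"
    });

    // Page indicator: refresh "n of N" whenever the viewer changes pages.
    viewer.addHandler("page", function (event) {
      document.getElementById("osd-page-indicator").textContent =
        (event.page + 1) + " of " + tileSources.length;
    });

    // Jump-to-page: go straight to the page number the user types.
    document.getElementById("osd-page-jump").addEventListener("change", function () {
      viewer.goToPage(parseInt(this.value, 10) - 1);  // goToPage() is zero-indexed
    });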

Future Directions: Page-Turning & IIIF

OpenSeadragon does have some limitations, and we think that it alone won’t meet all our needs for image interfaces. When we have highly structured paginated items with associated transcriptions or annotations, we’ll need to implement something a bit more complex. Mirador (example) and Universal Viewer (example) are two open-source page-viewer tools built on top of OpenSeadragon. Both projects depend on “manifests” using the IIIF Presentation API to model this additional data.

The Hydra Page Turner Interest Group recently produced a summary report that compares these page-viewer tools and features, and highlights strategies for creating the multi-image IIIF manifests they rely upon. Several Hydra partners are already off and running; at Duke we still have some additional research and development to do in this area.
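
For a sense of what those manifests contain, here is a skeletal IIIF Presentation 2.0 manifest for the first page of a two-page item; all URLs and labels are hypothetical placeholders.

    {
      "@context": "http://iiif.io/api/presentation/2/context.json",
      "@id": "https://repository.example.org/manifests/item-1",
      "@type": "sc:Manifest",
      "label": "Sample two-page item",
      "sequences": [{
        "@type": "sc:Sequence",
        "canvases": [{
          "@id": "https://repository.example.org/canvas/p1",
          "@type": "sc:Canvas",
          "label": "front",
          "width": 4000,
          "height": 6000,
          "images": [{
            "@type": "oa:Annotation",
            "motivation": "sc:painting",
            "on": "https://repository.example.org/canvas/p1",
            "resource": {
              "@id": "https://iiif.example.org/p1/full/full/0/default.jpg",
              "@type": "dctypes:Image",
              "service": {
                "@context": "http://iiif.io/api/image/2/context.json",
                "@id": "https://iiif.example.org/p1",
                "profile": "http://iiif.io/api/image/2/level1.json"
              }
            }
          }]
        }]
      }]
    }

Each additional page is just another canvas in the sequence; a page-turner like Mirador or Universal Viewer reads the manifest and gets page order, labels, dimensions, and a IIIF image service for each page.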

We’ll be adding many more image collections in the coming months, including migrating all of our existing ones that predated our new platform. Exciting times lie ahead. Stay tuned.

Animated Demo

[Animated demo of the zoomable image interface]


Today is the New Future: The Tripod3 Project and our Next-Gen UI for Digital Collections

Yesterday was Back to the Future day, and the Internet had a lot of fun with it. I guess now it falls to each and every one of us to determine whether or not today begins a new future. It’s certainly true for Duke Digital Collections.

Today we roll out – softly – the first release of Tripod3, the next-generation platform for digital collections. For now, the current version supports a single, new collection, the W. Duke, Sons & Co. Advertising Materials, 1880-1910. We’re excited about both the collection – which Noah Huffman previewed in this blog almost exactly a year ago – and the platform, which represents a major milestone in a project that began nearly a year ago.

The next few months will see a great deal more work on the project. We have new collections scheduled for December and the first quarter of 2016, we’ll gradually migrate the collections from our existing site, and we’ll be developing the features and the look of the new site in an iterative process of feedback, analysis, and implementation. Our current plan is to have nearly all of the content of Duke Digital Collections available in the new platform by the end of March, 2016.

The completion of the Tripod3 project will mean the end of life for the current-generation platform, which we call, to no one’s surprise, Tripod2. However, we have not set an exact timeline for sunsetting Tripod2. During the transitional phase, we will do everything we can to make the architecture of Duke Digital Collections transparent, and our plans clear.

After the jump, I’ll spend the rest of this post going into a little more depth about the project, but I want to express my pride and gratitude to an excellent team – you know who you are – who helped us achieve this milestone.


Google Analytics and Digitized Cultural Heritage

For centuries, cultural heritage institutions—like libraries and archives—monitored the use of their collections through varying means of counting and recording.  From rare manuscripts used in special collections reading rooms to the copy of Moby Dick checked out at the circulation desk, we like to keep note of who is using what. But what about those digitized special collections that patrons use more and more often?  How do we monitor use of materials when they live on websites and are accessed remotely by computers, tablets, and smartphones?  That’s where web analytics comes into play.

Google Analytics is by far the most widely used web analytics service today, and it is what many cultural heritage institutions turn to for data on digital collections.  We can now rely on pageviews, sessions, and a plethora of other metrics to tell us how patrons are using materials online.
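
Collecting those metrics starts with a small tracking snippet on every page. This is Google’s documented async snippet for analytics.js, with a placeholder property ID:

    <script>
    window.ga = window.ga || function () { (ga.q = ga.q || []).push(arguments); }; ga.l = +new Date;
    ga('create', 'UA-XXXXX-Y', 'auto');  // 'UA-XXXXX-Y' stands in for your property ID
    ga('send', 'pageview');              // records a pageview hit on each page load
    </script>
    <script async src="https://www.google-analytics.com/analytics.js"></script>

Each pageview hit carries the page URL along with anonymized location and device details, which is where reports like the geographic breakdowns below come from.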

Recently, I began examining the use of Duke University Archives’ digital collections to see what I could find.  I quickly found that I was lost.  Google Analytics is so overwhelmingly abundant with data, what I’d venture to call a statistical minefield (or ninja warrior obstacle course?), that I found myself in a fog of confusion.  Don’t get me wrong, these data sets can be extremely useful if you know what you’re doing.  It just took me a while to get my bearings and slowly crawl out of the fog.

With that said, if you’re interested in learning more, use every resource available to wrap your head around what Google Analytics offers and how it can help your institution.  Google provides a set of tutorials at Analytics Academy.  Another resource, Lynda.com, is a great subscription service that may be accessible through institutional memberships.  Don’t rule out YouTube either.  I also learned a lot of the basics from Molly Bragg, my supervisor, who is on the Digital Library Federation Assessment Interest Group’s (DLF AIG) Analytics subcommittee.  They’ve been working on a white paper to lay out digital library analytics best practices, which they hope will help steer cultural heritage institutions in the right direction.

In my own experience scouring usage data from the Duke Chapel Recordings collection, I found many rather predictable results: most users come from North Carolina, Durham in particular.

[Map: sessions by state]

[Map: sessions by city]

But then there were strange statistics that can sometimes be hard to figure out.  Like why is Texas our third highest state for traffic, with 7% of our sessions originating there?

[Chart: top states by traffic, with Texas third]

Of Texas’ total sessions, 22% viewed webpages relating to Carlyle Marney’s sermons.  For much of the 1970s, Marney was a visiting professor at Duke’s Divinity School, but this web traffic all originated in Austin, TX.  Doing some internet digging, I found that in the 1940s and 1950s, Marney was a pastor and seminary professor in Austin.  It is understandable why the interest in his sermons comes from a region in Texas that is likely familiar with his pastoral work.

I also found that referrals from our very own Bitstreams blog make up a portion of the traffic to the collection.  That explains some of our spikes in pageviews, which correspond with blog post dates.  This is proof that social media does generate traffic!

[Chart: collection pageviews over time]

Once that disorienting fog has lifted, and you have navigated the statistical minefield, you might just find that analytics can be fun.  Now it doesn’t look so much like a minefield as a gold mine.

Have you found analytics useful at your cultural heritage institution?  We’d love to hear from you!

Baby Steps towards Metadata Synchronization

How We Got Here: A terribly simplistic history of library metadata

Managing the description of library collections (especially “special” collections) is an increasingly complex task.  In the days of yore, we bought books and other things, typed up or purchased catalog cards describing those things (metadata), and filed the cards away.  It was tedious work, but fairly straightforward.  If you wanted to know something about anything in the library’s collection, you went to the card catalog.  Simple.

Some time in the 1970s or 1980s we migrated all (well, most) of that card catalog description to the ILS (Integrated Library System).  If you wanted to describe something in the library, you made a MARC record in the ILS.  Patrons searched those MARC records in the OPAC (the public-facing view of the ILS).  Still pretty simple.  Sure, we maintained other paper-based tools for managing description of manuscript and archival collections (printed finding aids, registers, etc.), but until somewhat recently, the ILS was really the only “system” in use in the library.

Duke Online Catalog, 1980s

From the 1990s on things got complicated. We started making EAD and MARC records for archival collections. We started digitizing parts of those collections and creating Dublin Core records and sometimes TEI for the digital objects.  We created and stored library metadata in relational databases (MySQL), METS, MODS, and even flat HTML. As library metadata standards proliferated, so too did the systems we used to create, manage, and store that metadata.

Now, we have an ILS for managing MARC-based catalog records, ArchivesSpace for managing more detailed descriptions of manuscript collections, a Fedora (Hydra) repository for managing digital objects, CONTENTdm for managing some other digital objects, and lots of little intermediary descriptive tools (spreadsheets, databases, etc.).  Each of these systems stores library metadata in a different format and in varying levels of detail.

So what’s the problem and what are we doing about it?

The variety of metadata standards and systems isn’t the problem.  The problem–a very painful and time-consuming one–is having to maintain and reconcile description of the same thing (a manuscript, a folder of letters, an image, an audio file, etc.) across all these disparate metadata formats and systems.  It’s a metadata synchronization problem, and it’s a big one.

For the past four months or so, a group of archivists and developers here in the library have been meeting regularly to brainstorm ways to solve or at least help alleviate some of our metadata synchronization problems.  We’ve been calling our group “The Synchronizers.”

What have The Synchronizers been up to?  Well, so far we’ve been trying to tackle two pieces of the synchronization conundrum:

Problem 1 (the big one): Keeping metadata for special collections materials in sync across ArchivesSpace, the digitization process, and our Hydra repository.

Ideally, we’d like to re-purpose metadata from ArchivesSpace to facilitate the digitization process and also keep that metadata in sync as items are digitized, described more fully, and ingested into our Hydra repository. Fortunately, we’re not the only library trying to tackle this problem.  For more on AS/Hydra integration, see the work of the Hydra Archivists Interest Group.

Below are a couple of rough sketches we drafted to start thinking about this problem at Duke.

Hydra / ArchivesSpace Integration Sketch, take 1
Hydra / ArchivesSpace Integration Sketch, take 2


In addition to these systems integration diagrams, I’ve been working on some basic tools (scripts) that address two small pieces of this larger problem:

  • A script to auto-generate digitization guides by extracting metadata from ArchivesSpace-generated EAD files (digitization guides are simply spreadsheets we use to keep track of what we digitize and to assign identifiers to digital objects and files during the digitization process).
  • A script that uses a completed digitization guide to batch-create digital object records in ArchivesSpace and at the same time link those digital objects to the descriptions of the physical items (the archival object records, in ArchivesSpace-speak); a minimal sketch of this step follows below.  Special thanks to Dallas Pillen at the University of Michigan for doing most of the heavy lifting on this script.
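
To make the second script concrete, here is a stripped-down sketch of its central step written against the ArchivesSpace REST API. This is not the actual script; it assumes Node 18+ for the built-in fetch, and the hostname, credentials, and digitization-guide field names are hypothetical.

    const AS = "https://archivesspace.example.org/api";  // hypothetical backend URL

    // Authenticate; ArchivesSpace returns a session token for subsequent calls.
    async function login(user, password) {
      const res = await fetch(
        `${AS}/users/${user}/login?password=${encodeURIComponent(password)}`,
        { method: "POST" });
      return (await res.json()).session;
    }

    // For one row of the digitization guide: create a digital object record,
    // then link it to the existing archival object as a new instance.
    async function createAndLink(session, repoId, row) {
      const headers = {
        "X-ArchivesSpace-Session": session,
        "Content-Type": "application/json"
      };
      // 1. Create the digital object from the guide row's identifier and title.
      let res = await fetch(`${AS}/repositories/${repoId}/digital_objects`, {
        method: "POST",
        headers,
        body: JSON.stringify({ digital_object_id: row.identifier, title: row.title })
      });
      const doUri = (await res.json()).uri;  // e.g. /repositories/2/digital_objects/123

      // 2. Fetch the archival object, append a digital_object instance, save it back.
      res = await fetch(`${AS}${row.archival_object_uri}`, { headers });
      const ao = await res.json();
      ao.instances.push({
        instance_type: "digital_object",
        digital_object: { ref: doUri }
      });
      await fetch(`${AS}${row.archival_object_uri}`, {
        method: "POST", headers, body: JSON.stringify(ao)
      });
    }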

Problem 2 (the smaller one): Using ArchivesSpace to produce MARC records for archival collections (or, stopping all that cutting and pasting).

In the past, we’ve had two completely separate workflows in special collections for creating archival description in EAD and creating collection-level MARC records for those same collections.  Archivists churned out detailed EAD finding aids and catalogers took those finding aids, and cut-and-pasted relevant sections into collection-level MARC records.  It’s quite silly, really, and we need a better solution that saves time and keeps metadata consistent across platforms.

While we haven’t done much work in this area yet, we have formed a small working group of archivists/catalogers and developed the following work plan:

  1. Examine default ArchivesSpace MARC exports and compare those exports to current MARC cataloging practices (document differences).
  2. Examine differences between ArchivesSpace MARC and “native” MARC and decide which current practices are worth maintaining, keeping in mind that we’ll need to modify default ArchivesSpace MARC exports to meet current MARC authoring practices.
  3. Develop cross-walking scripts or modify the ArchivesSpace MARC exporter to generate usable MARC data from ArchivesSpace.
  4. Develop and document an efficient workflow for pushing or harvesting MARC data from ArchivesSpace to both OCLC and our local ILS (see the sketch after this list).
  5. If possible, develop, test, and document tools and workflows for re-purposing container (instance) information in ArchivesSpace in order to batch-create item records in the ILS for archival containers (boxes, folders, etc).
  6. Develop training for staff on new ArchivesSpace to MARC workflows.
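
The harvesting end of step 4 can be pleasantly small. A sketch using the stock ArchivesSpace MARCXML export endpoint, reusing the session and constants from the sketch above; the resource ID is a placeholder.

    // Fetch the collection-level MARCXML for one resource (finding aid).
    const res = await fetch(
      `${AS}/repositories/${repoId}/resources/marc21/${resourceId}.xml`,
      { headers: { "X-ArchivesSpace-Session": session } });
    const marcXml = await res.text();  // hand off to crosswalk/cleanup before OCLC or the ILS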

Conclusion

So far we’ve only taken baby steps towards our dream of TOTAL METADATA SYNCHRONIZATION, but we’re making progress.  Please let us know if you’re working on similar projects at your institution. We’d love to hear from you.

Future Retro: Images of Sound Technology in the 1960s Duke Chronicle

Many of my Bitstreams posts have featured old-school audio formats (wax cylinder, cassette and open reel tape, Minidisc) and discussed how we go about digitizing these obsolete media to bring them to present-day library users at the click of a mouse.  In this post, I will take a different tack and show how this sound technology was represented and marketed during its heyday.  The images used here are taken from one of our very own digital collections–the Duke Chronicle of the 1960s.

The Record Bar

Students of that era would have primarily listened to music on vinyl records purchased directly from a local retailer.  The advertisement above boasts of “complete stocks, latest releases, finest variety” with sale albums going for as little as $2.98 apiece.  This is a far cry from the current music industry landscape where people consume most of their media via instant download and streaming from iTunes or Spotify and find new artists and songs via blogs, Youtube videos, or social media.  The curious listener of the 1960s may have instead discovered a new band through word of mouth, radio, or print advertising.  If they were lucky, the local record shop would have the LP in stock and they could bring it home to play on their hi-fi phonograph (like the one shown below).  Notice that this small “portable” model takes up nearly the whole tabletop.

Phonograph

The Moon

Duke students of the 1960s would have also used magnetic tape-based media for recording and playing back sound.  The advertisement above uses Space Age imagery and claims that the recorder (“small enough to fit in the palm of your hand”) was used by astronauts on lunar missions.  Other advertisements suggest more grounded uses for the technology:  recording classroom lectures, practicing public speaking, improving foreign language comprehension and pronunciation, and “adding fun to parties, hayrides, and trips.”

Tape Your Notes

Add a Track

Creative uses of the technology are also suggested.  The “Add-A-Track” system allows you to record multiple layers of sound to create your own unique spoken word or musical composition.  You can even use your tape machine to record a special message for your Valentine (“the next best thing to you personally”).  Amplifier kits are also available for the ambitious electronics do-it-yourselfer to build at home.

Tell Her With Tape

Amplifier Kit

These newspaper ads demonstrate just how much audio technology and our relationship to it have changed over the past 50 years.  Everything is smaller, faster, and more “connected” now.  Despite these seismic shifts, one thing hasn’t changed.  As the following ad shows, the banjo never goes out of style.

Banjo