New Digitization Initiative and Call for Proposals

Two years ago, the Duke Libraries' Advisory Council for Digital Collections launched a new process for proposing digitization projects. Previously, the group accepted new digitization proposals every month. We decided to shift to a "digitization initiative" approach, in which the Council issues a time-limited call for proposals focused on a theme or format. This new method has allowed staff across different departments to plan and coordinate their efforts more effectively.

This fall we are inviting DUL staff to propose audio and video (A/V) collections and items for digitization. DUL staff are welcome to partner with Duke faculty on their proposals. We chose to focus on A/V formats this year because of the preservation risks associated with the material: magnetic tape formats are especially fragile compared to film, both because of their composition and because working playback equipment is increasingly scarce.

The complete call for proposals, including criteria and a link to the proposal form, is online. Proposals should be submitted on or before November 18.

What about non-A/V digitization proposals? 

The Advisory Council is working on another call for digitization proposals, which is intended to include non-A/V formats (manuscripts, photographs, and more).  We should be able to announce the new call before the end of the calendar year.  Stay tuned!

DUL staff can also submit proposals for small digitization projects at any time, as long as they fit the criteria for an "easy" project. Easy projects are small in size and scope and include a wide range of formats; complete guidelines are online along with the proposal form.

Lighting and the PhaseOne: It’s More Than Point and Shoot

Last week, I went to see the movie IT: Chapter 2. One thing I really appreciated about the movie was how it used a scene's lighting to full effect. Some scenes are brightly lit to signify the friendship among the main characters; conversely, dark scenes signal the menace of the evil Pennywise the Clown. For the movie crew, it no doubt took a lot of time and manpower to light each individual scene, especially with a run time of nearly 3 hours.

We do the same type of light setup and management inside the Digital Production Center (DPC) when we take photos of objects like books, letters, and manuscripts. Today, I will talk specifically about how we light the bound material that comes our way, like books or booklets. This type of material is almost always shot on our PhaseOne camera, so I will highlight that lighting setup in particular.

Before We Begin

Simply turning on the lights in our camera room isn't enough. To properly light everything that needs to be shot on the PhaseOne, we use the specific tools and products you can see in the photo below.

We have 4 high-powered lights (two sets of two Buhl SoftCube SC-150 models) pointed directly into the camera's field of view: two on the right and two on the left. They are stationed approximately 3.5 feet off the ground and approximately 2.5 feet from the objects themselves. The lights are supported by Avenger A630B light stands, which allow for a wide range of movement, extension, and support if we need them.

But bright, hot lights pointed directly at sensitive documents for hours would damage them, so light diffusers are necessary. For both sets of lights, we use 3 layers of material to diffuse the light and prevent material from warping or text from fading. The first layer, attached directly to the light box itself, is an inexpensive sheet of diffusion fabric, a material usually made from nylon or silk.

The second diffusion layer is an FJ Westcott Scrim Jim, a similar thin fabric attached to a lightweight stand-up frame, the Manfrotto 156BLB. This frame can also be moved or extended if need be. The last layer is another sheet of diffusion fabric, attached to a makeshift "cube" held up by lightweight wooden rods. The cube can be picked up and carried, making it very convenient if we eventually need to move our lights.

So in total, we have 4 lights, 4 layers of diffusion fabric attached to the light boxes, two Scrim Jims, and the cube featuring 2 sides of additional diffusion fabric. With all these items stationed, surely we can start taking pictures, right? Not yet.

Around the Room

There are still more things to be aware of, this time in the camera room itself. We gently place the materials on a cradle lined with black felt, similar to velvet; the cradle is visible in the bottom right of the photo above. It sits on a table, also covered in black felt, so that no background colors bounce back or reflect onto the object and change how it looks in the final image. The walls of the camera room are painted a neutral grey for the same reason, as you can see in the background of the photo. Finally, any tiny reflective segments between the ceiling tiles have been blacked out with gaffer tape. Keeping the room this muted and intentionally dark also helps us when we have to shoot multispectral images. No expense has been spared to make sure our colors and photos are correct.

Camera Settings

With all these precautions in place, can we finally take photos of our materials? Almost. Before we can start photographing, we run some tests to make sure everything looks correct on our computers. After making sure our objects are sharp and in focus, we use a program called DTDCH (see the photo to the right) to adjust the aperture and exposure of the PhaseOne so that nothing appears too dim or too bright. In our camera room, we use a PhaseOne IQ180 with a Schneider Kreuznach Apo-Digitar lens (visible in the top-right corner of the photo above). We also use the program CaptureOne to capture, save, and export our photos.

Once the shot is in focus and appropriately bright, we check our colors against an X-Rite ColorChecker Classic card (see the photo on the left) to verify that our camera's white balance is correct. When we take a photo of the ColorChecker, CaptureOne displays a series of numbers, known as RGB values, found in the photo's colors. We compare these numbers against what they should be, so we know that our photo looks accurate. If the numbers match up, we can continue. You can check our work by saving the photo on the left and opening it in a program like Adobe Photoshop.
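To make that spot check concrete, here is a minimal Ruby sketch. The reference numbers are rough approximations of published sRGB values for a few ColorChecker neutral patches, and the tolerance is an assumption for illustration, not DPC policy.

# Illustrative only: compare sampled RGB readings against
# approximate ColorChecker reference values (assumed numbers).
REFERENCE_PATCHES = {
  'white'     => [243, 243, 242],
  'neutral 8' => [200, 200, 200],
  'neutral 5' => [122, 122, 121]
}

# A patch passes if every channel is within the tolerance.
def patch_ok?(measured, reference, tolerance = 3)
  measured.zip(reference).all? { |m, r| (m - r).abs <= tolerance }
end

puts patch_ok?([242, 244, 241], REFERENCE_PATCHES['white'])     # => true
puts patch_ok?([210, 205, 199], REFERENCE_PATCHES['neutral 8']) # => false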

Finally, we have specific color profiles that the DPC uses to ensure that all our colors appear accurate as well. For more information on how we consistently calibrate the color in our images, please check out this previous blog post.

After all this setup, now we can finally shoot photos! Lighting our materials for the PhaseOne is a lot of hard work and preparation. But it is well worth it to fulfill our mission of digitizing images for preservation.

What we talk about when we talk about digital preservation

(Header image: Illustration by Jørgen Stamp digitalbevaring.dk CC BY 2.5 Denmark)

Here at Duke University Libraries, we often talk about digital preservation as though everyone is familiar with the various corners and implications of the phrase, but “digital preservation” is, in fact, a large and occasionally mystifying topic. What does it mean to “preserve” a digital resource for the long term? What does “the long term” even mean with regard to digital objects? How are libraries engaging in preserving our digital resources? And what are some of the best ways to ensure that your personal documents will be reusable in the future? While the answers to some of these questions are still emerging, the library can help you begin to think about good strategies for keeping your content available to other users over time by highlighting agreed-upon best practices, as well as some of the services we are able to provide to the Duke community.

File formats

Not all file formats have proven to be equally robust over time! Have you ever tried to open a document created with a Microsoft Office product from several years ago, only to be greeted by a page of strangely encoded gibberish? Proprietary software like the products in the Office suite can be convenient and produce polished contemporary documents. But software changes, and there is often no guarantee that the beautifully formatted paper you've written in Word will be legible without the appropriate software 5 years down the line. One solution to this problem is to always have a version of that software available to use. Libraries are beginning to investigate this strategy (often using a technique called emulation) as an important piece of the digital preservation puzzle. The Emulation as a Service (EaaS) architecture is an emerging tool designed to simplify access to preserved digital assets by allowing end users to interact with the original environments running on different emulators.

An alternative to emulation is to save your files in a format that can be consumed by different, changing versions of software. Experts at cultural heritage institutions like the Library of Congress and the US National Archives and Records Administration have identified an array of file formats they are reasonably confident the software of the future will be able to consume. Formats like plain text and PDF for textual data, value-separated files (like comma-separated values, or CSV) for tabular data, MP3 and MP4 for audio and video respectively, and JPEG for still images have all proven to have some measure of durability. What's more, they will help make your content or your data more easily accessible to folks who do not have access to particular kinds of software. It can be helpful to keep these format recommendations in mind when working with your own materials.

File format migration

The formats recommended by the Library of Congress and others have been selected not only because they are interoperable with a wide variety of software applications, but also because they have proven to be relatively stable over time, resisting format obsolescence. The process of moving data from an obsolete format to one that is usable in the present day is known as file format migration, or format conversion. Libraries generally have yet to establish scalable strategies for extensive migration of obsolete file formats, though it is a subject of widespread concern.

Here at DUL, we encourage the use of one of these recommended formats for content that is submitted to us for preservation, and we will even convert your files prior to preservation in one of our repository platforms where possible and when appropriate to do so. This helps us ensure that your data will be usable in the future. What we can't necessarily promise is that, should you give us content in a file format we don't recommend, a user who is interested in your materials will be able to read or otherwise use your files ten years from now. For some widely used formats, like MP3 and MP4, staff at the Libraries anticipate developing a strategy for migrating our data away from those formats in the event that they are superseded. However, the Libraries do not currently have the staff to monitor rarer, and especially proprietary, formats and convert them to something immediately consumable by contemporary software. The best we can promise is to deliver to the end users of the future the same digital bits you initially gave us.

Bit-level preservation

Which brings me to a final component of digital preservation: bit-level preservation. At DUL, we calculate a checksum for each of the files we ingest into any of our preservation repositories. Briefly, a checksum is an algorithmically derived alphanumeric hash intended to surface errors that may have been introduced to a file during its transmission or storage. A checksum acts somewhat like a digital fingerprint, and it is periodically recalculated for each file in the repository environment by the repository software to ensure that nothing has disrupted the bits that compose the file. If the recalculated checksum does not match the one recorded when the file was ingested into the repository, we can conclude with some level of certainty that something has gone wrong with the file, and it may be necessary to revert to an earlier version of the data. The process of generating, regenerating, and cross-checking these checksums is a way to ensure the file fixity, or file integrity, of the digital assets that DUL stewards.
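To make the idea concrete, here is a minimal Ruby sketch of a fixity check; it illustrates the concept rather than DUL's actual repository code.

require 'digest'

# Compute a SHA-256 checksum, reading the file in chunks so that
# large preservation masters don't have to fit in memory.
def file_checksum(path)
  sha = Digest::SHA256.new
  File.open(path, 'rb') do |f|
    while (chunk = f.read(64 * 1024))
      sha.update(chunk)
    end
  end
  sha.hexdigest
end

# A fixity check recomputes the checksum and compares it with the
# value recorded at ingest; a mismatch means the bits have changed.
def fixity_ok?(path, checksum_at_ingest)
  file_checksum(path) == checksum_at_ingest
end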

Resonance of a Moment

Resonance: the reinforcement or prolongation of sound by reflection from a surface or by the synchronous vibration of a neighboring object

(Lexico, 2019)

Nearly 4 months have passed since I moved to Durham from my hometown Chicago to join Duke’s Digital Collections & Curation Services team. With feelings of reflection and nostalgia, I have been thinking on the stories and memories that journeys create.

I have always believed a library to be the perfect place to discover another's story. Libraries and digital collections are dynamic storytelling channels that connect people through narrative and memory. What are libraries if not places dedicated to memories? Memory made incarnate in the turn of a page, the capturing of an image.

Memory is sensation.

In my mind memory is ethereal – wispy and nebulous. Like trying to grasp mist or fog only to be left with the shimmer of dew on your hands. Until one focuses on a detail, then the vision sharpens. Such as the soothing warmth of a pet’s fur. A trace of familiar perfume in the air as a stranger walks by. Hearing the lilt of an accent from your hometown. That heavy, sticky feeling on a muggy summer day.

Memories are made of moments.

I do not recall the first time I visited a library. However, one day my parents took me to the library and I checked out 11 books on dinosaurs. As a child I was fascinated by them. Due to watching so much of The Land Before Time and Jurassic Park no doubt. One of the books had beautiful full-length pullout diagrams. I remember this.

Experiences tether individuals together across time and place. Place, like the telling of a story, is subjective. It holds a finite precision absent from the vagueness and vastness of space. This personal aspect is what captures a person when a tale is well told. A corresponding chord is struck, and the story resounds as listeners see themselves reflected.

When a narrative reaches someone with whom it resonates, its impact can be amplified beyond any expectations.

There are many unique memories and moments held in the Duke University Libraries digital collections. Come take a journey and explore a new story.

My humanity is bound up in yours, for we can only be human together. ~Desmond Tutu

Agile 101

Here in the DUL Information Technology Services organization, we continue to embrace Agile concepts, applied to many different types of projects, including the Integrated Library System (ILS), the development of specialized repositories, and even the exhibits hosted in the Libraries. Check out the amazing new Senses of Venice exhibit that opened last week.

I like to think of Agile as a mindset rather than a specific tool set or framework (like Scrum). The four values set out in the 2001 Agile Manifesto were devised in deliberate contrast to the rigidity and slowness of earlier software development practices, and these concepts are still quite relevant today:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

Sometimes, when things develop as a backlash, the pendulum can swing too far the other way and we throw out some of the tried-and-true good bits. On the other hand, we can slip back into old habits, as described in Steve Blank's HBR piece, "When Waterfall Principles Sneak Back Into Agile Workflows".

Pendulums swing, but the core idea is simple: when you face uncertainty, try something you think might work, get feedback, and adjust accordingly.

What happens when you click "Search"?

How many times each day do you type something into a search box on the web and click "Search"? Have you ever wondered what happens behind the scenes to make this possible? In this post I'll show how search works on the Duke University Libraries Catalog. I'll trace the journey of a search: from the metadata in a MARC record (where our bibliographic data is stored), to transforming that data into something we can index for searching, to how the words you type into the search box are transformed, and finally to how the indexed records and your search interact to produce a relevance-ranked list of results. Let's get into the weeds!

A MARC record stores bibliographic data that we purchase from vendors or that metadata specialists at Duke Libraries create. These records look something like this:

To keep things simple, let's focus on the record's main title. This information is recorded in the MARC 245 field, in subfields a, b, f, g, h, k, n, p, and s. I'm not going to explain what each subfield is for, but the Library of Congress maintains extensive documentation about MARC field specifications (see 245 – Title Statement (NR)). Here is an example of a MARC 245 field with a linked 880 field that contains the equivalent title in an alternate script (just to keep things interesting).

=245 10$6880-02$aUrbilder ;$bBlossoming ; Kalligraphie ; O Mensch, bewein' dein' Sünde gross (Arrangement) : for string quartet /$cToshio Hosokawa.
=880 10$6245-02/$1$a原像 ;$b開花 ; 書 (カリグラフィー) ほか : 弦楽四重奏のための /$c細川俊夫.

The first thing that has to happen is getting the data out of the MARC record and into a more computer-friendly format: an array of hashes, which is just a fancy way of saying a list of key-value pairs. The software reads the metadata from the MARC 245 field, joins all the subfields together, and cleans up some punctuation. It also checks whether the title field contains Arabic, Chinese, Japanese, Korean, or Cyrillic characters, which have to be handled separately from Roman-script languages. From the MARC 245 field and its linked 880 field we end up with the following data structure.

"title_main": [
{
"value": "Urbilder ; Blossoming ; Kalligraphie ; O Mensch, bewein' dein' Sünde gross (Arrangement) : for string quartet"
},
{
"value": "原像 ; 開花 ; 書 (カリグラフィー) ほか : 弦楽四重奏のための",
"lang": "cjk"
}
]
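Under stated assumptions, that extraction step might look roughly like the following sketch built on the ruby-marc gem. The helper name and the punctuation rule are illustrative; the production code also handles the linked 880 field, other scripts, and more edge cases.

require 'marc' # ruby-marc gem

TITLE_SUBFIELDS = %w[a b f g h k n p s]
CJK_PATTERN = /\p{Han}|\p{Hiragana}|\p{Katakana}|\p{Hangul}/

# Build a title_main entry from a record's 245 field: join the
# title subfields, trim trailing punctuation, and flag CJK text.
def title_main_entry(record)
  field = record['245']
  return nil unless field
  value = field.subfields
               .select { |sf| TITLE_SUBFIELDS.include?(sf.code) }
               .map(&:value)
               .join(' ')
               .sub(%r{\s*[/:;,.]\s*\z}, '')
  entry = { 'value' => value }
  entry['lang'] = 'cjk' if value.match?(CJK_PATTERN)
  entry
end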

We send this data off to an ingest service that prepares the metadata for indexing.

The data is first expanded to multiple fields.

{"title_main_indexed": "Urbilder ; Blossoming ; Kalligraphie ; O Mensch, bewein' dein' Sünde gross (Arrangement) : for string quartet",

"title_main_vernacular_value": "原像 ; 開花 ; 書 (カリグラフィー) ほか : 弦楽四重奏のための",

"title_main_vernacular_lang": "cjk",

"title_main_value": "原像 ; 開花 ; 書 (カリグラフィー) ほか : 弦楽四重奏のための / Urbilder ; Blossoming ; Kalligraphie ; O Mensch, bewein' dein' Sünde gross (Arrangement) : for string quartet"}

title_main_indexed will be indexed for searching.
title_main_vernacular_value holds the non Roman version of the title to be indexed for searching.
title_main_vernacular_lang holds information about the character set stored in title_main_vernacular_value.
title_main_value holds the data that will be stored for display purposes in the catalog user interface.

We take this flattened, expanded set of fields and apply a set of rules to prepare the data for the indexer (Solr). These rules append suffixes to each field and combine the two vernacular fields to produce the following field value pairs. The suffixes provide instructions to the indexer about what should be done with each field.

{"title_main_indexed_tsearchtp": "Urbilder ; Blossoming ; Kalligraphie ; O Mensch, bewein' dein' Sünde gross (Arrangement) : for string quartet",

"title_main_cjk_v": "原像 ; 開花 ; 書 (カリグラフィー) ほか : 弦楽四重奏のための",

"title_main_t_stored_single": "原像 ; 開花 ; 書 (カリグラフィー) ほか : 弦楽四重奏のための / Urbilder ; Blossoming ; Kalligraphie ; O Mensch, bewein' dein' Sünde gross (Arrangement) : for string quartet" }

When sent to the indexer the fields are further transformed.

Suffixed source field: title_main_indexed_tsearchtp
Solr field: title_main_indexed_t (text, stemmed)
Indexed value: urbild blossom kalligraphi o mensch bewein dein sund gross arrang for string quartet

Suffixed source field: title_main_indexed_tsearchtp
Solr field: title_main_indexed_tp (text, unstemmed)
Indexed value: urbilder blossoming kalligraphie o mensch bewein dein sunde gross arrangement for string quartet

Suffixed source field: title_main_cjk_v
Solr field: title_main_cjk_v (Chinese, Japanese, Korean text)
Indexed value: 原 像 开花 书 か り く ら ふ ぃ い ほか 弦乐 亖 重奏 の ため の

Suffixed source field: title_main_t_stored_single
Solr field: title_main (stored string)
Stored value: 原像 ; 開花 ; 書 (カリグラフィー) ほか : 弦楽四重奏のための / Urbilder ; Blossoming ; Kalligraphie ; O Mensch, bewein' dein' Sünde gross (Arrangement) : for string quartet

These are all index time transformations. They occur when we send records into the index.

The query you enter into the search box also gets transformed in different ways and then compared to the indexed fields above. These are query time transformations. As an example, if I search for the terms “Urbilder Blossom Kalligraphie,” the following transformations and comparisons take place:

The values stored in the records for title_main_indexed_t are evaluated against my search string transformed to urbild blossom kalligraphi.

The values stored in the records for title_main_indexed_tp are evaluated against my search string transformed to urbilder blossom kalligraphie.

The values stored in the records for title_main_cjk_v are evaluated against my search string transformed to urbilder blossom kalligraphie.

Then Solr does some calculations based on relevance rules we configure to determine which documents are matches and how closely they match (signified by the relevance score calculated by Solr). The field value comparisons end up looking like this under the hood in Solr:

+(DisjunctionMaxQuery((
(title_main_cjk_v:urbilder)^50.0 |
(title_main_indexed_tp:urbilder)^500.0 |
(title_main_indexed_t:urbild)^100.0)~1.0)
DisjunctionMaxQuery((
(title_main_cjk_v:blossom)^50.0 |
(title_main_indexed_tp:blossom)^500.0 |
(title_main_indexed_t:blossom)^100.0)~1.0)
DisjunctionMaxQuery((
(title_main_cjk_v:kalligraphie)^50.0 |
(title_main_indexed_tp:kalligraphie)^500.0 |
(title_main_indexed_t:kalligraphi)^100.0)~1.0))~3
DisjunctionMaxQuery((
(title_main_cjk_v:"urbilder blossom kalligraphie")^150.0 |
(title_main_indexed_t:"urbild blossom kalligraphi")^600.0 |
(title_main_indexed_tp:"urbilder blossom kalligraphie")^5000.0)~1.0)
(DisjunctionMaxQuery((
(title_main_cjk_v:"urbilder blossom")^75.0 |
(title_main_indexed_t:"urbild blossom")^200.0 |
(title_main_indexed_tp:"urbilder blossom")^1000.0)~1.0)
DisjunctionMaxQuery((
(title_main_cjk_v:"blossom kalligraphie")^75.0 |
(title_main_indexed_t:"blossom kalligraphi")^200.0 |
(title_main_indexed_tp:"blossom kalligraphie")^1000.0)~1.0))
DisjunctionMaxQuery((
(title_main_cjk_v:"urbilder blossom kalligraphie")^100.0 |
(title_main_indexed_t:"urbild blossom kalligraphi")^350.0 |
(title_main_indexed_tp:"urbilder blossom kalligraphie")^3000.0)~1.0)

The ^nnnn indicates the relevance weight given to any matches it finds, while the ~n.n indicates the number of matches required from each clause to consider the document a match. Matches in fields with higher boosts count for more than matches in fields with lower boosts. You might also notice that full phrase matches are boosted the most, matches on two consecutive terms slightly less, and individual term matches the least. Furthermore, unstemmed field matches (those that have been modified the least by the indexer, such as in the field title_main_indexed_tp) get a higher boost than stemmed field matches. This provides the best of both worlds: you still get a match if you search for "blossom" instead of "blossoming," but if you had searched for "blossoming," the exact term match would boost the document's score in the results. Solr also considers how common a term is among all documents in the index, so that very common words like "the" don't boost the relevance score as much as less common words like "kalligraphie."
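A parsed query like the one above is what Solr's eDisMax query parser produces from weighted field lists. As a hedged sketch, a request with boosts inferred from the output above might look like this (the URL, core name, and exact parameter values are assumptions):

require 'net/http'
require 'json'
require 'uri'

# Illustrative only: an eDisMax query with per-field term boosts (qf)
# and full-phrase, bigram, and trigram phrase boosts (pf, pf2, pf3)
# matching the weights seen in the parsed query above.
params = {
  'defType' => 'edismax',
  'q'   => 'Urbilder Blossom Kalligraphie',
  'qf'  => 'title_main_indexed_tp^500 title_main_indexed_t^100 title_main_cjk_v^50',
  'pf'  => 'title_main_indexed_tp^5000 title_main_indexed_t^600 title_main_cjk_v^150',
  'pf2' => 'title_main_indexed_tp^1000 title_main_indexed_t^200 title_main_cjk_v^75',
  'pf3' => 'title_main_indexed_tp^3000 title_main_indexed_t^350 title_main_cjk_v^100',
  'mm'  => '3' # assume all three terms must match
}
uri = URI('http://localhost:8983/solr/catalog/select')
uri.query = URI.encode_www_form(params)
response = JSON.parse(Net::HTTP.get(uri))
puts response.dig('response', 'numFound')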

I hope this provides some insight into what happens when you click "Search." Happy searching!

Building a new Staff Directory

The staff directory on the Library's website was last overhauled in late 2014, which is to say that it has gotten a bit long in the tooth! For the past few months I've been working with my colleagues Sean Aery, Tom Crichlow, and Derrek Croney on revamping the staff application to make it more functional, easier to use, and more visually compelling.

View of the legacy staff directory interface

Our work centered on three major components: an admin interface for HR staff, an edit form for staff members, and the public display for browsing people and departments. We spent a considerable amount of time discussing the best way to approach the infrastructure for the project. In the end we settled on a hybrid approach: the HR tool would be built as a Ruby-on-Rails application, and we would update our existing custom Drupal module for staff editing and the public UI display.

We created a seed file for our Rails app based on the legacy data from the old application and then got to work building the HR interface. We decided to rely on the Rails Admin gem, as it met most of our use cases and had worked well on some other internal projects. As we continued to add features, our database models became more and more complex, but working in Rails makes these kinds of changes very straightforward. We ended up with two main tables (People and Departments) and four auxiliary tables to store extra attributes (External Contacts, Languages, Subject Areas, and Trainings).

View of the Rails Admin dashboard

We also made use of the Ancestry gem and the Nestable gem to let HR staff visually sort the department hierarchy, as shown in the sketch below. This makes it very easy to move departments around quickly, so the next time we have a large department reorganization it will be simple to represent the changes with this tool.

Nestable gem allows for easy sorting of departments

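For a sense of how the Ancestry gem models that hierarchy, here is a minimal sketch; the model, association, and department names are assumptions for illustration, not the application's actual code.

# Hypothetical sketch: the ancestry gem stores each department's
# position in the tree in a single "ancestry" column and provides
# tree-navigation helpers like parent, children, roots, and subtree.
class Department < ApplicationRecord
  has_ancestry
  has_many :people
end

# Reparenting a department during a reorganization is one update:
#   dept = Department.find_by(name: 'Assessment')  # assumed name
#   dept.update(parent: Department.find_by(name: 'ITS'))
# and a nested browse display can walk the tree from the roots:
#   Department.roots.each { |root| puts root.subtree.size }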
After the HR interface was working well, we concentrated our efforts on the staff edit form in Drupal. We had previously augmented the default Drupal profile editor with our extra data fields, but wanted to create a new form to make things cleaner and easier for staff to use. We created a new 'Staff Profile' tab and also included a link on the old 'Edit' tab that points to the new form. The form enables staff to include their subject areas, preferred personal pronouns, and language expertise, and to tie into external services like ORCID and Libguides.

Edit form for Staff Profile

Most of our work has gone into the public UI in Drupal. We've created four approaches to browsing: Departments, A–Z, Subject Specialists, and Executive Group. There is also a name search that incorporates typeahead to help users find staff more efficiently.

The Department view displays a nested view of our complicated organizational structure, which helps users understand how a given department relates to the others. You can also drill down through departments once you've landed on a department page.

View of departments

Department pages display all of a department's staff members and position managers at the top of the display. We also display the contact information for the department and link to the department website if it exists.

Example of a department page

The Staff A–Z list allows users to browse an alphabetized list of all staff in the library. One challenge we're still working through is staff photos: we are lacking photos for many of our staff, and many of the photos we do have are out of date and inconsistently formatted. We've included a default avatar for staff without photos to help with consistency, but it also serves to highlight the number of staff without a photo. Stay tuned for improvements on this front!

A-to-Z browse

The Subject Specialists view helps users find specific subject librarians. We include links to relevant research guides and appointment scheduling. A text filter at the top of the display quickly narrows the results to whatever area you are looking for.

Subject Specialists view

The Executive Group display is a quick way to view the leadership of the library.

Executive Group display

One last thing to highlight is the staff display view. We spent considerable effort refining it, and I think our work has really paid off: the display is clean and modern, and a great improvement over what we had before.

View of staff profile in legacy application
View of the same profile in the new application

In addition to standard information like name, title, contact info, and department, we’re displaying:

  • a large photo of the staff person
  • personal pronouns
  • specialized trainings (like Duke’s P.R.I.D.E. program)
  • links out to ORCID, Libguides, and Libcal scheduling
  • customizable bio (with expandable text display)
  • language expertise
  • subject areas

Our plan is to roll out the new system at the end of the month, so you can look forward to a greatly improved staff directory experience soon!

Managing impermanence – migration of the Libraries’ digital exhibits

Post contributed by Claire Cahoon, student in the master’s program at the School of Information and Library Science, UNC-Chapel Hill.

This summer I worked as a field experience student in the Software Services department, migrating digital exhibits into Omeka 2, Duke's most current exhibit platform. The ultimate goal was to start and document the process of moving exhibits from legacy platforms into Omeka 2.

The reasoning behind the project became clear as we created an index of all the digital exhibits on display on the exhibits website. The 97 exhibits showed varying degrees of functionality, from the most recent and up-to-date exhibits to sites with broken links and pages where only text would display, leaving out crucial images. Centralizing these into a single platform should make it easier to create, support, and maintain all of them.

Screenshot of the sidebar of an exhibit, showing the link to the previous version of the exhibit in the Internet Archive

I found exhibits in Omeka 1, Cascade, Scriptorium, and JAlbum, and even a few mystery platforms that we never identified. Since it was the largest group, we decided to work on the Omeka 1 exhibits over the summer, and this week I finished migrating all 34 of them. That means that after a few adjustments to make the new exhibits available, Omeka 1 can be shut off!

We worked with Meg Brown, Exhibits Coordinator for the Libraries, and the exhibits department to figure out how each exhibit should be represented. Since we were managing expectations from many different stakeholders, we landed on the idea of including a link to the archived version of each exhibit in the Internet Archive's Wayback Machine, in case the look and feel of the new exhibits is limiting for anyone used to Omeka 1.

Working with the Internet Archive links and sorting through broken pieces of these exhibits really put into perspective how impermanent the internet is, even for seemingly static information. Without much maintenance, these exhibits lost some of their core content as video links changed, references were lost, and even the most well-written custom code stopped working. I hope that my work this summer will help keep these exhibit materials in working order while also eliminating the need to continue supporting Omeka 1.

While migrating, I came across a few favorite exhibits and items that combined interesting content and some updated features in Omeka 2:

Cover of "Anxious homes: cursory-cleaning for the imminent arrival of visitors or how to give the impression of a clean house in under 20 minutes" by Jackie Batey. Available in the Rubenstein Library: N7433.4.B38 A59 2006

Book + Art: Artists’ books from the Sallie Bingham Center for Women’s History and Culture (and the old version of Book + Art)

John Hope Franklin: Imprint of an American Scholar (and the old version of the John Hope Franklin exhibit)

Cheap Thrills: The Highs and Lows of Paris’s Cabaret Culture (and the old version of Cheap Thrills)

Medicology, or, Home encyclopedia of health: a complete family guide… Vol. I, by Joseph Gibbons Richardson (1904). Available in the Rubenstein Library: RC81 .R52 1904

Animated Anatomies: The Human Body in Anatomical Texts from the 16th to 21st Centuries (and the old version of Animated Anatomies)

Omeka still has some quirks to work out, and the accessibility of the pages and the metadata display are still works in progress. However, migrating these exhibits into Omeka 2 will make them much easier to support and improve. Thanks to the team that worked with me and taught me so much this summer: Will Sexton, Michael Daul, and Meg Brown!

Join our Team!

Do you have photography skills? Do you want to work with cultural heritage materials? Do you seek a highly collaborative work environment dedicated to preserving and making rare materials digitally available? If so, consider applying to be the next Digitization Specialist at Duke!

The Digitization Specialist produces digital surrogates of rare materials that include books, manuscripts, audio, and moving image collections. The ideal candidate is detail-oriented and possesses excellent organizational and project management skills, along with an ability to work independently and effectively in a team environment. The successful candidate will join the Digital Collections and Curation Services department and work under the direct supervision of the Digital Production Services manager.

The Digital Production Center (DPC) is a specialized unit dedicated to creating digital surrogates of primary resource materials from Duke University Libraries. Learn more about the DPC on our webpage, or through our department blog, Bitstreams. To get a sense of the variety of interesting and important collections we've digitized, immerse yourself in the Duke Digital Collections. We currently have over 640 digital collections comprising 103,247 items – and we're looking to do even more with your skills!

Duke is a diverse community committed to the principles of excellence, fairness, and respect for all people. As part of this commitment, we actively value diversity in our workplace and learning environments as we seek to take advantage of the rich backgrounds and abilities of everyone. We believe that when we understand, celebrate, and tap into our uniqueness to creatively solve problems and address shared goals, our possibilities are limitless. Duke University Libraries value diversity of thought, perspective, experience, and background and are actively committed to a culture of inclusion and respect.

Duke’s hometown is Durham, North Carolina, a city with vibrant research, medical and arts communities, and numerous shops, restaurants and theaters. Durham is located in the Research Triangle, a growing metropolitan area of more than one million people that provides a wide range of cultural, recreational and educational opportunities. The Triangle is conveniently located just a few hours from the mountains and the coast, offers a moderate climate, and has been ranked among the best places to live and to do business.

Duke offers a comprehensive benefits package, which includes traditional benefits such as health insurance, dental coverage, leave time, and retirement, as well as a wide range of work/life and cultural benefits. More information can be found at: https://hr.duke.edu/benefits. For more information and to apply, please submit an electronic resume, cover letter, and a list of 3 references to https://library.duke.edu/about/jobs/digitizationspecialist. Search for Requisition ID #4778. Review of applications will begin immediately and will continue until the position is filled.

Notes from the Duke University Libraries Digital Projects Team