This post was written by Jen Jordan, a graduate student at Simmons University studying Library Science with a concentration in Archives Management. She is the Digital Collections intern with the Digital Collections and Curation Services Department. Jen will complete her master's degree in December 2021.
The Digital Production Center (DPC) is thrilled to announce that work is underway on a three-year National Endowment for the Humanities (NEH) grant-funded project to digitize the entirety of Behind the Veil: Documenting African-American Life in the Jim Crow South, an oral history project that produced 1,260 interviews spanning more than 1,800 audio cassette tapes. Accompanying the 2,000-plus hours of audio is a sizable collection of visual materials (e.g., photographic prints and slides) that form a connection with the recorded voices.
We are here to summarize the logistical details relating to the digitization of this incredible collection. To learn more about its historical significance and the grant that is funding this project, titled “Documenting African American Life in the Jim Crow South: Digital Access to the Behind the Veil Project Archive,” please take some time to read the July announcement written by John Gartrell, Director of the John Hope Franklin Research Center and Principal Investigator for this project. Giao Luong Baker, Digital Production Services Manager, is the Co-Principal Investigator of this grant.
Digitizing Behind the Veil (BTV) will require, in part, the services of outside vendors to handle the audio digitization and subsequent captioning of the recordings. While the DPC regularly digitizes audio recordings, we are not equipped to do so at this scale (while balancing other existing priorities). The folks at Rubenstein Library have already been hard at work double-checking the inventory to ensure that each cassette tape and case is labeled with an identifier. The DPC then received the tapes, filling 48 archival boxes, along with a digitization guide (i.e., an Excel spreadsheet) containing detailed metadata for each tape in the collection. Upon receiving the tapes, DPC staff set to boxing them for shipment to the vendor. As of this writing, the boxes are snugly wrapped on a pallet in Perkins Shipping & Receiving, where they will soon begin their journey to a digital format.
The wait has begun! In eight to twelve weeks we anticipate receiving the digital files, at which point we will perform quality control (QC) on each one before sending them off for captioning. As the captions are returned, we will run through a second round of QC. From there, the files will be ingested into the Duke Digital Repository, at which point our job is complete. Of course, we still have the visual materials to contend with, but we’ll save that for another blog!
As we creep closer to the two-year mark of the COVID-19 pandemic and the varying degrees of restrictions that have come with it, the DPC will continue to focus on fulfilling patron reproduction requests, which have comprised the bulk of our work for some time now. We are proud to support researchers by facilitating digital access to materials, and we are equally excited to have begun work on a project of the scale and cultural impact that is Behind the Veil. When finished, this collection will be accessible for all to learn from and meditate on—and that’s what it’s all about.
Happy New Year from all of us at the Digital Production Center! In this pictorial posting, I figured we should start the New Year right with some images and collections that are inspiring, funny, and just stir my heart. It begins with “The Future Calls!”
I went down the “future” rabbit hole and stumbled upon Martin Luther King’s “The Look to the Future”:
And came upon this lovely image:
YES! THE FUTURE IS MY OWN MAKING!! And with that I came up with some resolutions!
Efficiency is important!
Maybe 5 minutes is a bit ambitious, but this will be good for my schedule and good for the environment. It’s good to have goals.
Exercise More! I definitely felt more inspired to hit the gym after seeing these images from the Anatomical Fugitive Sheets.
Learn about fashion, art, and architecture with Barbaralee Diamonstein-Spielvogel!
Self-care! This one-page advertisement from the Broadsides and Ephemera Collection of a Hot Springs spa sure is enticing!
This picturesque image from the Reginald Sellman Negatives collection (which is predominantly of a family taking hikes, camping, and road-tripping!) made me quite envious. Why yes, I’d love to take a hike in a corseted dress!
And speaking of family activities, the Memory Project and Behind the Veil collections reminded me that I really need to talk to my parents and other family members more to gather and document their stories.
Why not pick up a foreign language?
Support a cause!
Spend more time with my kids! They grow up so quickly.
Lastly, and probably most importantly, VOTE!
So…what are your resolutions? And don’t tell me 300 ppi!
Duke University Libraries (DUL) is always searching for new ways to increase access and make discovery easier for users. One area users frequently have trouble with is accessing online articles. Too often we hear from students that they cannot find an article PDF they are looking for, or even worse, that they end up paying to get through a journal paywall. To address this problem, DUL’s Assessment and User Experience (AUX) Department explored three possible tools: LibKey Discovery, Kopernio, and Lean Library. After user testing and internal review, LibKey Discovery emerged as the best available tool for the job.
LibKey Discovery is a suite of user-friendly application programming interfaces (APIs) used to enhance the library’s existing discovery system. The APIs enable one-click access to PDFs for subscribed and open-access content, one-click access to full journal browsing via the BrowZine application, and access to cover art for thousands of journals. The tool integrates fully with the existing discovery interface and does not require the use of additional plug-ins.
According to their website, LibKey Discovery has the potential to save users thousands of clicks per day by providing one-click access to millions of articles. The ability to streamline processes enabling the efficient and effective discovery and retrieval of academic journal content prompted the AUX department to investigate the tool and its capabilities further. An internal review of the system was preceded by an introduction of the tool to Duke’s subject librarians and followed by a preliminary round of student-based user testing.
One-Click Article and Full Journal Access
Both the AUX staff and the subject librarians who performed an initial review of the LibKey Discovery tools were impressed with the ease of article access and full journal browsing. Three members of the AUX department independently reviewed LibKey’s features and concluded the system does provide substantial utility in its ability to reduce the number of clicks necessary to access articles and journals.
The tool streamlines the appearance and formatting of all journals, thus removing ambiguity in how to access information from different sources within the catalog. This is beneficial in helping to direct users to the features they want without having to search for points of access. The AUX department review team all found this helpful.
LibKey Discovery’s APIs integrate fully into the existing DUL discovery interface without the need for users to download an additional plug-in. This provides users the benefit of the new system without asking them to go through extra steps or make any changes to their current search processes. Aside from the new one-click options available within the catalog’s search results page, the LibKey interface is indistinguishable from the current DUL interface, helping users benefit from the added functionality without feeling like they need to learn a new system.
LibKey Discovery carries a relatively hefty price tag, so its utility to the end-user must be weighed against its cost. While internal review and testing has indicated LibKey Discovery has the ability to streamline and optimize the discovery process, it must be determined if those benefits are universal enough to warrant the added annual expenditure.
Inconsistency in Options
A potential downside to LibKey Discovery is a lack of consistency in one-click options between articles. While many articles provide the option for easy, one-click access to a PDF, the full text online, and full journal access, these options are not available for all content. As a result, this may cause confusion about which options are available to users and may diminish the overall utility of the tool, depending on what percentage of the catalog’s content is exempt from the one-click features.
LibKey Discovery User Testing Findings
An initial round of user testing was completed with ten student volunteers in the lobby of Perkins Library in early April. Half of the users were asked to access an article and browse a full journal in the existing DUL system; the other half were asked to perform the same tasks using the LibKey Discovery interface.
Initial testing indicated that student users had a high level of satisfaction with the LibKey interface; however, they were equally satisfied with the existing access points in the DUL catalog. The final recommendations from the user testing report suggest the need for additional testing to be completed. Specifically, it was recommended that more targeted testing be completed with graduate-level students and faculty as a majority of the original test’s participants were undergraduate students with limited experience searching for and accessing academic journal issues and articles. It was concluded that testing with a more experienced user group would likely produce better feedback as to the true value of LibKey Discovery.
LibKey Discovery is a promising addition to Duke’s existing discovery system. It allows for streamlined, one-click article and full journal access without disrupting the look and feel of the current interface or requiring the use of a plug-in. Initial reviews of the system by library staff have been glowing; however, preliminary user testing with student participants indicated the need for additional testing to determine if LibKey’s cost is sufficiently offset by its utility to the user.
Kopernio is a free browser plug-in which enables one-click access to academic journal articles. It searches the web for OA copies, institutional repository copies, and copies available through library subscriptions. The tool is designed to connect users to articles on and off campus by managing their subscription credentials and automatically finding the best version of an article no matter where a user is searching.
Given the potential of this tool to help increase access and make discovery easier for students, the AUX department initiated an internal review process. Four members of the department independently downloaded the Kopernio plug-in, thoroughly tested it in a variety of situations, and shared their general and specific notes about the tool.
OA Content + Library Subscription
By its design, Kopernio has an advantage over other plug-in tools that serve a similar function (e.g., Unpaywall). When users first download Kopernio they are asked to register their subscription credentials. This information is saved in the plug-in so users can automatically discover articles available from OA sources, as well as library subscriptions. This is an advantage over other plug-ins that only harvest from freely available sources.
Kopernio has highly visible and consistent branding. With bright green coloring, the plug-in stands out on a screen and attracts users to click on it to download articles.
Kopernio is advertised as a “one-click” service, and it pays off in this respect. Using Kopernio to access articles definitely cuts down on the number of clicks required to get to an article’s PDF. The process to download articles to a computer was instantaneous, and most of the time, downloading to the Kopernio storage cloud was just as fast.
Creates New Pain Points
Kopernio’s most advertised strength is its ability to manage subscription credentials. Unfortunately, this strength is also a major data privacy weakness. Security concerns ultimately led to the decision to disable the feature which allowed users to access DUL subscriptions via Kopernio when off-campus. Without this feature, Kopernio only pulls from OA sources and therefore performs the same function that many other tools currently do.
In addition to data privacy concerns, Kopernio also raises copyright concerns. One of Kopernio’s features is its sharing function. You can email articles to anyone, regardless of their university affiliation or whether they have downloaded Kopernio. We tested sending DUL subscription PDFs to users without Duke email addresses, and they were able to view the full text without logging in. It is unclear whether they were viewing an OA copy of the article or an article only meant for DUL authenticated users.
Running the Kopernio plug-in noticeably slowed down browser speed. We tested the browser on several different computers, both on campus and off, and we all noticed slower browser speeds. This slowness also made Kopernio occasionally buggy (freezing, error messages, etc.).
Many Features Don’t Seem Useful
When articles are saved to Kopernio’s cloud storage, users can add descriptive tags. We found this feature awkward to use. Instead of adding tags as they go along, users have to add a tag globally before they can tag an article. Overall, it seemed like more hassle than it was worth.
Kopernio automatically imports article metadata to generate citations. There were too many problems with this feature to make it useful. It did not import metadata for all articles that we tested, and there was no way to add metadata manually. Additionally, the citations were automatically formatted in Elsevier Harvard format, and we had to go into our settings to change it to a more common citation style.
Lastly, the cloud storage, which at first seemed like an asset, was actually a problem. All articles automatically download to cloud storage (called the “Kopernio Locker”) as soon as you click on the Kopernio button. This wouldn’t be a problem except for the limited storage size of the locker. With only 100MB of storage in the free version of Kopernio, we found that after downloading only 2 articles the locker was already 3% full. To make this limited storage work, we would have to go back to our locker and manually delete articles that we did not need, effectively negating the steps saved by having an automatic process.
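For readers curious about what that 3%-after-2-articles figure implies, a quick back-of-envelope calculation (assuming those two articles are representative of typical PDF sizes) suggests the free locker fills up after only a few dozen articles:

```python
# Back-of-envelope estimate of the free Kopernio Locker's capacity,
# assuming the 2 articles we downloaded are typical in size.
locker_mb = 100            # free-tier storage
pct_after_two = 3          # percent full after 2 articles
avg_article_mb = locker_mb * pct_after_two / 100 / 2   # ~1.5 MB per article
capacity = int(locker_mb // avg_article_mb)            # ~66 articles total
print(f"~{avg_article_mb} MB per article, roughly {capacity} articles before full")
```

So a moderately active researcher could exhaust the locker within a single literature review, which is why the manual-deletion chore loomed large in our testing.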
Lean Library is a tool similar to Kopernio. It offers users one-click access to subscription and open access content through a browser extension. In Fall 2018, DUL staff were days away from purchasing this tool when Lean Library was acquired by SAGE Publishing. DUL staff had been excited to license a tool that was independent and vendor-neutral and so were disappointed to learn about its acquisition. We have found that industry consolidation in the publishing and library information systems environment has lowered competition and resulted in negative experiences for researchers and staff. Further, we take the privacy of our users very seriously and were concerned that Lean Library’s alignment with SAGE Publishing would compromise user security. Whenever possible, DUL aims to support products and services that are offered independently from those with already dominant market positions. For these reasons, we opted not to pursue Lean Library further.
Of the three tools the AUX Department explored, we believe LibKey Discovery to be the most user-friendly and effective option. If purchased, it should streamline journal browsing and article PDF downloads without adversely affecting the existing functionality of DUL’s discovery interfaces.
I am sure you have all been following the Library’s exploration into Multispectral Imaging (MSI) here on Bitstreams, Preservation Underground and the News & Observer. Previous posts have detailed our collaboration with R.B. Toth Associates and the Duke Eye Center, the basic process and equipment, and the wide range of departments that could benefit from MSI. In early December of last year (that sounds like it was so long ago!), we finished readying the room for MSI capture, installed the equipment, and went to MSI boot camp.
Well, boot camp came to us. Meghan Wilson, an independent contractor who has worked with R.B. Toth Associates for many years, started our training with an overview of the equipment and the basic science behind it. She covered the different lighting schemes and when they should be used. She explained MSI applications for identifying resins, adhesives and pigments and how to use UV lighting and filters to expose obscured text. We quickly went from talking to doing. As with any training session worth its salt, things went awry right off the bat (not Meghan’s fault). We had powered up the equipment, but the camera would not communicate with the software and the lights would not fire when the shutter was triggered. This was actually a good experience because we had to troubleshoot on the spot and figure out what was going on together as a team. It turns out that there are six different pieces of equipment that have to be powered up in a specific sequence in order for the system to communicate properly (tee up Apollo 13 soundtrack). Once we got the system up and running we took turns driving the software and hardware to capture a number of items that we had pre-selected. This is an involved process that produces a bunch of files that eventually yield an image stack that can be manipulated using specialized software. When it’s all said and done, the files have been converted, cleaned, flattened, and manipulated, and variations produced, totaling somewhere in the neighborhood of 300 files. Whoa!
This is not your parents’ point-and-shoot—not the room, the lights, the curtains, the hardware, the software, the price tag, none of it. But it is different in another, more important way too. This process is team-driven and interdisciplinary. Our R&D working group is diverse and includes representatives from the following library departments.
The Digital Production Center (DPC) has expertise in high-end, full spectrum imaging for cultural heritage institutions along with a deep knowledge of the camera and lighting systems involved in MSI, file storage, naming and management of large sets of files with complex relationships.
The Rubenstein Library’s Collection Development brings a deep understanding of the collections, provenance and history of materials, and valuable contacts with researchers near and far.
To get the most out of MSI we need all of those skills and perspectives. What MSI really offers is the ability to ask—and we hope answer—strings of good questions. Is there ink beneath that paste-down or paint? Is this a palimpsest? What text is obscured by that stain or fire-damage or water damage? Can we recover it without having to intervene physically? What does the ‘invisible’ text say and what if anything does this tell us about the object’s history? Is the reflectance signature of the ink compatible with the proposed date or provenance of the object? That’s just for starters. But you can see how even framing the right question requires a range of perspectives; we have to understand what kinds of properties MSI is likely to illuminate, what kinds of questions the material objects themselves suggest or demand, what the historical and scholarly stakes are, what the wider implications for our and others’ collections are, and how best to facilitate human interface with the data that we collect. No single person on the team commands all of this.
Working in any large group can be a challenge. But when it all comes together, it is worth it. Below are two versions of a page from Jantz 723: one processed as a black-and-white image and the other a Principal Component Analysis produced from the MSI capture and processed using ImageJ and a set of tools created by Bill Christens-Barry of R.B. Toth Associates, with false color applied using Photoshop. Using MSI we were able to better reveal this watermark, which had previously been obscured.
I think we feel like 16-year-old kids with newly minted drivers’ licenses who have never driven a car on the highway or out of town. A whole new world has just opened up to us, and we are really excited and a little apprehensive!
Practice, experiment, document, refine. Over the next 12 (16? 18?) months we will work together to hone our collective skills: driving the system, deepening our understanding of the scholarly, conservation, and curatorial use cases for the technology, optimizing workflow, documenting best practices, and getting a firm grip on the scale, pace, and cost of what we can do. The team will assemble monthly, practice what we have learned, and lean on each other’s expertise to develop a solid workflow that includes the right expertise at the right time. We will select a wide variety of materials so that we can develop a feel for how far we can push the system and what we can expect day to day. During all of this practice, workflows, guidelines, policies and expectations will come into sharper focus.
As you can tell from the above, we are going to learn a lot over the coming months. We plan to share what we learn via regular posts here and elsewhere. Although we are not prepared yet to offer MSI as a standard library service, we are interested to hear your suggestions for Duke Library collection items that may benefit from MSI imaging. We have a long queue of items that we would like to shoot, and are excited to add more research questions, use cases, and new opportunities to push our skills forward. To suggest materials, contact Molly Bragg, Digital Collections Program Manager (molly.bragg at Duke.edu); Joshua Sosin, Associate Professor in Classical Studies & History (jds15 at Duke.edu); or the Curator of Collections (andrew.armacost at Duke.edu).
In previous posts I have referred to the FADGI standard for still image capture when describing still image creation in the Digital Production Center in support of our Digital Collections Program. We follow this standard in order to create archival files for preservation, long-term retention and access to our materials online. These guidelines help us create digital content in a consistent, scalable and efficient way. The most commonly cited part of the standard is the PPI guidelines for capturing various types of material. It is a collection of charts that contain various material types, physical dimensions and recommended capture specifications. The charts are very useful and relatively easy to read and understand. But this standard includes 93 “exciting” pages of all things still image capture, including file specifications, color encoding, data storage, physical environment, backup strategies, metadata and workflows. Below I will boil down the first 50 or so pages.
Full disclosure. Perkins Library and our digitization program didn’t start with any part of these guidelines in place. In fact, these guidelines didn’t exist at the time of our first attempt at in-house digitization in 1993. We didn’t even have an official digitization lab until early 2005. We started with one Epson flatbed scanner and one high-end CRT monitor. As our Digital Collections Program has matured, we have been able to add equipment and implement more of the standard, starting with scanner and monitor calibration and benchmark testing of capture equipment before purchase. We then established more consistent workflows and technical metadata capture, and developed a more robust file naming scheme along with file movement and data storage strategies. We now work hard to synchronize our efforts between all of the departments involved in our Digital Collections Program. We are always refining our workflows and processes to become more efficient at publishing and preserving Digital Collections.
Dive Deep. For those of you who would like to take a deep dive into image capture for cultural heritage institutions, here is the full standard. For those of you who don’t fall into this category, I’ve boiled down the standard below. I believe that it’s necessary to use the whole standard in order for a program to become stable and mature. As we did, this can be implemented over time.
Boil It Down. The FADGI standard provides a tiered approach for still image capture, from 1 to 4 stars, with 4 stars being the highest. The 1- and 2-star tiers are used when imaging for access, and tiers 3 and 4 are used for archival imaging, with preservation as the focus.
The physical environment: The environment should be color neutral. Walls should be painted a neutral gray to minimize color shifts and flare that might come from a wall color that is not neutral. Monitors should be positioned to avoid glare on the screens (this is why most professional monitors have hoods). Overhead lighting should be around 5000K (tungsten, fluorescent and other bulbs can have yellow, magenta and green color shifts, which can affect the perception of the color of an image). Each capture device should be separated so that light spillover doesn’t affect another capture device.
Monitors, light boxes, and viewing of originals: Overhead light or a viewing booth should be set up for viewing of originals and should be a neutral 5000K. A light box used for viewing transmissive material should also be 5000K.
Digital images should be viewed in the colorspace they were captured in and the monitor should be able to display that colorspace. Most monitors display in the sRGB colorspace. However, professional monitors use the AdobeRGB colorspace which is commonly used in cultural heritage image capture. The color temperature of your monitor should be set to the Kelvin temperature that most closely matches the viewing environment. If the overhead lights are 5000K, then the monitor’s color temperature should also be set to 5000K.
Calibration packages, consisting of hardware and software that read and evaluate color, are an essential piece of equipment. These packages normalize the luminosity, color temperature and color balance of a monitor and create an ICC display profile that is used by the computer’s operating system to display colors correctly so that accurate color assessments can be made.
Capture Devices: The market is flooded with capture devices of varying quality. It is important to do research on any new capture device. I recommend skipping the marketing schemes that tout all the bells and whistles and just sticking to talking to institutions that have established digital collections programs. This will help to focus research on the few contenders that will produce the files that you need. They will help you slog through how many megapixels are necessary, which lenses are best for which applications, and which scanner driver is easiest to use while still getting the best color out of your scanner. Beyond the capture device, other things that come into play are effective scanner drivers that produce the most accurate and consistent results, upgrade paths for your equipment and service packages that help maintain your equipment.
Capture Specifications: I’ll keep this part short because there are a wide variety of charts covering many formats, capture specifications and their corresponding tiers. Below I have simplified the information from the charts. These specifications hover between tiers 3 and 4, mostly leaning toward 4.
Always use a FADGI-compliant reference target at the beginning of a session to ensure the capture device is within acceptable deviation. The target values differ depending on which reference targets are used. Most targets come with a chart listing the numerical value of each swatch in the target. Our lab uses a classic GretagMacbeth target, and our acceptable color deviation is +/- 5 units of color.
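In practice this check is a straightforward comparison of measured swatch values against the chart's published reference values. The sketch below illustrates the idea; the swatch name and RGB numbers are hypothetical (not taken from a real GretagMacbeth chart), and applying the ±5 tolerance per channel is my simplifying assumption:

```python
TOLERANCE = 5  # +/- 5 units of color, our lab's acceptable deviation

def check_target(measured, reference, tolerance=TOLERANCE):
    """Compare measured swatch values against the chart's reference values.

    Both arguments map swatch name -> (R, G, B). Returns a dict of the
    swatches whose deviation on any channel exceeds the tolerance, with
    the per-channel deviations so the operator can see what drifted."""
    failures = {}
    for swatch, ref_rgb in reference.items():
        devs = [m - r for m, r in zip(measured[swatch], ref_rgb)]
        if any(abs(d) > tolerance for d in devs):
            failures[swatch] = devs
    return failures

# Hypothetical example values, for illustration only:
reference = {"neutral_gray": (122, 122, 121)}
measured = {"neutral_gray": (129, 122, 120)}
print(check_target(measured, reference))  # {'neutral_gray': [7, 0, -1]}
```

If the check comes back with failures, the session stops and the capture device is recalibrated before any collection material is imaged.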
Our general technical specs for reflective material including books, documents, photographs and maps are:
Master File Format: TIFF
Resolution: 300 ppi
Bit Depth: 8
Color Depth: 24 bit RGB
Color Space: Adobe 1998
These specifications generally follow the standard. If the materials being scanned are smaller than 5×7 inches we increase the PPI to 400 or 600 depending on the font size and dimensions of the object.
Our general technical specs for transmissive material including acetate, nitrate and glass plate negatives, slides and other positive transmissive material are:
Master File Format: TIFF
Resolution: 3000 – 4000 ppi
Bit Depth: 16
Color Depth: 24 bit RGB
Color Space: Adobe 1998
These specifications generally follow the standard. If the transmissive materials being scanned are larger than 4×5 inches we decrease the PPI to 1500 or 2000 depending on negative size and condition.
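The two sizing rules above can be sketched as a small helper. This is an illustrative simplification, not part of the FADGI standard itself: the function name is my own, I interpret "smaller than 5×7" and "larger than 4×5" as comparing the sorted dimensions, and it returns a range rather than a single value because the final choice depends on font size or on negative size and condition:

```python
def recommended_ppi(material, width_in, height_in):
    """Suggest a capture resolution range (low_ppi, high_ppi) in our lab's scheme.

    material: "reflective" (books, documents, photographs, maps) or
    "transmissive" (negatives, slides). Dimensions are in inches."""
    short, long = sorted((width_in, height_in))
    if material == "reflective":
        if short < 5 and long < 7:
            return (400, 600)    # small originals: pick within range by font size
        return (300, 300)        # the general reflective spec
    elif material == "transmissive":
        if short > 4 and long > 5:
            return (1500, 2000)  # large negatives: pick by size and condition
        return (3000, 4000)      # the general transmissive spec
    raise ValueError("material must be 'reflective' or 'transmissive'")

print(recommended_ppi("reflective", 4, 6))     # (400, 600)
print(recommended_ppi("transmissive", 8, 10))  # (1500, 2000)
```

In our lab the technician still makes the final call after inspecting the object; a rule of thumb like this only narrows the starting point.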
Recommended capture devices: The standard goes into detail on what capture devices to use and not to use when digitizing different types of material. It describes when to use manually operated planetary scanners as opposed to a digital scan back, when to use a digital scan back instead of a flatbed scanner, when and when not to use a sheet fed scanner. Not every device can capture every type of material. In our lab we have 6 different devices to capture a wide variety of material in different states of fragility. We work with our Conservation Department when making decisions on what capture device to use.
General Guidelines for still image capture
Do not apply pressure with a glass platen or otherwise unless approved by a paper conservator.
Do not use vacuum boards or high UV light sources unless approved by a paper conservator.
Do not use auto page turning devices unless approved by a paper conservator.
For master files, pages, documents and photographs should be imaged to include the entire area of the page, document or photograph.
For bound items the digital image should capture as far into the gutter as practical but must include all of the content that is visible to the eye.
If a backing sheet is used on a translucent piece of paper to increase contrast and readability, it must extend beyond the edge of the page to the end of the image on all open sides of the page.
For master files, documents should be imaged to include the entire area and a small amount beyond to define the area.
Do not use lighting systems that raise the surface temperature of the original more than 6 degrees F (3 degrees C) in the total imaging process.
When capturing oversized material, if the sections of a multiple scan item are compiled into a single image, the separate images should be retained for archival and printing purposes.
The use of glass or other materials to hold photographic images flat during capture is allowed, but only when the original will not be harmed by doing so. Care must be taken to assure that flattening a photograph will not result in emulsion cracking, or the base material being damaged. Tightly curled materials must not be forced to lay flat.
For original color transparencies, the tonal scale and color balance of the digital image should match the original transparency being scanned to provide accurate representation of the image.
When scanning negatives, for master files the tonal orientation may be inverted to produce a positive. The resulting image will need to be adjusted to produce a visually pleasing representation. Digitizing negatives is very analogous to printing negatives in a darkroom, and it is very dependent on the photographer’s/technician’s skill and visual literacy to produce a good image. There are few objective metrics for evaluating the overall representation of digital images produced from negatives.
The lack of dynamic range in a film scanning system will result in poor highlight and shadow detail and poor color reproduction.
No image retouching is permitted to master files.
These details were pulled directly from the standard. They cover a lot of ground, but there are always decisions to be made that relate uniquely to the material being digitized. There are 50 or so more pages of the standard covering workflow, color management, data storage, file naming and technical metadata. I'll have to cover those in my next blog post.
The FADGI standard for still image capture is very thorough but also leaves room to adapt. While we don't follow everything outlined in the standard, we do follow the majority of it. This standard, years of experience, and a lot of trial and error have helped make our program more sound, consistent and scalable.
This sermon struck me because, more than the others, it directly references specific events of the Civil Rights Movement, and because it echoes current events across the nation so closely, particularly in its telling of Emmett Till's horrific murder and the fact that his mother chose to have an open casket so that everyone could see the brutality of racism.
I am in awe of the strength it must have taken Emmett’s mother, Mamie Till, to make the decision to have an open casket at her son’s funeral.
Duke has many collections related to the history of the Civil Rights Movement. This collection provides a religious context to the events of our relatively recent past, not only of the Civil Rights Movement but of many social, political and spiritual issues of our time.
We all probably remember having to pose for an annual class photograph in primary school. If you made the mistake of telling your mother about the looming photograph beforehand, you probably had to wear something “nice” and had your hair plastered to your head while she informed you of the trouble you'd be in if you made a funny face. Everyone looks a little awkward in these photographs, and only a few of us wanted to have the picture taken in the first place. Frankly, I'm amazed that they got us all to sit still long enough to take the photograph. Some of us also posed for similar photographs while participating in team sports, which produced equally interesting results.
These are some of the memories that have been popping up this past month as I digitize nitrate negatives from the Sports Information Office: Photographic Negatives collection, circa 1924-1992, 1995 and undated. The collection contains photographic negatives related to sports at Duke. I've digitized about half of the negatives and have seen images mostly from football, basketball, baseball and boxing. The majority of these photographs are of individuals, but there are also team shots, group shots and coaches. While you may have to wait a bit for the publication of these negatives through the Digital Collections website, I had to share some of these gems with you.
Some of the images strike me as funny for the expressions, some for the pose, and others for the totally out-of-context background. It makes me wonder what the photographer's intention or instruction was.
To capture these wonderful images we are using a recently purchased Hasselblad FlexTight X5. The Hasselblad is a dedicated high-end film scanner that uses glassless drum scanning technology. Glassless drum scanning takes advantage of all the benefits of a classic drum scanner (high resolution, sharpness, better D-max/D-min) without the disadvantages (messy wet mounting, Newton rings, slow and time-consuming operation, price). This device produces reproductions so sharp that the film grain is visible in the digital image. Two more important factors about this scanner: it can digitize a wide variety of standard film sizes along with custom sizes, and it captures in a raw file format.

The raw format is significant because negatives contain a great deal of tonal information that printed photographs do not. Once this information is captured, we have to adjust each digital image as if we were printing the negative in a traditional darkroom. When image editing software adjusts an image, an algorithm makes decisions about compressing, expanding, keeping or discarding tonal information, and this type of adjustment causes data loss. Because we are following archival imaging standards, retaining the largest amount of data is important. Sometimes the data loss is not visible to the naked eye, but making adjustments renders the image data “thin”: the more adjustments made to an image, the less data there is to work with.
It reminds me of the scene in The Shawshank Redemption (spoiler alert) where the warden is in Andy Dufresne's (Tim Robbins) cell after discovering he has escaped. The warden throws a rock at a poster on the wall in anger, only to find there is a hole in the wall behind it. An adjusted digital image is similar: it looks normal and solid, but there is no depth to it. This becomes a problem if anyone later wants to reuse the image in some other context and must make adjustments to suit their purposes. They won't have much latitude before digital artifacts start appearing. By using the Hasselblad raw file format and capturing in 16-bit RGB, we are able to make adjustments to the raw file without data loss. This enables us to create a robust file that will be more useful in the future.
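The "thin data" effect is easy to demonstrate numerically. Below is a minimal numpy sketch (not our actual workflow; the gamma curve here is just a stand-in for any tonal adjustment) that applies the same curve to a full 8-bit tonal ramp and a full 16-bit tonal ramp, then counts how many distinct tonal levels survive re-quantization:

```python
import numpy as np

def unique_levels_after_curve(depth_max, gamma=2.2):
    """Apply a tonal curve to a full ramp at the given bit depth
    and count how many distinct levels remain afterward."""
    # Every possible tonal value, normalized to 0.0-1.0
    ramp = np.arange(depth_max + 1, dtype=np.float64) / depth_max
    # Apply a gamma curve (a stand-in for any levels/curves edit),
    # then re-quantize back to the same bit depth
    adjusted = np.round((ramp ** (1.0 / gamma)) * depth_max).astype(np.int64)
    return len(np.unique(adjusted))

# 8-bit: starts with 256 possible levels; fewer survive the adjustment
print(unique_levels_after_curve(255))
# 16-bit: starts with 65,536 possible levels; far more survive
print(unique_levels_after_curve(65535))
```

In the 8-bit case, neighboring input values collapse into the same output value and gaps open in the histogram, which is exactly the "comb" pattern you see after editing an 8-bit file. At 16 bits there is so much headroom that even after the same adjustment, far more distinct levels remain than an 8-bit derivative file could ever hold.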
I’m sure there will be many uses for the negatives in this collection. Who wouldn’t want a picture of a former Duke athlete in an odd pose, in an out-of-context environment, with a funny look on their face? Right?
Notes from the Duke University Libraries Digital Projects Team