Just in time for Summer – New Digital Collections!

Looking for something to keep you company on your Summer vacation?  Why not direct your devices to Duke Digital Collections! Seriously! Here are a few of the compelling collections we debuted earlier this Spring, and we have more coming in late June.

Hayti-Elizabeth Street renewal area

These maps and two-volume report document the infrastructure of Durham’s Hayti-Elizabeth Street neighborhood prior to the construction of the Durham Freeway, as well as the justifications for the redevelopment of the area.  This is an excellent resource for anyone studying Durham history and/or the urban renewal initiatives of the mid-20th century.

Map of the Hayti-Elizabeth Street renewal area
One map from “Hayti-Elizabeth Street renewal area : general neighborhood renewal plan, map 1”

African American Soldiers’ Photograph Albums

We launched eight collections of photograph albums created by African American soldiers serving in the military across the world, including Japan, Vietnam, and Iowa.  Together these albums help “document the complexity of the African American military experience” (Bennett Carpenter, from his blog post “War in Black and White: African American Soldiers’ Photograph Albums”).

One page from the African American soldier’s World War II photograph album of Munich, Germany

 

Sir Percy Molesworth Sykes Photograph Album

This photograph album contains pictures taken by Sir Percy Molesworth Sykes during his travels, with his sister Ella Sykes, in a mountainous region of Central Asia, now the Xinjiang Uyghur Autonomous Region of China.  According to the collection guide, the album’s “images are large, crisp, and rich with detail, offering views of a remote area and its culture during tensions in the decades following the Russo-Turkish War”.

A Sidenote

Both the Hayti-Elizabeth and soldiers’ albums collections were proposed in response to our 2017 call for digitization proposals related to diversity and inclusion.  Other collections in that batch include the Emma Goldman papers, Josephine Leary papers, and the ReImagining collection.  

Coming soon

Our work never stops, and we have several large projects in the works that are scheduled to launch by the end of June. These include the first batch of video recordings from the Memory Project. We are also busy migrating the incredible photographs from the Sidney Gamble collection into the digital repository.  Finally, there is one last batch of Radio Haiti recordings on the way.

Advertisement for the American Airlines

Keeping in touch

We launch new digital collections just about every quarter, and we have been investigating new ways to promote our collections as part of an assessment project.  We are thinking of starting a newsletter – would you subscribe? What other ways would you like to keep in touch with Duke Digital Collections? Post a comment or contact me directly.

Features and Gaps and Bees, Oh My!

Since my last post about our integrated library system (ILS), there have been a few changes. First, my team is now the Library Systems and Integration Support Department. We have also added three business analysts to our team, and we have a developer coming on board this summer. We continue to work on FOLIO as a replacement for our current ILS. So what work are we doing on FOLIO?

FOLIO is a community-sourced product. There are currently more than 30 institutions, over a dozen developer organizations, and vendors such as EBSCO and IndexData involved. The members of the community come together in Special Interest Groups (SIGs). The SIGs discuss what functionality and data are needed, write the user stories, and develop workflows so that library staff will be able to do their tasks. There are ten main SIGs, an Implementation Group, and Product and Technical Councils. Here at Duke, we have staff from all over the libraries involved in the SIGs. They speak up to be sure the product will work for Duke Libraries.

Features

The institutions planning to implement FOLIO in Summer 2020 spent April ranking 468 open features. They needed to choose whether each feature was needed at the time the institution planned to go live, or whether it could wait to be added later (one quarter or one year after go-live). Duke voted for 62% of the features to be available at the time we go live with FOLIO. These features include things like default reports, user experience enhancements, and more detailed permission settings, to name a few.

Gaps

After the feature prioritization was complete, we conducted a gap analysis. This required our business analysts to take what they had learned from interviews with library staff across the University and compare it to what FOLIO can currently do and what is planned. The Duke Libraries staff who have been active on the SIGs were extremely helpful in identifying gaps. Some feature requests that came out of the gap analysis included making sure a user record has an expiration date associated with it. Another was being able to re-print notices to patrons. Others had to do with workflow, for example, making sure that when a holdings record is “empty” (no items attached), an alert is sent so a staff person can decide whether or not to delete the empty record.

Bees?

So where do the bees come into all of this? Well, the logo for FOLIO includes a bee!

folio: future of libraries is open. Bee icon

The release names and logos are flowers. And we’re working together in a community toward a single goal – a new Library Services Platform that is community-sourced and works for the future of libraries.

Learn more about FOLIO@Duke by visiting our site: https://sites.duke.edu/folioatduke/. We’ve posted newsletters, presentations, and videos from the FOLIO project team.

hexagon badge, image of aster flower, words folio aster release Jan 2019

hexagon badge, image of bellis flower, words folio bellis release Apr  2019

hexagon badge, image of clover flower, words folio clover release May 2019

hexagon badge, image of daisy flower, words folio daisy release Oct 2019

A simple tool with a lot of power: Project Estimates

It takes a lot to build and publish digital collections, as you can see from the variety and scope of the blog posts here on Bitstreams.  We all have internal workflows and tools we use to make our jobs easier and more efficient.  The number and scale of activities going on behind the scenes is mind-boggling, and we would never be able to do as much as we do if we didn’t continually refine our workflows and create tools and systems that help manage our data and work.  Some of these tools are big, like the Duke Digital Repository (DDR), with its public, staff, and back-end interfaces used to preserve, secure, and provide access to digital resources, while others are small, like scripts built to transform ArchivesSpace output into starter digitization guides.  In the Digital Production Center (DPC) we use a homegrown tool that not only tracks production statistics but is also used to make project projections and to help isolate problems that occur during the digitization process.  This tool is a relational database affectionately named the Daily Work Report, and it has collected over nine years of data on nearly every project in that time.

A long time ago, in a newly minted DPC, supervisors and other Library staff often asked me: “How long will that take?”, “How many students will we need to digitize this collection?”, “What will the data footprint of this project be?”, “How fast does this scanner go?”, “How many scans did we do last year?”, “How many items is that?”.  While I used to answer these questions with general information and anecdotal evidence, along with some manual hunting down of details, it became more and more difficult as the number of projects multiplied, our services grew, our capture devices multiplied, and the types of projects expanded to include preservation projects, donor requests, patron requests, and exhibits.  Answering these seemingly simple questions became more complicated and time consuming as the department grew.  I thought to myself: I need a simple way to track the work being done on these projects that would help me answer these recurring questions.

We were already using a FileMaker Pro database with a GUI interface as a checkout system to assign students batches of material to scan, but it only tracked which student worked on which material.  I decided I could build out this concept to include all of the data points needed to answer the questions above.  I chose Microsoft Access because it was a common tool installed on every workstation in the department, I had used it before, and classes and instructional videos abound if I wanted to do anything fancy.

Enter the Daily Work Report (DWR).  I created a number of discrete tables to hold various types of data: project names, digitization tasks, employee names, and so on.  These fields are connected to a datasheet represented as a form, which allows for dropdown lists and auto-filling for rapid and consistent entry of information.

At the end of each shift students and professionals alike fill out the DWR for each task they performed on each project and how long they worked on each task.  These range from the obvious tasks of scanning and quality control to more minute tasks of derivative creation, equipment cleaning, calibration, documentation, material transfer, file movement, file renaming, ingest prep, and ingest.

Some of these tasks may seem minor, possibly too insignificant to record, but they add up – to roughly 30% of the time it takes to complete a project.  When projecting the time it will take to complete a new project, we collect Scanning and Quality Control data from a similar project, calculate the time, and add 30%.

Common Digitization Tasks

Task                        Hours    Overall % of Project
Scanning                    406.5    57.9
Quality Control 1           133      19.0
Running Scripts             24.5     3.5
Collection Analysis         21       3.0
Derivative Creation         20.5     2.9
File Renaming               15.5     2.2
Material Transfer           14       2.0
Testing                     12.5     1.8
Documentation               10       1.4
File Movement               9.75     1.4
Digitization Guide          7        1.0
Quality Control 2           6.75     1.0
Training                    6        0.9
Quality Control 3           5.5      0.9
Stitching                   3        0.4
Rescanning                  1.5      0.2
Finalize                    1.5      0.2
Troubleshooting             1.5      0.2
Conservation Consultation   1        0.1
Total                       701      100.0

New Project Estimates

Using the Daily Work Report’s Datasheet View, the database can be filtered by project, then by the “Scanning” task to get the total number of scans and the hours worked to complete those scans.  The same can be done for the Quality Control task.  With this information the average number of scans per hour can be calculated for the project and applied to the new project estimate.

Gather information from an existing project that is most similar to the project you are creating the estimate for.  For example, if you need to develop an estimate for a collection of bound volumes that will be captured on the Zeutschel you should find a similar collection in the DWR to run your numbers.

Gather data from an existing project:

Scanning

  • Number of scans = 3,473
  • Number of hours = 78.5
  • 3,473 / 78.5 = 44.2/hr

Quality Control

  • Number of scans = 3,473
  • Number of hours = 52.75
  • 3,473 / 52.75 = 65.8/hr

Apply the per hour rates to the new project:

Estimated number of scans: 7,800

  • Scanning: 7,800 / 44.2/hr = 176.5 hrs
  • QC: 7,800 / 65.8/hr = 118.5 hrs
  • Total: 295 hrs
  • + 30%: 88.5 hrs
  • Grand Total: 383.5 hrs
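The projection above is simple enough to script. Here is a minimal sketch of the same arithmetic in Python (the function name and parameters are mine, not part of the DWR):

```python
def project_estimate(est_scans, scan_rate, qc_rate, overhead=0.30):
    """Estimate total project hours: scanning + QC, plus a flat
    overhead percentage to cover all of the minor tasks."""
    scan_hrs = est_scans / scan_rate
    qc_hrs = est_scans / qc_rate
    subtotal = scan_hrs + qc_hrs
    return subtotal * (1 + overhead)

# Per-hour rates gathered from a similar completed project
hours = project_estimate(7800, scan_rate=44.2, qc_rate=65.8)
print(round(hours, 1))  # 383.5
```

Swapping in rates from whichever past project best matches the new material is the only judgment call; the rest is mechanical.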

Rolling Production Rate

When an update is required for an ongoing project, the Daily Work Report can be used to see how much has been done and to calculate how much longer it will take.  The number of images scanned in a collection can be found by filtering by project, then by the “Scanning” task.  That number can then be subtracted from the total number of scans in the project.  Then, using a similar project as above, you can calculate the production rate and estimate the number of hours it will take to complete the project.

Scanning

  • Number of scans in the project = 7,800
  • Number of scans completed = 4,951
  • Number of scans left to do = 7,800 – 4,951 = 2,849

Scanning time to completion

  • Number of scans left = 2,849
  • 2,849 / 42.4/hr = 67.2 hrs

Quality Control

  • Number of files to QC in the project = 7,800
  • Number of files completed = 3,712
  • Number of files left to do = 7,800 – 3,712 = 4,088

QC hours to completion

  • Number of files left to QC = 4,088
  • 4,088 / 68.8/hr = 59.4 hrs

The amount of time left to complete the project

  • Scanning – 67.2 hrs
  • Quality Control – 59.4 hrs
  • Total = 126.6 hrs
  • + 30% = 38 hrs
  • Grand Total = 164.6 hrs
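The same arithmetic covers the rolling update; a quick sketch (again, the function name is mine):

```python
def hours_remaining(total, done, rate):
    """Hours of work left at a given per-hour production rate."""
    return (total - done) / rate

scan_hrs = hours_remaining(7800, 4951, 42.4)  # 2,849 scans left
qc_hrs = hours_remaining(7800, 3712, 68.8)    # 4,088 files left
print(round((scan_hrs + qc_hrs) * 1.3, 1))    # 164.6 hrs left, incl. 30% overhead
```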

Isolate an error

Errors inevitably occur during most digitization projects.  The DWR can be used to identify how widespread an error is through a combination of filtering, the digitization guide (an inventory of images captured, along with other metadata about the capture process), and inspection of the images.  As an example, a set of files may be found to have no color profile.  The digitization guide can be used to identify the day the erroneous images were created and who created them. The DWR can then be used to filter by scanner operator and date to see if the error is isolated to a particular person, a particular machine, or a particular day.  This information can then be used to filter by the same variables across collections to see if the error exists elsewhere.  The results of this search can facilitate retraining and recalibration of capture devices, and can identify groups of images that need to be rescanned without having to comb through an entire collection.
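The DWR does this with Access filters, but the idea is generic. As a rough illustration in Python (these records and field names are hypothetical, not the DWR’s actual schema):

```python
# Hypothetical work-report rows; field names are illustrative only
records = [
    {"task": "Scanning", "operator": "Student A", "scanner": "Zeutschel", "date": "2019-04-02"},
    {"task": "Scanning", "operator": "Student B", "scanner": "Phase One", "date": "2019-04-02"},
    {"task": "Scanning", "operator": "Student A", "scanner": "Zeutschel", "date": "2019-04-03"},
]

def filter_records(rows, **criteria):
    """Return the rows matching every supplied field=value pair."""
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

# Was the bad batch limited to one operator on one machine?
suspects = filter_records(records, operator="Student A", scanner="Zeutschel")
```

Running the same filter across other collections then shows whether the problem is isolated or systemic.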

While I’ve only touched on the uses of the Daily Work Report, we have used this database in many different ways over the years.  It has continued to answer those recurring questions that come up year after year.  How many scans did we do last year?  How many students worked on that multiyear project?  How many patron requests did we complete last quarter?  This database has helped us do our estimates, isolate problems and provide accurate updates over the years.  For such a simple tool it sure does come in handy.

Web Accessibility: Values and Vigilance

The Duke Libraries are committed to providing outstanding service based on respect and empathy for the diverse backgrounds and needs in our community. Our guiding principles make clear how critically important diversity and inclusion are to the library, and the extent to which we strive to break down barriers to scholarship.

One of the biggest and most important barriers for us to tackle is the accessibility of our web content. Duke University’s Web Accessibility site sums it up well:

Duke believes web content needs to be accessible to people with a wide range of abilities, including visual, auditory, physical, speech, cognitive, language, learning, and neurological abilities.

Screenshot of Duke Web Accessibility homepage
The Duke Web Accessibility website is a tremendous resource for the Duke community.

This belief is also consistent with the core values expressed by the American Library Association (ALA). A library’s website and online resources should be available in formats accessible to people of all ages and abilities.

Web Content

As one of the largest research libraries in the U.S., we have a whole lot of content on the web to consider.

Our website alone comprises over a thousand pages with more than fifty staff contributors. The library catalog interface displays records for over 13 million items at Duke and partner libraries. Our various digital repositories and digital exhibits platforms host hundreds of thousands of interactive digital objects of different types, including images, A/V, documents, datasets, and more. The list goes on.

Any attempt to take a full inventory of the library’s digital content reveals potentially several million web pages under the library’s purview, and all that content is managed and rendered via a dizzying array of technology platforms. We have upwards of a hundred web applications with public-facing interfaces. We built some of these ourselves, some are community-developed (with local customizations), and others we have licensed from vendors. Some interfaces are new, some are old. And some are really old, dating all the way back to the mid-90s.

Ensuring that this content is equally accessible to everyone is important, and it is indeed a significant undertaking. We must also be vigilant to ensure that it stays accessible over time.

With that as our context, I’d like to highlight a few recent efforts in the library to improve the accessibility of our digital resources.

Style Guide With Color Contrast Checks

In January 2019, we launched a new catalog, replacing a decade-old platform and its outdated interface. As we began developing the front-end, we knew we wanted to be consistent, constrained, and intentional in how we styled elements of the interface. We were especially focused on ensuring that any text in the UI had sufficient contrast with its background to be accessible to users with low vision or color-blindness.

We tried out a few existing “living style guide” frameworks. But none of them proved to be a good fit, particularly for color contrast management. So we ended up taking a DIY approach and developed our own living style guide using Javascript and Ruby.

Screenshot of the library catalog style guide showing a color palette.
The library catalog’s living style guide dynamically checks for color contrast accessibility.

Here’s how it works. In our templates we specify the array of color variable names for each category. Then we use client-side Javascript to dynamically measure the hex & RGB values and the luminance of each color in the guide. From those figures, we return score labels for black and white contrast ratios, color-coded for WCAG 2.0 compliance.

This style guide is “living” in that it’s a real-time up-to-date reflection of how elements of the UI will appear when using particular color variable names and CSS classes. It helps to guide developers and other project team members to make good decisions about colors from our palette to stay in compliance with accessibility guidelines.
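Our guide computes these scores client-side in Javascript, but the underlying WCAG 2.0 math is straightforward. Here is the same calculation sketched in Python: convert each color to its relative luminance, then take the ratio (4.5:1 is the AA threshold for normal-size text):

```python
def relative_luminance(hex_color):
    """WCAG 2.0 relative luminance of an sRGB color like '#0577b1'."""
    def channel(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast
print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0
```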

Audiovisual Captions & Interactive Transcripts

In fall 2017, I wrote about an innovative, custom-developed feature in our Digital Repository that renders interactive caption text for A/V within and below our media player. At that time, however, none of our A/V items making use of that feature were available to the public.  In the months since then, we have debuted several captioned items for public access.

We extended these features in 2018, including: 1) exporting captions on the fly as text, PDF, or original WebVTT files, and 2) accommodating transcript files that originated as documents (PDF, Word).

Screenshot of an interactive transcript with export options
WebVTT caption files for A/V are rendered as interactive HTML transcripts and can be exported into text or PDF.

Two of my talented colleagues have shared more about our A/V accessibility efforts at conferences over the past year. Noah Huffman presented at ARCHIVES*RECORDS (Joint Annual Meeting of CoSA, NAGARA, and SAA) in Aug 2018. And Molly Bragg presented at Digital Library Federation (DLF) Forum (slides) in Nov 2018.

Institutional Repository Accessibility

We have documented our work over 2018 revitalizing DSpace at Duke, and then subsequently developing a new set of innovative features that highlight Duke researchers and the impact of their work. This spring, we took a closer look at our new UI’s accessibility following Duke’s helpful guide.
In the course of this assessment, we were able to identify (and then fix!) several accessibility issues in DukeSpace. I’ll share two strategies in particular from the guide that proved to be really effective. I highly recommend using them frequently.

The Keyboard Test

How easy is it to navigate your site using only your keyboard? Can you get where you want to go using TAB, ENTER, SPACE, UP, and DOWN?  Is it clear which element of the page currently has the focus?
Screenshot of DukeSpace homepage showing skip to content link
A “Skip to main content” feature in DukeSpace improves navigation via keyboard or assistive devices.
This test illuminated several problems. But with a few modest tweaks to our UI markup, we were able to add semantic markers to designate page sections and a skip to main content link, making the content much more navigable for users with keyboards and assistive devices alike.

A Browser Extension

If you’re a developer like me, chances are you already spend a lot of time using your browser’s Developer Tools pane to look under the hood of web pages, reverse-engineer UIs, mess with styles and markup, or troubleshoot problems.
The Deque Systems aXe Chrome Extension (also available for Firefox) integrates seamlessly into existing Dev Tools. It’s a remarkably useful tool to have in your toolset to help quickly find and fix accessibility issues. Its interface is clear and easy to understand. It finds and succinctly describes accessibility problems, and even tells you how to fix them in your code.
An image from the Deque aXe Chrome extension site showing the tool in action.
With aXe testing, we quickly learned we had some major issues to fix. The biggest problems revealed were missing form labels and page landmarks, and low contrast on color pairings. Again, these were not hard to fix since the tool explained what to do, and where.
Turning away from DSpace for a moment, see this example article published on a popular academic journal’s website. Note how it fares with an automated aXe accessibility test (197 violations of various types found).  And if you were using a keyboard, you’d have to press Tab over 100 times in order to download a PDF of the article.
Screenshot of aXe Chrome extension running on a journal website.
UI for a published journal article in a publisher’s website after running the aXe accessibility test. Violations found: 197.

Now, let’s look at the open access copy of that same article that resides in our DukeSpace site. With our spring 2019 DukeSpace accessibility revisions in place, when we run an aXe test, we see zero accessibility violations. Our interface is also now easily navigated without a mouse.

Screenshot of DukeSpace UI showing no violations found by aXe accessibility checker
Open access copy of an article in DukeSpace: No accessibility violations found.

Here’s another example of an open access article in DukeSpace vs. its published counterpart in the website of a popular journal (PNAS).  While the publisher’s site markup addresses many common accessibility issues, it still shows seven violations in aXe. And perhaps most concerning is that it is completely unnavigable via a keyboard: the stylesheets prevent all focus styles from displaying.

Concluding Thoughts

Libraries are increasingly becoming champions for open access to scholarly research. The overlap in aims between the open access movement and web accessibility in general is quite striking. It all boils down to removing barriers and making access to information as inclusive as possible.

Our open access repository UIs may never be able to match all the feature-rich bells and whistles present in many academic journal websites. But accessibility, well, that’s right up our alley. We can and should do better. It’s all about being true to our values, collaborating with our community of peers, and being vigilant in prioritizing the work.

Look for many more accessibility improvements throughout many of the library’s digital resources as the year progresses.


Brief explanatory note about the A11Y++ image in this post: A11Y is a numeronym — shorthand for the word “accessibility” and conveniently also visually resembling the word “ally.” The “++” is an increment operator in many programming languages, adding one to a variable. 

Mythical Beasts of Audio

Gear. Kit. Hardware. Rig. Equipment.

In the audio world, we take our tools seriously, sometimes to an unhealthy and obsessive degree. We give them pet names, endow them with human qualities, and imbue them with magical powers. In this context, it’s not really strange that a manufacturer of professional audio interfaces would call themselves “Mark of the Unicorn.”

Here at the Digital Production Center, we recently upgraded our audio interface to a MOTU 896 mk3 from an ancient (in tech years) Edirol UA-101. The audio interface, which converts analog signals to digital and vice-versa, is the heart of any computer-based audio system. It controls all of the routing from the analog sources (mostly cassette and open reel tape decks in our case) to the computer workstation and the audio recording/editing software. If the audio interface isn’t seamlessly performing analog to digital conversion at archival standards, we have no hope of fulfilling our mission of creating high-quality digital surrogates of library A/V materials.

Edirol UA-101
The Edirol enjoying its retirement with some other pieces of kit

While the Edirol served us well from the very beginning of the Library’s forays into audio digitization, it had recently begun to cause issues resulting in crashes, restarts, and lost work. Given that the Edirol is over 10 years old and has been discontinued, it was inevitable that it would eventually fail to keep up with continued OS and software updates. After re-assessing our needs and doing a bit of research, we settled on the MOTU 896 mk3 as its replacement. The 896 had the input, output, and sync options we needed, along with plenty of other bells and whistles.

I’ve been using the MOTU for several weeks now, and here are some things that I’m liking about it:

  • Easy installation of drivers
  • Designed to fit into standard audio rack
  • Choice of USB or Firewire connection to PC workstation
  • Good visual feedback on audio levels, sample rate, etc. via LED meters on front panel
  • Clarity and definition of sound
MOTU 896mk3
The MOTU sitting atop the audio tower

I haven’t had a chance to explore all of the additional features of the MOTU yet, but so far it has lived up to expectations and improved our digitization workflow. However, in a production environment such as ours, each piece of equipment needs to be a workhorse that can perform its function day in and day out as we work our way through the vaults. Only time can tell if the Mark of the Unicorn will be elevated to the pantheon of gear that its whimsical name suggests!

News Feeds, Microfilm, and the Stories We Tell Ourselves

A little over a week ago, I watched the searing and provocative TED talk by British journalist Carole Cadwalladr, “Facebook’s role in Brexit – and the threat to democracy.” It got me thinking about a few library things, which I thought might make for an interesting blog post. Then thinking about these library things took me down a series of rabbit holes, interconnecting and nuanced and compelling enough to chew up the entirety of the time I’d set aside for my turn in the Bitstreams blog rotation. There is no breezy, concise blog post that could pull them all together, so I’m just going to do what I can with two of the maybe four or five rabbit holes that I fell into.

Cadwalladr took the stage at a TED conference sponsored by Facebook and Google, and spoke about her investigations into the role of Facebook and Cambridge Analytica in the Brexit vote in 2016. Addressing the big tech leaders present – the “Gods of Silicon Valley: Mark Zuckerberg, Sheryl Sandberg, Larry Page, Sergey Brin and Jack Dorsey” – she levelled a devastating j’accuse – “[W]hat the Brexit vote demonstrates is that liberal democracy is broken. And you broke it. This is not democracy — spreading lies in darkness, paid for with illegal cash, from God knows where. It’s subversion, and you are accessories to it.”

It was a courageous act, and Cadwalladr deserves celebration and recognition for it, even if the place it leaves us is a bleak one. As she would admit later, she felt massive pressure as she spoke. I had a number of reactions to her talk, but there was a line in particular that got me thinking about library things. It occurred when she explained to the audience that “this entire referendum took place in darkness, because it took place on Facebook…, because only you see your news feed, and then it vanishes, so it’s impossible to research anything.” It provoked me to think about how we use “news feeds” – in the form of newspapers themselves – in the study of history, and the role that libraries play in preserving them.


Is there an app for that? The seemingly endless quest to make discovery easier for users

Contributed by Assessment & User Experience Department Practicum Students Amelia Midgett-Nicholson and Allison Cruse 

Duke University Libraries (DUL) is always searching for new ways to increase access and make discovery easier for users. One area users frequently have trouble with is accessing online articles. Too often we hear from students that they cannot find an article PDF they are looking for, or even worse, that they end up paying to get through a journal paywall. To address this problem, DUL’s Assessment and User Experience (AUX) Department explored three possible tools: LibKey Discovery, Kopernio, and Lean Library. After user testing and internal review, LibKey Discovery emerged as the best available tool for the job.  

LibKey Discovery logo

LibKey Discovery is a suite of user-friendly application programming interfaces (APIs) used to enhance the library’s existing discovery system.  The APIs enable one-click access to PDFs for subscribed and open-access content, one-click access to full journal browsing via the BrowZine application, and access to cover art for thousands of journals.  The tool integrates fully with the existing discovery interface and does not require the use of additional plug-ins.

According to their website, LibKey Discovery has the potential to save users thousands of clicks per day by providing one-click access to millions of articles.  The ability to streamline the discovery and retrieval of academic journal content prompted the AUX department to investigate the tool and its capabilities further.  An internal review of the system was preceded by an introduction of the tool to Duke’s subject librarians and followed by a preliminary round of student-based user testing.

Current DUL interface
Current DUL discovery interface
LibKey interface
LibKey discovery interface

Pros

  • One-Click Article and Full Journal Access

Both the AUX staff and the subject librarians who performed an initial review of the LibKey Discovery tools were impressed with the ease of article access and full journal browsing.  Three members of the AUX department independently reviewed LibKey’s features and concluded that the system provides substantial utility by reducing the number of clicks necessary to access articles and journals.

  • Streamlined Appearance

The tool streamlines the appearance and formatting of all journals, removing ambiguity in how to access information from different sources within the catalog.  This helps direct users to the features they want without having to hunt for points of access. The AUX department review team all found this helpful.

  • Seamless Integration

LibKey Discovery’s APIs integrate fully into the existing DUL discovery interface without the need for users to download an additional plug-in.  This gives users the benefit of the new system without asking them to go through extra steps or change their current search processes.  Aside from the new one-click options available within the catalog’s search results page, the LibKey interface is indistinguishable from the current DUL interface, helping users benefit from the added functionality without feeling like they need to learn a new system.

Cons

  • Cost

LibKey Discovery carries a relatively hefty price tag, so its utility to the end-user must be weighed against its cost.  While internal review and testing have indicated that LibKey Discovery can streamline and optimize the discovery process, it remains to be determined whether those benefits are universal enough to warrant the added annual expenditure.

  • Inconsistency in Options

A potential downside to LibKey Discovery is its lack of consistency in one-click options between articles.  While many articles offer easy, one-click access to a PDF, the full text online, and the full journal, these options are not available for all content.  This may confuse users about which options are available and may diminish the overall utility of the tool, depending on what percentage of the catalog’s content is excluded from the one-click features.

LibKey Discovery User Testing Findings

An initial round of user testing was completed with ten student volunteers in the lobby of Perkins Library in early April.  Half of the users were asked to access an article and browse a full journal in the existing DUL system; the other half were asked to perform the same tasks using the LibKey Discovery interface.

Initial testing indicated that student users had a high level of satisfaction with the LibKey interface; however, they were equally satisfied with the existing access points in the DUL catalog.  The final recommendations from the user testing report call for additional testing.  Specifically, it was recommended that more targeted testing be conducted with graduate students and faculty, as a majority of the original participants were undergraduates with limited experience searching for and accessing academic journal issues and articles.  Testing with a more experienced user group would likely produce better feedback on the true value of LibKey Discovery.

LibKey Summary

LibKey Discovery is a promising addition to Duke’s existing discovery system.  It allows for streamlined, one-click article and full journal access without disrupting the look and feel of the current interface or requiring the use of a plug-in.  Initial reviews of the system by library staff have been glowing; however, preliminary user testing with student participants indicated the need for additional testing to determine if LibKey’s cost is sufficiently offset by its utility to the user.

Kopernio logo

Kopernio is a free browser plug-in which enables one-click access to academic journal articles. It searches the web for OA copies, institutional repository copies, and copies available through library subscriptions. The tool is designed to connect users to articles on and off campus by managing their subscription credentials and automatically finding the best version of an article no matter where a user is searching.
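The “best version” behavior described above can be sketched as a simple preference order. The ordering and function below are our illustration of the general technique, not Kopernio’s actual implementation:

```python
def best_copy(candidates):
    """Pick the best available copy of an article from (source_type, url)
    pairs.  The preference order is illustrative: a library subscription
    copy first, then an institutional repository copy, then an OA copy."""
    preference = ["subscription", "repository", "open_access"]
    for preferred in preference:
        for source_type, url in candidates:
            if source_type == preferred:
                return url
    return None  # no copy found; the user falls back to normal discovery

# A subscription copy wins even when an OA copy is listed first:
found = best_copy([
    ("open_access", "https://example.org/oa.pdf"),
    ("subscription", "https://example.org/sub.pdf"),
])
```

A real resolver would also have to handle authentication for the subscription path, which is exactly where the credential-management concerns discussed below come in.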

Given the potential of this tool to help increase access and make discovery easier for students, the AUX department initiated an internal review process. Four members of the department independently downloaded the Kopernio plug-in, thoroughly tested it in a variety of situations, and shared their general and specific notes about the tool.

Pros

  • OA Content + Library Subscription

By its design, Kopernio has an advantage over other plug-in tools that serve a similar function (e.g., Unpaywall). When users first download Kopernio, they are asked to register their subscription credentials. This information is saved in the plug-in so users can automatically discover articles available from OA sources as well as library subscriptions. This is an advantage over plug-ins that only harvest from freely available sources.

Kopernio sign-in page
  • Branding

Kopernio has highly visible and consistent branding. With bright green coloring, the plug-in stands out on a screen and attracts users to click on it to download articles.

  • One-Click

Kopernio is advertised as a “one-click” service, and it pays off in this respect. Using Kopernio to access articles definitely cuts down on the number of clicks required to get to an article’s PDF. The process to download articles to a computer was instantaneous, and most of the time, downloading to the Kopernio storage cloud was just as fast.

Cons

  • Creates New Pain Points

Kopernio’s most advertised strength is its ability to manage subscription credentials. Unfortunately, this strength is also a major data privacy weakness. Security concerns ultimately led to the decision to disable the feature which allowed users to access DUL subscriptions via Kopernio when off-campus. Without this feature, Kopernio only pulls from OA sources and therefore performs the same function that many other tools currently do.

Beyond data privacy, Kopernio also raises copyright concerns. One of its features is a sharing function: you can email articles to anyone, regardless of university affiliation or whether they have downloaded Kopernio. We tested sending DUL subscription PDFs to users without Duke email addresses, and they were able to view the full text without logging in. It is unclear whether they were viewing an OA copy of the article or an article meant only for DUL-authenticated users.

Sharing an article through Kopernio

  • Slows Browser Speed

Running the Kopernio plug-in noticeably slowed down browser speed. We tested it on several different computers, both on and off campus, and all of us noticed slower browsing. The slowdown also made Kopernio occasionally buggy (freezing, error messages, etc.).

Buggy screen while using Kopernio
  • Many Features Don’t Seem Useful

When articles are saved to Kopernio’s cloud storage, users can add descriptive tags. We found this feature awkward to use: instead of adding tags as they go, users must create a tag globally before they can apply it to an article. Overall, it seemed like more hassle than it was worth.

Kopernio automatically imports article metadata to generate citations, but this feature had too many problems to be useful. It did not import metadata for all articles we tested, and there was no way to add metadata manually. Additionally, citations defaulted to the Elsevier Harvard style, and we had to change our settings to select a more common citation style.

Lastly, the cloud storage, which at first seemed like an asset, was actually a problem. All articles automatically download to cloud storage (the “Kopernio Locker”) as soon as you click the Kopernio button. This wouldn’t be a problem except for the locker’s limited size. With only 100MB of storage in the free version of Kopernio, we found that after downloading only two articles the locker was already 3% full. To make this limited storage work, we would have to go back to our locker and manually delete articles we did not need, effectively negating the steps saved by having an automatic process.
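The back-of-the-envelope arithmetic behind that observation: if two articles consume 3% of a 100MB locker, the average article is about 1.5MB, and the free locker would fill after roughly 66 articles (approximate, since the 3% figure was likely rounded in the interface):

```python
# Rough locker-capacity estimate from the figures above.
locker_mb = 100        # free-tier locker size
articles_sampled = 2   # articles downloaded in our test
fraction_used = 0.03   # locker shown as 3% full afterward

avg_article_mb = locker_mb * fraction_used / articles_sampled  # ~1.5 MB each
capacity = int(locker_mb / avg_article_mb)                     # ~66 articles
```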

Lean Library Logo

Lean Library is a tool similar to Kopernio. It offers users one-click access to subscription and open access content through a browser extension. In Fall 2018, DUL staff were days away from purchasing this tool when Lean Library was acquired by SAGE Publishing. DUL staff had been excited to license a tool that was independent and vendor-neutral, and so were disappointed to learn of its acquisition. We have found that industry consolidation in the publishing and library information systems environment has lowered competition and resulted in negative experiences for researchers and staff. Further, we take the privacy of our users very seriously and were concerned that Lean Library’s alignment with SAGE Publishing would compromise user security. Whenever possible, DUL aims to support products and services that are offered independently from those with already dominant market positions. For these reasons, we opted not to pursue Lean Library further.

Conclusion

Of the three tools the AUX Department explored, we believe LibKey Discovery to be the most user-friendly and effective option. If purchased, it should streamline journal browsing and article PDF downloads without adversely affecting the existing functionality of DUL’s discovery interfaces.

Smart People Who Care

It’s that time of year at the university when we’re working on our PEPs (Performance Evaluation and Planning forms) and I’m thinking about how grateful I am to have such smart staff who really care about their work, their colleagues, and the people they serve, as we advance technology across the libraries. In contrast to some corporate environments, the process here really does aim to help us improve, rather than rank us as a setup for “resource actions” (firings). This excellent article, The Feedback Fallacy by Marcus Buckingham and Ashley Goodall, reminds me to emphasize the things people do well, and encourage them to build on their strengths.

Attuned to ethical practices within organizations, I’m also excited about increasing awareness of ethics in the effects of the technologies we produce. Justin Sherman, co-founder of the Ethical Tech initiative here at Duke, did a stimulating talk at the Edge Workshop this month about ethical issues that surround technology, such as search engine bias, and AI tools that judges use to determine sentencing for crimes.  Justin recommends this podcast, with Christopher Lydon on Open Source, called Real Education About Artificial Intelligence.  Library staff are participating in the Kenan Institute for Ethics book club program (KIE), where the spring selection is Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Umoja Noble.

And, I’m pleased to exercise my hiring mantra, “smart people who care”, which has served me well for over 30 years, as we’re recruiting candidates with IT and team leadership experience for a new position, Computing Services Supervisor.

Happy Spring!
Laura Cappelletti
Director, Information Technology Services

The Commons Approach

Earlier this month, I was invited to give some remarks on “The Commons Approach” at the LYRASIS Leaders Forum, which was held at the Duke Gardens.  We have a great privilege and opportunity as part of the Duke University Libraries to participate in many different communities and projects, and it is one of the many reasons I love working at Duke.  The following is the talk I gave, which shares some personal and professional reflections of the Commons.


The Commons Approach is something I have been committed to for almost the entirety of my library career, which is approaching twenty years.  When I started working in libraries at Lehigh University, I came into the community with little comprehension of the inner workings of libraries.  I had no formal library training, and my technology education, training, and work experience had been developed through experimentation and learning by doing.  Little did I know at the time that I was benefiting from small models of the Commons, or even how to define one.

There are a number of definitions of the Commons, but the one I like best, which I found on Wikipedia, is “a general term for shared resources in which each stakeholder has an equal interest”.  There are many contexts:

  • natural resources – air, water, soil
  • cultural norms and shared values
  • public lands that no one owns
  • information resources that are created collectively and shared among communities, free for re-use

The librarians and library staff around me at Lehigh brought me into their Commons and gave me the time and space to learn the norms, shared values, terminology, language, jargon, and collective priorities of libraries, so that I could apply my skills and experiences and become an active contributor to the Commons of Libraries.  As I became more aligned with them in the Commons approach, my diverse work experience and skills added to our community.  I recognized that I belonged, that I was now a stakeholder with an equal interest, and that I could create and share among the broader library community.

If we start with the foundation that libraries are naturally driven towards a Commons approach within their own campus or organization, we can examine the variety of models of projects and communities that extend the Commons Approach.

  • Open Source Software Projects (Apache Model)
    • earned trust through contributions over time
    • primarily a complete volunteering of time and effort
    • peer accountability, limited risk of power struggles
    • tends to choose the most open license possible
  • Community Source Projects (Kuali OLE, Fedora)
    • “those who bring the gold set the rules”
    • The more commitment you make in resources, the more privilege and opportunity you receive
    • Dependent on defined governance around both rank of commitment and representation
    • Tends to choose open licensing that offers most protection and control
  • Membership initiatives (OLE [before and after Kuali], APTrust, DPN, SPN)
    • Typically tiered, proportional model focused primarily on financial contributions
    • Convergence around a strategic initiative, project, or outcome
    • Pooled financial resources to develop and sustain a solution
  • Consortial Partnerships (Informal, Formal, Mixed)
    • Location- or context-based partnership to collaborate
    • Defined governance structure
    • Informal or formal
  • National Initiatives (Code4Lib, DPLA, DLF)
    • Annual conferences or meetings
    • Distributed communications commons (listserv, Slack, website)
    • Coordinated around large ideas or contexts and sharing local ideas / projects to build grass roots change
    • Focused on democratizing opportunity for sharing and collaboration
  • Community Projects with Corporate Sponsors (for-profit, not-for-profit)
    • Hybrid or mixed models of community source or membership initiatives
    • Corporate services support to implementers
    • Challenges in governance of priorities between sponsor and community

NOTE: There are more models and nuances to these models.

Benefits and Challenges

Each of these models has benefits and challenges.  One of the issues I have become particularly interested in, and consistently advocate for, is creating an environment that promotes diversity of participants.  The Community Source model of privileging those who bring the gold, for example, tends to favor larger organizations with financial and human-resource flexibility, and it requires clear proportional investment tiers that recognize the varied sizes of organizations wanting to join the community.  But while contribution levels can be defined at varied tiers, costs are constant and usually fixed, especially staffing costs, which puts pressure on the community’s ability to sustain itself.  A smaller, startup community thus needs larger investments to incubate the project, no matter how equally it intends to share ownership across all of the stakeholders.  Thus:

  • How do we develop our communities to fully embrace a Commons Approach that gives each stakeholder equal opportunity while also embracing the differences of the organizations within the community?
  • How do we set up our communities to empower smaller organizations not only to join but to lead?
  • How do we set up our communities to encourage well-resourced organizations to contribute without automatically assuming leadership or control?

Open Source

My first opportunity to experience the Commons Approach outside of my own library was as a member of the VuFind project.  While Villanova University led and sponsored the project, Andrew Nagy, the founding developer, contributed his hard work to the whole community and invited others to co-develop.  As you gained the lead developers’ trust through your contributions, you earned more responsibility and opportunity to work on the core code.  If you chose to focus on specific contributions, you became the lead developer of that part of the code.  It was all voluntary and all contributing to a common purpose: to develop an open source faceted discovery platform.  As leaders transitioned to new jobs or new organizations, some stayed on the project, and others were replaced by community members who had earned their opportunity.

Community Source

While I was working at Lehigh and participating in the Open Library Environment as a membership model, it was easier for me to feel like I belonged because each member institution had a single seat on the governance group.  I was an equal member, and Lehigh had an equal stake.  We held in Common priorities for our community, for the project, and for the outcome.  We held in Common that each of us represented libraries from different contexts: private, public, large, small, US-based, and international.  We held in Common the priority to grow and attract other libraries of various sizes, contexts, and geographic locations.  It felt idealistic.

The ideal shattered when OLE joined the Kuali Foundation and the model changed from membership to Community Source.  The rules of that model were different, and thus the foundation of their Commons was also different.  While tiers of financial contribution were still in place, it was clear that the more resources your organization brought, the more influence your organization would have.  Vendors were also members of the community, and they put in a different level and category of resources.

Moreover, there were joint governance committees overseeing shared components that multiple projects were using at the same time.  Which project’s priorities would be addressed first depended on which project was paying more into that component.  I quickly realized that OLE, which was not paying as much as others, would not be getting its needs addressed.  To be fair, this structure worked for some of the Kuali projects, and worked well.  But it was not the Commons approach that the Open Library Environment had been committed to, and it was not the best model for that community to be successful.

Consortial Partnerships

Consortial partnerships are critical for local, regional, and national collaboration, and these partnerships are centered on a variety of common strategies, from buying or licensing collections to resource sharing to digital projects.  There are formal and informal consortia, some decades old and some very new.  As libraries continue to face constrained resources, banding together in the face of these common constraints will be more and more critical to providing our users the level of service we expect to provide – another common thread: excellence.

National Initiatives

National initiatives have been getting a lot of press this year, mostly for difficult reasons.  There are also a variety of contexts for national initiatives, but most of the ones we likely consider are joined by a shared commitment to a topic, theme, or challenge.  Code4Lib began in 2003 as a listserv of library programmers hoping to find community with others in the library, museum, and archives fields.  Code4Lib started meeting annually when it became clear it would be beneficial to share projects and ideas in person, hack and design things together, and find new ways to collaborate out in the open.

The Digital Library Federation is a community of practitioners who advance research, learning, social justice, and the public good through the creative design and wise application of digital library technologies.  The Digital Public Library of America was founded to maximize public access to the collections of historical and cultural organizations across the country.

The methods of these national initiatives are quite different from one another, but each focuses on a Commons Approach that recognizes their collective effort is greater than the sum of their individual results.

Community Projects with Corporate Sponsors

The newest model of the Commons Approach is the hybrid open-source or community-source project that includes corporate sponsorship, hosting, or services. There are examples of both for-profit and not-for-profit sponsorships, as well as a not-for-profit that at times tends to act like a for-profit, and the library community continues to have mixed reactions.  Some libraries embrace this interest from corporate partners, while others outright reject the notion as a type of Trojan horse.  Some are skeptical of specific corporations, while some have biases towards or against specific partners.  This new paradigm challenges our notion of openness, but it also offers an opportunity to explore different means to the Commons Approach.  The same elements apply – what are the norms, values, terminology, language, jargon, and collective priorities that we share together?  What benefits can each stakeholder bring to the community?  Is there a diversity of participation, leadership, and contribution that creates inclusion?  Are there new aspects to having corporate sponsors join that the library community cannot achieve on its own?

Economies of scale

One of these new aspects is developing new means of achieving economies of scale, which is a good step toward sustaining a Commons Approach.  Economies of scale allow greater opportunity for libraries of different sizes and financial resources to work together. Open and community source projects in particular need solid financial planning, but great ideas and leadership are not limited to libraries with larger budgets or staff sizes.  Continuing to increase the opportunity for diversity in the community will be a great outcome and will encourage broader adoption of the Commons Approach.

Yet project staffing and resources are usually fixed costs that are not kind to attempts to let libraries make variable contributions.  It requires a balance of large and small contributions from the community, and the entry of corporate sponsors has enabled some new financial and infrastructure security missing from many projects and initiatives.  Yet, as a community focused on common values, norms, and priorities, it is not disingenuous to perform due diligence to ensure all members of the community, library and sponsor alike, are committed to the Commons Approach and not in it for a free ride or an ulterior motive.

New Paradigm as a Disruptive Force

And it is accurate to call this new paradigm a disruptive force to open- and community-source projects.  It is up to each community to decide for itself what is best for its future: embrace the disruption and adapt for the positive gains; hold true to its origins and continue on its path; or be torn apart by change, ignoring or forgetting its Commons Approach foundation in the wake of the disruptive force.

Each of the models above has had some examples of corporate sponsorship, so we know success can be found regardless of the model.  And there are still many examples that remain strong in their original framework.  What I find encouraging, even in the difficult situations of the past year for many organizations, is that we are challenging our notions of how to develop these communities so that we can achieve greater sustainability, greater participation from a more diverse and representative community, and broader success of the Commons together – for our users, the most important connecting element of all.

My Family Story through the Duke Digital Collections Program

Hello! This is my first blog post as the new Digital Production Service Manager, and I’d like to take this opportunity to take you, the reader, through my journey of discovering the treasures that the Duke Digital Collections program offers. To personalize this task, I explored the materials related to my family’s journey to the United States. First, I should provide some context. After migrating from south China in the mid-1800s, my family fled Vietnam in the late 1970s, and we left with the bare necessities – mainly food, clothes, and essential documents. All I have now are a few family pictures from that era and vividly told stories from my parents to help me connect the dots of my family’s history.

When I started delving into Duke’s Digital Collections, it was heartening to find materials on China and Vietnam, and even anti-war materials from the U.S.  The following are some materials and collections that I’d like to highlight.

The Sidney D. Gamble Photographs collection offers over 5,000 photographs of China in the early 20th century, including images of everyday life and landscapes. The above image from the Gamble collection is of a junk, or houseboat, photographed in the early 1900s. When my family fled Vietnam, fifty people crammed into a similar vessel and sailed in the dead of night along the Gulf of Tonkin. My parents spoke of how they were guided by the moonlight and how fearful they were of the junk catching fire from cooking rice.

The African American Soldier’s Vietnam War photograph album collection offers these gorgeous images of Vietnam. This is the country that was home to my family for multiple generations, and up until the war, it was a good life. I am astounded and grateful that these postcards were collected by an American soldier in the middle of a war. Having grown up in Los Angeles, California, I have no sense of the world my parents inhabited, and these images help me appreciate their stories even more. On the other side of the planet, there were efforts to stop the war, and it was intriguing to see a variety of digital collections depicting these perspectives through art and documentary photography. The image below is of a poster from the Italian Cultural Posters collection depicting Uncle Sam and the Viet Cong.

In addition to capturing street scenes in London, the Ronald Reis Collection includes images of Vietnam during the war and the anti-war effort in the United States. The image below is of a demonstration in Bryant Park in New York City. I recognize that the conflict was fought on multiple fronts and am grateful for these demonstrations, as they ultimately led to the end of the war.

Lastly, the James Karales Photos collection depicts Vietnam during the war. The image below, titled “Soldiers leaving on helicopter”, reminds me of my uncle, who left with the American soldiers and started a new life in the United States. In 1980, thanks to the Family Reunification Act, the aid of the American Red Cross, and my uncle’s sponsorship, we started a new chapter in America.

Perhaps this is typical of the immigrant experience, but it still is important to put into words. Not every community has the resources and the privilege to be remembered, and where there are materials to help piece those stories together, they are absolutely valued and appreciated. Thank you, Duke University Libraries, for making these materials available.

Notes from the Duke University Libraries Digital Projects Team