On TRAC: Assessment Tools and Trustworthiness

Duke Digital Repository is, among other things, a digital preservation platform and the locus of much of our work in that area.  As such, we often ponder the big questions:

  1. What is the repository?
  2. What is digital preservation?
  3. How are we doing?


What is the repository?

Fortunately, Ginny gave us a good start on defining the repository in Revisiting: What is the Repository?  It’s software, hardware, and collaboration.  It’s processes, policies, attention, and intention.  While digital preservation is one focus of the repository, it extends beyond the repository and should far outlive it.

What is digital preservation?

There are scores of definitions, but this Medium Definition from ALCTS is representative:

Digital preservation combines policies, strategies and actions to ensure access to reformatted and born digital content regardless of the challenges of media failure and technological change. The goal of digital preservation is the accurate rendering of authenticated content over time.

This is the short answer to the question: Accurate rendering of authenticated digital content over time.  This is the motivation behind the work described in Preservation Architecture: Phase 2 – Moving Forward with Duke Digital Repository.

How are we doing?

There are two basic methodologies for assessing this work: reactive and proactive.  A reactive approach to digital preservation might be characterized by “Hey!  We haven’t lost anything yet!”, which is why we prefer the proactive approach.

Digital preservation can be a pretty deep rabbit hole, and it can be an expensive proposition to attempt to mitigate the long tail of risk.  Fortunately, the community of practice has developed tools to assist in the planning and execution of trustworthy repositories.  At Duke, we have several years’ experience working within the framework of the Center for Research Libraries’ Trustworthy Repositories Audit & Certification: Criteria and Checklist (TRAC) as the primary assessment tool by which we measure our efforts.  Much of the work to document our preservation environment and the supporting institutional commitment was focused on our DSpace repository, DukeSpace.  A great deal has changed in the past three years, including significant growth in our team and scope.  So, once again we’re working to measure ourselves against the standards of our profession and to use that process to inform our work.

There are 3 areas of focus in TRAC: Organizational Infrastructure, Digital Object Management, and Technologies, Technical Infrastructure, & Security.  These cover a very wide and deep field and include things like:

  • Securing Service Level Agreements with all service providers
  • Documenting the organizational commitments of both Duke University and Duke University Libraries and sustainability plans relating to the repository
  • Creating and implementing routine testing of backup, remote replication, and restoration of data and relevant infrastructure
  • Creating and approving documentation on a wide variety of subjects for internal and external audiences

 

Back to the question: How are we doing?

Well, we’re making progress!  Naturally, we’re starting by ensuring the basic needs are met first: successfully preserving the bits, maximizing transparency and external validation that we’re not losing the bits, and working toward a sustainable, scalable architecture.  We have a lot of work ahead of us, of course.  The boxes in the illustration are all the same size, but the work they represent is not.  For example, the Disaster Recovery Plan at HathiTrust is 61 pages of highly detailed thoughtfulness.  These efforts build on each other, though, so we’re confident that the work we’re doing on the supporting bodies of policy, procedure, and documentation will ease the way to a complete Disaster Recovery Plan.

The New and Improved SNCC Digital Gateway

It may only be six months old, but as of May 31, the SNCC Digital Gateway is sporting a new look. Since going live in December 2016, we’ve been doing assessment, talking to contemporary activists and movement veterans, and conducting user testing and student surveys. The feedback’s been overwhelmingly positive, but a few suggestions kept coming up: give people a better sense of who SNCC was right from the homepage, make it more active, and connect SNCC’s history to organizing today. As one of the young organizers put it, “What is it about SNCC’s legacy now that matters for people?” So we took those suggestions to heart and are proud to present a reworked, redesigned SNCC Digital Gateway. Keep reading for a breakdown of what’s new and why.

Today Section

The new Today section highlights important strategies and lessons from SNCC’s work and explores their usefulness to today’s struggles. Through short, engaging videos, contemporary activists talk about how SNCC’s work continues to be relevant to their organizing today. The nine framing questions and answers of today’s organizers speak to enduring themes at the heart of SNCC’s work: uniting with local people to build a grassroots movement for change that empowered Black communities and transformed the nation. Check out this example:

More Expansive Homepage

The new homepage is longer and gives visitors to the site more context and direction. It includes descriptions of who SNCC was and links users to The Story of SNCC, which tells an expansive but concise history of SNCC’s work. It features videos from the new Today section, and gives users a way to explore the site through themes like voting rights, the organizing tradition, and Black Power.

Themes


Want to know more about voting rights? Black Power? Or are you not as familiar with SNCC’s history and need an entry point? The theme buttons on the homepage give users a window into SNCC’s history through particular aspects of the organization’s work. Theme pages feature select profiles and events focused on a central component of SNCC’s organizing. From there, click through the documents or follow the links to dig deeper into the story.

Navigation Updates

To improve navigation for the site, we’ve changed the name of the History section to Timeline and the former Perspectives to Our Voices. We’ve also moved the About section to the footer to make space for the new Today section.

Have suggestions? Comments? We’re always interested in what you’re thinking. Add a comment or send us an e-mail to snccdigital@gmail.org.

The ABCs of Digitizing Section A

I’m not sure anyone who currently works in the library has any idea when the phrase “Section A” was first coined as a call number for small manuscript collections. Before the library’s renovation, before we barcoded all our books and boxes — back when the Rubenstein was still RBMSCL, and our reading room carpet was a very bright blue — there was a range of boxes holding single-folder manuscript collections, arranged alphabetically by collection creator. And this range was called Section A.

Box 175 of Section A

Presumably there used to be a Section B, Section C, and so on — and it could be that the old shelf ranges were tracked this way, I’m not sure — but the only one that has persisted through all our subsequent stacks moves and barcoding projects has been Section A. Today there are about 3900 small collections held in 175 boxes that make up the Section A call number. We continue to add new single-folder collections to this call number, although thanks to the miracle of barcodes in the catalog, we no longer have to shift files to keep things in perfect alphabetical order. The collections themselves have no relationship to one another except that they are all small. Each collection has a distinct provenance, and the range of topics and time periods is enormous — we have everything from the 17th to the 21st century filed in Section A boxes. Small manuscript collections can also contain a variety of formats: correspondence, writings, receipts, diaries or other volumes, accounts, some photographs, drawings, printed ephemera, and so on. The bang-for-your-buck ratio is pretty high in Section A: though small, the collections tend to be well-described, meaning that there are regular reproduction and reference requests. Section A is used so often that in 2016, Rubenstein Research Services staff approached Digital Collections to propose a mass digitization project, re-purposing the existing catalog description into digital collections within our repository. This will allow remote researchers to browse all the collections easily, and also reduce repetitive reproduction requests.

This project has been met with enthusiasm and trepidation from staff since last summer, when we began to develop a cross-departmental plan to appraise, enhance description, and digitize the 3900 small manuscript collections that are housed in Section A. It took us a bit of time, partially due to the migration and other pressing IT priorities, but this month we are celebrating a major milestone: we have finally launched our first 2 Section A collections, meant to serve as a proof of concept, as well as a chance for us to firmly define the project’s goals and scope. Check them out: Abolitionist Speech, approximately 1850, and the A. Brouseau and Co. Records, 1864-1866. (Appropriately, we started by digitizing the collections that began with the letter A.)

A. Brouseau & Co. Records carpet receipts, 1865

Why has it been so complicated? First, the sheer number of collections is daunting; while there are plenty of digital collections with huge item counts already in the repository, they tend to come from a single or a few archival collections. Each newly-digitized Section A collection will be a new collection in the repository, which has significant workflow repercussions for the Digital Collections team. There is no unifying thread for Section A collections, so we are not able to apply metadata in batch like we would normally do for outdoor advertising or women’s diaries. Rubenstein Research Services and Library Conservation Department staff have been going box by box through the collections (there are about 25 collections per box) to identify out-of-scope collections (typically reference material, not primary sources), preservation concerns, and copyright concerns. These are excluded from the digitization process. Technical Services staff are also reviewing and editing the Section A collections’ description. This project has led to our enhancing some of our oldest catalog records — updating titles, adding subject or name access, and upgrading the records to RDA, a relatively new standard. Using scripts and batch processes (details on GitHub), the refreshed MARC records are converted to EAD files for each collection, and the digitized folder is linked through ArchivesSpace, our collection management system. We crosswalk the catalog’s name and subject access data to both the finding aid and the repository’s metadata fields, allowing the collection to be discoverable through the Rubenstein finding aid portal, the Duke Libraries catalog, and the Duke Digital Repository.
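As a rough illustration of the MARC-to-EAD leg of that crosswalk (the project’s actual scripts are the ones on GitHub), the sketch below uses the pymarc library to pull a title and subject access points from a catalog record and write them into a skeletal EAD file. The file names, field choices, and minimal EAD structure here are assumptions made for the example, not the project’s real conversion code.

```python
# Illustrative sketch only -- the project's real conversion scripts are the ones on GitHub.
# Assumes a binary MARC export from the catalog; the field choices and skeletal EAD
# structure below are simplified stand-ins.
import xml.etree.ElementTree as ET
from pymarc import MARCReader  # pip install pymarc

def marc_to_minimal_ead(marc_path, ead_path):
    with open(marc_path, "rb") as fh:
        for record in MARCReader(fh):
            field_245 = record["245"]
            title = (field_245["a"] if field_245 else None) or "Untitled collection"
            ead = ET.Element("ead")
            archdesc = ET.SubElement(ead, "archdesc", level="collection")
            did = ET.SubElement(archdesc, "did")
            ET.SubElement(did, "unittitle").text = title
            # Carry name and subject access points over as controlaccess terms.
            controlaccess = ET.SubElement(archdesc, "controlaccess")
            for field in record.get_fields("600", "610", "650", "651"):
                ET.SubElement(controlaccess, "subject").text = field.format_field()
            ET.ElementTree(ead).write(ead_path, encoding="utf-8", xml_declaration=True)
            break  # one record per single-folder collection

marc_to_minimal_ead("section_a_record.mrc", "section_a_collection.xml")
```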

It has been really exciting to see the first two collections go live, and there are many more already digitized and just waiting in the wings for us to automate some of our linking and publishing processes. Another future development that we expect will speed up the project is a batch ingest feature for collections entering the repository. With over 3000 collections to ingest, we are eager to streamline our processes and make things as efficient as possible. Stay tuned here for more updates on the Section A project, and keep an eye on Digital Collections if you’d like to explore some of these newly-digitized collections.

The Research Data Team: Hitting the Ground Running

There has been a lot of blogging over the last year about the Duke Digital Repository’s development and implementation, about its growth as a platform and a program, and about the creation of new positions to support research data management and curation. My fellow digital content analyst also recently posted about how we four new hires have been creating and refining our research data curation workflow since beginning our positions at Duke this past January. It’s obviously been (and continues to be) a very busy time here for the repository team at Duke Libraries, including both seasoned and new staff alike.

Besides the research data workflows between our two departments, what other things have the data management consultants and the digital content analysts been doing? In short, we’ve been busy!

 

In addition to envisioning stakeholder needs (an exercise we do continuously), we’ve received and ingested several data collections this year, which has given us an opportunity to learn from experience. We have been tracking and documenting the types of data we’re receiving, the various needs that these types of data and their depositors have, how we approach those needs (including investigating and implementing any additional tools that may help us better address them), how our repository displays the data and associated metadata, and the time spent on our management and curation tasks. Some of this documentation takes the form of spreadsheets, some of draft policies that will be reviewed first by the library’s research data working group and then by a program committee, and some is simply brain dumps for things that require further, more structured investigation by developers, the metadata architect, subject librarians, and other stakeholders. These documents live in either our shared online folder or our shared Box account and, if a wider Duke Libraries and/or public audience is needed, are moved to our departments’ content collaboration platforms (currently Confluence/Jira and Basecamp). The collaborative environments of these platforms support the dynamic nature of our work, particularly as our program takes form.

We also value the importance of face-to-face discussions, so we hold weekly meetings to talk through all of this work (we prefer outside when the weather is nice, and because squirrels are awesome).

One of the most exciting, and at times challenging, aspects of where we are is that we are essentially starting from the ground up and therefore able to develop procedures and features (and re-develop, and on and on again) until we find fits that best accommodate our users and their data. We rely heavily on each other’s knowledge about the research data field, and we also engage in periodic environmental scans of other institutions that offer data management and curation services.

When we began in January, we all considered the first 6-9 months as a “pilot phase”, though this description may not be accurate. In the minds of the data management consultants and the digital content analysts, we’re here and ready. Will we run into situations that require an adjustment to our procedures? Absolutely. It’s the nature of our work. Do we want feedback from the Duke community about how our services are (or are not) meeting their needs? Without a doubt. And will the DDR team continue to identify and implement features to better meet end-user needs? Certainly. We fully expect to adjust and readjust our tools and services, with the overall goal of fulfilling future needs before they’re even evident to our users. So, as always, keep watching to see how we grow!

A Tough Nut to Crack: Developing Digital Repositories

Folks, developing digital repositories is hard.  There are so many different layers of complexity built into the stack, compounded by the unique variety of end-users, or stakeholders, that we serve.

Consider the breadth of this work:

Starting at the bottom of the stack, you have our Preservation layer.  This is where we capture your bits, and ensure the long-term preservation of your digital assets.  But it goes well beyond just logging a single record in a database.  It involves capturing the data stream, writing that file and all associated files (metadata) to storage, replicating the data to various geographically dispersed servers, validating the ingest, logging the validation, ensuring successful recovery of replicated assets, and more.
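One of those steps, validating the ingest, can be sketched concretely. The snippet below is a minimal, hypothetical example that recomputes checksums for stored files and compares them against values recorded at ingest; the paths, manifest format, and choice of SHA-256 are assumptions for illustration, not the repository’s actual implementation.

```python
# Minimal sketch of post-ingest fixity validation -- illustrative only.
# Assumes a manifest mapping relative file paths to SHA-256 checksums recorded at ingest.
import hashlib
import json
from pathlib import Path

def sha256(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def validate_ingest(storage_root, manifest_path):
    """Return the files whose stored checksum no longer matches the ingest manifest."""
    manifest = json.loads(Path(manifest_path).read_text())
    failures = []
    for rel_path, expected in manifest.items():
        if sha256(Path(storage_root) / rel_path) != expected:
            failures.append(rel_path)
    return failures

# Log failures so they can be reported and the replicated copy re-fetched.
bad = validate_ingest("/storage/ddr", "ingest_manifest.json")
print(f"{len(bad)} file(s) failed fixity validation")
```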

All of that comes post-ingest.  I’ll not even belabor the complexities of data modeling and ingest here, but you get the idea… it’s hairy stuff.  Receiving and massaging a highly diverse body of data into a package appropriate for homogeneous ingest is a monumental effort in normalization.

Move up the stack into our Curation layer.  Currently we have a single administrative application that facilitates management and curatorial activities for our digital objects following ingest.  Roles and access controls can be managed here, in addition to various types of metadata (descriptions of the item), and so on.  There are a variety of other applications managed at this layer, which interact with, and store, various values that fuel display and functionality within the user interface.  This layer is quickly evolving in a way that necessitates diversification.  We have found that a single monolithic application is not a one-size-fits-all solution for our stakeholders who are in the business of data production and curation; it is at this layer that we are getting increasingly more pressure to integrate and interoperate with a myriad of other tools and platforms for resource and data management.  This is tricky business, as each of these tools handles data in different ways.

Finally, we have the Discovery layer.  The user interface.  This is what the public sees and consumes.  It’s where access to ingested materials occurs.  It is itself an application requiring significant custom development to meet the needs of various programs and collections of materials.  It is tightly coupled with the Curation layer, and therefore highly complex and customized to meet the needs of different focal areas.  Search functionality is yet another piece of complexity that requires maintenance and customization of a central index.  Nothing is OOTB (out of the box).  Everything requires configuration and customization.
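As one small, hypothetical example of the kind of customization the search layer involves, the sketch below issues a faceted query against a Solr index using the pysolr client. The core URL, field names, and facets are invented; Solr itself is assumed here because it is a common choice for repository discovery layers, not because this post names it.

```python
# Hypothetical example of querying a repository's central search index.
# Assumes a Solr index; the URL, fields, and facets below are made up for illustration.
import pysolr  # pip install pysolr

solr = pysolr.Solr("http://localhost:8983/solr/repository", timeout=10)

results = solr.search(
    "civil rights",
    **{
        "fq": "visibility:public",                      # only publicly visible objects
        "facet": "true",
        "facet.field": ["collection_facet", "format_facet"],
        "rows": 10,
    },
)

for doc in results:
    print(doc.get("title"), doc.get("id"))
```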

And ALL of this, all of it, is inter-related.  Highly coupled and complex.  Few things reap easy wins, and often our work challenges foundational assumptions that came well before.  It’s an exercise in balancing technical debt and moving forward without re-inventing the wheel every six months.

What I have presented here is a simplistic view of our software eco-system.  It’s just a snapshot of the various puzzle pieces that support the operation of a production repository.  In general, digital repositories are still fairly new on the scene.  No one has them figured out entirely and everyone does them a little bit differently.  There’s a strength to that which manifests in diverse platforms and a breadth of development possibilities.  There’s a weakness to it because there is no cookie-cutter approach that defines an easy path forward.

So it’s an exercise in evolution.  In iteration.  In patience.  In requirements definition.  We’re not going to always get it right, and our efforts will largely take a bit of time and experimentation, but we’re constantly working to improve, to enhance, and to mature our repository platform to meet the growing and evolving needs of our University.

So, here’s to many years of hard work ahead!  And many successful collaborations with our Duke community to realize our repository’s future.  We’re ready if you are!

Multispectral Imaging: What’s it good for?

At the beginning of March, the multispectral imaging working group presented details about the imaging system and the group’s progress so far to other library staff at a First Wednesday event. Representatives from Conservation Services, Data and Visualization Services, the Digital Production Center, the Duke Collaboratory for Classics Computing (DC3), and the Rubenstein Library each shared their involvement and interest in the imaging technology. Our presentation attempted to answer some basic questions about how the equipment works and how it can be used to benefit the scholarly community. You can view a video of that presentation here.

Some of the images we have already shared illustrate a basic benefit or goal of spectral imaging for books and manuscripts: making obscured text visible. But what else can this technology tell us about the objects in our collections? As a library conservator, I am very interested in the ways that this technology can provide more information about the composition and condition of objects, as well as inform conservation treatment decisions and document their efficacy.

Conservators and conservation scientists have been using spectral imaging to help distinguish between and to characterize materials for some time. For example, pigments, adhesives, or coatings may appear very differently under ultraviolet or infrared radiation. Many labs have the equipment to image under a few wavelengths of light, but our new imaging system allows us to capture at a much broader range of wavelengths and compare them in an image stack.

Adhesive samples under visible and UV light.
(Photo credit Art Conservation Department, SUNY Buffalo State)
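To give a sense of how bands in an image stack can be compared computationally, here is a small, hypothetical sketch: it loads two registered captures of the same page (say, a visible band and an infrared band) and computes a normalized difference image, which can make ink that is faint in one band stand out. The file names, wavelengths, and processing steps are invented for illustration and are not a description of the working group’s actual pipeline.

```python
# Hypothetical sketch: comparing two registered bands from a multispectral image stack.
# File names and band choices are made up for illustration.
import numpy as np
from PIL import Image  # pip install pillow

visible = np.asarray(Image.open("leaf_visible_550nm.tif").convert("F"))
infrared = np.asarray(Image.open("leaf_infrared_940nm.tif").convert("F"))

# Normalized difference: pixels where the two bands respond very differently
# (for example, an ink that absorbs strongly in only one band) stand out.
eps = 1e-6
difference = (infrared - visible) / (infrared + visible + eps)

# Rescale to 0-255 and save a grayscale image for visual inspection.
scaled = ((difference - difference.min()) / (np.ptp(difference) + eps) * 255).astype(np.uint8)
Image.fromarray(scaled).save("difference_image.png")
```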

Spectral imaging can help to identify the materials used to make or repair an object by the way they react under different light sources. Correct identification of components is important in making the best conservation treatment decisions and might also be used to establish the relative age of a particular object or to verify its authenticity.  While spectral imaging offers the promise of providing a non-destructive tool for identification, it does have limitations, and other analytical techniques may be required.

Pigment and dye-based inks under visible and infrared light.
(Photo credit Image Permanence Institute)

Multispectral imaging offers new opportunities to evaluate and document the condition of objects within our collections. Previous repairs may be so well executed or intentionally obscured that the location or extent of the repair is not obvious under visible light. Areas of paper or parchment that have been replaced or have added reinforcements (such as linings) may appear different from the original when viewed under UV radiation. Spectral imaging can provide better visual documentation of the degradation of inks (see image below) or of damage from mold or water that is not apparent under visible light.

Iron gall ink degradation (Jantz MS#124, Rubenstein Library)

This imaging equipment provides opportunities for better measuring the effectiveness of the treatments that conservators perform in-house. For example, a common treatment that we perform in our lab is the removal of pressure sensitive tape repairs from paper documents using solvents. Spectral imaging before, during, and after treatment could document the effectiveness of the solvents or other employed techniques in removing the tape carrier and adhesive from the paper.

Tape removal before and during treatment under visible and UV light.
(Photo credit Art Conservation Department, SUNY Buffalo State)

Staff from the Conservation Services department have a long history of participating in the library’s digitization program in order to ensure the safety of fragile collection materials. Our department will continue to serve in this capacity for special collections objects undergoing multispectral imaging to answer specific research questions; however, we are also excited to employ this same technology to better care for the cultural heritage within our collections.

______

Want to learn even more about MSI at DUL?

 

Going with the Flow: building a research data curation workflow

Why research data? Data generated by scholars in the course of investigation are increasingly being recognized as outputs nearly equal in importance to the scholarly publications they support. Among other benefits, the open sharing of research data reinforces unfettered intellectual inquiry, fosters reproducibility and broader analysis, and permits the creation of new data sets when data from multiple sources are combined. Data sharing, though, starts with data curation.

In January of this year, Duke University Libraries brought on four new staff members–two Research Data Management Consultants and two Digital Content Analysts–to engage in this curatorial effort, and we have spent the last few months mapping out and refining a research data curation workflow to ensure best practices are applied to managing data before, during, and after ingest into the Duke Digital Repository.

What does this workflow entail? A high level overview of the process looks something like the following:

After collecting their data, the researcher will take whatever steps they can to prepare it for deposit. This generally means tasks like cleaning and de-identifying the data, arranging files in a structure expected by the system, and compiling documentation to ensure that the data is comprehensible to future researchers. The Research Data Management Consultants will be on hand to help guide these efforts and to provide researchers with feedback about data management best practices as they prepare their materials.

Our form for metadata capture

Depositors will then be asked to complete a metadata form and electronically sign a deposit agreement defining the terms of deposit. After we receive this information, someone from our team will invite the depositor to transfer their files to us, usually through Box.

Consultant tasks

At this stage, the Research Data Management Consultants will begin a preliminary review of the researcher’s data by performing a cursory examination for personally identifying or protected health information, inspecting the researcher’s documentation for comprehension and completeness, analyzing the submitted metadata for compliance with the research data application profile, and evaluating file formats for preservation suitability. If they have any concerns, they will contact the researcher to make suggestions about ways to better align the deposit with best practices.
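One piece of that review, evaluating file formats for preservation suitability, lends itself to a simple illustration. The sketch below flags files whose extensions fall outside a preferred-formats list; the list itself and the idea of running this as a standalone script are assumptions for the example, not our documented procedure.

```python
# Illustrative sketch of one preliminary-review step: flagging file formats
# that may be poor candidates for long-term preservation. The preferred-format
# list below is a hypothetical example, not an official policy.
from pathlib import Path

PREFERRED_EXTENSIONS = {".csv", ".txt", ".pdf", ".tif", ".tiff", ".wav", ".xml", ".json"}

def flag_risky_formats(deposit_dir):
    """Return files whose extensions are not on the preferred-formats list."""
    flagged = []
    for path in Path(deposit_dir).rglob("*"):
        if path.is_file() and path.suffix.lower() not in PREFERRED_EXTENSIONS:
            flagged.append(path)
    return flagged

for path in flag_risky_formats("incoming_deposit"):
    print(f"Review before ingest: {path}")
```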

Analyst tasks

When the deposit is in good shape, the Research Data Management Consultants will notify the Digital Content Analysts, who will finalize the file arrangement and migrate some file formats, generate and normalize any necessary or missing metadata, ingest the files into the repository, and assign the deposit a DOI. After the ingest is complete, the Digital Content Analysts will carry out some quality assurance on the data to verify that the deposit was appropriately and coherently structured and that metadata has been correctly assigned. When this is confirmed, they will publish the data in the repository and notify the depositor.
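As a hypothetical illustration of the “generate and normalize any necessary or missing metadata” step, the sketch below coerces depositor-supplied dates to ISO 8601 and flags required fields that are missing. The field names, required fields, and rules are invented for the example.

```python
# Hypothetical sketch of normalizing depositor-supplied metadata before ingest.
# Field names, required fields, and date handling are illustrative assumptions.
from datetime import datetime

REQUIRED_FIELDS = {"title", "creator", "date_issued", "rights"}
DATE_FORMATS = ("%Y-%m-%d", "%m/%d/%Y", "%B %d, %Y", "%Y")

def normalize_date(value):
    """Coerce a handful of common date spellings to ISO 8601 (YYYY-MM-DD)."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return value  # leave unrecognized values for a human to review

def normalize_metadata(record):
    normalized = dict(record)
    if "date_issued" in normalized:
        normalized["date_issued"] = normalize_date(normalized["date_issued"])
    for field in REQUIRED_FIELDS - normalized.keys():
        normalized[field] = "TO BE SUPPLIED"  # flag for the analyst rather than guess
    return normalized

print(normalize_metadata({"title": "Survey responses", "date_issued": "March 3, 2017"}))
```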

Of course, this workflow isn’t a finished piece–we hope to continue to clarify and optimize the process as we develop relationships with researchers at Duke and receive more data. The Research Data Management Consultants in particular are enthusiastic about the opportunity to engage with scholars earlier in the research life cycle in order to help them better incorporate data curation standards in the beginning phases of their projects. All of us are looking forward to growing into our new roles, while helping to preserve Duke’s research output for some time to come.

Rethinking Repositories at CNI Spring ’17

One of the main areas of emphasis for the CNI Spring 2017 meeting was “new strategies and approaches for institutional repositories (IR).” A few of us at UNC and Duke decided to plug into the zeitgeist by proposing a panel to reflect on some of the ways that we have been rethinking – or even just thinking about – our repositories.


Photographing the Movement

Ella Baker, 1964, Danny Lyon, Memories of the Southern Civil Rights Movement 12, dektol.wordpress.com

You never know what to expect with the SNCC Digital Gateway Project. This three-and-a-half year collaboration with veterans of the Civil Rights Movement, scholars, and archivists has brought constant surprises, one of which came this past January.

The SNCC Digital Gateway website went live on December 13, 2016, what would have been the 113th birthday of Ella Baker, one of the 20th century’s most influential activists, who brought the Student Nonviolent Coordinating Committee (SNCC) into being out of the student sit-in movement in 1960. The SNCC Digital Gateway tells the story of how young activists in SNCC united with local people in the Deep South to build a grassroots movement for change that empowered the Black community and transformed the nation.

Bob Dylan, Courtland Cox, Pete Seeger, and James Forman sitting outside the SNCC office in Greenwood, Mississippi, July 1963, Danny Lyon, dektol.wordpress.com

Over 175 SNCC staff, local activists, mentors, and allies are profiled on the site. The entire project is built on open communication and collaboration between movement veterans and scholars, so after the website went live, SNCC Legacy Project president Courtland Cox sent all living SNCC veterans a link to their profile and requested their feedback. And this is where the unexpected happened. Danny Lyon, SNCC’s first staff photographer, wrote back, offering the use of his photographs in the SNCC Digital Gateway.

Photograph of Danny Lyon with his Nikon F Reflex, Chicago, 1960 (Danny Lyon Photography, Magnum Photos; dektol.wordpress.com)

In many ways, Danny Lyon’s photos are the iconic photos of SNCC. In the summer of 1962, Lyon, then a student at the University of Chicago, hitchhiked to Cairo, Illinois, with his camera to photograph the desegregation movement that SNCC was helping to organize. After SNCC’s executive secretary James Forman brought Lyon onto staff, he spent the next two years documenting SNCC organizing work and demonstrations across the South. Many SNCC posters and pamphlets featured Lyon’s photographs, helping SNCC develop its public image and garner sympathy for the Movement. Julian Bond described Lyon’s photos as helping “to make the movement move.”

For Danny Lyon to offer these images to the SNCC Digital Gateway was something like winning the lottery. The ties between SNCC veterans run deep, and Lyon explained that he wanted to help. Due to the website’s sustainability standards, our sources for photographs have been limited. While the number of movement-related digital collections is growing, the vast majority are made up of documents, not images. Lyon agreed to give us digital copies of any of his movement photos for use in the site. These included rarely photographed people, like Prathia Hall, Worth Long, Euvester Simpson, and many others.

We spent two months working with Danny Lyon and the people at Magnum Photos to identify images, hammer out terms of use, and embed the photos in the site. Today, the SNCC Digital Gateway website proudly features over 70 of Danny Lyon’s photographs. Spend some time, and check them out. And thank you, Mr. Danny Lyon!

James Forman leads singing with staffers in the SNCC office on Raymond Street, 1963, Danny Lyon, Memories of the Southern Civil Rights Movement 123, dektol.wordpress.com

Nuts, Bolts, and Bits: Further Down the Preservation Path

It’s been a while since we last wrote about the preservation architecture underlying the repository in Preservation Architecture: Phase 2 – Moving Forward with Duke Digital Repository.

Iceberg. (Flickr user: pere)

We’ve made some terrific progress in the interim, but most of that progress, not unlike our chilly friends the icebergs, is invisible to our users.

Let’s take a brief tour to surface some of these changes!

 

Policy and Procedure Development

The recently formed Digital Preservation Advisory Group has been working on policy and procedure to bring the DDR into compliance with the ISO 16363 Audit and Certification of Trustworthy Digital Repositories Minimum Criteria. We’ve been working on diverse policy areas like defining how embargoes may be set; how often fixity must be checked and reported to stakeholders; in what situations content may be removed and who must be involved in that decision; and what conditions necessitate a ‘tombstone’ to explain the removal of an object.  Some of these policies are internal and some have already been made publicly available; for example, see our Deaccession Policy and our Preservation Policy.  We’ve made great progress thanks to the fantastic example set by our friends at the Purdue University Research Repository and others.

Preservation Infrastructure

Duke, DuraCloud, and Glacier

Durham, North Carolina, is a lovely city: close to the mountains and the beach, and full of fantastic restaurants!  Sometimes, though, your digital assets just need to get away from it all.  Digital preservation demands some geographic diversity; no repository wants all of its data to be subject to the same hurricane, of course!  That’s why we’ve partnered with DuraCloud, a preservation-focused cloud provider, to store copies of our digital assets in geographically diverse locations.  Our data now enjoys homes at Duke, at DuraCloud, and in Amazon Glacier!

To bring transparency to the process of replicating our assets remotely and validating both the local and remote copies, we’ve recently implemented a process that externalizes these tasks from Fedora and delivers scheduled reports to stakeholders detailing the health of their assets.
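A minimal sketch of what one of those scheduled reports might look like is below: it takes already-collected fixity-check results for local and remote copies and writes a dated summary for stakeholders. The data structure and report format are invented for illustration and are not the implementation described above.

```python
# Illustrative sketch of a scheduled fixity report for stakeholders.
# The check-result structure and report format here are assumptions, not the
# repository's actual implementation.
import csv
from datetime import date

def write_fixity_report(check_results, out_path):
    """check_results items look like:
    {"object_id": "...", "location": "local|DuraCloud|Glacier", "status": "ok|FAILED"}
    """
    failures = [r for r in check_results if r["status"] != "ok"]
    with open(out_path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["object_id", "location", "status"])
        writer.writeheader()
        writer.writerows(check_results)
    print(f"{date.today().isoformat()}: checked {len(check_results)} copies, "
          f"{len(failures)} failure(s); details in {out_path}")

write_fixity_report(
    [{"object_id": "ddr:example", "location": "DuraCloud", "status": "ok"}],
    "fixity_report.csv",
)
```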

 

Research and Development

The DDR has grown tremendously in the last year, and with it has grown the need to standardize and to scale to demand.  Writing Python to arrange files to conform to our Standard Ingest Format was a perfectly reasonable solution in early 2016.  Likewise, programmatic reformatting of endangered file formats wasn’t feasible with the resources available at the time.  Nor did we need to worry about traffic scaling back then.  Times have changed!
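In spirit, that early arrangement work looked something like the sketch below: walk a directory of files and copy them into the layout the ingest process expects. The layout shown (a data/ directory plus a checksum manifest) is a simplified stand-in, not the actual Standard Ingest Format specification.

```python
# Simplified stand-in for the kind of script used to arrange files for ingest.
# The target layout (data/ plus a checksum manifest) is an illustrative assumption,
# not the actual Standard Ingest Format.
import hashlib
import shutil
from pathlib import Path

def arrange_for_ingest(source_dir, package_dir):
    source, package = Path(source_dir), Path(package_dir)
    data_dir = package / "data"
    data_dir.mkdir(parents=True, exist_ok=True)
    manifest_lines = []
    for path in sorted(source.rglob("*")):
        if not path.is_file():
            continue
        destination = data_dir / path.relative_to(source)
        destination.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(path, destination)
        checksum = hashlib.sha256(destination.read_bytes()).hexdigest()
        manifest_lines.append(f"{checksum}  data/{path.relative_to(source)}")
    (package / "manifest-sha256.txt").write_text("\n".join(manifest_lines) + "\n")

arrange_for_ingest("researcher_files", "ingest_package")
```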

DDR staff are exploring tools that allow non-developers to easily ingest large amounts of material and methods to identify and migrate files to better-supported formats. We are also planning for more sustainable and durable architecture, such as increased inter-application messaging that lets us move processes once handled within the repository out to external servers.

Notes from the Duke University Libraries Digital Projects Team