All posts by Maggie Dickson

Casting a Critical Eye on the Hayti-Elizabeth Street Renewal Area Maps

In 2019, one of the digital collections we made available to the public was a small set of architectural maps and plans titled the ‘Hayti-Elizabeth Street Renewal Area’. The short description of the maps indicates they ‘depict existing and proposed structures and modifications to the Hayti neighborhood in Durham, NC.’ Sounds pretty benign, right? Perhaps even kind of hopeful, given the word ‘renewal’?

Hayti-Elizabeth Street Renewal Area, Existing Land Use Map

Nope. This anodyne description does not tell the story of the harm caused by the Durham Urban Renewal project of the 1960s and 1970s. The Durham Redevelopment Commission intended to eliminate ‘urban blight’ via this project, which ultimately resulted in the destruction of more than 4,000 households and 500 businesses in predominantly African American areas of the city. The Hayti District, once a flourishing and self-sufficient neighborhood filled with Black-owned businesses, was largely demolished, divided, and effectively severed from what is now downtown Durham by the construction of NC Highway 147. 

Bull City 150, a “public history, geography and community engagement project” based here at Duke University, hosts a suite of excellent multi-media public history exhibitions about housing inequality in Durham on its website. One of these is Dismantling Hayti, which focuses in particular on the effects of urban renewal on the neighborhood and the city.

Dismantling Hayti, Bull City 150

But this story of so-called urban renewal is not just about Durham – it’s about the United States as a whole. From the 1950s to the 1980s, municipalities across the country demolished roughly 7.5 million dwelling units, with a vastly disproportionate impact on Black and low-income neighborhoods, in the name of revitalization. Bulldozing for highway corridors was frequently part of urban renewal projects – in San Francisco, Memphis, Boston, Atlanta, Syracuse, Baltimore; the list goes on and on. And it includes Saint Paul, Minnesota, the city where, mourning and protesting the killing of yet another Black person at the hands of a white police officer, thousands of people occupied Interstate 94 in recent weeks, marching from the state capitol to Minneapolis, over a highway built through what was once the African American neighborhood of Rondo.

Urban renewal projects led to what social psychiatrist Dr. Mindy Fullilove refers to as root shock – “a traumatic stress reaction related to the destruction of one’s emotional ecosystem”. This is but one thread in the fabric of white supremacy out of which our country was woven, alongside other twentieth-century practices such as redlining, discriminatory mortgage lending, denial of access to unemployment benefits, and rampant Jim Crow laws, all of which are still causing harm today. This is why it is important to interrogate the historical context of resources like the Hayti-Elizabeth Street Renewal Area maps – we should all accept the invitation extended on the Bull City 150 website to Durhamites to “reckon with the racial and economic injustices of the past 150 years and commit to building a more equitable future”.

Sustainability Planning for a Better Tomorrow

In March of last year I wrote about the efforts of the Resource Discovery Systems and Strategies team (RDSS, previously called the Discovery Strategy Team) to map Duke University Libraries’ discovery system environment in a visual way. As part of this project we created supporting documentation for each system that appeared in a visualization, including identifying functional and technical owners and gathering links to related documentation. Gathering this information wasn’t as straightforward as it ideally should have been, however. When attempting to identify ownership, for example, we were often asked questions like, “what IS a functional owner, anyway?”, or told “I guess I’m the owner… I don’t know who else it would be”. And for many systems, local documentation was outdated, distributed across platforms, or simply nonexistent.

As a quick glance through the Networked Discovery Systems document will show, we work with a LOT of different systems here at DUL, supporting a great breadth of processes and workflows. And we’ve been steadily adding to the list of systems we support every year, without necessarily articulating how we will manage the ever-growing list. This has led to benign neglect, confusion about roles and responsibilities, and, in a few cases, systems we’ve hung onto for too long because we hadn’t defined a plan for responsible decommissioning.

So, to promote the healthier management of our Networked Discovery Systems, the RDSS team developed a set of best practices for sustainability planning. Originally we framed this document as best practices for maintenance planning, but in conversations with other groups in the Libraries, we realized that this didn’t quite capture our intention. While maintenance planning is often considered from a technical standpoint, we wanted to convey that the responsible management of our systems involves stakeholders beyond just those in ITS, to include the perspective and engagement of non-technical staff. So, we landed on the term sustainability, which we hope captures the full lifecycle of a system in our suite of tools, from implementation, through maintenance, to sunsetting, when necessary.

The best practices are fairly short, intended to be a high-level guide rather than an overly prescriptive one, recognizing that every system has unique needs. Each section of the framework is described and key terms are defined. Functional and technical ownership are explained, including the types of activities that may attend each role, and we acknowledge that ownership responsibilities may be jointly accomplished by groups or teams of stakeholders. We lay out the following suggested framework for developing a sustainability plan, which we define as “a living document that addresses the major components of a system’s life cycle”; a rough sketch of how such a plan might be captured follows the list:

  • Governance:
    • Ownership
    • Stakeholders
    • Users
  • Maintenance:
    • System Updates
    • Training
    • Documentation
  • Review:
    • Assessments
    • Enhancements
    • Sunsetting
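
To make the framework a bit more concrete, here is a minimal sketch of how a sustainability plan might be captured as structured data. The system, owners, and schedules below are invented for illustration; they are not drawn from our actual DukeSpace plan, and a real plan would live in whatever format its owners find most workable.

```python
# A hypothetical, minimal representation of a sustainability plan.
# Field names mirror the framework above; the values are invented
# for illustration and are not our actual DukeSpace plan.
sustainability_plan = {
    "system": "Example Discovery System",  # hypothetical system name
    "governance": {
        "ownership": {
            "functional_owner": "Metadata & Discovery Strategy Dept.",
            "technical_owner": "ITS Development Team",
        },
        "stakeholders": ["Technical Services", "Public Services"],
        "users": ["Library staff", "Researchers"],
    },
    "maintenance": {
        "system_updates": "Apply vendor releases quarterly",
        "training": "Onboarding session for new functional staff",
        "documentation": "https://example.org/wiki/discovery-system",  # placeholder URL
    },
    "review": {
        "assessments": "Annual review of usage statistics and continued fit",
        "enhancements": "Feature requests triaged twice a year",
        "sunsetting": "Criteria and decommissioning steps to be drafted",
    },
}
```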

Interestingly, and perhaps tellingly, many of the conversations we had about the framework ended up focusing on the last part – sunsetting. How to responsibly decommission or sunset a system in a methodical, process-oriented way is something we haven’t really tackled yet, but we’re not alone in this, and the topic is one that is garnering some attention in project management circles.

So far, the best practices have been used to create a sustainability plan for one of our systems, DukeSpace, and the feedback was positive. We hope that these guidelines will facilitate the work we do to sustain our systems, and in so doing lead to better communication and understanding throughout the organization. And we didn’t forget to create a sustainability plan for the best practices themselves – the RDSS team has committed to reviewing and updating the document at least annually!

Mapping Duke University Libraries’ Discovery System Environment

Just over one year ago, Duke University Libraries’ Web Experience team charged a new subgroup – the Discovery Strategy Team – with “providing cohesion for the Libraries’ discovery environment and facilitate discussion and activity across the units responsible for the various systems and policies that support discovery for DUL users.” Jacquie Samples, head of the Metadata and Discovery Strategy Department in our Technical Services Unit, and I teamed up to co-chair the group, and we were excited to take on this critical work along with 8 of our colleagues from across the libraries.

Our first task was one that had long been recognized as a need by many people throughout the library – to create an up-to-date visualization of the systems that underpin DUL’s discovery environment, including the data sources, data flows, connections, and technical/functional ownership for each of these systems. Our goal was not to depict an ideal discovery landscape, but rather to show things as they are now (the ideal could come later).

Before we could create a visualization of these systems and how they interacted, however, we realized we needed to identify what they were! This part of the process involved creating a giant laundry list of all of the systems in the form of a Google spreadsheet, so we could work on it collaboratively and iteratively. This spreadsheet became the foundation of the document we eventually produced, containing contextual information about each system, including the following (a hypothetical example entry follows the list):

  • Name(s) of the system
  • Description/Notes
  • Host
  • Path
  • Links to documentation
  • Technical & functional owners

Once we had our list of systems to work from, we began the process of visualizing how they work here at DUL. Each meeting of the team involved doing a lot of drawing on the whiteboard as we hashed out how a given system works – how staff & other systems interact with it, whether processes are automated or not, and the frequency of those processes, among other attributes. At the end of these meetings we would be left with a messy whiteboard drawing.

We were very lucky to have the talented (and patient!) developer and designer Michael Daul on the team for this project, and his role was to take our whiteboard drawings and turn them into beautiful, legible visualizations using Lucidchart.

Once we had created visualizations that represented all of the systems in our spreadsheet, and shared them with stakeholders for feedback, we (ahem, Michael) compiled them into an interactive PDF using Adobe InDesign. We originally had high hopes of creating a super cool interactive and zoomable website where you could move in and out to create dynamic views of the visualizations, but ultimately realized this wouldn’t be easily updatable or sustainable. So, PDF it is, which may not be the fanciest of vehicles but is certainly easily consumed.

We’ve titled our document “Networked Discovery Systems at DUL”, and it contains two main sections: the visualizations that graphically depict the systems, and documentation derived from the spreadsheet we created to provide more information and context for each system. Users can click from a high-level view of the discovery system universe to documentation pages, to granular views of particular ‘constellations’ of systems. Anyone interested in checking it out can download it from this link.

We’ve identified a number of potential use cases for this documentation, and hope that others will surface:

  • New staff orientation
  • Systems transparency
  • Improved communication
  • Planning
  • Troubleshooting

We’re going to keep iterating and updating the PDF as our discovery environment shifts and changes, and hope that having this documentation will help us to identify areas for improvement and get us closer to achieving that ideal discovery environment.

Metadata Year-in-Review

As 2017 comes to a close and we gear up for the new year, I’ve spent some time reflecting on the past twelve months. Because we set ambitious goals and are usually looking forward, not back, we can often feel defeated by all the work we haven’t gotten done. So I was pleasantly surprised when my perusal of the past year’s metadata work surfaced a good number of impressive accomplishments. Here are a few of the highlights:

Development and Dissemination of the DDR MAP

Perhaps our biggest achievement of the year was the development of the DDR’s very first formally documented metadata application profile, the DDR MAP. A metadata application profile defines the metadata elements and properties your system uses, documenting predicates, obligations & requirements, and input guidelines. Having a documented and shared metadata application profile promotes healthy metadata practices and facilitates communication.
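
For readers unfamiliar with application profiles, here is a rough sketch of what a single element definition might look like. It is illustrative only – the predicate, obligation, and input guidelines shown are not excerpted from the actual DDR MAP.

```python
# A hypothetical metadata application profile entry for one element.
# The predicate is a real Dublin Core term; the obligation and guidelines
# are invented for illustration and not copied from the DDR MAP.
map_element = {
    "element": "creator",
    "predicate": "http://purl.org/dc/terms/creator",
    "obligation": "recommended",  # e.g., required / recommended / optional
    "repeatable": True,
    "input_guidelines": "Enter names in 'Last, First' order; one name per entry.",
}
```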

In addition to the generalized DDR MAP, we also developed a Research Data Metadata Profile and a Digital Collections Metadata Profile for those specific collecting areas.

Rights Management Metadata

It’s been written about on this blog a couple of times already (here and here) but I think it bears repeating: this year we rolled out a new rights management metadata strategy that applies either a Creative Commons or RightsStatements.org URI to all DDR resources, with the option of an additional free-text rights note to provide context.
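
As a sketch of the shape of that strategy (and not the DDR’s actual data model), the rights portion of a resource’s metadata might look something like this, pairing a RightsStatements.org URI with an optional free-text note:

```python
# Hypothetical rights metadata for a single DDR resource.
# The URI is a real RightsStatements.org statement ("In Copyright");
# the field names and the note are illustrative, not the DDR's schema.
rights_metadata = {
    "rights": "http://rightsstatements.org/vocab/InC/1.0/",
    "rights_note": "Copyright held by the donor; contact the library with reuse questions.",
}
```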

We feel great about finally being able to communicate the rights statuses of DDR resources in a clear and consistent way to our end-users (and ourselves!).

Programmatic Linking to Catalog Records

Sometimes a resource in the DDR also has a record in the library catalog, and sometimes that record contains description that either is not easily accommodated by the DDR MAP or is not desirable to include in the repository metadata record for a particular reason. It wasn’t in the cards to develop a synchronization or feed of the MARC metadata, but we were able to implement a solution wherein we store the identifier for the catalog record on the resource in the repository, and then use that identifier to construct and display a link back to the catalog record.
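
Here is a minimal sketch of that pattern, assuming a hypothetical catalog base URL and identifier field name; the real catalog’s URL structure and our actual field names may well differ.

```python
# Minimal sketch of the pattern described above: store a catalog record
# identifier on the repository resource, then build a display link from it.
# The base URL and the identifier field name are hypothetical.
CATALOG_BASE_URL = "https://find.library.example.edu/catalog/"

def catalog_link(resource_metadata):
    """Return a link to the catalog record, or None if no identifier is stored."""
    catalog_id = resource_metadata.get("catalog_record_id")  # hypothetical field name
    if not catalog_id:
        return None
    return CATALOG_BASE_URL + catalog_id

# Example: a repository resource that carries a catalog record identifier.
print(catalog_link({"catalog_record_id": "DUKE002345678"}))
# -> https://find.library.example.edu/catalog/DUKE002345678
```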

And Lots of Other Cool Stuff

There were a lot of other cool metadata developments this year, including building out our ability to represent relationships between related items in the DDR, developing policies regarding the storage and display of identifiers, and implementing a fancy new structural metadata solution for representing the hierarchical structure of born-digital archives. We also got to work on some amazing new and revamped digital collections!

Looking Ahead

Of course, we are setting ambitious goals for the coming year as well – plans to upgrade our current DSpace repository, DukeSpace, and implement the new RT2 connector to Elements will involve substantial metadata work, and the current project to build a Hyrax-based repository for research data presents an opportunity for us to revisit and improve our Research Data Metadata Profile. And ideally we will be able to make some real headway tackling the problem of identity management – leveraging unique identifiers for people (ORCIDs, for example) rather than relying on name strings, an approach that is inherently error-prone.
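
As a simple illustration of why identifiers matter (not a depiction of our actual repository schema), compare a contributor recorded as a bare name string with one recorded alongside a unique identifier:

```python
# Illustrative contrast, not the repository's actual schema: identifying a
# contributor by a bare name string vs. by a unique, resolvable identifier.
contributor_by_string = {"name": "Carberry, J."}  # ambiguous: which J. Carberry?

contributor_by_id = {
    "name": "Carberry, Josiah",
    # ORCID's fictitious example researcher, used here purely for illustration.
    "orcid": "https://orcid.org/0000-0002-1825-0097",
}
```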

And there is a whole slew of interesting metadata work for the Digital Collections program slated for 2018, including adding enhanced homiletic metadata to the Duke Chapel Recordings digital collection.

The Inaugural TRLN Institute – an Experiment in Consortial Collaboration

In June of this year I was fortunate to have participated in the inaugural TRLN Institute. Modeled as a sort of Scholarly Communication Institute for TRLN (Triangle Research Libraries Network, a consortium located in the Triangle region of North Carolina), the Institute provided space (the magnificent Hunt Library on North Carolina State University’s campus), time (three full days), and food (Breakfast! Lunch! Coffee!) for groups of 4-6 people from member libraries to get together to exclusively focus on developing innovative solutions to shared problems. Not only was it productive, it was truly delightful to spend time with colleagues from member institutions who, although we are geographically close, don’t get together often enough.

Six projects were chosen from a pool of applicants who proposed topics around this year’s theme of Scholarly Communication:

  • Supporting Scholarly Communications in Libraries through Project Management Best Practices
  • Locating Research Data in an Age of Open Access
  • Clarifying Rights and Maximizing Reuse with RightsStatements.org
  • Building a Research Data Community of Practice in NC
  • Building the 21st Century Researcher Brand
  • Scholarship in the Sandbox: Showcasing Student Works

You can read descriptions of the projects as well as group membership here.

The 2017 TRLN Institute participants and organizers, a happy bunch.

Having this much dedicated and unencumbered time to thoughtfully and intentionally address a problem area with colleagues was invaluable. And the open schedule allowed groups to be flexible as their ideas and expectations changed throughout the course of the three-day program. My own group – Clarifying Rights and Maximizing Reuse with RightsStatements.org – was originally focused on developing practices for the application and representation of RightsStatements.org statements for TRLN libraries’ online digitized collections. Through talking as a group, however, we realized early on that some of the stickiest issues regarding the implementation of a new rights management strategy involve the organizational work an institution has to do: identifying appropriate staff, allocating resources, planning, and documenting the process.

So, we pivoted! Instead of developing a decision matrix for applying the RS.org statements in digital collections (which is what we originally thought our output would be), we spent our time drafting a report – a roadmap of sorts – that describes the following important components to address when implementing RightsStatements.org:

  • roles and responsibilities (including questions that a person in a role would need to ask)
  • necessary planning and documentation
  • technical decisions
  • example implementations (including steps taken and staff involved – perhaps the most useful section of the report)

This week, we put the finishing touches on our report: TRLN Rights Statements Report – A Roadmap for Implementing RightsStatements.org Statements (yep, yet another Google Doc). We’re excited to get feedback from the community, as well as hear about how other institutions are handling rights management metadata, especially as it relates to upstream archival information management. This is an area ripe for future exploration!

I’d say that the first TRLN Institute was a success. I can’t imagine my group having self-organized and produced a document in just over a month without having first had three days to work together in the same space, unencumbered by other responsibilities. I think other groups found valuable traction via the Institute as well, which will result in more collaborative efforts. I look forward to seeing what future TRLN Institutes produce – this is definitely a model to continue!