Header Image: Collection of extinct and extant turtle skull microCT scans in MorphoSource: bit.ly/3DFossilTurtles
MorphoSource (www.morphosource.org) is a publicly accessible repository for 3D research data, especially data that represents biological specimens. Developers in Evolutionary Anthropology and the Library’s Software Services department have been working to rebuild the application, improving upon the current site’s technology and features. An important part of this rebuild is implementing a more robust data model that will let our users efficiently discover, curate, disseminate, and preserve their data.
A typical deposit in MorphoSource is a file or files that represent a scan of all or part of an organism – such as a bone, tooth, or entire animal. The files may be a mesh or series of images produced through a CT scan. In order to collect all the information necessary to understand the files, the specimen that the files represent, and the processes that created the data, the improved site will guide the researcher in providing additional context for their deposit at the same time that they upload their files. The following describes what kind of metadata the depositor can expect to provide as part of the submission process.
The first step is to determine whether the researcher’s current deposit is derived in some way from data that is already in MorphoSource, or if the depositor would like to also submit those files and metadata. For example, they may be depositing a mesh file that was created from original photographs that are already available through the site. By including links to the raw data in the repository, users can reprocess the files if needed, or run different processes in the future.
Next, the researcher is asked to identify or describe the biological specimen that was imaged to create their data, either by entering the information themselves or importing it from another site like iDigBio. Metadata entered at this stage includes the information about the institution that owns the specimen, a taxonomy for the specimen, and additional identifying information such as the institution’s collection or catalog number. When the depositor fills in these fields, other users will be able to search for and compare data sets for the same specimen or species.
Moving on from the description of the organism, the depositor then provides information about the device that was used to image the specimen, either by selecting a device that is already in the repository’s database, or by creating a new record, including the manufacturer, model, and modality (MRI, photography, laser scan, etc.) of the device.
Once they have described the specimen and device used for imaging, the depositor then enters metadata about the imaging event itself, such as the technician who did the imaging, the date, and the software used.
With the imaging of the specimen described, the depositor then enters data about any processing that was done to create the files being deposited, including who was responsible, what software was used, and what the process was – for example, creating a mesh or point cloud from photographs. This metadata is important in case there is a need to reprocess the data in the future.
Finally, the researcher completes their deposit by uploading the files themselves. While some technical metadata is extracted automatically, MorphoSource will rely on data depositors to provide other information that is helpful for display, such as the orientation of the scan, or to identify the files, like an external ID number. This technical metadata is important for long-term preservation of the data sets.
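Taken together, the steps above form a chain of linked records: the uploaded files point back to a processing event, an imaging event, a device, and a specimen. Here is a rough sketch of that chain as data classes; all class and field names are illustrative, not MorphoSource's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative sketch of the deposit metadata chain described above.
# Class and field names are hypothetical, not MorphoSource's real schema.

@dataclass
class Specimen:
    institution: str        # institution that owns the specimen
    taxonomy: str           # e.g. "Testudines"
    catalog_number: str     # institution's collection/catalog number

@dataclass
class ImagingDevice:
    manufacturer: str
    model: str
    modality: str           # "CT", "MRI", "photography", "laser scan", ...

@dataclass
class ImagingEvent:
    device: ImagingDevice
    technician: str
    date: str
    software: str

@dataclass
class ProcessingEvent:
    responsible: str
    software: str
    description: str        # e.g. "mesh generated from CT image series"

@dataclass
class Deposit:
    specimen: Specimen
    imaging_event: ImagingEvent
    processing_events: list = field(default_factory=list)
    derived_from: Optional["Deposit"] = None  # link to raw data already in the repository
    files: list = field(default_factory=list)

deposit = Deposit(
    specimen=Specimen("Duke", "Testudines", "DU-1234"),
    imaging_event=ImagingEvent(
        ImagingDevice("Nikon", "XTH 225 ST", "CT"), "J. Doe", "2019-11-01", "Recon v2"
    ),
)
deposit.processing_events.append(
    ProcessingEvent("J. Doe", "MeshLab", "mesh generated from CT image series")
)
```

Because each deposit can link to the deposit it was derived from, raw data and its derivatives stay connected in the repository.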
While the submission process asks the researcher to enter quite a bit of metadata, the payoff is that users viewing the data on MorphoSource understand what it represents, how it was created, and how it relates to other data in the repository. It becomes easy to discover other media files representing the same specimen or species, or to explore other items from an institution's or researcher's collections.
Happy New Year from all of us at the Digital Production Center! In this pictorial posting, I figured we should start the New Year right with some images and collections that are inspiring, funny, and just stir my heart. It begins with “The Future Calls!”
I went down the “future” rabbit hole and stumbled upon Martin Luther King’s “The Look to the Future”:
And came upon this lovely image:
YES! THE FUTURE IS MY OWN MAKING!! And with that I came up with some resolutions!
Efficiency is important!
Maybe 5 minutes is a bit ambitious, but this will be good for my schedule and good for the environment. It’s good to have goals.
Exercise More! I definitely felt more inspired to hit the gym after seeing these images from the Anatomical Fugitive Sheets.
Learn about fashion, art, and architecture with Barbaralee Diamonstein-Spielvogel!
Self-care! This one-page advertisement from the Broadsides and Ephemera Collection of a Hot Springs spa sure is enticing!
This picturesque image from Reginald Sellman Negatives collection (which is predominantly of a family taking hikes, camping, and roadtripping!) made me quite envious. Why yes, I’d love to take a hike in a corseted dress!
And speaking of family activities, the Memory Project and Behind the Veil collections reminded me that I really need to talk to my parents and other family members more to gather and document their stories.
Why not pick up a foreign language?
Support a cause!
Spend more time with my kids! They grow up so quickly.
Lastly, and probably most importantly, VOTE!
So…what are your resolutions? And don’t tell me 300 ppi!
The video digitization system in Duke Libraries' Digital Production Center utilizes many different pieces of equipment: power distributors, waveform and vectorscope monitors, analog & digital routers, audio splitters & decibel meters, proc-amps, analog (BNC, XLR and RCA) to digital (SDI) converters, CRT & LCD video monitors, and of course an array of analog video playback decks of varying flavors (U-matic-NTSC, U-matic-PAL, Betacam SP, DigiBeta, VHS-NTSC and VHS-PAL/SECAM). We also transfer content directly from born-digital DV and MiniDV tapes.
One additional component that is crucial to videotape digitization is the Time Base Corrector (TBC). Each of our analog video playback decks must have either an internal or external TBC, in order to generate an image of acceptable quality. At the recent Association of Moving Image Archivists conference in Baltimore, George Blood (of George Blood Audio/Video/Film/Data) gave a great presentation on exactly what a Time Base Corrector is, appropriately entitled "WTF is a TBC?" Thanks to George for letting me relay some of his presentation points here.
A time base is a consistent reference point that one can utilize to stay in sync. For example, the Earth rotating on its axis is a time base that the entire human race relies on to stay on schedule. A grandfather clock is also a time base. And so is a metronome, which a musical ensemble might use to all stay "in time."
Frequency is defined as the number of occurrences of a repeating event per unit of time. So, the frequency of the Earth's rotation on its axis is once per 24 hours. The frequency of a grandfather clock is one pendulum swing per second. The clock example can also be defined as one "cycle per second," or one hertz (Hz), named after Heinrich Hertz, who first conclusively proved the existence of electromagnetic waves in the late 1800s.
But anything mechanical, like grandfather clocks and videotape decks, can be inconsistent. The age and condition of gears and rods and springs, as well as temperature and humidity, can significantly affect a grandfather clock’s ability to display the time correctly.
Videotape decks are similar, full of numerous mechanical and electrical parts that produce infinite variables in performance, affecting the deck’s ability to play the videotape’s frames-per-second (frequency) in correct time.
NTSC video is supposed to play at 29.97 frames-per-second, but due to mechanical and electro-magnetic variables, some frames may be delayed, or some may come too fast. One second of video might not have enough frames, another second may have too many. Even the videotape itself can stretch, expand and contract during playback, throwing off the timing, and making the image wobbly, jittery, too bright or dark, too blue, red or green.
A Time Base Corrector does something awesome. As the videotape plays, the TBC stores the unstable video content briefly, fixes the timing errors, and then outputs the corrected analog video signal to the DPC’s analog-to-digital converters. Some of our videotape decks have internal TBCs, which look like a computer circuit board (shown below). Others need an external TBC, which is a smaller box that attaches to the output cables coming from the videotape deck (shown above, right). Either way, the TBC can delay or advance the video frames to lock them into correct time, which fixes all the errors.
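Conceptually, the correction amounts to snapping each frame's jittery arrival time back onto a fixed 29.97 frames-per-second grid. The toy sketch below illustrates the idea only; a real TBC works on analog sync pulses in hardware, not on timestamps in software:

```python
# Conceptual sketch of what a TBC does: take frames that arrive at
# irregular times and re-emit them on a fixed 29.97 fps time base.
# This is an illustration of the idea, not real video-processing code.

NTSC_FPS = 30000 / 1001        # the exact NTSC rate, ~29.97 frames per second
FRAME_PERIOD = 1.0 / NTSC_FPS  # ~0.0334 seconds per frame

def retime(arrival_times):
    """Snap each jittery arrival time to the nearest slot on the fixed grid."""
    corrected = []
    for t in arrival_times:
        slot = round(t / FRAME_PERIOD)         # which frame slot this belongs in
        corrected.append(slot * FRAME_PERIOD)  # re-emit on the stable time base
    return corrected

# Frames from a worn deck: some arrive early, some late
jittery = [0.000, 0.031, 0.069, 0.102, 0.135]
stable = retime(jittery)  # now evenly spaced, one frame period apart
```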
An internal TBC is actually able to “talk” to the videotape deck, and give it instructions, like this…
“Could you slow down a little? You’re starting to catch up with me.”
“Hey, the frames are arriving at a strange time. Please adjust the timing between the capstan and the head drum.”
“There’s a wobble in the rate the frames are arriving. Can you counter-wobble the capstan speed to smooth that out?”
“Looks like this tape was recorded with bad heads. Please increase gain on the horizontal sync pulse so I can get a clearer lock.”
Without the mighty TBC, video digitization would not be possible, because all those errors would be permanently embedded in the digitized file. Thanks to the TBC, we can capture a nice, clean, stable image to share with generations to come, long after the magnetic videotape, and playback decks, have reached the end of their shelf life.
‘Tis the time of year for top 10 lists. Here at Duke Digital Collections HQ, we cannot just pick 10, because all our digital collections are tops! What follows is a list of all the digital collections we have launched for public access this calendar year.
Our newest collections include a range of formats and subject areas, from 19th-century manuscripts to African American soldiers' photograph albums to Duke Men's Basketball posters to our first multispectral images of papyrus to be ingested into the repository. We also added new content to 4 existing digital collections. Lastly, our platform migration is still ongoing, but we made some incredible progress this year, as you will see below. Our goal is to finish the migration by the end of 2020.
New Digital Collections
African American Soldiers Photo Albums (browse all 8, or one by one, using the links below):
One topic that comes up regularly among our patrons is the navigation of our physical spaces. Like many libraries, our buildings have evolved over time, and that can make navigating our spaces a bit complicated. On Duke’s West Campus, we have three library buildings that are interconnected – Rubenstein Library, Perkins Library, and Bostock Library. Responses and comments on our biennial survey confirm what we hear anecdotally – patrons have trouble navigating these three buildings.
Deep Dive into Navigation Concerns
But how can we follow up on these reports to improve navigation in our spaces? Ideally, we would gather data from a large number of people over a long period of time to find very common and problematic navigation issues. Our biennial survey offers data from a large number of people over time, but it isn’t a great format for gathering detailed data about narrow subjects like navigation. Conducting an observational study of our spaces would explore navigation directly, but it would only include a small number of people, and the likelihood that we would catch individuals having trouble with navigation is low. We could try conducting ad hoc surveys of patrons in our spaces, but it would be difficult to ensure we are including people who have had navigation trouble, and it may be difficult for patrons to recall their navigation trouble on the spot.
What we needed was a way of capturing common examples of patrons having trouble with navigation. We decided that, instead of asking patrons themselves, our best resource would be library staff. We know that when patrons are lost in our buildings, they may reach out to staff members they see nearby. By surveying staff instead of patrons, we take advantage of staff who know the buildings well and who are commonly in particular areas of the buildings, noticing and offering help to our struggling patrons.
We decided to send a very simple survey to all staff in these library buildings. Staff could fill it out multiple times, and the only two questions were:
What is a common question you have helped patrons with?
Where are the patrons when they have this question, typically?
We had a great response from staff (72 responses from 36 individuals), and analyzing the responses showed several sources of confusion.
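For the curious, tallying coded free-text responses like these takes only a few lines of code. The category labels and counts below are made up purely to illustrate the kind of analysis involved, not our actual survey data:

```python
from collections import Counter

# Hypothetical tally of staff responses after manually coding each
# free-text answer into a category. All labels and counts are invented.
coded_responses = (
    ["wrong building"] * 20
    + ["same-floor room finding"] * 13
    + ["restroom location"] * 6
    + ["printer location"] * 5
    + ["other"] * 8
)

counts = Counter(coded_responses)
total = sum(counts.values())
top_two = counts.most_common(2)
share = sum(n for _, n in top_two) / total  # fraction of issues from the top 2 concerns
```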
Focusing on questions where patrons are in library spaces and not near a help desk, two concerns account for over 60% of reported patron navigation issues:
Trouble Between Buildings
Our three connected library buildings, unfortunately, connect in ways that are not obvious to new visitors. Because buildings only connect on certain levels, it is easy for patrons to be looking for a location on the right floor but the wrong building. By asking staff for specific locations of both patrons and their desired destination, we could compile the most frequent problems that involve being in the wrong building. Unsurprisingly, the locations that cause the most difficulty are our large meeting rooms and classroom spaces, especially those that are not on the ground floor of the buildings.
The most common problems seem to happen when patrons leave the first floor while in the wrong building, expecting the buildings to connect on the other floors (or not realizing which building they are in). As you can see from the side-view of our buildings below, the Perkins and Bostock library buildings have easy connections on all floors, but the Rubenstein Library only connects to Perkins on the first floor. Our survey confirmed that this causes many issues for patrons looking for 2nd floor or Lower Level meeting rooms in Perkins and upper level meeting rooms in Rubenstein.
While we are still in the process of developing and testing possible solutions, we hope to redesign signage in a way that better signposts when patrons should proceed onward on the current floor and when they should transition up or down.
Trouble on the Same Floor
Our survey suggests, unfortunately, that it is not enough to get patrons to the correct floor. Depending on the route the patron takes, there are still common destinations that are difficult to see from stairwells, elevators, and main hallways. Again, this difficulty tends to arise when patrons are looking for meeting rooms. This makes sense, as events held in our meeting rooms can attract patrons who have not yet been to our buildings.
Staff reports for same-floor confusion focus largely on floors where room entrances are hidden in recesses or around corners and where rooms are spaced apart such that it is hard to simply follow room number signage. As a notable example, the 2nd floor of Perkins Library seems especially confusing to patrons, with many different types of destinations, few of which are visible from main entrances and hallways. In the diagram below, you can see some of the main places patrons get lost, indicating a need for better signage visible from these locations. (Pink question marks indicate the lost patrons. Red arrowheads indicate the desired destinations.)
As we develop solutions to highlight locations of hidden rooms, we are considering options like large vinyl lettering or perpendicular corridor signs that alert people to rooms around corners.
This technique worked really well for this informal study – it gave us a great place to start exploring new design solutions, and we can be more proactive about testing new navigation signage before we make permanent changes. Thanks for your great information, DUL staff!
Here at Duke, the buzz continues around FOLIO. We have continued to contribute to the international project as active participants on the FOLIO Product Council and special interest groups, and by contributing development resources. You can find links to the various groups on the FOLIO wiki.
We’ve also committed to implementing the electronic resource management (ERM)-focused apps in the summer of 2020. Starting with the ERM-focused apps will give us the opportunity to use FOLIO in a production environment, and will benefit our Continuing Resource Acquisitions Department, since they are not currently using software dedicated to electronic resources to keep track of licenses and terms.
Our local project planning has come more into focus as well. We have gathered names for team participants and will be kicking off our project teams in January. As we’ve talked about the implementation here, we’ve realized that we have a number of tasks that will need to be addressed, regardless of subject matter. For example, we’re going to need to map data – not just bibliographic, holdings and item data, but users, orders, invoices, etc. We’ll also need to set up configurations and user permissions for each of the apps, and document, train, and develop new workflows. Since our work is not siloed in functional areas, we need to facilitate discussions among the functional areas. To do that, we’re going to create a set of functional area implementation teams, and work groups around the task areas that need to be addressed.
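To make the data-mapping task concrete, here is a hypothetical sketch of mapping one legacy patron record to a FOLIO-style user object. The source field names, the lookup logic, and the simplified target shape are all illustrative, not our actual migration specification:

```python
# Illustrative sketch of the kind of field mapping the implementation
# teams will need to define. Field names on both sides are examples only.

def map_user(legacy):
    """Map a legacy ILS patron record to a simplified FOLIO-like user object."""
    return {
        "username": legacy["netid"],
        "barcode": legacy["patron_barcode"],
        "active": legacy["status"] == "ACTIVE",
        "patronGroup": legacy["borrower_type"],  # needs a lookup table in practice
        "personal": {
            "lastName": legacy["last_name"],
            "firstName": legacy["first_name"],
            "email": legacy["email"],
        },
    }

legacy = {
    "netid": "ab123",
    "patron_barcode": "2100012345",
    "status": "ACTIVE",
    "borrower_type": "staff",
    "last_name": "Doe",
    "first_name": "Ada",
    "email": "ab123@duke.edu",
}
folio_user = map_user(legacy)
```

Multiply this by bibliographic, holdings, item, order, and invoice data, and the scale of the mapping work becomes clear.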
To learn more about the FOLIO project at Duke, fly on over to our WordPress site and read through our past newsletters, look through slides from past presentations, and check out some fun links to bee facts.
In 2020, we’ll be making significant changes to our systems supporting archival discovery and access. The main impetus for this shift is that our current platform has grown outdated and is no longer sustainable going forward. We intend to replace our platform with ArcLight, open source software backed by a community of peer institutions.
Finding Aids at Duke: Innovations Past
At Duke, we’re no strangers to pushing the boundaries of archival discovery through advances in technology. Way back in the mid 1990s, Duke was among pioneers rendering SGML-encoded finding aids into HTML. For most of the 90s and aughts we used a commercial platform, but we decided to develop our own homegrown finding aids front-end in 2007 (using the Apache Cocoon framework). We then replaced it in 2012 with another in-house platform built on the Django web framework.
Since going home-grown in 2007, we have been able to find some key opportunities to innovate within our platforms. Here are a few examples:
Our current platform was pretty good for its time, but a lot has changed in eight years. The way we build web applications today is much different than it used to be. And beyond desiring a modern toolset, there are major concerns going forward around size, search/indexing, and support.
We have some enormous finding aids. And we have added more big ones over the years. This causes problems of scale, particularly with an interface like ours that renders each collection as a single web page with all of the text of its contents written in the markup. One of our finding aids contains over 21,000 components; all told it is 9MB of raw EAD transformed into 15MB of HTML.
No amount of caching or server wizardry can change the fact that this is simply too much data to be delivered and rendered in a single webpage, especially for researchers in lower-bandwidth conditions. We need a solution that divides the data for any given finding aid into smaller payloads.
Google Custom Search does a pretty nice job of relevance ranking and highlighting where in a finding aid a term matches (after all, that’s Google’s bread-and-butter). However, when used to power search in an application like this, it has some serious limitations. It only returns a maximum of one hundred results per query. Google doesn’t index 100% of the text, especially for our larger finding aids. And some finding aids are just mysteriously omitted despite our best efforts optimizing our markup for SEO and providing a sitemap.
We need search functionality where we have complete control of what gets indexed, when, and how. And we need assurance that the entirety of the materials described will be discoverable.
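One pattern that addresses both the payload-size and indexing problems at once is to emit one small, individually searchable document per EAD component rather than one giant page per collection (this is essentially the approach ArcLight takes, indexing finding aids into Solr at the component level). A minimal sketch of the idea, with element names greatly simplified relative to real EAD:

```python
import xml.etree.ElementTree as ET

# Sketch of per-component indexing: instead of rendering an entire finding
# aid as one huge page, emit one small index document per component.
# Element names are greatly simplified; real EAD (and ArcLight's actual
# Solr schema) are far richer.

EAD = """
<ead id="findingaid01">
  <unittitle>Example Family Papers</unittitle>
  <c id="c01"><unittitle>Correspondence</unittitle>
    <c id="c02"><unittitle>Letters, 1777-1780</unittitle></c>
  </c>
  <c id="c03"><unittitle>Writings</unittitle></c>
</ead>
"""

def index_components(node, parent_id, docs=None):
    """Walk the component tree, emitting one flat index document per <c>."""
    if docs is None:
        docs = []
    for c in node.findall("c"):
        docs.append({
            "id": c.get("id"),
            "title": c.findtext("unittitle"),
            "parent": parent_id,  # retained so the UI can rebuild the hierarchy
        })
        index_components(c, c.get("id"), docs)
    return docs

root = ET.fromstring(EAD)
docs = index_components(root, root.get("id"))
# Each document is a small payload the interface can fetch and render on
# demand, and every component's text is guaranteed to be indexed.
```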
This is a familiar story. Homegrown applications used for several years by organizations with a small number of developers and a large number of projects to support become difficult to sustain over time. We have only one developer remaining who can fix our finding aids platform when it breaks, or prevent it from breaking when the systems around it change. Many of the software components powering the system are at or nearing end-of-life and they can’t be easily upgraded.
Where to Go From Here?
It has been clear for a while that we would soon need a new platform for finding aids, but not as clear what platform we should pursue. We had been eyeing the progress of two promising open source community-built solutions emerging from our peer institutions: the ArchivesSpace Public UI (PUI) and ArcLight.
Over 2018-19, my colleague Noah Huffman and I co-led a project to install pilot instances of the ASpace PUI and ArcLight, index all of our finding aids in them, and then evaluate the platforms for their suitability to meet Duke’s needs going forward. The project involved gathering feedback from Duke archivists, curators, research services staff, and our digital collections implementation team. We looked at six criteria: 1) features; 2) ease of migration/customization; 3) integration with other systems; 4) data cleanup considerations; 5) impact on existing workflows; 6) sustainability/maintenance.
There’s a lot to like about both the ASpace PUI and ArcLight. Feature-wise, they’re fairly comparable. Both are backed by a community of talented, respected peers, and either would be a suitable foundation for a usable, accessible interface to archives. In the end, we recommended that Duke pursue ArcLight, in large part due to its similarity to so much of the other software in our IT portfolio.
Duke is certainly not alone in our desire to replace an outdated, unsustainable homegrown finding aids platform, nor in our intention to use ArcLight as a replacement.
This fall, with tremendous leadership from Stanford University Libraries, five universities collaborated on developing the ArcLight software further to address shared needs. Over a nine week work cycle from August to October, we had the good fortune of working alongside Stanford, Princeton, Michigan, and Indiana. The team addressed needs on several fronts, especially: usability, accessibility, indexing, context/navigation, and integrations.
Three Duke staff members participated: I was a member of the Development Team, Noah Huffman a member of the Product Owners Team, and Will Sexton on the Steering Group.
The work cycle is complete and you can try out the current state of the core ArcLight demo application. It includes several finding aids from each of the participating partner institutions. Here are just a few highlights that have us excited about bringing ArcLight to Duke:
Here’s a final demo video (37 min) that nicely summarizes the work completed in the fall 2019 work cycle.
Lighting the Way
With some serious momentum from the fall ArcLight work cycle and plans taking shape to implement the software in 2020, the Duke Libraries intend to participate in the Stanford-led, IMLS grant-funded Lighting the Way project, a platform-agnostic National Forum on Archival Discovery and Delivery. Per the project website:
Lighting the Way is a year-long project led by Stanford University Libraries running from September 2019-August 2020 focused on convening a series of meetings focused on improving discovery and delivery for archives and special collections.
Coming in 2020: ArcLight Implementation at Duke
There’ll be much more to share about this in the new year, but we are gearing up now for a 2020 ArcLight launch at Duke. As good as the platform is now out of the box, we’ll have to do additional development to address some local needs, including:
An efficient preview/publication workflow
Digital object viewing / repository integration
Some data cleanup
Building these local customizations will be time well-spent. We’ll also look for more opportunities to collaborate with peers and contribute code back to the community. The future looks bright for Duke with ArcLight lighting the way.
The featured image is from a mockup of a new repositories home page that we’re working on in the Libraries, planned for rollout in January of 2020.
Working at the Libraries, it can be dizzying to think about all of our commitments.
There’s what we owe our patrons, a body of so many distinct and overlapping communities, all seeking to learn and discover, that we could split the library along an infinite number of lines to meet them where they work and think.
There’s what we owe the future, in our efforts to preserve and share the artifacts of knowledge that we acquire on the market, that scholars create on our own campus, or that seem to form from history and find us somehow.
There’s what we owe the field, and the network of peer libraries that serve their own communities, each of them linked in a web of scholarship with our own. Within our professional network, we seek to support and complement one another, to compete sometimes in ways that move our field forward, and to share what we learn from our experiences.
The needs of information technology underlie nearly all of these activities, and to meet those needs, we have an IT staff that’s modest in size, but prodigious in its skill and its dedication to the mission of the Libraries. Within that group, the responsibility for creating new software, and maintaining what we have, falls to a small team of developers and devops engineers. We depend on them to enhance and support a wide range of platforms, including our web services, our discovery platforms, and our digital repositories.
This fall, we did some reflection on how we want to approach support for our repository platforms. The result of that reflection was a Statement of Commitment to Repositories Support and Development, a document of roughly a page that expresses what we consider to be our values in this area, and the context of priorities in which we do that work.
The statement is explicit that we will not seek to find alternative platforms for our repository services in the next several years, and in particular while the FOLIO transition is underway. This decision is informed by our recognition that migration of content and services across platforms is complex and expensive. It’s also a recognition that we have invested a lot into these existing platforms, and we want to carve out as much space as we can for our talented staff to focus on maintaining and improving them, rather than locking ourselves into all-consuming cycles of content migration.
From a practical perspective, and speaking as the manager who oversees software development in the Libraries, I see this statement as part of an overall strategy to bring focus to our work. It’s a small but important symbolic measure that recognizes the drag that we create for our software team when we give in to our urge to prioritize everything.
The phrase “context switching” is one that we have borrowed from the parlance of operating systems to describe the effects on a developer of working on multiple projects at once. There are real costs to moving between development environments, code bases, and architectures on the same day, in the same week, during the same sprint, or within even an extended work cycle. We also call this problem “multi-tasking,” and the penalty it imposes on performance is well documented.
Even more than performance, I think of it as a quality of life concern. People are generally happier and more invested when they’re able to do quality work. As a manager, I can work with scheduling and planning to try to mitigate those effects of multitasking on our team. But the responsibility really lies with the organization. We have our commitments, and they are vast in size and scope. We owe it to ourselves to do some introspection now and again, and ask what we can realistically do with what we have, or more accurately, who we are.
This fall, Library ITS is helping the Library Service Center (LSC) plan the transition to new high-density storage management software. We are engaging with CaiaSoft, which provides new software that supports improved workflow processes and reporting for the LSC.
The center houses roughly 6 million books, documents, and archival materials belonging to Duke and other library systems. With this in mind, it is very important to have up-to-date technology and software services that promote efficient workflows.
Why Are We Planning This?
GFA, the LSC’s current software tool, is running on an unsupported, end-of-life operating system.
As a result, we run the risk of unwanted processing delays in the event of a failure on the current server. In turn, these delays would affect staff, researchers, and others looking for materials located at the LSC.
Who Is Planning This?
The project’s cross-division team involves staff from the following DUL departments:
Access and Delivery Services
DUL Technical Services
Rubenstein Research Services
Library Service Center
When Are We Planning This?
The transition to CaiaSoft is intended to take place on a weekend in January 2020. After this, LSC staff and supporting departments expect to use CaiaSoft to manage items located at the LSC warehouse.
During the planning stages, the project team will have these goals in mind:
Overseeing data loading and accuracy
Creating and documenting workflows
Managing scripts to ensure ALEPH integration
Ensuring future seamless FOLIO integration
The project team has identified several key benefits, the most noteworthy being improved workflow support. Other benefits identified by the project team include:
Web Browser Access
CaiaSoft runs as a web application. In contrast, GFA only runs within an SSH session and requires the use of added software.
Item-level Data Management
Staff can create “data flags” and assign them at the item level; this feature is not available in GFA.
CaiaSoft offers this feature, while GFA does not.
CaiaSoft developers are active in supporting FOLIO.
More Feature Comparisons
A full list of feature comparisons is available on our wiki page — look for the “Feature Comparison – CAIASOFT vs GFA” section.
Questions? We Have Answers…
I will be available at “First Wednesday” on November 6 to take questions and (hopefully) provide answers.
The Association of Research Libraries’ Leadership and Career Development Program (LCDP) recently completed the capstone institute for the 2018-2019 cohort. As a member of that cohort, called “The Disruptors,” I wanted to showcase the program. The year-long program consisted of an orientation, two institutes, a visit to my career coach’s institution, and a capstone institute.
The Disruptors included librarians who hail mostly from ARL member institutions from all over the country and Canada. The program is intended for librarians of color who are mid-career and are interested in leadership development. The ARL LCDP was an eye-opening experience – one that gave me perspectives from my cohort that I would have never gleaned otherwise, one that allowed us to learn from each other’s challenges and successes, and one that has given me a cohort that I can always rely upon as I go through my professional journey.
I’ll start from the beginning. The orientation in Washington, DC was an opportunity for the 24 of us to get to know each other, to establish learning expectations for ourselves and each other, and to plot our journey as a group. We listed topics that we’d like to explore together (e.g., strategic planning, open access, and fundraising), and explored the idea of leadership together. Mark Puente, the Director of Diversity and Leadership Programs at ARL, and DeEtta Jones moderated this and many of our discussions (in person and online). What a fantastic duo Mark and DeEtta were – they made facilitation and instruction look easy!
The first Leadership Institute was hosted by The Ohio State University Library. Ohio in the middle of December was a truly invigorating experience. I learned a great deal about all kinds of management issues, including emotional intelligence and conflict resolution, and had opportunities to hear from library leaders such as Damon Jaggars, John Cawthorne, Jose Diaz, Deidra Herring, and Alexia Hudson-Ward. We also received a fantastic tour of the newly renovated flagship Thompson Memorial Library. The building reminded me of the Roman god Janus, who has two faces – one looking to the past and another to the future. One side of the library had a more traditional façade, consistent with the campus’s stately frontages, while the other had a modern look, built primarily of concrete, metal, and glass. What an amazing building, seamlessly combining vibrant tradition with ambitious modernity. My career coach, Eileen Theodore-Shusta, from Ohio University, even drove up to meet me for dinner in Columbus! What a treat it was to meet my career coach so early in the process! The company and the food were fantastic. It was such a hoot to have frozen custard in the middle of winter!
The second Leadership Institute was hosted by the University of Alberta, in Edmonton, Canada. What a lovely sight to see the Canadian plains in full bloom in May. Interestingly, too (since I had never visited Canada at that time of year), the sun didn’t set until 10:00 pm! That was a slightly crazy, insomnia-inducing experience. This Leadership Institute was facilitated by Kathryn Deiss and Melanie Hawks. As one of the founders of the Minnesota Leadership Institute, Kathryn shared her experiences and thoughts on diversity, equity, and inclusivity. We also learned a great deal from University of Alberta Libraries’ University Librarian, Dale Askey, about his professional journey. Preparation, perseverance, ambition, and risk-taking – those words, and more, crystallized my impression of that conversation.
The stand-out experience of this institute, I believe, was the Kairos Blanket exercise, an immersive exercise in which the entire cohort participated. We began with a full house and quickly saw members of our group expelled from our respective lands by death, disease, or governmental mandate (of course this was all pretend, but it was still quite striking). The group also read aloud the past experiences of First Nations communities. To hear these stories of resilience against systematic violence and loss spoken in the voices of cohort members was stark and emotional. The Kairos Blanket exercise, along with revelations about the Canadian government’s approach to reconciliation with First Nations communities (analogous to Native Americans in the US), was deeply informative.
There were several highlights in the program beyond the events we attended. Each LCDP Fellow underwent a Leadership Practices Inventory, a 360-degree assessment of our leadership skills. The assessment involved our reporting officers, our colleagues, and our direct reports. It was an incredibly enlightening experience, as many of us had never undergone such a detailed review before.
Also, each LCDP Fellow was paired with a Career Coach – a librarian in a leadership role – who provided us with insights into leadership and administration. As part of the program, each Career Coach hosted their fellow at their institution. I had the wonderful opportunity to be paired with Eileen Theodore-Shusta of Ohio University. As the Director of Planning, Assessment, and Organizational Effectiveness, Eileen provided me with valuable insights into library administration and management from a human resources perspective. What a fantastic visit to the beautiful Ohio University campus as well. I visited their Archives, Special Collections, and Digital Archives, and even perused their Southeast Asia Collection.
Another integral piece of the LCDP experience was the Equity Toolkit, created by DeEtta Jones and Associates. Between the institutes, we had webinars and lessons from the Toolkit, which included modules on Cultural Competence, Bias in the Workplace, and The Inclusive Manager. Using a combination of videos, text, quizzes, and reflections, the Equity Toolkit was chock-full of information and revelations. This portion of the program also included webinars to which LCDP fellows and their career coaches were invited, along with their supervisors and up-line administrators. The objective was not only to “preach to the choir,” but to include allies and influential voices in the discussion.
Finally, the Capstone Leadership Institute in Washington, DC was where we said our goodbyes. The Capstone was also a new beginning, as we adopted our moniker, The Disruptors. We attended the ARL Directors’ evening reception and sat alongside library directors at the Fall ARL Association meeting. Jennifer Garrett, Director of Talent Management at North Carolina State University, eloquently highlighted the ARL LCDP experience for these library directors, and Elaine Westbrooks, University Librarian at UNC Chapel Hill, spoke about her time as a career coach and perfectly bookended her speech with memories of being an ARL LCDP fellow herself. After all the celebrations, we reconvened, reminisced, and planned for the challenges and opportunities before us.
How do we continue this journey? One step at a time. With each other.
Thank you to my former dean, Catherine Quinlan at the University of Southern California, and to Duke University Libraries for your support and encouragement. It is on the shoulders of giants (and forward-thinking institutions) that I see the world of great challenges and opportunities before me.
Notes from the Duke University Libraries Digital Projects Team