So much work to do, so little time. But what keeps us focused as we work to make a wealth of resources available via the web? It often comes down to a willingness to collaborate and a commitment to a common vision.
Staying focused through vision and values
When Duke University Libraries embarked on our 2012-2013 website redesign, we created a vision and values statement that became a guidepost during our decision-making. It worked so well for that single project that we later decided to apply it to current and future web projects. You can read the full statement on our website, but here are just a few of the key ideas:
Put users first.
Verify data and information, perpetually remove outdated or inaccurate data and content, and present relevant content at the point of need.
Strengthen our role as essential partners in research, teaching, and scholarly communication: be a center of intellectual life at Duke.
Maintain flexibility in the site to foster experimentation, risk-taking, and future innovation.
As we decide which projects to undertake, what our priorities should be, and how we should implement these projects, we often consider what aligns well with our vision and values. And when something doesn’t fit well, it’s often time to reconsider.
Teamwork, supporting and balancing one another
Vision counts, but having people who collaborate well is what really enables us to maintain focus and to take a coherent approach to our work.
A number of cross-departmental teams within Duke University Libraries consider which web-based projects we should undertake, who should implement them, when, and how. By ensuring that multiple voices are at the table, each bringing different expertise, we make use of the collective wisdom from within our staff.
The Web Experience Team (WebX) is responsible for the overall visual consistency and functional integrity of our web interfaces. It not only provides vision for our website, but actively leads or contributes to the implementation of numerous projects. Sample projects include:
The introduction of a new eBook service called OverDrive
The development of a new, Bento-style version of our search portal to be released in August
Members of WebX are Aaron Welborn, Emily Daly, Heidi Madden, Jacquie Samples, Kate Collins, Michael Peper, Sean Aery, and Thomas Crichlow.
While we love to see the research community using our collections within our reading rooms, we understand the value in making these collections available online. The Advisory Committee for Digital Collections (ACDC) decides which collections of rare material will be published online. Members of ACDC are Andy Armacost, David Pavelich, Jeff Kosokoff, Kat Stefko, Liz Milewicz, Molly Bragg, Naomi Nelson, Valerie Gillispie, and Will Sexton.
The Digital Collections Implementation Team (DCIT) both guides and undertakes much of the work needed to digitize and publish our unique online collections, and it has produced several of our most popular collections.
Members of DCIT are Erin Hammeke, Mike Adamo, Molly Bragg, Noah Huffman, Sean Aery, and Will Sexton.
These groups have their individual responsibilities, but they also work well together. The teamwork extends beyond these groups as each relies on individuals and departments throughout Duke Libraries and beyond to ensure the success of our projects.
Most importantly, it helps that we like to work together, we value each other’s viewpoints, and we remain connected to a common vision.
A unified search results page, commonly referred to as the “Bento Box” approach, has been an increasingly popular method to display search results on library websites. This method helps users gain quick access to a limited result set across a variety of information scopes while providing links to the various silos for the full results. NCSU’s QuickSearch implementation has been in place since 2005 and has been extremely influential on the approach taken by other institutions.
Way back in December of 2012, the DUL began investigating and planning to implement a Bento search results layout on our website. Extensive testing revealed that users favor searching from a single box, as is their typical experience conducting web searches via Google and the like. Like many libraries, we’ve been using Summon as a unified discovery layer for articles, books, and other resources for a few years, providing an ‘All’ tab on our homepage as the entry point. Summon aggregates these various sources into a common index, presented in a single stream on search results pages. Our users often find this presentation overwhelming or confusing and prefer other search tools. As such, we’ve demoted our ‘All’ search on our homepage, although users can set it as the default thanks to the very slick Default Scope search tool built by Sean Aery (with inspiration from the University of Notre Dame’s Hesburgh Libraries website).
The library’s Web Experience Team (WebX) proposed the Bento project in September of 2013. Some justifications for the proposal were as follows:
Bento boxing helps solve these problems:
We won’t have to choose which silo should be our default search scope (on our homepage or in the masthead)
Synthesizing relevance ranking across very different resources is extremely challenging, e.g., articles get in the way of books if you’re just looking for books (and vice-versa).
We need to move from “full collection discovery to full library discovery” – in the same search, users discover expertise, guides, experts, and other library offerings alongside items from the collections. [1]
“A single search box communicates confidence to users that our search tools can meet their information needs from a single point of entry.” [2]
Sean also developed a mockup of what Bento results could look like on our website, and we’ve been using it as the model for our project going forward.
For the past month our Bento project team has been actively developing our own implementation. We have had the great luxury of building upon work that was already done by brilliant developers at our sister institutions (NCSU and UNC) — and particular thanks goes out to Tim Shearer at UNC Libraries who provided us with the code that they are using on their Bento results page, which in turn was heavily influenced by the work done at NCSU Libraries.
Our approach includes using results from Summon, Endeca, Springshare, and Google. We’re building this as a Drupal module, which will make it easy to integrate into our site. We’re also hosting the code on GitHub so others can gain from what we’ve learned and to help make future enhancements to the module even easier to implement.
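The production work lives in that Drupal (PHP) module, but the basic fan-out pattern behind a Bento page is easy to sketch. Here is a rough, hypothetical illustration in Python rather than PHP; the endpoint URLs, parameter names, and response fields are placeholders, not the actual APIs we query.

```python
# Hypothetical sketch of the "bento" fan-out pattern, not our Drupal module.
# Endpoint URLs, parameter names, and response shapes are placeholders.
from concurrent.futures import ThreadPoolExecutor

import requests

SILOS = {
    "catalog": "https://search.example.edu/catalog/api",    # e.g., Endeca
    "articles": "https://search.example.edu/articles/api",  # e.g., Summon
    "guides": "https://search.example.edu/guides/api",      # e.g., Springshare
    "website": "https://search.example.edu/site/api",       # e.g., Google
}

def search_silo(name, url, query, limit=3):
    """Query one silo and keep only the first few hits plus a total count."""
    resp = requests.get(url, params={"q": query}, timeout=5)
    resp.raise_for_status()
    data = resp.json()
    return name, {
        "total": data.get("total", 0),
        "items": data.get("results", [])[:limit],  # the short, boxed result set
        "see_all": f"{url}?q={query}",              # link out to the full silo
    }

def bento_search(query):
    """Fan the query out to every silo in parallel and assemble the boxes."""
    with ThreadPoolExecutor(max_workers=len(SILOS)) as pool:
        futures = [pool.submit(search_silo, n, u, query) for n, u in SILOS.items()]
        return dict(f.result() for f in futures)

if __name__ == "__main__":
    for box, contents in bento_search("civil rights").items():
        print(box, contents["total"], "hits")
```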
Our plan is to roll out Bento search in August, so stay tuned for the official launch announcement!
P.S. As the 4th of July holiday is right around the corner, here are some interesting items from our digital collections related to Independence Day:
Fifty years ago, hundreds of student volunteers headed south to join the Student Nonviolent Coordinating Committee’s (SNCC) field staff and local people in their fight against white supremacy in Mississippi. This week, veterans of Freedom Summer are gathering at Tougaloo College, just north of Jackson, Mississippi, to commemorate their efforts to remake American democracy.
The 50th anniversary events, however, aren’t only for movement veterans. Students, young organizers, educators, historians, archivists, and local Mississippians make up the nearly one thousand people flocking to Tougaloo’s campus this Wednesday through Saturday. We here at Duke Libraries, as well as members of the SNCC Legacy Project Editorial Board, are in the mix, making connections with both activists and archivists about our forthcoming website, One Person, One Vote: The Legacy of SNCC and the Fight for Voting Rights.
This site will bring together material created in and around SNCC’s struggle for voting rights in the 1960s and pair it with new interpretations of that history by the movement veterans themselves. To pull this off, we’ll be drawing on Duke’s own collection of SNCC-related material, as well as incorporating the wealth of material already digitized by institutions like the University of Southern Mississippi, the Wisconsin Historical Society’s Freedom Summer Collection, the Mississippi Department of Archives and History, and others.
What becomes clear while circling through the panels, films, and hallway conversations at Freedom Summer 50th events is how the fight for voting rights is really a story of thousands of local people. The One Person, One Vote site will feature these everyday people – Mississippians like Peggy Jean Connor, Fannie Lou Hamer, Vernon Dahmer, and SNCC workers like Hollis Watkins, Bob Moses, and Charlie Cobb. And the list goes on. It’s not every day that so many of these people come together under one roof, and we’re doing our share of listening to and connecting with the people whose stories will make up the One Person, One Vote site.
Many of us here at Duke have been excited about the Digital Public Library of America (DPLA) since its launch in April 2013. DPLA’s mission is to bring together America’s cultural riches into one portal. Additionally, it provides a platform for accessing and sharing library data in technologically innovative and impactful ways via the DPLA API. If you are not familiar with DPLA, be sure to take a look at its website and watch the introductory video.
The North Carolina Digital Heritage Center (NCDHC) is our local service hub for DPLA, and we met with them to understand the requirements for contributing metadata as well as the nuts and bolts of exposing our records for harvesting. They have a system in place that is really easy for contributing libraries around the state, and we are very thankful for their efforts. On our side, we chose our first collection to share, updated rights statements for the items in that collection, and contacted NCDHC to let them know where to find our metadata (admittedly, these tasks involved a bit more nitty-gritty work than I am describing here, but it was still a relatively simple process).
In mid-June, NCDHC harvested metadata from our Broadsides and Ephemera digital collection, and shortly thereafter, voilà, the records were available through DPLA!
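A nice side effect is that once records are harvested, they are also reachable through the DPLA API mentioned above. As a rough sketch, assuming DPLA’s public v2 items endpoint and an API key (the query term and provider label below are just examples, not guaranteed field values), you could spot-check the harvested records like this:

```python
# Hedged sketch: query the DPLA API for harvested records.
# Assumes the public v2 items endpoint; the API key, query term, and
# provider label are placeholders/examples.
import requests

API_KEY = "YOUR_DPLA_API_KEY"  # request a key from DPLA

resp = requests.get(
    "https://api.dp.la/v2/items",
    params={
        "q": "broadsides",                             # example search term
        "dataProvider": "Duke University Libraries",   # assumed provider label
        "api_key": API_KEY,
    },
    timeout=10,
)
resp.raise_for_status()
data = resp.json()
print("matching records:", data["count"])
for doc in data["docs"][:5]:
    print("-", doc["sourceResource"]["title"])
```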
We plan to continue making more collections available to DPLA, but are still selecting materials. What collections do you think we should share? Let us know in the comments below or through Twitter or Facebook.
Thanks again to NCDHC for the wonderful work they do in helping us and other libraries across North Carolina participate in the ambitious mission of the Digital Public Library of America!
The technology for digitizing analog videotape is continually evolving. Thanks to increases in data-transfer rates and hard-drive write speeds, as well as the availability of more powerful computer processors at cheaper price points, the Digital Production Center recently decided to upgrade its video digitization system. Funding for the improved technology was procured by Winston Atkins, Duke Libraries Preservation Officer. Of all the materials we work with in the Digital Production Center, analog videotape has one of the shortest lifespans. Thus, it is high on the list of the Library’s priorities for long-term digital preservation. Thanks, Winston!
Due to their innovative design, ease of use, and dominance within the video and filmmaking communities, we decided to go with a combination of products designed by Apple Inc. and Blackmagic Design. A new computer hardware interface recently adopted by Apple and Blackmagic, called Thunderbolt, allows the two companies’ products to work seamlessly together at an unprecedented data-transfer speed of 10 Gigabits per second, per channel. This is much faster than previously available interfaces such as FireWire and USB. Because video content incorporates an enormous amount of data, the improved data-transfer speed allows the computer to capture the video signal in real time, without interruption or dropped frames.
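Some back-of-the-envelope arithmetic shows why the faster interface matters. The figures below assume typical 10-bit uncompressed 4:2:2 standard-definition capture, so the exact numbers will vary with the material and the capture settings:

```python
# Rough arithmetic for 10-bit uncompressed 4:2:2 standard-definition video.
# Frame size and rate are typical NTSC values; actual settings may differ.
width, height = 720, 486          # NTSC SD frame
fps = 29.97
bits_per_pixel = 20               # 10-bit 4:2:2 averages 20 bits per pixel

bits_per_second = width * height * bits_per_pixel * fps
print(f"~{bits_per_second / 1e6:.0f} Mbit/s sustained")                 # ≈ 210 Mbit/s
print(f"~{bits_per_second / 8 / 1e6:.0f} MB/s written to disk")         # ≈ 26 MB/s
print(f"~{bits_per_second * 3600 / 8 / 1e9:.0f} GB per hour of tape")   # ≈ 94 GB/hour
```

That sustained rate sits comfortably inside Thunderbolt’s 10 Gbit/s per channel, with room to spare for monitoring and disk overhead, which is exactly what keeps frames from dropping during capture.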
Our new data stream works as follows. Once a tape is playing on an analog videotape deck, the output signal travels through an analog-to-SDI (serial digital interface) converter, which converts the content from analog to digital. Next, the digital signal travels via SDI cable through a Blackmagic SmartScope monitor, which allows for monitoring via waveform and vectorscope readouts. A veteran television engineer I know will talk to you for days regarding the physics of this, but in layperson’s terms, these readouts let you verify the integrity of the color signal and make sure your video levels are not too high (blown-out highlights) or too low (crushed shadows). If there is a problem, adjustments can be made via an analog video signal processor or time-base corrector to bring the video signal within acceptable limits.
Next, the video content travels via SDI cable to a Blackmagic Ultrastudio interface, which converts the signal from SDI to Thunderbolt, so it can now be recognized by a computer. The content then travels via Thunderbolt cable to a 27″ Apple iMac utilizing a 3.5 GHz Quad-core processor and NVIDIA GeForce graphics processor. Blackmagic’s Media Express software writes the data, via Thunderbolt cable, to a G-Drive Pro external storage system as a 10-bit, uncompressed preservation master file. After capture, editing can be done using Apple’s Final Cut Pro or QuickTime Pro. Compressed MP4 access derivatives are then batch-processed using Apple’s Compressor software, or other utilities such as MPEG-Streamclip. Finally, the preservation master files are uploaded to Duke’s servers for long-term storage. Unless there are copyright restrictions, the access derivatives will be published online.
Thanks for all you do throughout the year to make our lives better, brighter, and a bit more fun! From teaching us to fish to helping us move, fathers and father figures have always been there to help children learn, grow, and achieve. While parenting roles and identities continue to evolve, the love of family persists. So, this Father’s Day, here is a Digital Collections salute to dads everywhere!
This past week, we were excited to be able to publish a rare 1804 manuscript copy of the Haitian Declaration of Independence in our digital collections website. We used the project as a catalyst for improving our document-viewing user experience, since we knew our existing platforms just wouldn’t cut it for this particular treasure from the Rubenstein Library collection. In order to present the declaration online, we decided to implement the open-source Diva.js viewer. We’re happy with the results so far and look forward to making more strides in our ability to represent documents in our site as the year progresses.
Challenges to Address
We have had two glaring limitations in providing access to digitized collections to date: 1) a less-than-stellar zoom & pan feature for images and 2) a suboptimal experience for navigating documents with multiple pages. For zooming and panning (see example), we use software called OpenLayers, which is primarily a mapping application. And for paginated items we’ve used two plugins designed to showcase image galleries, Galleria (example) and Colorbox (example). These tools are all pretty good at what they do, but we’ve been using them more as stopgap solutions for things they weren’t really created to do in the first place. As the old saying goes, when all you have is a hammer, everything looks like a nail.
Big (OR Zoom-Dependent) Things
Traditionally, as we digitize images, whether freestanding or components of a multi-page object, we generate three JPG derivatives per page at the end of the process. We make a thumbnail (helpful in search results or other item sets), a medium image (what you see on an item’s webpage), and a large image (same dimensions as the preservation master, viewed via the ‘all sizes’ link). That’s a common approach, but there are several places where it doesn’t always work so well. Some things we’ve digitized are big, as in “shoot them in sections with a camera and stitch the images together” big. And we’ve got several more materials like this waiting in the wings to make available. A medium image doesn’t always do these things justice, but good luck downloading and navigating a giant 28MB JPG when all you want to do is zoom in a little bit.
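That traditional derivative step itself is simple enough to sketch; here is a hypothetical example using Pillow, with illustrative pixel targets rather than our actual production specs, which also makes clear why it breaks down for oversized masters:

```python
# Hypothetical sketch of the traditional three-derivative step using Pillow.
# Target sizes are illustrative, not our actual production specifications.
from pathlib import Path

from PIL import Image

SIZES = {"thumb": 200, "medium": 800}   # longest-edge pixel targets (examples)

def make_derivatives(master_path, out_dir):
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    with Image.open(master_path) as master:
        # "Large" keeps the master's pixel dimensions, saved as JPG.
        master.convert("RGB").save(out_dir / "large.jpg", quality=90)
        for name, edge in SIZES.items():
            im = master.copy()
            im.thumbnail((edge, edge))   # resizes in place, preserving aspect ratio
            im.convert("RGB").save(out_dir / f"{name}.jpg", quality=85)

make_derivatives("page_001_master.tif", "derivatives/page_001")
```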
Likewise, an object doesn’t have to be large to really need easy zooming to be part of the viewing experience. You might want to read the fine print on that newspaper ad, see the surgeon general’s warning on that billboard, or inspect the brushstrokes in that beautiful hand-painted glass lantern slide.
And finally, it’s not easy to anticipate the exact dimensions at which all our images will be useful to a person or program using them. Using our data to power an interactive display for a media wall? A mobile app? A slideshow on the web? You’ll probably want images that are different dimensions than what we’ve stored online. But to date, we haven’t been able to provide ways to specify different parameters (like height, width, and rotation angle) in the image URLs to help people use our images in environments beyond our website.
We do love our documentary photography collections, but a lot of our digitized objects are represented by more than just a single image. Take an 11-page piece of sheet music or a 127-page diary, for example. Those aren’t just sequences or collections of images. Their paginated orientation is pretty essential to their representation online, but a lot of what characterizes those materials is unfortunately lost in translation when we use gallery tools to display them.
The Intersection of (Big OR Zoom-Dependent) AND Paginated
Here’s where things get interesting and quite a bit more complicated: when zooming, panning, page navigation, and system performance are all essential to interacting with a digital object. There are several tools out there that support these various aspects, but very few that do them all AND do them well. We knew we needed something that did.
Our Solution: Diva.js
Setting up Diva.js required us to add a few new pieces to our infrastructure. The most significant was an image server (in our case, IIPImage) that could 1) deliver parts of a digital image upon request, and 2) deliver complete images at whatever size is requested via URL parameters.
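For instance, using IIPImage’s CGI-style query parameters (FIF for the image path, WID for the requested width, CVT for the output format), a caller can ask for an arbitrarily sized JPG of any page. The server URL and image path in this sketch are hypothetical:

```python
# Sketch of requesting a resized JPG from an IIPImage server.
# The server URL and image path are hypothetical; FIF/WID/CVT are the
# image path, width, and output-format query parameters IIPImage accepts.
import requests

IIP_BASE = "https://library.example.edu/iipsrv/iipsrv.fcgi"

def fetch_resized_jpg(image_path, width, out_file):
    resp = requests.get(
        IIP_BASE,
        params={"FIF": image_path, "WID": width, "CVT": "jpeg"},
        timeout=30,
    )
    resp.raise_for_status()
    with open(out_file, "wb") as f:
        f.write(resp.content)

# e.g., an 800px-wide JPG of page 1 of the declaration (path is made up)
fetch_resized_jpg("/images/haiti-declaration/page_001.tif", 800, "page_001_800.jpg")
```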
Our Interface: How it Works
By default, we present a document in our usual item page template that provides branding, context, and metadata. You can scroll up and down to navigate pages, use Page Up or Page Down keys, or enter a page number to jump to a page directly. There’s a slider to zoom in or out, or alternatively you can double-click to zoom in / Ctrl-double-click to zoom out. You can toggle to a grid view of all pages and adjust how many pages to view at once in the grid. There’s a really handy full-screen option, too.
It’s optimized for performance via AJAX-driven “lazy loading”: only the page of the document that you’re currently viewing has to load in your browser, and likewise only the visible part of that page image in the viewer must load (via square tiles). You can also download a complete JPG for a page at the current resolution by clicking the grey arrow.
We extended Diva.js by building a synchronized fulltext pane that displays the transcript of the current page alongside the image (and beneath it in full-screen view). That doesn’t come out-of-the-box, but Diva.js provides some useful hooks into its various functions to enable developing this sort of thing. We also slightly modified the styles.
Behind the scenes, we have pyramid TIFF images (one for each page), served up as JPGs by IIPImage server. These files comprise arrays of 256×256 JPG tiles for each available zoom level for the image. Let’s take page 1 of the declaration for example. At zoom level 0 (all the way zoomed out), there’s only one image tile: it’s under 256×256 pixels; level 1 is 4 tiles, level 2 is 12, level 3 is 48, level 4 is 176. The page image at level 5 (all the way zoomed in) includes 682 tiles (example of one), which sounds like a lot, but then again the server only has to deliver the parts that you’re currently viewing.
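Those tile counts fall directly out of the pyramid arithmetic: each zoom level halves the pixel dimensions of the level above it, and each level is carved into 256×256 tiles. Here is a small sketch that reproduces the counts for page 1; the full-resolution dimensions are approximations chosen to match, not exact values from the master file:

```python
# Tile counts per zoom level for a 256x256-tiled image pyramid.
# Full-resolution dimensions are approximate values for page 1 of the declaration.
import math

TILE = 256
full_w, full_h = 5632, 7936   # approximate full-resolution pixel dimensions
max_level = 5

for level in range(max_level + 1):
    scale = 2 ** (max_level - level)          # level 5 = full size, level 0 = smallest
    w, h = math.ceil(full_w / scale), math.ceil(full_h / scale)
    tiles = math.ceil(w / TILE) * math.ceil(h / TILE)
    print(f"level {level}: {w}x{h} px, {tiles} tiles")   # 1, 4, 12, 48, 176, 682
```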
Every item using Diva.js also needs to load a JSON stream including the dimensions for each page within the document, so we had to generate that data. If there’s a transcript present, we store it as a single HTML file, then use AJAX to dynamically pull in the part of that file that corresponds to the currently-viewed page in the document.
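Generating those measurements is straightforward. Here is a rough sketch that walks a directory of page images and writes out their dimensions; the directory path is hypothetical, and the output structure is simplified rather than Diva.js’s exact data format, which this data would feed into:

```python
# Sketch: collect per-page pixel dimensions into JSON for the viewer.
# The directory path is hypothetical and the output schema is simplified;
# Diva.js expects its own measurement format built from data like this.
import json
from pathlib import Path

from PIL import Image

pages = []
for i, tiff in enumerate(sorted(Path("masters/haiti-declaration").glob("*.tif")), 1):
    with Image.open(tiff) as im:
        pages.append({"index": i, "filename": tiff.name,
                      "width": im.width, "height": im.height})

Path("haiti-declaration-pages.json").write_text(json.dumps(pages, indent=2))
```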
Diva.js & IIPImage Limitations
It’s a good interface, and is the best document representation we’ve been able to provide to date. Yet it’s far from perfect. There are several areas that are limiting or that we want to explore more as we look to make more documents available in the future.
Out of the box, Diva.js doesn’t support page metadata, transcriptions, or search and retrieval within a document. We do display a synchronized transcript, but there’s currently no mapping between the text and the location within each page where each word appears, nor can you perform a search and discover which pages contain a given keyword. Other folks using Diva.js are working on robust applications that handle these kinds of interactions, but the degree to which they must customize the application is high. See, for example, the Salzinnes Antiphonal, a 485-page liturgical manuscript with text and music, or a prototype for the Liber Usualis, a 2,000+ page manuscript that uses optical music recognition to encode melodic fragments.
Diva.js also has discrete zooming, which can feel a little jarring when you jump between zoom levels. It’s not the smooth, continuous zoom experience that is becoming more commonplace in other viewers.
With the IIPImage server, we’ll likely re-evaluate using Pyramid TIFFs vs. JPEG2000s to see which file format works best for our digitization and publication workflow. In either case, there are several compression and caching variables to tinker with to find an ideal balance between image quality, storage space required, and system performance. We also discovered that the IIP server unfortunately strips out the images’ ICC color profiles when it delivers JPGs, so users may not be getting a true-to-form representation of the image colors we captured during digitization.
Launching our first project using Diva.js gives us a solid jumping-off point for expanding our ability to provide useful, compelling representations of our digitized documents online. We’ll assess how well this same approach would scale to other potential projects and in the meantime keep an eye on the landscape to see how things evolve. We’re better equipped now than ever to investigate alternative approaches and complementary tools for doing this work.
We’ll also engage more closely with our esteemed colleagues in the Duke Collaboratory for Classics Computing (DC3), who are at the forefront of building tools and services in support of digital scholarship. Well beyond supporting discovery and access to documents, their work enables a community of scholars to collaboratively transcribe and annotate items (an incredible–and incredibly useful–feat!). There’s a lot we’re eager to learn as we look ahead.
The Digital Production Center at the Perkins Library has a clearly stated mission to “create digital captures of unique, valuable, or compelling primary resources for the purpose of preservation, access, and publication.” Our mission statement goes on to say, “Our operating principle is to achieve consistent results of a measurable quality. We plan and perform our work in a structured and scalable way, so that our results are predictable and repeatable, and our digital collections are uniform.”
That’s a mouthful!
What it means is that the images have to be consistent, not only from image to image within a collection but also from collection to collection over time. And if that isn’t complex enough, this has to be done using many different capture devices. Each capture device has its own characteristics and records and reproduces color in its own way.
How do we produce consistent images?
There are many variables to consider when solving the puzzle of “consistent results of a measurable quality.” First, we start with the viewing environment, then move to monitor calibration and profiling, and end with capture device profiling. All of these variables play a part in producing consistent results.
Full spectrum lighting is used in the Digital Production Center to create a neutral environment for viewing the original material. Lighting that is not full spectrum often has a blue, magenta, green or yellow color shift, which we often don’t notice because our eyes are able to adjust effortlessly. In the image below you can see the difference between tungsten lighting and neutral lighting.
Our walls are also painted 18 percent gray, which is neutral, so that no color is reflected from the walls onto the original material while we compare it to the digital image.
Now that we have a neutral viewing environment, the next variable to consider is the computer monitors used to view our digitized images. We use a spectrophotometer (straight out of the Jetsons, right?) made by X-Rite to measure the color accuracy, luminance, and contrast of the monitor. With the spectrophotometer attached to the computer screen, this hardware/software combination reads the brightness (luminance), contrast, white point, and gamma of your monitor and makes adjustments for optimal viewing. This is called monitor calibration. The software then displays a series of color patches with known RGB values, which the spectrophotometer measures, recording the differences. The result is an ICC display profile. This profile is saved to your operating system and is used to translate colors from what your monitor natively produces to a more accurate color representation.
Now our environment is neutral and our monitor is calibrated and profiled. The next step in the process is to profile your capture device, whether it is a high-end digital scan back like the Phase One or BetterLight or an overhead scanner like a Zeutschel. From Epson flatbed scanners to Nikon slide scanners, all of these devices can be calibrated in the same way. With all of the auto settings on your scanner turned off, a color target is digitized on the device you wish to calibrate. The swatches on the color target have known values, similar to the series of color patches used for profiling the monitor. The digitized target is fed to the profiling software, each patch is measured and compared against its known value, and the differences are recorded. The result is an ICC device profile.
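Once a device profile exists, it can be applied downstream whenever a capture needs to be rendered in a standard color space. As a hypothetical sketch using Pillow’s ImageCms bindings (the profile name and file names are placeholders, not our production workflow), converting a capture from its device profile to sRGB might look like this:

```python
# Hypothetical sketch: convert a raw capture from its ICC device profile
# to sRGB using Pillow's ImageCms bindings. File names are placeholders.
from PIL import Image, ImageCms

device_profile = ImageCms.getOpenProfile("zeutschel_scanner.icc")  # from profiling
srgb_profile = ImageCms.createProfile("sRGB")

with Image.open("capture_0001.tif") as raw:
    # profileToProfile applies the device-to-sRGB transform (default perceptual intent)
    corrected = ImageCms.profileToProfile(raw.convert("RGB"), device_profile, srgb_profile)
    corrected.save("capture_0001_srgb.tif")
```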
Now that we have a neutral viewing environment for viewing the original material, our eyes don’t need to compensate for any color shift from the overhead lights or reflection from the walls. Our monitors are calibrated/profiled so that the digitized images display correctly and our devices are profiled so they are able to produce consistent images regardless of what brand or type of capture device we use.
During our daily workflow, we use a Gretag Macbeth ColorChecker to measure the output of the capture devices each day before we begin digitizing material, to verify that the devices are still working properly.
All of this work is done before we push the “scan” button to ensure that our results are predictable and repeatable, measurable and scalable. Amen.
I started working on the metadata for the Sidney D. Gamble photographs in January 2008, on a spreadsheet with no matching images; the nitrate negatives from the collection had just been digitized and resided in a different location. I was, however, still amazed by the richness of the content as I tried very hard to figure out the location of each picture. Half of them were so challenging that I must have guessed wrong for most of them in my struggle to meet the project deadline. It was after the digital collection was published that I started to study these images of Chinese life from more than 100 years ago more thoroughly. They have continued to amaze me as I understand more of their content and context through the various projects I’ve done, and to puzzle me as I dig deeper into their historical backgrounds. I had imagined China in those times through readings, enhanced by films early and recent, yet Gamble’s photographs help me get closer to what life really looked like and how similar or different things appeared. Recently, the hand-colored lantern slides in the collection have made me feel this even more.
Lantern slides are often hand-colored glass slides, commonly used in the first half of the twentieth century to project photographs or illustrations onto walls for better visualization. We have yet to find out whether Gamble colored these slides himself or directed the work by providing detailed descriptions of the objects. I find the colors in these images strikingly true, suggesting that the work was done by someone familiar with the scene or the culture. Whether it is a remote hillside village in a minority region of Sichuan, as shown above, or the famous Temple of Heaven in Beijing, below, the color versions are vivid and lively, as if they were taken by a recent visitor.
Gamble used these color slides in his talks introducing China to his countrymen. He included both images of Chinese scenery and images of Chinese people and their lives. The large number of images of Chinese life in the collection is a record of his social survey work in China, the earliest of its kind ever done there, as well as a reflection of his curiosity about and sympathy for Chinese people and their culture. Funerals were one of Gamble’s favorite subjects, and I have no clue whether green was the customary color for the clothes of people working at funerals; I see several images of men dressed in green doing all sorts of jobs, such as this man carrying an umbrella. The color is not offensive, but it needs to be studied.
The Lama Temple, or Yonghegong, is an imperial Tibetan Buddhist temple. Every year in early March, on a Monday, masked lamas performed their annual “devil dance”, a ritual to ward off bad spirits and disasters. I learned about this performance through Gamble’s photographs, and the color images have simply added more life. A search online for images taken today brought back photos that look much the same.
There are nearly 600 colored slides in the collection; one can imagine the reaction of the audience when Gamble projected them on the wall during his talks about the mysterious China of the Far East. With the help of a capable intern, I was able to create an inventory last fall, matching most of the slides with existing black-and-white prints in the collection. A project to digitize these lantern slides was proposed and approved quickly, the work was completed just as quickly, and a blog post by one of our digitization experts provided some interesting details. In June of this year, selected color images will appear in the traveling exhibit that Professor Guo-Juin Hong and I curated, which started in Beijing last summer and opens at the Shanghai Archives’ museum on the Bund. I believe they will fascinate the Chinese audience today as much as they did when Gamble showed them to American audiences.
Post Contributed by Luo Zhou, Chinese Studies Librarian, Duke University Libraries
Part of my job is to track our Duke Digital Collections Google Analytics data. As part of this work, I like to keep tabs on the most popular digital collections items each month. There is generally some variation among the most popular items from month to month. For example, in May a post on the New Yorker blog pointed to some motherhood-oriented ads, and our traffic to those items spiked as a result.
However, there is one item that persists as one of our most popular: the Be-Ro Home Recipes: Scones, Cakes, Pastry, Puddings. Looking back at analytics since 2010, it is the most popular item by about 2,000 hits (the book has seen 18,447 pageviews since January 1, 2010). In the six months that I’ve been studying our digital collections analytics, I consistently wonder: why this item? No really, why? Sure, all the recipes call for lard, but that cannot be the only reason.
“Researching” the cookbook (conducting a few Google searches) shows that the Be-Ro company was established in 1875 by the creator of the world’s first self-rising flour. Home Recipes was originally published as a pamphlet to promote use of the flour as early as the 1880s. Our version includes over 50 recipes, was published in the 1920s, and is the 13th edition of the cookbook.
Duke’s Home Recipes claims that baking at home with Be-Ro is more economical and inspires a better home, thanks to the baking of the woman of the house: “In ninety-nine cases out of a hundred she has a happy home, because good cooking means good food and good food means good health” (from page 2). This cookbook has a storied history to be sure, but that still doesn’t explain why our version is so popular.
I kept searching and found that there is a fervent and passionate following for the Be-Ro cookbook. Several UK cooking blog posts swoon over the book, saying their authors grew up with the recipes and first learned to bake from it. The community aspect of the cookbook jibes with our traffic, as most of the users of the item on our website come from the UK. Another factor driving traffic to our site is that Duke Digital Collections’ version of the cookbook tends to be the fourth hit on Google when you search for “Be-Ro Cookbook”.
This investigation left me with a better understanding of why this cookbook is so popular, but I’m still surprised and amused that, among all the significant holdings we have digitized and made available online, this cookbook is consistently the most visited. Are there conclusions we can take away from this? We are not going to start digitizing only cookbooks as a result of this knowledge, I can promise you that. However, analytics shows us that in addition to the more traditionally significant items online, items like this cookbook can tap into and find a strong and consistent audience. And that is data we can use to build better and more resonant digital collections.