We were chatting last week with Brian Norberg, the Digital Humanities Technology Analyst for Trinity College of Arts and Sciences, who wanted to know what makes DC3 tick. I’m not sure we’ve ever been asked that directly before, and the conversation helped crystallize for me some of the basic operating principles we’ve evolved.
- Think big. Really big. Tackle problems that are hard, maybe impossible, that other people don’t want to deal with, that may take years or decades or the rest of your life.
- Put the data first. Improving the state of our data and sharing it takes precedence over doing flashy things with it. Not that we’re averse to cool-looking stuff, but that’s not the priority.
- Build small. That sounds like it’s in direct conflict with #1, but it’s not: you tackle big problems by chipping off discrete chunks of them. The principal win with computers is that they do lots of simple things very fast. So try to exploit that.
- Don’t be technology driven. That might sound weird coming from a Digital Humanities (DH) shop, but our motivation is not “what cool thing can we do with technology X?” It’s more like: “what’s the nature of this problem? Is the best solution a variant of traditional scholarly (analog) approaches, or is it technological, or (most likely) a hybrid? What’s actually going to work? Do that.”
- Don’t have formal divisions of labor. We all have ideas, we all implement ideas. We do have different, largely complementary, skillsets and we use them. We can all initiate projects.
For me, this is a dream come true: we’re tackling things like attempting to organize and make digitally accessible a whole discipline (Greek Epigraphy); improving Latin and Greek OCR; developing best practices for digital editions; figuring out how best to model humanistic data; improving geo-services and data for the ancient world. None of these things has a defined end. We’ll definitely improve the state of things and produce tools that make it easier for others to do their own research, but the goal is to leave things better than we found them, not necessarily to “finish”.
Over the course of my career, I’ve worked in organizations with, I’d say, three models: project, service, and research. The project model is very familiar in DH: you come up with a project idea, you get grant money to do the project, you staff up for the project, you do the project, you’re done, and the project members go their separate ways (possibly leaving some poor sysadmin holding the ball, hoping it doesn’t start to tick). The service model is pretty common in libraries: you have clients who come to you with requests for consulting and/or development; you have skillsets that you try to match to those requests. The research model is what I outlined above, and is, I think, in important ways the antithesis of the other two.
Project models, by economic necessity, don’t think really big, they think scoped. They probably don’t build small, they build at whatever the scale of the project is. They also tend (because this is what funders tend to give money for) to focus on innovation rather than incremental improvements—you’ll rarely get a grant just to improve the state of a dataset, for example. Projects usually have pretty strict divisions of labor. Somebody has to be the PI. Somebody has to implement the details specified in the project plan. Somebody has to write the proposal in the first place. Those usually aren’t the same people.
Service models are often technology driven: we have some particular technology, how can we apply it to your problem? The technology is often the thing that is intended to draw customers to your shop. Service models also tend to think small, again by necessity. The constant danger in the service model is that you will drown, so you can’t take on too much at one time and you probably don’t want to experiment too much, because you might end up supporting a variety of hard-to-maintain services. They also usually don’t focus on content/data but rather on what can be done with it.
In the research model, we can experiment, because we aren’t completely deadline-driven like a project, and we can fail without recrimination, because learning what doesn’t work is important. Most crucially, all of us have this same set of priorities. The labor division explicitly isn’t “faculty member has ideas, developers implement them.” We do have divisions of labor, of course. Josh knows the Classics scholarship side better and has a terrific grasp of system modeling, but doesn’t code; Ryan focuses more on subject areas like image processing and geodata; I focus more on texts, markup, and linking stuff. But we all do R&D, support existing services, and work on projects, each using our complementary skills.
The research model is not without its downsides, of course. For example: how do you evaluate it? Projects are easy. Did you get grant money? Did you finish? Service-model operations also have fairly straightforward metrics, like how many customers were served and what levels of usage your digital services have. You can also ask clients whether they were satisfied.
The research model is going to be harder to quantify. We do have some measurable components: we provide some services, and we do (or will do) some grant-funded projects. We also produce output, including code, web services, papers, articles, and blog posts. But for the most part we focus on building pieces of ever-evolving, ever-growing resources that are never ‘finished’—resources that are harder to quantify than finished books or articles, websites or databases are. So, in some ways we look more like faculty than staff; in other ways we look more entrepreneurial. Startups take risks on building big things, hoping, of course, that they’ll be profitable. Very often the things they try don’t work, but sometimes they do, or sometimes they suggest a completely different (but better) direction. The thing is: you’ll never catch anything if you don’t put your fishing line in the water. We try to control for risk, to fish where the fishing’s good, but our basic posture is still research-driven and so to some extent risk-bearing.
There are other DH units out there that explicitly combine project, service, and research models. But more often, non-faculty research happens around the edges, unsanctioned and generally unrewarded, because there’s a feedback loop in place for recognizing faculty research, but none for staff (this is one of the things that makes being truly “alt-ac” so difficult). This division of labor is self-limiting, though, because it’s nearly impossible for any one person to cover all the bases you need to do DH research: you need a group of collaborators. And if the research activity is confined to one part of that group, I’d argue the group can’t operate at its full potential.
It takes a special place like Duke University Libraries (with help from the Andrew W. Mellon Foundation) to commit to the sort of experiment DC3 embodies, and my sense is that we may be the first DH unit where research has the highest priority throughout the organization. I sincerely hope we’re joined by others. I think this thing has legs.