By Paolo Mangiafico
No one likes to be judged, and there are plenty of reasons to be wary of using quantitative metrics to try to paint a complete picture of the value of an individual’s work. Yet things like publication and citation counts, “impact factors” of particular journals, the amount of grant dollars a researcher is bringing in, and other measures that are easy to put a number on are commonly used to gauge research activity and impact.
New methods and venues for publishing scholarship and tracking how it’s being used have kept the debate bubbling over how research impact should or shouldn’t be measured. To get a sense of some of the issues, you could start by reading a piece titled “Scholars Seek Better Metrics for Assessing Research Productivity” from the Chronicle of Higher Education last year or the Nature special section on Metrics from earlier this year, including the comments from readers at the end of some of these articles.
This post isn’t going to wade into that broad debate. Since the focus of this series of blog posts is open access, let’s look at how open access is affecting metrics that are commonly used now (specifically, citation counts) and how open access might become the basis for new ways of measuring scholarly impact.
For some years now, the Open Citation Project has been maintaining a bibliography of studies measuring the effect of open access and downloads (‘hits’) on citation impact. This bibliography has links to and summaries of studies going back about a decade, as well as rebuttals and debates about some of them. Most of the studies show that, compared to toll-access venues, making publications available via open access leads to greater impact, as measured by number of citations. In an earlier blog post on open data, I mentioned a study that showed similar effects for the data underlying the publications.
Some publishers are now providing metrics on use of and references to research on an article-by-article basis. For example, the Public Library of Science journals provide article-level metrics that include pageviews and downloads, citation counts from scholarly literature, social bookmarks, blog references, and comments, notes, and ratings on articles in the PLoS site. Clearly, no single one of these metrics can stand on its own as a pure indicator of value, but a basket of such indicators can provide a more comprehensive picture of trends in how research is being used and referenced.
Metrics like these are likely to become increasingly important as new models for scholarly publishing, including open access, become more common. In a 2007 opinion piece in the Chronicle of Higher Education titled “The New Metrics of Scholarly Authority,” Michael Jensen ruminated on how scholarly metrics and judgments about authority and value are changing in a world of information abundance, driven by new technologies and publishing modes. He argues that “authority 3.0” will be based on a variety of heuristics computed from openly available data. He concludes by saying:
“… if scholarly output is locked away behind fire walls, or on hard drives, or in print only, it risks becoming invisible to the automated Web crawlers, indexers, and authority-interpreters that are being developed. Scholarly invisibility is rarely the path to scholarly authority.”