Dueling metrics?

One of the interesting consequences of the rapid growth of open access to scholarship — a consequence that I, at least, did not see coming — has been some degree of competition, from the perspective of authors, between open access platforms.  In this short article from AALL Spectrum, James Donovan and Carol Watson address a question they have encountered, “Will an institutional repository hurt my SSRN ranking?”  At Duke we have been asked a similar question in regard to RePEc, the disciplinary repository for economics.  Considering these questions gives us interesting insight into the maturing movement toward open access scholarship.

One way to deal with this concern, which we have undertaken in regard to RePEc, is to work with the disciplinary repository to feed article statistics from the institutional repository into the rankings produced by the disciplinary one.  That method provides a more comprehensive and accurate ranking of the articles.  And such article-level rankings are, of course, a far more useful measure of impact than impact factors, which apply only to journals, not to individual articles, can ever be.

I do not know whether it is possible to connect institutional statistics to SSRN, but Donovan and Watson describe a different approach to addressing this question.  They begin by pointing out an assumption behind the question: that article readership is a zero-sum proposition — that there is a fixed number of readers for any given scholarly article, so that new means of access will simply divide up that readership rather than generate new “eyeballs on the article.”  This is the same assumption made by publishers who insist that self-archiving, or even national funder policies, imperil their revenue, and by those who argue that libraries will never spend subscription dollars on works that will be made available freely.  Donovan and Watson begin the process of showing that this assumption is false.

In their article the authors report two different research methods they employed to study the question of whether one repository siphons readers away from another repository, or whether, instead, readership grows overall when an article is available from multiple OA sources.  Both methods lead them to the same conclusion: multiple outlets produce additional readers, so the sensible course for an author who wants her work to have maximum impact is, as they say, to “use both!”  Far from harming the ranking in one database, availability of an article in a second repository appears to increase substantially the overall number of downloads.

I like this article for two particular reasons.  The first is that it attempts to find solid data on which to base the discussion.  Instead of mere assertions of “obvious” truths in the open access debate, many of which are based on that zero-sum assumption, Donovan and Watson attempt to move the discussion to real evidence that actually places that assumption in some doubt.  As we continue to explore business models and look for dissemination options that more fully serve the needs of scholarly authors, basing our discussions on real data would be a refreshing trend.

The second reason I like this article is that it appears to offer empirical evidence, beyond the many anecdotes that we have collected over the years, of the role of “unexpected readers” in increasing the reach of scholarly research.  The zero-sum assumption gives rise to the presumption that the current system works in an acceptable way merely because the people I expect to see my work can see it.  But open access offers the possibility of discovering a myriad of readers who are not expected, by either publisher or author.  If we take seriously the idea that academic research is undertaken, in the end, for the good of society, these are precisely the readers we would want to see find our scholarship.  And to rule them out on the basis of an unproven assumption would be to sell ourselves short as scholars.