How should we understand the value of academic publications? That was the question addressed at the ALA Annual Conference last month during the SPARC/ACRL Forum. The forum is the highlight of each ALA conference for me because it always features a timely topic and really smart speakers; this year was no exception.
One useful part of this conversation was a distinction drawn between different types of value that can be assigned to academic publications. There is, for example, the value of risk capital, where a publication is valued because someone has been willing to invest a significant amount of money, or time, in its production. Seeing the value of academic publications in this light really depends on clinging to the scarcity model that was a technological necessity during the age of print, but which is increasingly irrelevant. Nevertheless, some of the irrational opposition we see these days toward open access publications seems to be based on a myopic approach that can only recognize this risk value; because online publication can be done more inexpensively, at both the production and the consumption end, and therefore does not involve the risk of a large capital investment, it cannot be as good. Because the economic barrier to entry has been lowered, there is a kind of “they’ll let anyone in here” elitism in this reaction.
Another kind of value that was discussed is the cultural value that is supposedly infused into publications by peer-review. In essence, peer-review is used as a way to create a different, artificial type of scarcity — amongst all the material available in the digital age, peer-review separates and distinguishes some as having a higher cultural value.
Of course, there is another way to approach this winnowing of valuable material from the blooming, buzzing confusion; one could look at how specific scholarship has been received by readers. That is, one could look at the value created by attention. We are especially familiar with attention value in the age of digital consumerism because we pay attention to Amazon sales figures, we seek recommendations through “purchased together” notes, and we look at consumer reviews before booking a hotel, or a cruise, or a restaurant. Some will argue that these parallels show that we cannot trust attention value; it is only good for inconsequential decisions, the argument goes. But figuring out how to use attention as a means to make sound evaluations of scholarship — better evaluations than those we currently rely on — is the focus of the movement we call “alt-metrics.”
Before we discuss attention value in more detail, however, we need to acknowledge another unfortunate reminder that the cultural value created by peer-review may be even more suspect and unreliable than we thought. Last week we saw a troubling incident that provokes fundamental doubts about peer-review, and about how we value scholarly publications, when Sage Publications announced the retraction of sixty articles because of a “peer-review ring.” Apparently an author used fake e-mail identities, and perhaps some cronies, to review his own articles and to cite them, thus creating an artificial and false sense of those articles’ value. Sage has not made the details public, so it is hard to know exactly what happened, but as this article points out, the academic world needs to know — deserves to know — how this happened. The fundamental problem this incident raises is the suggestion that an author was able to select his own peer-reviewers and to direct the peer-review requests to e-mail addresses he himself had created, so that the reviewers were all straw men. Although all the articles were from one journal, the real problem here is that the system of peer-review apparently is simply not what we have been told it is, and does not, in fact, justify the value we are encouraged to place on it.
Sage journals are not inexpensive. In fact, the recent study of “big deal” journal pricing by Theodore Bergstrom and colleagues (subscription required) notes that Sage journal prices, when calculated per citation (an effort to get at value rather than just looking at price), are ten times higher than those of journals produced by non-profits, and substantially higher even than Elsevier prices. A colleague recently referred to Sage journals in my hearing as “insanely expensive.” So it is legitimate to ask whether we are getting value for all that money. One way high journal prices are often justified, now that printing and shipping costs are mostly off the table, is by the expertise required at publishing houses to manage the peer-review system. But this scandal at the Journal of Vibration and Control raises the real possibility that Sage actually uses a kind of DIY system for peer-review that is easily gamed and involves little intervention from the publisher. How else could this have happened? So we are clearly justified in thinking that the value peer-review creates for consumers and readers is suspect, and that attention value is quite likely to be a better measure.
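To make the per-citation comparison concrete, here is a minimal sketch of how such a metric can be computed. The prices and citation counts below are invented placeholders chosen only to illustrate a ten-to-one disparity; they are not figures from the Bergstrom study.

```python
# Minimal sketch of a price-per-citation metric like the one Bergstrom and
# colleagues use to compare journals. All numbers below are invented
# placeholders, NOT data from the actual study.

def price_per_citation(annual_price: float, citations: int) -> float:
    """Cost per citation: a rough proxy for value received per dollar."""
    return annual_price / citations

# Hypothetical journals -- numbers chosen only to illustrate the comparison.
journals = {
    "commercial_title": {"price": 9000.0, "citations": 600},
    "nonprofit_title": {"price": 1200.0, "citations": 800},
}

for name, j in journals.items():
    ppc = price_per_citation(j["price"], j["citations"])
    print(f"{name}: ${ppc:.2f} per citation")

# With these made-up numbers the commercial title costs ten times as much
# per citation ($15.00 vs. $1.50), mirroring the scale of the disparity
# the study reports for Sage relative to non-profit publishers.
```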
Attention can be measured in many ways. The traditional impact factor is one attempt to analyze attention, although it only looks at the journal level, measures only a very narrow type of attention, and tells us nothing about specific articles. Other kinds of metrics, those we call “alt-metrics” but ought simply to call metrics, can give us a more granular, and hence more accurate, way to assess the value of individual academic articles. Of course, the traditional publication system inhibits the use of these metrics, keeping many statistics proprietary and preventing cross-platform measurement. Given the Sage scandal, it is easy to see why such publishers might be afraid of article-level measures of attention. The simple fact is that the ability to evaluate the quality of academic publications in a trustworthy and meaningful way depends on open access, and it relies on various forms of metrics — views, downloads, citations, etc. — that assess attention.
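As a rough illustration of what article-level measurement could look like, here is a minimal sketch that combines several forms of attention into a single score. The data class, field names, and weights are all assumptions invented for this example; they do not correspond to any real altmetrics service or scoring formula.

```python
# A minimal sketch of article-level attention metrics, assuming open access
# to usage data. The class, field names, and weights are illustrative
# assumptions, not any real altmetrics API or formula.

from dataclasses import dataclass

@dataclass
class ArticleMetrics:
    views: int
    downloads: int
    citations: int

def attention_score(m: ArticleMetrics,
                    w_views: float = 0.1,
                    w_downloads: float = 0.5,
                    w_citations: float = 3.0) -> float:
    """Combine several forms of attention into one rough score.

    The weights are arbitrary stand-ins; any real metric would need to
    justify how it balances casual views against scholarly citations.
    """
    return (w_views * m.views
            + w_downloads * m.downloads
            + w_citations * m.citations)

article = ArticleMetrics(views=1200, downloads=340, citations=12)
print(f"attention score: {attention_score(article):.1f}")
```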
But the most important message, in my opinion, to come out of the SPARC/ACRL forum is that in an open access environment we can do better than just measuring attention. Attention measures are far better than what we have had in the past and what we are still offered by toll publishers. But in an open environment we can strive to measure intention as well as attention. That is, we can look at why an article is getting attention and how it is being used. We can potentially distinguish productive uses and substantive evaluations from negative or empty comments. The goal, in an open access environment, is open and continuous review that comes from both colleagues and peers. This was an exciting prospect when Kristen Ratan of PLoS raised it during the forum, suggesting that we should develop metrics, similar to the author-to-author comments possible on PubMed Commons, that can map how users think about the scholarly works they encounter. But after the Sage debacle last week, it is easier to see that efforts to move towards an environment where such open and continuous review is possible are not just desirable; they are vital and urgent.
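To suggest what an intention-aware metric might involve, here is a deliberately crude sketch that separates comments engaging with a work’s substance from everything else. The categories and the keyword heuristic are invented for illustration; a real system of open, continuous review would need far richer signals than keyword matching.

```python
# A speculative sketch of the "intention" layer described above: tagging
# reader comments by type so that substantive engagement can be separated
# from empty or hostile noise. The categories and keyword heuristic are
# invented for illustration only.

SUBSTANTIVE_MARKERS = ("method", "data", "replicat", "figure", "citation")

def classify_comment(text: str) -> str:
    """Crude heuristic: comments that engage with the work's substance
    mention its methods, data, or figures; everything else is 'other'."""
    lowered = text.lower()
    if any(marker in lowered for marker in SUBSTANTIVE_MARKERS):
        return "substantive"
    return "other"

comments = [
    "The method in section 3 seems to assume independent samples.",
    "Great paper!",
    "We failed to replicate Figure 2 with the published data.",
]
for c in comments:
    print(classify_comment(c), "->", c)
```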