How should we understand the value of academic publications? That was the question addressed at the ALA Annual Conference last month during the SPARC/ACRL Forum. The forum is the highlight of each ALA conference for me because it always features a timely topic and really smart speakers; this year was no exception.
One useful part of this conversation was a distinction drawn between different types of value that can be assigned to academic publications. There is, for example, the value of risk capital, where a publication is valued because someone has been willing to invest a significant amount of money, or time, in its production. Seeing the value of academic publications in this light really depends on clinging to the scarcity model that was a technological necessity during the age of print, but which is increasingly irrelevant. Nevertheless, some of the irrational opposition we see these days towards open access publications seems to be based on a myopic approach that can only recognize this risk value; because online publication can be done more inexpensively, at both production and consumption, and therefore does not involve the risk of a large capital investment, it cannot be as good. Because the economic barrier to entry has been lowered, there is a kind of “they’ll let anyone in here” elitism in this reaction.
Another kind of value that was discussed is the cultural value that is supposedly infused into publications by peer-review. In essence, peer-review is used as a way to create a different, artificial type of scarcity — amongst all the material available in the digital age, peer-review separates and distinguishes some as having a higher cultural value.
Of course, there is another way to approach this kind of winnowing valuable material from the blooming, buzzing confusion; one could look at how specific scholarship has been received by readers. That is, one could look at the value created by attention. We are especially familiar with attention value in the age of digital consumerism because we pay attention to Amazon sales figures, we seek recommendations through “purchased together” notes, and we look at consumer reviews before booking a hotel, or a cruise, or a restaurant. Some will argue that these parallels show that we cannot trust attention value; it is only good for inconsequential decisions, the argument goes. But figuring out how to use attention as a means to make sound evaluations of scholarship — better evaluations than we are currently relying on — is the focus of the movement we call “alt-metrics.”
Before we discuss attention value in more detail, however, we need to acknowledge an unfortunate reminder that the cultural value created by peer-review may be even more suspect and unreliable than we thought. Last week we saw a troubling incident that provokes fundamental doubts about peer-review, and about how we value scholarly publications, when Sage Publishing announced the retraction of sixty articles due to a “peer-review ring.” Apparently a named author used fake e-mail identities, and maybe some cronies, in order to review his own articles and to cite them, thus creating an artificial and false sense of the value of these articles. Sage has not made public the details, so it is hard to know exactly what happened, but as this article points out, the academic world needs to know — deserves to know — how this happened. The fundamental problem this incident raises is the suggestion that an author was able to select his own peer-reviewers and to direct the peer-review requests to e-mail addresses he himself had created, so that the reviewers were all straw men. Although all the articles were from one journal, the real problem here is that the system for peer-review apparently simply is not what we have been told it is, and does not, in fact, justify the value we are encouraged to place on it.
Sage journals are not inexpensive. In fact, the recent study of “big deal” journal pricing by Theodore Bergstrom and colleagues (subscription required) notes that Sage journal prices, when calculated per citation (an effort to get at value instead of just looking at price), are ten times higher than those for journals produced by non-profits, and substantially higher even than Elsevier prices. A colleague recently referred to Sage journals in my hearing as “insanely expensive.” So it is a legitimate question to ask whether we are getting value for all that money. One way high journal prices are often justified, now that printing and shipping costs are mostly off the table, is based on the expertise required at publishing houses to manage the peer-review system. But this scandal at the Journal of Vibration and Control raises the real possibility that Sage actually uses a kind of DIY system for peer-review that is easily gamed and involves little intervention from the publisher. How else could this have happened? So we are clearly justified in thinking that the value peer-review creates for consumers and readers is suspect, and that attention value is quite likely to be a better measure.
Attention can be measured in many ways. The traditional impact factor is one attempt to analyze attention, although it only looks at the journal level, measures only a very narrow type of attention, and tells us nothing about specific articles. Other kinds of metrics, those we call “alt-metrics” but ought to simply call metrics, are able to give us a more granular, and hence more accurate, way to evaluate the value of academic articles. Of course, the traditional publication system inhibits the use of these metrics, keeping many statistics proprietary and preventing cross-platform measurements. Given the Sage scandal, it is easy to see why such publishers might be afraid of article-level measures of attention. The simple fact is that the ability to evaluate the quality of academic publications in a trustworthy and meaningful way depends on open access, and it relies on various forms of metrics — views, downloads, citations, etc. — that assess attention.
But the most important message, in my opinion, that came out of the SPARC/ACRL forum is that in an open access environment we can do better than just measuring attention. Attention measures are far better than what we have had in the past and what we are still offered by toll publishers. But in an open environment we can strive to measure intention as well as attention. That is, we can look at why an article is getting attention and how it is being used. We can potentially distinguish productive uses and substantive evaluations from negative or empty comments. The goal, in an open access environment, is open and continuous review that comes from both colleagues and peers. This was an exciting prospect when it was raised by Kristen Ratan of PLoS during the forum, where she suggested that we should develop metrics similar to the author-to-author comments possible on PubMed Commons that can map how users think about the scholarly works they encounter. But, after the Sage Publishing debacle last week, it is easier to see that efforts to move towards an environment where such open and continuous review is possible are not just desirable, they are vital and very urgent.
Because I am on vacation this week and have very intermittent Internet access, I am hardly the first to announce that the Second Circuit Court of Appeals affirmed the lower court decision (mostly) in the Authors Guild v. HathiTrust lawsuit. I am a bit paranoid about major decisions coming down on days when I am out of touch, but that is another matter. The important point is that the decision is another important win for libraries and fair use, brought to us by the foolishly litigious Authors Guild. It is the first of three major appeals in fair use cases that academic libraries should be watching carefully, and it may help cause a domino effect in those other two (the Georgia State and Google Books cases).
This potential for impact on decisions currently being written by other judges is increased by the fact that the Second Circuit, in discussing transformation as a major element in fair use, deliberately cited precedents not only from its own previous cases, but also from the Ninth Circuit and two other Circuit Courts of Appeal. The judges seem to be deliberately rejecting the idea that the circuits are split about transformative fair use.
This decision is very good news for libraries, and the ARL Public Policy Notes description of the decision is well worth reading. But for all its positives, it has to be admitted that there are some oddities in this decision.
Basically, the Court did three different things in this decision:
- It affirmed the lower court ruling that the Authors Guild did not have standing – the right to bring the lawsuit – on behalf of its members. Another reminder of the oft-repeated rule that only a rights holder may sue to defend those rights, and associations that claim to represent rights holders but do not own any rights are not proper plaintiffs. A simple lesson the Authors Guild declines to learn.
- The court also affirmed that mass digitization for the purpose of creating a searchable index of full-text materials, as well as to provide access to those materials for persons with disabilities, is fair use. There is a lot of language in this opinion that reinforces the ARL Code of Best Practices for Fair Use in Academic Libraries.
- Finally, the judges remanded the case back to the lower court in regard to its opinion about fair use for preservation. This is one of the oddities in the decision, so let’s address that one first.
The oddity about this remand is that it does not actually question the conclusion that digitization for preservation can be fair use. Instead, the Court sent this portion of the case back to the lower court to decide whether, once it was determined that the AG lacked standing, there was any plaintiff remaining in the case who was at any real risk of having a preservation copy of their book released by HathiTrust while copies were still commercially available. In short, the Court of Appeals suggested that any ruling about fair use might have been premature because there was no plaintiff in a legally recognizable position to raise the challenge. It is still entirely possible that, if such a plaintiff is found in the remaining group of named authors, fair use could nevertheless be affirmed. And, because of the rest of the ruling, it is hard to see what difference even a ruling against fair use for preservation would make to the actual practice of HathiTrust. So this was really a technicality, and quite strange.
By the way, in regard to the key argument raised by the Authors Guild that the library-specific exception in section 108 precludes libraries from relying on fair use, the court paid almost no attention. It dismissed this silly argument in a footnote (footnote 4 on page 13). This was a losing argument from the start, and the reliance placed on it by the AG shows just how out of touch they are in their approach to copyright.
I think three points are important about the fair use decision favoring HathiTrust in this case (the factor-by-factor analysis is handled well in the ARL post).
First, the Second Circuit accepted the same broad approach to the issue of transformation as has become common in other decisions. It is not just actual changes to the original work that can support a finding of transformation, but a “different purpose… new expression, meaning or message.” And, as I said, the Court appealed to a broad consensus across the country in defining transformation this way.
Second, the Second Circuit held that the lower court was wrong to find that digitization for the purpose of facilitating access for persons with visual or print disabilities was transformative, but found that it was fair use nevertheless. This is important, because in the Georgia State appeal the plaintiffs are arguing that because Judge Evans found that copying for electronic reserves was not transformative, she was in error to still find fair use. But in the HathiTrust case the Second Circuit recognizes what is there for all who read Supreme Court opinions to see, that when a use is transformative it is very likely to be fair use, but when it is not transformative, it can still be fair use if a careful analysis of the factors indicates that conclusion. That is what the Second Circuit finds in regard to HathiTrust and its copies for the disabled, and it is what Judge Evans found in GSU. Both were correct decisions in keeping with the clear precedent from the Supreme Court.
Finally, there is the oddity of the Second Circuit panel’s treatment of the fourth fair use factor when it is analyzing the indexing function of HathiTrust. First, the appellate panel calls the fourth factor the most important consideration, and cites the case of Harper & Row v. The Nation for that proposition. But the Supreme Court really renounced that position 20 years ago in the “Oh Pretty Woman” case, so this is the first part of the oddity. The Second Circuit then goes on to define the idea of market harm very narrowly, saying that the only harm to a market that is recognized for the purpose of the fourth fair use factor is when “the secondary use serves as a substitute for the original work.” This seems to be how the court aligns itself with the ruling in “Pretty Woman,” but it is a strange way to get there. The effect of this proposition is to rule out consideration of almost all licensing markets when looking at the fourth factor. This is a conclusion that must be causing serious heartburn in the publishing community. While the Authors Guild continues to make fair use easier and more inclusive with their absurd litigation campaign, they cannot be winning themselves many friends amongst rights holders.
The bottom line is that this decision is very good for libraries and others who depend on fair use. It adds another precedent and some additional bits of analysis to our claims of fair use. But we should recognize that it grows out of what was a very dumb lawsuit to begin with. As is so often the case, we should be emboldened by this ruling, but not too much. The best protection the library community has against aggressive litigation is still, as it always has been, careful and responsible reflection. In that context, fair use is an increasingly safe option for us.