Over the holidays I was contacted by a writer for Library Journal asking me what I thought about a study by Phil Davis, which was commissioned and released by the Association of American Publishers, that analyzed the “article half-life” for journals in a variety of disciplines and reported on the wide variation in that metric. The main finding is that this idea of half-life — the point at which an article has received half of its lifetime downloads — varies a great deal from discipline to discipline. The writer asked me what I thought about the study, and about a blog post on the Scholarly Kitchen in which David Crotty argues that this study shows that the experience of the NIH with article embargoes — that public access after a one-year embargo does not harm journal subscription — cannot be generalized because the different disciplines vary so much. I sent some comments, and the article in LJ came out early last week.
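Since the metric was new to me, it may help to make the definition concrete. Here is a minimal sketch, in Python and with invented numbers, of how an “article half-life” might be computed from an article’s monthly download counts; it is only an illustration of the definition above, not a description of how Davis actually calculated his figures.

```python
# Illustrative sketch only: neither the data nor the method here come from
# the Davis study. Given an article's monthly download counts, find the
# month by which the article has received half of its lifetime downloads.

def article_half_life(monthly_downloads):
    """Return the month (1-indexed) in which cumulative downloads
    first reach half of the article's total downloads."""
    total = sum(monthly_downloads)
    cumulative = 0
    for month, count in enumerate(monthly_downloads, start=1):
        cumulative += count
        if cumulative >= total / 2:
            return month
    return len(monthly_downloads)

# Hypothetical example: usage concentrated in the first months yields a
# short half-life, the pattern reported for health science journals.
print(article_half_life([120, 60, 30, 20, 10, 10]))  # -> 2
```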
Since this exchange I have learned that the Davis study is being presented to legislators to prove the point Crotty makes — that public access policies should have long embargoes on them to protect journal subscriptions. It is worth noting that Davis does not actually make that claim, but his study is being used to support that argument in the ongoing debate over implementing the White House public access directive. That makes it all the more important, in my opinion, to be clear about what this study really does tell us and to recognize a bad argument when we see it.
Here is my original reply to the LJ writer, which is based on the fact that this metric, “article half-life,” is entirely new to me and that its relevance is completely unproved. It certainly does not, in my opinion, support the very different claim that short embargoes on public access will lead to journal subscription cancellations:
I have to preface my comments by saying that I was only vaguely aware of Davis’ study before you pointed it out. So my comments are based on only a very short acquaintance.
I have no reason to question Davis’ data or his results. My question is about why the particular focus on the half-life of article downloads was chosen in the first place, and my issue is with the attempt to connect that unusual metric with the policy debate about public access policies and embargoes.
As far as I can tell, article half-life tells us something about usage, but very little about the question of embargoes. The discussion of how long an embargo should be imposed on public access is supposed to focus on preventing subscription cancellations. What I do not see is any connection between this notion of article usage half-life and journal cancellations. It is a big leap from saying that a journal retains some level of usefulness for X number of years to saying that an embargo shorter than X will lead to cancelled subscriptions, yet I think that is the argument being made.
Here are two paragraphs from Crotty’s Scholarly Kitchen post:
[snip]”As I understand it, the OSTP set a 12-month embargo as the default, based on the experience seen with the NIH and PubMed Central. The NIH has long had a public access policy with a 12-month embargo, and to date, no publisher has presented concrete evidence that this has resulted in lost subscriptions. With this singular piece of evidence, it made sense for the OSTP to start with a known quantity and work from there.
The new study, however, suggests that the NIH experience may have been a poor choice for a starting point. Clearly the evidence shows that by far, Health Science journals have the shortest article half-lives. The material being deposited in PubMed Central is, therefore, an outlier population, and many (sic) not set an appropriate standard for other fields.”[end quotation]
What immediately strikes me is the unacknowledged transition between the two paragraphs. In the first he is talking about lost subscriptions, which makes sense. But in the second he is talking about this notion of download half-life. What neither Davis nor Crotty gives us, however, is the connection between these half-life numbers and lost subscriptions. In other words, why should policy decisions about embargoes be made based on this half-life number? At best, the connection between so-called article half-life and cancelled subscriptions rests on a highly speculative argument that has not even been made, much less proved. At worst, this metric is simply irrelevant to the debate.
My overall impression is that the publishing industry is unable to show evidence of lost subscriptions based on the NIH public access policy (which Crotty acknowledges), so they are trying to introduce this new concept to cloud the discussion and make it look like there is a threat to their businesses that still cannot be documented. I think it is just not the right data point on which to base the discussion about public access embargoes.
A second point, of course, is that even if it were proved that there would be some economic loss to publishers with 6 or 12 month embargoes, that does not complete the policy discussion. The government does not support scientific research in order to prop up private business models. And the public is entitled to make a decision about return on its investment that considers the impact on these private corporate stakeholders but is not dictated by their interests. It may still be good policy to insist on 6 month embargoes even if we had evidence that this would have a negative economic impact on [some] publishers. Government agencies that fund research simply are not obligated to protect the existing monopoly on the dissemination of scholarship at the expense of the public interest.
By the way, Crotty is wrong, in the passage quoted above, to suggest that the NIH experience is the only evidence that short embargoes do not harm subscriptions. The European Commission conducted a five-year pilot study testing embargoes across disciplines and concluded that maximum periods of six months in the life sciences and twelve months for other disciplines were the correct embargoes.
Beyond what I said in the long quotation above, I want to make two additional points.
First, it bears repeating that Davis’ study was commissioned by the publishing industry and released without any apparent peer review. Such review might have pointed out that the actual relevance of this article half-life number is never explained or defended. But the publishing industry is getting into the habit of attacking open access with “data” that has not been subjected to the very process that, they tell us, is at the core of the value that they, the publishers, add to scholarship.
The second point is that I have never heard of any librarian who used article half-life to make collecting or cancellation decisions. Indeed, I had never even heard of the idea until the Davis study was released, and neither had the colleagues I asked. We would not have known how to determine this number even if we had wanted to. It is not among the metrics, as far as I can determine, that publishers offer to us when we buy their packages and platforms. So it appears to be a data point cooked up because of what the publishing industry hoped it would show, which is now being presented to policy-makers, quite erroneously, as if it were relevant to the discussion of public access and embargoes. Crotty says in his post that rational policy should be evidence-based, and that is true. But we should not accept anything that is presented as evidence just because it looks like data; some connection to the topic at hand must be proved, or our decision-making has not been improved one bit.
We cannot say it too often — library support for public access policies is rooted in our commitment to serve the best interests of scholarship and to see to it that all the folks who need or could use the fruits of scholarly research, especially taxpayer-funded research, are able to access them. We are not supporting these policies in order to cancel library subscriptions, and the many efforts in the publishing industry to distract from the access issue and to claim, on the basis of no evidence or irrelevant data, that their business models are imperiled are just so many red herrings.
NB — After this was written I discovered the post on the same topic by Peter Suber from Harvard, which comes to many of the same conclusions and elaborates on the data uncovered by the European Commission and the UK Research Councils that are much more directly relevant to this issue. You can read his comments here.
Hi Kevin,
Can you provide more information about the European Commission study you mention? I see nothing specifically about it in the pdf you’ve linked above. More data would be greatly welcome.
The EC pilot study worked on the assumption of embargo periods of 6 and 12 months, but it did not test these. Likewise the PEER Project (running at the same time as the EC study) indicated that ‘when a journal can choose an embargo period there were no adverse effects’, but it too did not show that 6-month and 12-month embargo periods were necessarily appropriate, nor that they were the preferred route to be adopted.