Lots of news stories and emails flying around about open access in the past few weeks, and as I tried to think what theme might bring them together, I realized that I wanted to talk about three things that open access is not. Here they are:
First, open access is not more prone to abuse than other types of publishing. We hear a lot about “predatory” open access journals, and recently we have also heard a lot about fraud and retracted articles from traditional journals. We need to connect the dots and realize that both systems can be abused, just as all systems devised by human agents can be.
Consider, for example, this story from the Chronicle of Higher Education about a researcher who apparently faked nearly 200 scientific studies. The journals he published in were not top-of-the-line, but they were respectable, traditional subscription-based journals that libraries all over the world pay for. The principal journal mentioned in the article, Anaesthesia, is published by Wiley-Blackwell and is part of the journal package that my library buys. Were we the victims of a predatory subscription journal, or just of an increasingly slipshod system that often fails to live up to the claims made for top-notch editorial work? Either way, the difference between the failures at Anaesthesia and those for which OA journals are sometimes blamed does not seem all that significant.
Indeed, reform of the system by which scholars are evaluated and rewarded is exactly the recommendation of this New York Times article about the rise in scientific retractions, and that accords nicely with some of the changes OA journals can facilitate. After detailing the really alarming rise in retractions, the article quotes suggestions that one way to combat fraud is to stop evaluating scholars simply on the number of papers they publish and the reputations of the journals in which those papers appear. The experts quoted do not directly address the question of why these journals, with all of their professional editorial staffs, fail to catch the fraud in the first place, even though fraud is increasing at a faster rate than the number of scientific articles. But their recommendations point in the same direction the OA movement has been going for a number of years — away from a focus on impact factors, which rate only journals, and toward a more flexible, article-based set of metrics that actually relates to the specific article and to the nature of its impact. More about this in a minute.
The second thing that open access is not is just one thing. Recently I have seen a lot of debate about what is and is not open access. Much of this debate has centered on the Finch Report in the UK, which recommended a rapid transition to open access publishing of research results, but put a heavy emphasis on gold OA, and specifically on that subset of gold OA in which publishers are supported by article processing charges paid in advance of publication. You can see one example of the debate this has caused here.
Note that I said that charging article processing fees is a subset of gold open access. Much gold OA happens without such fees. Some fully OA journals are simply supported by sponsoring organizations, whether they are published by scholarly societies, by libraries (Duke’s are here) or by major funders such as those that support eLife. Also, the new journal PeerJ is trying a different experiment in gold open access, financed by memberships. Finally, the recent announcement about the SCOAP3 effort to flip the financing for all of the prominent journals in high-energy physics shows that radical new experiments, somewhat different from those on which the Finch Report was focused, can succeed.
The diversity of these forays into open access is important, and in my opinion there will be lots of different styles of open access for some time to come. After all, the most effective effort at improving access to funded research in the US so far, the public access policy of the National Institutes of Health, relies at least in part on green open access, and never depends upon or requires the payment of article processing charges. One of the reasons I especially like this new tool from SPARC, called “How Open Is It,” is that it allows one to evaluate openness on a variety of factors, and seems to recognize that openness is an ambition and a process, not one specific definition.
Failing to keep the diversity of open access efforts in mind leads to some unfortunate conclusions. In a recent statement the American Historical Association “voiced concern” that open access would not work for the humanities and social sciences the way it has for the natural sciences. This is a common complaint, but the myopia on which it is based is especially obvious in the AHA statement, which cites the Finch Report and is focused entirely on “author pays” models of open access. Nowhere does the AHA statement consider that green open access would produce significant benefits for historians without the difficulties over which the AHA is wringing its collective hands. Perhaps this failure to see the whole picture is part of the reason that the AHA’s own flagship journal, the American Historical Review, does not facilitate author self-archiving, allowing only pre-print versions of articles to be made accessible — an option that is almost never acceptable for historians. If the AHA really wants to keep up with the inevitable future, it needs first to change the policies at the AHR. The results, I predict, would help facilitate the transition to other models for history publishing.
The third thing that open access is not is just a business model. In all the debates about which form of OA is best and how each form can be financed, we can lose sight of the fact that more is at stake here than how we divide up a pot of money. Open access is also a statement about the values of scholarship: an attempt to introduce more transparency into the process of research and to encourage greater participation in its creation, financing, and evaluation. Which gets me back to better metrics for assessing the quality and impact of scholarship. The movement called AltMetrics is one of the most exciting things about open access; it is a chance to use new tools to study the impact of specific articles at a more granular level, yet across a much wider field.
At Duke, we are very excited to have one of the pioneers in the move toward AltMetrics, Jason Priem, who co-founded the ImpactStory project, coming to speak in our Library for Open Access week. Jason will talk on October 22 on “Altmetrics and the decoupled journal: an endgame for Open Access.” His talk is described here. Jason’s lecture is open to the public, and we hope many people — anyone with an interest in where the future of scholarship is headed and how we can break out of the cumbersome and unreliable system of evaluation that has accreted over the years — will attend.