In many mystery novels, there is a moment when someone makes an attempt to frighten or even kill the detective.  Invariably, the detective reacts by deciding that the threat is actually a good thing, because it means that he or she is getting close to the truth and making someone nervous.  In a sense, the article in Science by John Bohannon reporting on a “sting” operation carried out against a small subset of open access journals may be such a moment for the OA movement.  Clearly the publishers of Science are getting nervous, when they present such an obviously flawed report that was clearly designed to find what it did and to exclude the possibility of any other result.  But beyond that, we need to recognize that this failed attempt on the life of open access does point us toward a larger truth.

A good place to start is with the title of Bohannon’s article.  It is not, after all, “why open access is bad,” but rather “Who’s afraid of peer-review?”  Putting aside the irony that Bohannon’s own article was, apparently, never subjected to peer-review (because it is presented as “news” rather than research), this is a real question that we need to consider.  What does it mean for a journal to be peer-reviewed and how much confidence should it give us in articles we find in any specific title?

In the opening paragraphs of his article, Bohannon focuses on the Journal of Natural Pharmaceuticals as his “bad boy” example that accepted the bogus paper he concocted.  He quotes a mea culpa from the managing editor that includes a promise to shut down the journal by the end of the year.  But I want to think about the Journal of Natural Pharmaceuticals and about Science itself for a little bit.

I was a bit surprised, perhaps naively, to discover that the Journal of Natural Pharmaceuticals is indexed in two major discovery databases used by many libraries around the world, Academic OneFile from Gale/Cengage and Academic Search Complete from EBSCO.  These vendors, of course, have a strong economic incentive to include as much as possible, regardless of quality, because they market their products based on the number of titles indexed and percentage of full-text available.  Open access journals are great for these vendors because they can get lots of added full-text at no cost.  But they do help us sort the wheat from the chaff by letting us limit our searches to the “good stuff,” don’t they?  Maybe we should not be too sanguine about that.

I picked an article published last year in the Journal of Natural Pharmaceuticals and searched on one of its key terms, after limiting my search in both databases to only scholarly (peer reviewed) publications.  The article I selected from this apparently “predatory” journal was returned in both searches, since the journal identifies itself as peer-reviewed.  This should not surprise us, because the indexing relies on how the journal represents itself, not on any independent evaluation of specific articles.  Indeed, I am quite confident that once this latest issue of Science is indexed in these databases, a search on “peer review” limited to scholarly articles will return Bohannon’s paper, even though it was listed as “news,” not subject to peer-review, and reports on a study that employed highly questionable methods.

Librarians teach students to use that ability to limit searches to scholarly results in order to help them select the best stuff for their own research.  But in reality it probably doesn’t do that.  All it tells us is whether or not the journal itself claims that it employs a peer-review process; it cannot tell us which articles actually were subjected to that process or how rigorous it really is.  From the perspective of a student searching Academic OneFile, articles from Science and articles from the Journal of Natural Pharmaceuticals stand on equal footing.

Of course, it is perfectly possible that there are good, solid research articles in the Journal of Natural Pharmaceuticals.  These indexes list dozens of articles published over the last four years, written by academic researchers from universities in Africa, India, Australia and Japan.  Presumably these are real people, reporting real research, who decided that this journal was an appropriate place to publish their work.  And we simply do not know what level of peer-review these articles received.  So the question remains — should we tell our students that they can rely on these articles?  If not, how do we distinguish good peer-review from that which is sloppy or non-existent when the indexes we subscribe to do not?

The problem here is not with our indexes, nor is it with open access journals.  The problem is what we think peer-review can accomplish.  In a sense, saying a journal is peer-reviewed is rather like citing an impact factor.  At best, neither one actually tells us anything much about the quality of any specific articles, and at worst, both are more about marketing than scholarship.

The peer-review process is important, especially to our faculty authors.  It can be very helpful, when it is done well, because its goal is to assist them in producing an improved article or book.  But its value is greatly diminished from the other side — the consumption rather than the production side of publishing — when the label “peer-reviewed” is used by readers or by promotion and tenure committees as a surrogate for actually evaluating the quality of a specific article.  Essentially, peer review is a black box from the perspective of the reader or user.  I don’t know if the flaws in the “bogus” article that Bohannon submitted were as obvious as he contends, but had he allowed its acceptance by the Journal of Natural Pharmaceuticals to stand, that article would look just as peer-reviewed to users as anything published in Science.  The process, even within a single journal, is simply too variable, and too vulnerable to lapses whenever a particular editor or reviewer is not “on their game,” to be used in making generalized evaluations.

So what are we to do once we recognize the limits of the label “peer-reviewed?”  In general, we need to be much more attentive to the conditions under which scholarship is produced, evaluated and disseminated.  We cannot rely on some of those surrogates that we used for quality in the past, including impact factor and the mere label that a journal is peer-reviewed.  Those come from a time when they were the best we could do, the best that the available technology could give us.  Perhaps it is ironic, but it is open access itself that offers a better alternative.  Open peer-review, where an article is published along with the comments that were made about it during the evaluation process, could improve the current situation.  The evaluations on which a publisher relies, or does not rely, are important data that can help users and evaluators consider the quality of individual works.  Indeed, open peer review, where the reviewers are known and their assessments available, could streamline the promotion and tenure process by reducing the need to send out as many portfolios to external reviewers, since the evaluations that led to publication in the first place would be accessible.

There are many obstacles to achieving this state of affairs.  But we have Bohannon’s article to thank for helping us consider the full range of limitations that peer-reviewed journals are subject to, and for pointing us toward open access, not as the cause of the problem, but potentially as its solution.


4 Responses to The big picture about peer-review

  1. […] Smith: The big picture about peer-review. In: Scholarly Communications @ Duke, 10 October 2013 – Smith criticizes the […]

  2. I agree that we should move toward open review. However, as you mention, funding agencies and hiring committees probably rely heavily on judging the quality of a scientist’s publications by where they were published. If scientific publishing shifts to a basis of open review, a paper on an author’s publication list that is openly reviewed but not published through a traditional journal might get disregarded by such a committee, both because it does not “count” as published in the traditional way and because judging it by its actual content, assisted by its openly accessible review comments, requires more work.
    I am not suggesting not to employ open review, but what can we do to avoid this kind of problem?

  3. Straight to the core!

    It is hard to accept that so many hours of thorough reading and evaluation by experts are thrown away after being reduced to a single “yes” or “no” decision. The current peer review model may serve editors and their commercial publishers, letting them secure their “peer-reviewed” label without having to account for their selection of reviewers, but it is harmful for everyone else: scholars, science and society. We need access to the assessments of the experts. I find it more reasonable to demand openness in the review process than to invest millions in the research and development of “altmetrics” based on blind usage statistics. What better way to evaluate the quality of an article than to ask the experts who actually read it!