Retractions and the risk of moral panic

Several people sent me a link to this story from the Chronicle of Higher Education reporting on a study that finds that biomedical researchers continue to cite and rely on published articles even after the papers have been retracted.  My initial reaction was what I presume it was supposed to be: “Gee, that’s terrible.”  The conclusion that the article attributes to the study’s author is that, at worst, some researchers cite articles they have not read, and that, at best, researchers are getting to papers through informal routes that bypass the “official” websites where retractions are generally noted.

This article, however, prompted me to remember an earlier blog post and to explore a website dedicated to publicizing retractions.  The result is that I want to qualify the potential for a “moral panic” based on this study in two ways.

The first is to remind us all that the Internet is not to blame for the problem of bad science living on in spite of retractions.  It is certainly true that the digital environment has led to more copies of a work circulating, and those copies can be very persistent.  But printed copies of erroneous studies were and remain much harder to change or stamp with a notice than digital ones are.  In the “old days,” a retraction would be printed several issues after the original article, where many researchers would never see it.  Indeed, it is hard to imagine that a study like the one reported by the Chronicle could even be done in that environment; in most cases it was simply impossible to know (at least for the non-specialist) whether an article was citing a prior work that had been discredited.  Today more copies persist, but it is also easier to disseminate news of a retraction.

The blog post I remembered about this topic was by Phil Davis on the Scholarly Kitchen blog.  In spite of the post’s unfortunate title, Davis does an excellent job of describing this problem without simply foisting the blame on the Internet and the increased availability it facilitates.  He does suggest that the tendency to cite retracted articles is exacerbated by article repositories, and I would add that we must balance whatever potential harm there is in these repositories against the great benefits to scientific research that improved access offers.  More important, however, is Davis’ discussion of a potential solution to the problem, a service called CrossMark, which could help address the “version” issue.

The other blog site that I explored for some insight into the retraction problem is “Retraction Watch,” which is mentioned in the Chronicle report.  What was most interesting about this site, I thought, was its sophisticated awareness of the variety of reasons for retraction and its recognition that not all retractions indicate that an article’s conclusions are unsound.

When we hear that an article has been retracted, we immediately suspect, I think, that there has been fraud, fabrication or falsification.  At the very least we suspect that the authors have discovered that their results cannot be verified or reproduced.  Often this is true, but there are other reasons for retraction as well.

One possible reason for retracting a paper is that it was sloppily presented, even if accurate.  That seems to have happened in regard to a paper by Stanford scientists that was retracted by the Journal of the American Chemical Society.  The authors agreed to the retraction, apparently, because of “inconsistencies” in the documentation and interpretation of the data, but have subsequently verified the fundamental finding that the paper reported.  And some retractions are even less grounded in fundamental scientific errors; retractions have occurred because of political pressure (such as with the conflicting studies about the effect of gun ownership on crime), or even because some people thought an article was in bad taste (Retraction Watch reports here on such a case).

What I like about Retraction Watch is that it looks seriously at the different reasons for retractions and, when they are not clearly explained, as in this retraction from the journal Cell, tries to dig deeper to discover what the flaw actually was, or was perceived to be.  This should be a model for our general reaction to retractions and the news that retracted articles continue to be cited.  We should ask the “why” question over and over while remembering that scholarly communications is a complex system with many layers; simple answers and moral condemnation in advance of specific facts are almost never helpful.

2 thoughts on “Retractions and the risk of moral panic”

  1. Hi Kevin: Thanks for posting this. It’s a terrific extension of the issues raised by the paper and in my CHE Hot Type column (mentioned at the beginning of your post). There’s a really interesting conversation emerging in the comments there about the question of what happens when researchers are working with print rather than electronic copies of journals, and the difficulties that can create when it comes to spotting corrections and retractions.

    I didn’t go into the many reasons behind retractions, as you do here. One of the missions of Retraction Watch is to get journal editors to be as transparent as possible about the reasons for retracting a paper, so that other researchers will know whether they’re dealing with flawed data or something less troublesome.


Comments are closed.