Several people sent me a link to this story from the Chronicle of Higher Education reporting on a study that finds that biomedical researchers continue to cite and rely on published articles even after the papers have been retracted. My initial reaction was what I presume it was supposed to be – “Gee, that’s terrible.” The conclusion that the article attributes to the study’s author is that, at worst, some researchers cite articles they have not read, and that, at best, researchers are getting to papers through informal routes that bypass the “official” websites where retractions are generally noted.
This article, however, prompted me to remember an earlier blog post and to explore a web site dedicated to publicizing retractions. The result is that I want to qualify the potential for a “moral panic” based on this study in two ways.
The first is to remind us all that the Internet is not to blame for the problem of bad science living on in spite of retractions. It is certainly true that the digital environment has led to more copies of a work circulating, and those copies can be very persistent. But printed copies of erroneous studies were and remain much harder to change or stamp with a notice than digital ones are. In the “old days,” a retraction would be printed several issues after the original article, where many researchers would never see it. Indeed, it is hard to imagine that a study like the one reported by the Chronicle could even be done in that environment; in most cases it was simply impossible to know (at least for the non-specialist) if an article was citing a prior work that had been discredited. Today more copies persist, but it is easier to disseminate news of a retraction.
The blog post I remembered about this topic was by Phil Davis on the Scholarly Kitchen blog. In spite of the post’s unfortunate title, Davis does an excellent job of describing this problem without simply foisting the blame on the Internet and the increased availability it facilitates. He does suggest that the tendency to cite retracted articles is exacerbated by article repositories, and I would add that we must balance whatever potential harm there is in these repositories with the great benefits to scientific research that are offered by improved access. More important, however, is Davis’ discussion of a potential solution to the problem, a service called CrossMark, which could help address the “version” issue.
The other blog site that I explored for some insight into the retraction problem is “Retraction Watch,” which is mentioned in the Chronicle report. What was most interesting about this site, I thought, was its sophisticated awareness of the variety of reasons for retraction and its recognition that not all retractions indicate that an article’s conclusions are unsound.
When we hear that an article has been retracted, we immediately suspect, I think, that there has been fraud, fabrication or falsification. At the very least we suspect that the authors have discovered that their results cannot be verified or reproduced. Often this is true, but there are other reasons for retraction as well.
One possible reason for retracting a paper is that it was sloppily presented, even if accurate. That seems to have happened in regard to a paper by Stanford scientists that was retracted by the Journal of the American Chemical Society. The authors agreed to the retraction, apparently, because of “inconsistencies” in the documentation and interpretation of the data, but have subsequently verified the fundamental finding that the paper reported. And some retractions are even less grounded in fundamental scientific errors; retractions have occurred because of political pressure (such as with the conflicting studies about the effect of gun ownership on crime), or even because some people thought an article was in bad taste (Retraction Watch reports here on such a case).
What I like about Retraction Watch is that it looks seriously at the different reasons for retractions and, when they are not clearly explained, as in this retraction from the journal Cell, tries to dig deeper to discover what the flaw actually was, or was perceived to be. This should be a model for our general reaction to retractions and the news that retracted articles continue to be cited. We should ask the “why” question over and over while remembering that scholarly communications is a complex system with many layers; simple answers and moral condemnation in advance of specific facts are almost never helpful.