Replication and Retraction
By Apoorva Mandavilli
It’s not clear that the old way, in which both the appearance of results and any corrections took months, if not years, is any better than this new environment of rapid preprint “publication” and equally quick analysis.
bioRxiv pulled the coronavirus-HIV preprint within a day, and although the conspiracy theory that the virus was created in a lab has not been squelched, the specific preprint and its findings quickly disappeared.
With traditional publishing, controversial results might make a big splash, especially if they are heralded by embargoes and press releases. But any necessary corrections or retractions tend to go unnoticed, allowing the harm to persist.
Two examples that illustrate this problem:
A 2012 paper in an obscure journal called Diabetes, Metabolic Syndrome, and Obesity: Targets and Therapy shot to fame when Dr. Mehmet Oz promoted it on his Dr. Oz television show. He said the paper showed that an inexpensive extract of green coffee beans could cause people to shed pounds quickly and easily, without exercising. The journal's obscurity, the study's outsize claims, and its sample size of 16 people were all giant red flags, as any journalist should have known. But the pill became hugely popular. Eventually the study was retracted, and the government forced the manufacturers to pay out $9 million to defrauded consumers.
Perhaps the most destructive such retraction involved a 1998 paper by a British doctor named Andrew Wakefield, who claimed to have seen a link between the measles, mumps, and rubella (MMR) vaccine and autism in a study of 12 children. Experts were immediately skeptical, but because the paper was published in The Lancet, a prestigious, peer-reviewed journal, and announced at a time when some parents were panicked about the rising rate of autism, it was widely covered and found a foothold.
"Retractions should get at least as much attention as the original paper did."
Ivan Oransky, editor in chief, Spectrum
Although no one could verify Wakefield’s claims, it wasn’t until a 2004 exposé by the science journalist Brian Deer that Wakefield’s fraudulent research and financial conflicts became clear. (He intended to sell testing kits and participate in litigation-driven testing.) By then the damage was long done. Wakefield has been discredited, and The Lancet retracted the paper in 2010, following a British government inquiry. But to this day, Wakefield is the patron saint of the anti-vaccination movement.
The larger point, says Oransky, is that editors should make sure retractions "get at least as much attention as the original paper did. That goes beyond just publishing it or even just putting out a press release about it. It's important to look at the context and how much attention a particular finding has received and make the publicity around the retraction commensurate with that."
It's easy enough for a journalist to check whether a certain paper has been retracted, or whether an author has had other papers retracted. PubMed, the repository for papers in the life sciences, clearly flags retracted papers, as do most publishers. RetractionWatch.org, which Oransky co-founded, also tracks corrections and retractions.
Discussion hubs for papers, such as PubPeer, are also good places to vet a particular paper or an author. Many of these sites allow anonymous comments, so they tend to be gossipy, but they can be a good source of scuttlebutt about certain labs or publications — fodder for deeper inquiry.