Cover Your Bases
By Erin Brodwin / 4 minute read
Because true breakthroughs are rare in health care, one of your primary tasks is to ensure that the framing of stories about new interventions or treatments — from headline to conclusion — balances potential benefits with potential risks or harms. Familiarize yourself with the regulatory process by which medical products and interventions enter the market. In most cases, new interventions, whether tests, drugs, or other treatments, are overseen and reviewed by the Food and Drug Administration. After a thorough review, the agency may grant some level of clearance, authorization, or approval to a given intervention. Each label implies a different level of scientific rigor, with “approval” being the highest and most difficult to earn. Do not call a new intervention “FDA-approved” in a story unless you are certain that is the case.
Because the public is largely unfamiliar with these terms, do not rely on them in your stories without explanation. Define each term so that readers understand exactly what it means. The FDA itself provides more information on what it does and does not have authority over.
Above all else, put evidence and data above anecdotes, and do not be swayed by the experiences of one patient or family in a way that leads you to gloss over key limitations that could have an impact on treatment for others.
Let’s look at an example of a problematic health-care story, in which an editorial team relied on one couple’s experience with a new intervention in a way that was dangerously misleading. After we discuss this example, we’ll look at an alternative example of a well-reported story on the same topic. Both examples focus on endometrial receptivity tests, which analyze the activity of genes in the lining of the womb to help identify the optimum window for embryo implantation.
For a story published by ABC News in 2018 titled “How a breakthrough new fertility test is offering hope for families who have suffered miscarriages,” the editorial team profiled a couple who had struggled for years to conceive and succeeded in having a child after using a new analysis tool called an endometrial receptivity test. If you’ve been paying attention, you’ve probably already identified the story’s first error: using the term “breakthrough” in a headline.
Though there are rare instances in which such terms are merited, this story is not one of them. Why? It lacks the evidence necessary to support the claim. For starters, there’s a classic correlation/causation problem at work here: We don’t actually know that the test caused the successful pregnancy. All we know is that after taking the test, one woman succeeded in conceiving.
We also don’t know anything about the test’s success rate or the risks or possible harm associated with it, because the story fails to reference a single piece of peer-reviewed research. What’s more, the story does not tell us anything about the cost of the treatment or the companies or institutions that stand to benefit financially from its use. Without that information, we can’t determine how accessible or available this test will be, or whether the couple highlighted by this story was spurred to talk by a company with a vested financial interest in their success.
By contrast, here’s a story that covers the same topic and nails all of the basics:
In the story, published by The Guardian in 2015, the author provides a balanced overview of a new treatment that checks off all of the “dos” and none of the “don’ts” discussed in this chapter. The headline, “IVF test improves chances of implantation by pinpointing fertility window,” expresses hope but does not exaggerate. The text, rather than focusing on one couple’s success, describes the state of research into the treatment and includes perspectives from a wide range of scientists and clinicians — from those affiliated with a new study to others who might be interested in providing the treatment.
The piece does use the word “breakthrough” — but places it into appropriate context, in a way that avoids hype. Rather than place the word in the headline or even in the lede, the author offers a quote lower in the story from one of the researchers, shows clearly that it expresses an opinion, and appropriately identifies the researcher’s role in the new study: “Geeta Nargund, medical director of Create Fertility whose London clinic is participating, said: ‘The weakest link in IVF is implantation failure. I believe this is a breakthrough.’”
The story goes on to report on the treatment’s limitations and the basic science of how it works, and to identify the institutions that funded the research.
As these two examples show, health-care stories should not play the role of press releases. They should not encourage readers to make purchases by presenting only partial data. Instead, they should provide a balanced overview of risks and potential harm by giving readers access to the available evidence.