- How Science Works
- Sources and Experts: Where to Find Them and How to Vet Them
- Making Sense of Science Stats
- Editing for Story
- Editing Controversial Science
- Holding Science to Account
- Covering Health Care
- Climate and the Environment
- Fact-Checking Science Journalism: How to Make Sure Your Stories Are True
- Illustrating Complex Science Stories
- The Role of Visuals in Science Journalism
- The Process of Building Science-Centric Graphics
- Strategies for Using Visuals to Put Breaking Science in Context
- Special Considerations for Data Visualization
- Uncertainty and Misinformation
- Editorial Illustration, Photography, and Moving Images
- Additional Reading and Resources
- About the Author
- Social Media and Reader Engagement
- About This Handbook
By Melinda Wenner Moyer / 5 minute read
When reporters cover a new study or scientific finding, they should begin by reaching out to one or two of the study’s authors — typically the first author, who has conducted much of the research, and/or the last author, who is usually the senior scientist overseeing the project. (Often this author is listed as the corresponding author, meaning the one who was in communication with the journal during the publication process.) Toward the end of those interviews, journalists should ask their sources to recommend other researchers, not involved in the study, whom they could contact. Even for a short news story, it’s crucial to interview at least one — ideally two — of these “outside” researchers, who can comment on the findings’ validity and significance.
Usually, authors will recommend scientists they know will agree with their findings. To ensure diverse perspectives, it is useful to ask authors for the names of scientists they respect but who might have different viewpoints.
“To diversify, ask your sources: Can you recommend who else is new and up-and-coming in the field? Often the people that sources recommend first are the already successful senior people.”
Apoorva Mandavilli, science reporter, The New York Times
“I’ll specifically ask them who disagrees with them about anything they’re doing or claiming,” says Natalie Wolchover, a senior editor and writer at Quanta magazine, who covers physics and astronomy. “That sets you on the path of finding the conflict that can be important for your narrative.”
Another excellent place to hunt for sources is in the reference section of the study being covered. Usually, researchers mention and cite other related studies in their field, including findings that inspired their research (often described in the paper’s “Introduction” section, which can also provide helpful history and context). In these references, reporters can find the names of scientists who might be good to reach out to as well.
“Scan news stories and magazine pieces to see who has given pithy, evocative, or memorable quotes in the past.”
Ferris Jabr, contributing writer, The New York Times Magazine
Without a range of perspectives, extraordinary claims may not get put in proper context. In 2016, a number of publications, including The Independent, in the United Kingdom, and Futurism.com, in the United States, covered a seemingly exciting new theory in particle physics suggesting that a new natural force, like gravity or electromagnetism, had been discovered. The “discovery” was first described and disseminated in an institutional press release. Reporters and editors who didn’t bother seeking proper perspective ran breathless headlines such as “Scientists Might Have Discovered a Fifth Force of Nature, Changing Our Whole View of the Universe.” Most of those articles quoted only the first author of the paper but no outside researchers — an unfortunate decision, which prevented the reporters from getting the rest of the story. As it turned out, this new theory was speculative and had no data to support it. As Wolchover uncovered and explained in a piece she wrote for Quanta, the research leading to the theory had a rich backstory, which included accusations by other researchers of bias and cherry-picking. Simply put, better sourcing would have led to better stories.
For longer features, source hunting can be a more labor-intensive process, in part because features need more sources. (In my opinion, for 3,000-plus-word science features, writers should talk with eight to 15 sources.) Reporters might want to begin by hunting for recent studies on the topic in research databases such as PubMed (for published biomedical papers), ScienceDirect, PsycINFO (for psychology papers), arXiv (for physics and math preprints), and medRxiv (for biomedical preprints). Then they should reach out to scientist authors who have published on the subject within the past couple of years. (If they published on the topic a decade or more ago, they may not know the current state of the science.)
Another approach is for reporters to reach out to public-information officers (PIOs) at research institutions (such as universities and hospitals), explain the scope of their article, and ask if the PIOs can recommend experts to interview. This can be especially useful if a reporter knows that the institution is well regarded in the field at hand.
It’s crucial, though, for reporters to home in on exactly what the relevant field or subject is. For some stories, they will need to find scientists with specific expertise. In 2019, I traveled to West Africa to report a feature for Scientific American about a controversial theory in immunology suggesting that live vaccines (as opposed to inactivated vaccines) protect against more than just their target diseases. (That is, that the live measles vaccine might also reduce the risk for diarrheal and blood infections.) On one level, the subject of the story was immunology, and I could have interviewed general immunologists to get outside comments — but most would not have been familiar with this niche area of immunology. In order to find sources who were familiar enough with the idea to comment, I had to search PubMed using targeted terms (“non-specific vaccine effects”) to find relevant papers and sources. Those sources could talk with me about the evidence behind the theory as well as limitations in the research methodologies used to study the phenomenon, which most immunologists could not have done.
Journalists also need to keep in mind that researchers might present themselves as experts in a particular area even when they aren’t. In January 2020, at the beginning of the U.S. coronavirus outbreak, the Harvard University-affiliated epidemiologist Eric Feigl-Ding tweeted out a series of terrifying (and incorrect) statistics about the virus based on a non-peer-reviewed paper he had read. The tweets went viral, and he was invited onto national television networks, including CNN, to discuss the coronavirus. But Feigl-Ding, while trained in epidemiology, has a Ph.D. in nutrition, not infectious disease — an important detail that journalists glossed over. As the Harvard epidemiologist Marc Lipsitch wrote of Feigl-Ding, “He has a temporary appointment in our department of nutrition, is not an expert on viruses or evolution, and is seeking publicity, not scientific dialogue.” Had journalists spent just two minutes vetting his background via his Harvard visiting-faculty webpage, they would have discovered that he did not have the appropriate expertise.
“A lot of people say, ‘Oh, they have a Ph.D. or an M.D. and therefore they can speak about that subject,’ and that doesn’t fly with me,” says Ivan Oransky, editor in chief of Spectrum magazine and himself an M.D. What matters, he says, is that sources have expertise in the specific area of science or medicine being discussed. “If, say, an oncologist has a really fantastic idea about a pandemic, then that’s interesting. But they are only a little bit closer to being a reputable source on that than someone who doesn’t have a medical degree.”
The Feigl-Ding debacle notwithstanding, social media can, sometimes, be a fruitful way to find experts. “I am pretty shameless about putting out calls for sources on Twitter,” says the Seattle-based science journalist Wudan Yan. But journalists have to be careful. People might recommend sources without appropriate expertise or present themselves as experts when they’re really not. Also, given concerns over social media “bubbles” — that platforms such as Facebook create mini echo chambers in which people interact with only like-minded others — journalists who use social media to find sources can inadvertently limit the types of sources (and perspectives) they find and include. If reporters use social media for sourcing, they should make sure to hunt for sources in other ways, too.
Still, if reporters are specific and careful in their calls for sources on social media, they can identify useful people they might not otherwise have found. Recently, when I tweeted out that I was looking for researchers who study how schools shape the spread of infectious disease, acquaintances suggested excellent researchers whom I probably would not have identified by searching PubMed. But all of this does raise a question: Once you find sources to interview, how do you confirm that they really know what they’re talking about?