
Top 3 Ways the Media Screws Up Reporting Science

Avoid these common traps the next time you see a scientific headline

Another day, another headline about the latest tantalizing research study. Could Tylenol cause ADHD? Could TV result in behavioral problems? Could parental involvement lead to poorer school achievement?

As someone who gets asked to review articles submitted to scientific journals and to decide whether or not they should be published, I’ve learned how to dig a little deeper. If you think, however, that the “peer review” process means that scientists can’t publish a study that turns out to be completely misleading or wrong in its conclusions, think again. Furthermore, even when an article goes to great lengths to point out its own flaws and qualify its results, those subtleties are often the first to go when a media outlet is trying to make a splashy story.

To be fair, many television, radio, and internet reporters are doing their best to walk the fine line between creating interest and not overselling a study. Nevertheless, here is my view on the three most common ways the media can get it wrong when covering medical and psychological studies.

#3. Inflating risk. Studies that look for disease risk factors often report their results something like this: “People who consume green M&Ms are at twice the risk of developing green skin disorder (GSD).” While true (although obviously not in this case, since I made it up), it is critical to first know the baseline incidence of the disorder. If GSD occurs in one in a million people, a doubling of that risk (which sounds pretty bad) means that those who eat green M&Ms develop GSD at a rate of 2 in a million: hardly a smoking gun as risk factors for a particular illness go.
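For the numerically inclined, here is a minimal sketch of that arithmetic (in Python, using only the made-up GSD numbers above; none of these figures come from a real study). The point it illustrates is that a relative risk means little until it is multiplied against the baseline rate:

```python
# A back-of-the-envelope check (hypothetical numbers from the made-up
# GSD example above) of what "twice the risk" actually means.

baseline_rate = 1 / 1_000_000   # assumed baseline incidence: 1 in a million
relative_risk = 2.0             # the headline's "twice the risk"

exposed_rate = baseline_rate * relative_risk
extra_cases_per_million = (exposed_rate - baseline_rate) * 1_000_000

print(f"Unexposed: {baseline_rate * 1_000_000:.0f} in a million")
print(f"Exposed:   {exposed_rate * 1_000_000:.0f} in a million")
print(f"Extra cases per million exposed: {extra_cases_per_million:.0f}")
```

Running it shows one extra case per million people exposed: the same “doubled risk,” framed in absolute terms, suddenly sounds far less alarming.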

#2. Misleading titles. Everyone in the media knows that a good title is critical for getting people to read content (which is why I spent so long on mine). Titles need to grab you, and they need to be short, which means that more nuanced ideas aren’t going to fit. Author Mark Bittman wrote a short piece for the New York Times discussing a recent study showing that the link between saturated fat consumption and heart disease may not be as strong as we thought. The title of the original study was a riveting “Association of Dietary, Circulating, and Supplement Fatty Acids With (are you yawning yet?) Coronary Risk: A Systematic Review and Meta-analysis.” The Times piece that discussed it was called “Butter is Back.” Indeed, the actual review was quite reasonable and measured, but the Times title gives a very different impression of what was contained in the article, let alone in the original research study.

#1. Concluding causation from association. This one happens all the time. These provocative studies generally admit that they can’t prove causation, but that admission usually gets lost in the proverbial fine print when it comes to reporting. And while this flaw may sound like methodological minutiae, it has the potential to render an entire study invalid. The issue usually comes down to one of two things: a) not being able to answer the timeless chicken-or-egg dilemma, or b) a lurking, unmeasured variable that was the real driver of the association.

A great example of the chicken/egg problem can be found in the many studies showing a link between television viewing and attention problems. While it may be true that excessive screen time causes attention problems (which is the way the stories are generally spun), it may also be true that those with existing attention problems are more drawn to the stimulation of television and video games. An example of the unmeasured variable is the recent stir over a study linking Tylenol use in pregnancy to childhood ADHD. While Tylenol might be the cause of increased ADHD, higher rates of ADHD might also be related to the reason a mom took Tylenol in the first place. The authors tried to measure some of those underlying reasons but admitted that others could have been missed. This distinction is critical: if the cause was the underlying reason someone took acetaminophen rather than the medication itself, then telling pregnant moms to stop taking it could be exactly the wrong advice.
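To see how a lurking variable can manufacture an association all by itself, here is a minimal simulation (in Python, with purely illustrative rates I chose for the example, not anything from the actual study). An underlying condition drives both Tylenol use and ADHD; Tylenol itself does nothing in this model, yet the two still end up correlated:

```python
# A toy simulation of confounding: a lurking "condition" raises both
# the chance of taking Tylenol and the chance of ADHD. Tylenol itself
# never influences ADHD here, yet an association still appears.
import random

random.seed(0)
n = 100_000

took_tylenol = [0, 0]   # [people, ADHD cases] among those who took Tylenol
no_tylenol = [0, 0]     # [people, ADHD cases] among those who did not

for _ in range(n):
    # Lurking variable: an underlying condition (illustrative 20% prevalence).
    condition = random.random() < 0.20
    # The condition makes taking Tylenol more likely (60% vs 10%)...
    tylenol = random.random() < (0.60 if condition else 0.10)
    # ...and independently raises ADHD risk (15% vs 5%).
    adhd = random.random() < (0.15 if condition else 0.05)
    group = took_tylenol if tylenol else no_tylenol
    group[0] += 1
    group[1] += adhd

print("ADHD rate with Tylenol:   ", round(took_tylenol[1] / took_tylenol[0], 3))
print("ADHD rate without Tylenol:", round(no_tylenol[1] / no_tylenol[0], 3))
```

With these assumed rates, the Tylenol group shows roughly an 11 percent ADHD rate versus about 6 percent in the non-Tylenol group, nearly double, even though the code never links Tylenol to ADHD directly. A headline writer could easily spin that into causation.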

Science, behavioral science in particular, is really complicated, with many moving parts that need to be controlled or at least accounted for. Those of us who have the privilege of trying to synthesize this information for people less familiar with how research works need to be very careful with our interpretations, even at the risk of sounding a bit more boring and wishy-washy.

© Copyright David Rettew, MD

David Rettew is the author of Child Temperament: New Thinking About the Boundary Between Traits and Illness and a child psychiatrist in the psychiatry and pediatrics departments at the University of Vermont College of Medicine.

Follow him at @PediPsych and like PediPsych on Facebook.
