Verified by Psychology Today

Mylea Charvat, Ph.D.

How to Tell the Difference Between Good and Bad Science

You don't need a STEM degree to have high scientific literacy.

Source: Drew Hays/Unsplash

Nothing makes headlines like the latest medical breakthrough. The heroic work of researchers should lead the news when it fundamentally changes our world and prospects for improved health. However, it can often be difficult to sort good science from bad.

When the Media Misleads

One reason it is difficult to separate breakthrough science from dodgy science is that the media are motivated to present the latest scientific findings in the most interesting way possible. Unfortunately, the trend toward clickbait has led to misleading articles like “Red wine compound helps kill off cancer cells, new study finds” from the LA Times. The natural takeaway from the article is that the compound resveratrol can help make radiation therapy more effective for people fighting cancer.

And yet, a quick review of the published study in question makes it apparent that the LA Times vastly overstated its implications. The study wasn’t performed on humans or even mice—it was carried out on melanoma cells in a petri dish. It is a far leap to extrapolate results from a petri dish to the human body. In this case, the problem comes with what wasn’t said rather than what was said. While the article does explain that the compound fights cancer cells, it doesn’t make the distinction that these are cells in a petri dish and not cells in the body—a big difference. That is why for each media review of a scientific study, it is critical to go to the source article or abstract.

A Quick Review of a Scientific Article

Where do you begin once you find the original study? Many people think that the Results section of an academic article is the best way to tell if the work is important. In fact, the most critical section of a research paper is the Methods section. This is the part that tells you whether or not you can trust the results.

Two Types of Study Methodologies to Know

The first type of study commonly cited in the news is the clinical trial. A clinical trial evaluates the effectiveness of a treatment by comparing a group of people who received the treatment to a group who did not, called the “control group.” The best kind of clinical trial is randomized: study coordinators assign participants to treatment groups at random rather than placing them in the group where they think each participant will respond most favorably.

One way of comparing treatment groups is with a placebo. One group receives the treatment and the other receives a placebo, commonly a sugar pill, which has no significant effect on health. While it can be useful to compare all versus nothing to assess efficacy, this is not always helpful or ethical. In fact, the World Medical Association has a specific provision around the use of placebos, stating that “the benefits, risks, burdens, and effectiveness of a new intervention must be tested against those of the best proven intervention.” A placebo may be used if no other intervention exists or if there is solid reasoning behind why another intervention is not needed.

Another type of study often relayed in the news is the observational study. This type of study does not assign treatments to different groups. Instead, it observes people who are already engaging in a health behavior—whether that is taking a drug, smoking, or exercising. One of the most famous observational studies is the Framingham Heart Study, an ongoing, epic project. It began in 1948 with 5,209 participants from the Massachusetts town of Framingham and is still collecting data and running participants today. Two considerable findings attributed to this study are that high blood pressure increases the risk of stroke and that cigarette smoking increases the risk of heart disease.

What to Look for in the Methods Section

When it comes to examining the robustness of a study, whether it be a clinical trial or an observational study, it can be difficult to discern red flags from acceptable limitations. Generally, it is important to have a large sample size. However, this is not possible if a disease is rare or the population being studied is uncommon or difficult to recruit into a study. This does not mean that the research should be thrown out, just that the effect size of the results should be evaluated carefully, as well as the ability to generalize the study results to the population at large.

Along these lines, it is always preferable to have a representative population in the study. This has been a considerable fault of science in the past. A recent article from STAT News shed light on the clinical trial for Ninlaro, a drug for multiple myeloma. Despite the fact that almost 20 percent of multiple myeloma sufferers are black, only 1.8 percent of trial participants were black. This kind of skewed representation requires a dose of skepticism to evaluate the results—and a demand that medical researchers aim to do better.

The Methods section must also be detailed. This section should answer who, what, when, where, and how. Here are some questions that you could expect the Methods section of a clinical trial study to answer.

  • Who was included in the study? Why were some people considered ineligible?
  • What intervention was measured? How was it administered?
  • When did the study run, and for how long?
  • Where was the study run? Which hospital or facility?
  • How was the data analyzed? What statistical methods were used? Why were they chosen?
  • How were the treatment and control groups decided?

Sometimes all of these questions are answered, yet lingering inconsistencies remain. If you are not certain whether a Methods section is sound, ask a scientifically minded friend to review it as well. Their different experience may help them spot flaws, or allow them to explain why the methods were laid out that way and are in fact correct. Sometimes the media will review a study; outlets devoted to science, medicine, or research are usually more adept at identifying holes in studies or recognizing which are sound.

We look to the research community to answer our most burning questions about health. The vast majority of studies are done in good faith on a strong scientific foundation, but for the ones that are not, we need to be able to assess for ourselves what is good science and what is bad.

About the Author
Mylea Charvat, Ph.D.

Mylea Charvat, Ph.D., is a clinical psychologist, translational neuroscientist, and the CEO and founder of the digital cognitive assessment company, Savonix.
