Mindy Greenstein, Ph.D.

The Flip Side

The Savvy Reader's Guide to Science

7 questions to ask about the latest research buzz.

Posted Jun 11, 2013

Social science and medical research have a problem these days that's both good and bad. The days of "just don't worry your pretty little head over it" are over: the average person has more access than ever to the latest findings, thanks to books, blogs, magazines, and newspapers. The downside is that with more information comes more responsibility, and more confusion, especially when findings keep reversing themselves (eggs are bad for you; no, good for you; no…) and famous researchers keep admitting to outright fraud.

In the meantime, the savvy reader needs to know what questions to ask before she starts making changes in her life "because studies confirm" she should:

1) Is the data correlational?

If the data is correlational, the study can't be said to show causality. This rule is violated all the time: as long as the word "correlation" is used, you should NEVER assume a study has shown that A caused B. For example, there is a strong correlation between ice cream sales and violent crime. Does that mean your next rocky road might cause you to gun someone down? Or that robbing a bank will make you crave 31 Flavors? This happens to be an example of a "spurious" correlation: the two are most likely "related" only because both are related to a third variable, namely the season of the year. Both crime and ice cream sales go up in the warmer summer weather.
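If you like seeing the machinery, here's a minimal sketch, in Python with numbers invented purely for illustration, of how a lurking third variable can manufacture a strong correlation between two things that have no effect on each other:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical confounder: daily temperature across 500 simulated days
temperature = rng.uniform(30, 90, size=500)

# Ice cream sales and crime each depend on temperature, not on each other
ice_cream_sales = 2.0 * temperature + rng.normal(0, 20, size=500)
crime_rate = 1.5 * temperature + rng.normal(0, 20, size=500)

# Yet the two look strongly "related"
r = np.corrcoef(ice_cream_sales, crime_rate)[0, 1]
print(f"ice cream vs. crime: r = {r:.2f}")

# Subtract out what temperature explains, and the relationship vanishes
ice_resid = ice_cream_sales - np.poly1d(np.polyfit(temperature, ice_cream_sales, 1))(temperature)
crime_resid = crime_rate - np.poly1d(np.polyfit(temperature, crime_rate, 1))(temperature)
r_partial = np.corrcoef(ice_resid, crime_resid)[0, 1]
print(f"after controlling for temperature: r = {r_partial:.2f}")
```

The first correlation comes out high and the second near zero, even though neither variable influences the other at all; the weather was doing all the work.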

The same could potentially be true of the much-touted relationship between optimism and health: optimism could lead to better health; better health could lead to feeling more optimistic; or some other attitude or circumstance entirely could lead to both. For instance, what if you're a "defensive pessimist," a style first described by psychologist Julie Norem: someone who needs to envision all the problems along the way and develop reasonable Plan Bs before she can relax? In that case, a forced push to be optimistic might make coping harder, rather than easier. And a false belief that science has "proven" that optimism causes better health (with the implication that lack of optimism causes the opposite) might just make you feel more depressed.

2) Is this finding true in the context of my individual life?

A research finding in the lab is still only a new hypothesis when it comes to your individual real life. Remember, a significant finding didn't necessarily hold for every subject in the study; it may not have held for most of them. A positive result means only that the overall effect was greater than what you'd expect purely by chance. You don't know whether it would have held for you, had you been in the study.

To get a sense of whether the finding might be true for you, too, you need to evaluate it in the context of your own life.

3) Who are the subjects, and how were they recruited?

College students fulfilling a Psych 101 requirement to be a subject aren't necessarily like everyone else. Find out who the study's subjects were and how they were recruited. If, for example, the subjects were people who responded to an online questionnaire, their answers might hold for people who ordinarily read that website, but not for others. Ask yourself how similar you are to, or different from, the subjects recruited for the study.

4) What's the chance of a false positive? (also known as the fine art of "Data Massage")

When we describe a finding as "significant," what we're really saying is that if there were no real effect, results at least this strong would turn up less than 5% of the time by chance alone; it is not a guarantee that the effect exists. But as Wharton professor Joseph Simmons and his colleagues showed, that nominal 5% chance of a false positive can climb to as much as 60%, depending on the (commonly used) experimental or statistical techniques the researcher employs. Simmons and his group even devised one ingenious study combining these techniques and managed to "show" the impossible result that people actually become younger after listening to a Beatles song.

These techniques are known in the field as "data massage," and we've got some great masseurs and masseuses out there.

Common "massaging" techniques include the following (a toy simulation of the first two appears after the list):

a) Analyzing the data after running only a small number of subjects: stop if the data confirm the hypothesis; if they don't, add more subjects and reanalyze; repeat until the data confirm the hypothesis, and then stop. The "kosher" way is to decide on the number of subjects ahead of time and stick with that number, whether or not the researcher gets the answer she wants.

b) Taking many different kinds of measures of the same variable (say, multiple measures of anxiety or depression), but reporting only the measures whose analysis confirms the original hypothesis. If the experimenter uses enough measures, some will come up positive purely by chance, and not because his hypothesis was correct.

c) Reporting only the experimental conditions that worked the way the researcher wanted. One well-known example is a pharmaceutical company reporting the trials in which its drug beat placebo, but not the ones in which it didn't.
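To see how big the inflation can get, here's a toy simulation in Python of techniques (a) and (b). The group sizes, the number of peeks, and the number of interchangeable measures are made-up parameters of my own, not the actual procedure from the Simmons paper. Every simulated experiment is run on pure noise, so any "significant" result is, by construction, a false positive:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def false_positive_rate(n_experiments=2000, massage=False):
    """Run experiments where the null hypothesis is TRUE (no real effect)
    and count how often one comes out 'significant' at p < .05 anyway."""
    hits = 0
    for _ in range(n_experiments):
        significant = False
        # Technique (b): try up to four interchangeable measures;
        # the honest design uses just one.
        for _ in range(4 if massage else 1):
            a = list(rng.normal(0, 1, 20))
            b = list(rng.normal(0, 1, 20))
            # Technique (a): peek up to three times, adding ten subjects
            # per group after each failed peek; the honest design tests
            # once, at the sample size decided in advance.
            for _ in range(3 if massage else 1):
                if stats.ttest_ind(a, b).pvalue < 0.05:
                    significant = True
                    break
                a += list(rng.normal(0, 1, 10))
                b += list(rng.normal(0, 1, 10))
            if significant:
                break
        hits += significant
    return hits / n_experiments

print(f"honest design:   {false_positive_rate(massage=False):.0%}")
print(f"massaged design: {false_positive_rate(massage=True):.0%}")
```

On a typical run, the honest design flags about 5% of the noise-only experiments as significant, just as advertised, while the massaged design flags several times that many. And remember: every one of them is false.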

5) Is there a Mind-Body Research Paradox at work?

The paradox: findings about the mind's effect on the body can themselves affect the mind. Saying there's a relationship between stress and illness (cancer, heart disease, infertility) can be tantamount to putting your lips against someone's ear and screaming, "YOU NEED TO RELAX!" Even if it's true, the knowledge might not help very much.

6) Has the study been replicated?

It's great to be in on cutting-edge research, but the downside is that you don't yet know how valid the finding is; many fads come and go. The problem is compounded by the fact that prestigious journals rarely publish replication studies, so there's little incentive for researchers facing publish-or-perish pressure (which is to say, most researchers) to do them. Stanford researcher John Ioannidis went so far as to suggest that "most research findings are false for most research designs and for most fields." Fortunately, the new Center for Open Science recently was given a $5,000,000 boost to create just that incentive for researchers to do replication studies, to see whether results hold up.

7) The Problem of the Beverly Hillbillies Cure for the Common Cold

Okay, I made up that title, but it's a real problem. In an episode of The Beverly Hillbillies, the community and the media go wild when they learn that Granny has a family cure for the common cold. It isn't until after 30 minutes (minus commercial interruptions) of excitement and mayhem that anyone thinks to ask how the cure actually works: if you drink the elixir every day, you'll be fine… in about two weeks' time. In other words, there was no control group: no group that doesn't get the "cure" to compare with the group that does, in order to see if there's any difference (or even to see if the "cured" folk do worse than those who take nothing).
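A back-of-the-envelope simulation (Python again; the two-week recovery time and group sizes are invented for illustration) shows what the missing control group would have revealed:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical premise: a cold runs its course in about 14 days regardless
elixir = rng.normal(14, 2, size=100)    # days to recovery, drank the "cure"
control = rng.normal(14, 2, size=100)   # days to recovery, drank nothing

print(f"elixir group:  {elixir.mean():.1f} days to recovery")
print(f"control group: {control.mean():.1f} days to recovery")
```

Both groups recover in about two weeks. Without the control group, "everyone who drank it got better" sounds like evidence; with it, the elixir plainly adds nothing.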

 *****

The idea of science rests on the assumption that the scientist is seeking the truth, rather than a rubber stamp for his or her personal beliefs. But, people being people, this assumption isn't always justified, not necessarily because scientists are frauds (though in some well-known cases they are), but because our beliefs can color how we handle our data, whether or not we're consciously aware of it.

These same questions apply to the science journalists and magazines routinely reporting on the latest studies. The journalist's job is to tell us a truth, often in the form of a story; but what if the truth is boring and a questionable counterintuitive finding makes better copy? Then she, too, has a source of bias. If she doesn't ask the tough questions about the data, you can't trust her reporting, and you need to go to the source and find the answers yourself.

Like I said, the blessing of access to so much information always comes with the responsibility of judging how valid that information is.