Why Should We Believe the Science Behind Social Distancing?
How science works—and why we can trust the experts on coronavirus.
Posted March 27, 2020 | Reviewed by Devon Frye
Whether you personally believe COVID-19 is dangerous or not, our society has been disrupted to an unprecedented degree. Is it all worth it? While most people appear to be heeding the advice of epidemiologists and other medical professionals, and that advice is having a positive impact, there are still a lot of people who think all this is overblown, or, stronger yet, that the scientists making these recommendations are wrong altogether.
Every year, I have conversations with my research students about why and when they should trust scientific evidence. Sometimes it’s difficult to accept scientific findings when your eyes and ears are telling you something different from the data. In order to know why and when you should believe scientists about coronavirus or any other issue, it’s important to know what science does and how scientists draw their conclusions, and thus, why scientific findings carry more weight than opinion or even personal experience.
What makes “data” so great?
In any scientific field, data are observations. In psychology, data refer to any behavior, thought, feeling, or even physiological response that scientists can see and record. When we collect data, we follow a careful process to ensure that the information we are getting is accurate.
First, we ask a question: what is happening, or why is it happening? Before we make any observations, we generate a hypothesis, or a specific and testable prediction about what those observations will look like. In other words, we have to declare what we expect to see before we see it.
Why is this important? It provides a sort of intellectual accountability. Hypotheses are built based on previous observations, and if we expect to see something different, we have to explain ourselves. If we expect to see the same results but don’t, we have to explain that, too.
Without hypotheses, we are more susceptible to the hindsight bias, or the feeling that we expected the outcome we got, even if we really didn't. Numerous studies have shown that we are pretty bad at predicting outcomes, so without hypothesizing, we’d feel like we’ve found a whole lot of common sense, and we wouldn’t critically question our results the way we must to find out how the world really works.
Hypotheses are usually phrased in an “if-then” format. In coronavirus terms, we might say “IF the virus spreads at this rate, THEN we would expect to see the number of cases increase at X rate,” or “IF the virus spreads faster under X conditions, THEN we would expect people in Situation A to get sicker than people in Situation B.” If we’re right, the data should show exactly what we predicted. If not, we need to figure out what we were wrong about and create a new prediction based on our updated knowledge.
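To see how an “if-then” hypothesis becomes a testable number, here is a toy calculation. Every figure in it (the 20 percent daily growth rate, the 100 starting cases) is invented purely for illustration; it is not a real epidemiological model.

```python
# Toy illustration of an "if-then" hypothesis about spread rate.
# All numbers are invented for illustration, not real epidemiology.

def projected_cases(initial_cases, daily_growth_rate, days):
    """IF cases grow by `daily_growth_rate` per day,
    THEN after `days` days we predict this many cases."""
    return initial_cases * (1 + daily_growth_rate) ** days

# Hypothesis: IF the virus spreads at 20% per day,
# THEN 100 cases today becomes roughly 619 cases in 10 days.
prediction = projected_cases(100, 0.20, 10)
print(round(prediction))  # 619
```

If the observed case counts come in well above or below that prediction, the hypothesized rate was wrong, and we revise it and predict again. That is the accountability the if-then format buys us.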
After we make a prediction, we make observations under tightly controlled conditions so that we can isolate the behaviors or objects we are interested in and can rule out other explanations for the result we are seeing. We make sure that the way we are measuring our outcome is reliable and makes sense.
In research on humans, we test as many people as possible and make sure the people we are studying are representative of most people; that way, we can be more confident that the results we see in our study will look the same in the rest of the population, too. For example, if we only studied how coronavirus affects the elderly, we might incorrectly predict what will happen in other age groups.
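A quick sketch of why a non-representative sample misleads us. The severity scores and group averages below are entirely made up to make the point visible, not drawn from any real study.

```python
# Sketch of sampling bias, using invented numbers: estimating average
# illness severity from only elderly patients overstates the risk
# for the population as a whole.
import random

random.seed(0)

# Hypothetical severity scores (0-10 scale) by age group -- invented.
elderly = [random.gauss(7.0, 1.0) for _ in range(1000)]
younger = [random.gauss(3.0, 1.0) for _ in range(1000)]

# Studying only the elderly gives a biased estimate (~7.0)...
biased_estimate = sum(elderly) / len(elderly)

# ...while a representative sample of both groups lands near ~5.0.
whole_population = elderly + younger
representative_estimate = sum(whole_population) / len(whole_population)

print(f"elderly only: {biased_estimate:.1f}")
print(f"representative: {representative_estimate:.1f}")
```

The biased estimate isn’t wrong about the elderly; it’s wrong as a claim about everyone, which is exactly the mistake representative sampling guards against.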
Once we get data, we’re not done! One study is never enough to draw firm conclusions. Science requires replication, or studying the same thing over and over and over to make sure you get the same results over and over and over—wash, rinse, and repeat. If we do, we can begin to draw conclusions and make recommendations. Thus, when you see a headline that says, “Scientists say…,” it likely means there are several studies showing the same results. We also use the knowledge we’ve found to make other predictions about related behaviors and expand our knowledge. Finally, most published research is peer-reviewed, meaning that other experts in the field have evaluated the study to make sure it was done well and the results were interpreted correctly.
But isn’t a lot of research corrupted by corporations or agendas?
In short, no. Science aims to be “value free,” which basically means that we want to see how the world really works, whatever the outcome, not just find confirmation of how we wish it worked. The methods we use allow us to answer questions in as unbiased a way as possible. The peer-review process helps us make sure good quality work is being published, and because we’re constantly doing new studies, scientific knowledge updates and corrects itself when we make new discoveries. Science isn’t the most lucrative occupation, so the vast majority of us aren’t in it for the money—we’re in it because we’re genuinely curious and love the work.
Could a funding corporation come in and try to demand we produce certain results? They can demand all they want, but unless you’re committing fraud, the data say what they’re going to say, like it or not. As with any area of life, there are a few bad apples who do commit outright fraud, but it’s rare, and when they get caught, the consequences inside and outside the academic community are severe.
If you’re certain there’s a conspiracy going on, it’s likely that I can’t dissuade you of that. But the reality is that rather than being villains with world domination on our minds, scientists (including myself) are really just a bunch of nerds who think the world is really cool and want to learn more about it and make it a better place.
Why is my experience different from what scientists are telling me?
We often feel like a scientific finding is “wrong” when it doesn’t match our anecdotal experience, because individual experience is more vivid than big numbers. Our own worldviews come into play, too, shaping whether we think a specific prediction is wrong or that scientists, in general, are no better at predicting behavior than anyone else. A phenomenon called confirmation bias makes us more likely to notice and remember evidence that matches our existing beliefs, and to discount, or fail to notice at all, evidence that contradicts them.
Prediction is a powerful, but imperfect, thing. One thing that psychologists and epidemiologists have in common is that we deal in probabilities: we can’t always tell you what every single person is going to do, or when a person is going to get sick, but we can tell you how most people are going to react, or how most people will be affected. Statistics give us information on trends and averages in a population, and sometimes people don’t behave like everyone else. This means that you will always know someone, or even multiple someones, whose situation does not match what the research described.
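The gap between a population trend and an individual outcome can be sketched in a few lines. The 30 percent infection probability here is a made-up number chosen only to illustrate the idea.

```python
# Sketch of why population-level statistics can be right even though
# any one individual's outcome differs. The 30% risk is invented.
import random

random.seed(42)

infection_prob = 0.30  # hypothetical risk for some demographic
population = [random.random() < infection_prob for _ in range(100_000)]

# Across the whole population, the observed rate sits very close
# to the true probability...
rate = sum(population) / len(population)
print(f"Observed rate: {rate:.3f}")

# ...yet any single person may be among the ~70% who never get sick.
print(f"One individual's outcome: {population[0]}")
```

The statistic is accurate about the crowd while saying nothing certain about you, which is why knowing a healthy someone (or several someones) doesn’t contradict the model.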
Prediction is easier in some fields than others. The tricksy thing about studying humans, whether it’s our willful behavior or how our individual bodies react to environmental conditions, is that we are fairly unpredictable creatures. Free will is great, but it makes doing science involving people a lot harder because we will never do the same thing all the time. Thus, we can predict what most humans will do most of the time, but we’ll never be able to pin down every individual.
Epidemiologists can predict that people in your demographic are very likely to contract coronavirus, but you individually may not, even if others do, because of your unique behaviors, physical and mental health, or some of the many other variables involved in becoming ill. Rather than conclude that they were wrong, a more fitting position might be to say, “Something was different about me.”
Even if no one you know has gotten sick, or those you know have had a mild version of the illness, the epidemiological models, which—as we established—are developed based on a heck of a lot of carefully collected data, show that this is shaping up to be a serious situation. Please trust the process, even if it seems inconvenient or even silly. There’s no ideological agenda, no deep conspiracy, just people who have devoted their lives to a deeper understanding of the world trying to use the knowledge they’ve gained to keep us all safe.