3 Reasons for the Rise of Fake News
Cailin O'Connor explains reasons for the shift in American politics.
Posted April 17, 2019 | Reviewed by Lybi Ma
This is the second in a series of interviews on “Science and Philosophy” featuring influential scientists and philosophers of science. See here for part 1.
Cailin O'Connor is a philosopher of biology and behavioral sciences, philosopher of science, and evolutionary game theorist. She is Associate Professor in the Department of Logic and Philosophy of Science, and a member of the Institute for Mathematical Behavioral Science at UC Irvine.
Walter Veit: You recently published The Misinformation Age together with your husband and fellow philosopher James Owen Weatherall. What motivated you to write this book?
Cailin O'Connor: Around the time of the Brexit vote and the 2016 election in the US, I was working on several projects in formal social epistemology — using models to represent scientific communities. Social epistemology puts a big emphasis on the importance of social connections to knowledge creation. At the same time, we were seeing some serious issues related to public misinformation through social media. Many responses to this misinformation seemed to focus on the role of individual psychology and reasoning in the spread of false belief. For instance, confirmation bias, where individuals trust evidence that supports an already-held belief, is obviously relevant. But we think that understanding social ties and behavior is even more important to understanding false belief. For that reason, we wanted to bring some of the most important lessons from social epistemology, and from models of scientific knowledge, to bear on these social problems.
Walter Veit: How do you explain that despite all the evidence, demonstrably false beliefs are able to spread and persist?
Cailin O'Connor: There are many reasons that false beliefs spread, often in spite of good evidence refuting them. One reason is that we all are used to trusting other humans as sources of information. This is, to some degree, a necessity. We certainly cannot go do the work ourselves to guarantee that all our beliefs are good ones. Even when we look to scientific journals for evidence supporting our beliefs, we are ultimately trusting others (the scientists who share their data). And sometimes even these good sources lead us astray. The social sharing of data is powerful, but always opens the possibility that falsity can spread. In addition, there are various social biases that can make us more or less likely to share false beliefs. For example, in our book, we talk about the role of conformity bias — when individuals want to conform their actions or beliefs to their peers — in sometimes preventing the spread of useful or accurate knowledge. Our heuristics for social trust, such as placing more trust in those who are more similar to ourselves, or who share our beliefs, can mislead. Vaccine skeptics, for example, trust other vaccine skeptics more than the researchers who actually work on vaccine safety.
A very important factor in false belief is active attempts to mislead. There are many groups with an interest in controlling public belief — think Russia, political parties, oil and gas companies, pharmaceutical companies. Over the years, different groups have developed effective propaganda techniques aimed at spreading scientific misinformation. We talk about some of these in the book. What is perhaps most surprising is how subtle some of these techniques are. For example, industrial groups often mislead the public by sharing real, independently produced scientific evidence that is nonetheless misleading. They might share four studies finding no link between tobacco smoke and cancer, but fail to share the many other studies that do find such a link. Or they might fund research into the dangers of asbestos, and then correctly inform the public that asbestos can cause lung cancer. This is true, but it distracts from the truth that tobacco also causes lung cancer.
When it comes to social media misinformation, the techniques are constantly adapting. For instance, a research group recently identified an influence network on Twitter promoting the idea that voter fraud is common in the US. The bots in this network had been designed to avoid detection by Twitter's algorithms. In 2016, there was a lot of sharing of fake news. Since then, awareness of fake news has grown, and many people are getting better at detecting it. But the same people might be misled by a meme that shares real but misleading data (as in the industry case).