Verified by Psychology Today

The Contagion of Doubt: Vaccines and Social Media

Spreading suspicion to manipulate people's behavior is eerily effective.

Key points

  • In 2015, DARPA discovered that foreign bots had not only been influencing U.S. elections, but also targeting the anti-vaccination movement.
  • Twitter activity showed that bots posted messages on both sides of the debate. The point was to sow doubt.
  • Spreading fear, uncertainty, and doubt is an effective tactic. It sows suspicion and leads people to retreat to the assumed safest choice.

On October 4, 1957, the world looked to the heavens. Humans had succeeded in launching an artificial satellite into orbit around the Earth. It was the beginning of the Space Age. It was also the beginning of the “Sputnik Crisis.”

The successful launch of Sputnik by the Soviet Union posed a serious threat to America’s national security. President Dwight Eisenhower knew that the U.S. had fallen woefully behind in research and development. The nation needed to respond.

In 1958, the National Aeronautics and Space Administration, NASA, was created to develop the engineering know-how necessary for non-military space exploration. But Eisenhower also saw the need for a companion organization. The Defense Advanced Research Projects Agency, referred to today as DARPA, was NASA’s military complement. DARPA was a pioneering initiative to build a cooperative organization to advance scientific research and development at the intersection of government, industry, and academia.

Eisenhower was a forward-thinking leader. His vision was much broader than the obvious military agenda. He was interested in basic science – technological and scientific advancements that might pave the way for groundbreaking discoveries that would serve the national interests in the future.

During the Cold War, DARPA was shrouded in secrecy. Many of the commercial products that we take for granted today started as highly clandestine DARPA-sponsored research projects. In 1959, a joint effort between DARPA and Johns Hopkins University led to the development of an early prototype for the Global Positioning System (GPS). This once top-secret technology to geolocate any person on Earth is commonly used today for satellite navigation, FedEx deliveries, and finding out when your GrubHub order will arrive.

In the 1960s, DARPA was dedicated to big-picture national security issues, like nuclear test detection, anti-ballistic missile defenses, and counterinsurgency technologies. Since the 1970s, its focus has broadened significantly to include a wide variety of informational, graphical, and intelligence technologies. DARPA co-sponsored the development of the first electronic network for digital information (the forerunner of the internet), the creation of computer reasoning technologies (the forerunner of modern artificial intelligence), and hypermedia technologies that laid the groundwork for virtual reality.

Since the end of the Cold War, DARPA’s mandate has expanded further, reaching all the way from physics to sociology. Thousands of social and computational scientists currently collaborate on DARPA-sponsored projects aimed at understanding the properties of modern socio-technical systems.

The Role of Bots in the Anti-Vaccination Movement

Several years ago, DARPA began an effort to address the growing “Bot Crisis.” Once again, it was Russia that jumpstarted the race.

In 2015, DARPA launched the Bot Challenge, which charged researchers with finding and identifying foreign “influence bots” posting on Twitter. Within a year, it would be clear that foreign bots had become a major presence in the U.S. electoral process. But DARPA discovered that bots weren’t only targeting politics. The most striking findings from the 2015 Bot Challenge concerned the involvement of foreign bots in the U.S. anti-vaccination movement.

In the summer of 2018, a collaborative team of public health scholars and computer scientists from Johns Hopkins University published two remarkable findings from the Bot Challenge. First, bots are everywhere. By the last count, bot messages made up 20 percent of all social media activity in the U.S. (that’s 48 million messages a year). Second, Russian bots and Russian trolls – mischievous, masked human accounts – are major players in the vaccine debate.

The researchers analyzed the stream of Twitter activity from July 2014 through September 2017. They found two types of foreign influences in the vaccination debate: dumb ones and smart ones.

The dumb bots were “content polluters.” They posted commercial content and malware, along with large numbers of strongly worded anti-vaccination messages.

The smart bots and the trolls used a different strategy, called “weaponized communication.” They posted comments on both sides of the vaccine debate.

Here are a few examples, all traced back to a single Russian organization:

  • #VaccinateUS mandatory #vaccines infringe on constitutionally protected religious freedoms.
  • our kids are not your property! You have to #vaccinate them to protect them and all the others! #VaccinateUS
  • #VaccinateUS natural infection almost always causes better immunity than #vaccines.
  • #VaccinateUS You can’t fix stupidity. Let them die from measles, and I’m for #vaccination!
  • Did you know there was a secret government database of #vaccine-damaged children? #VaccinateUS.
  • Most parents in Victorian times lost children regularly to preventable illnesses. #vaccines can solve this problem #VaccinateUS
  • Did you know #vaccines caused [sic] autism? #VaccinateUS

What was the point of all these conflicting messages?

The point was to sow doubt. The Russian bots were not trying to spread a particular fake message. They were trying to generate enough social reinforcement on both sides of the debate that people believed the debate over vaccines was genuine. The more messages there were (and the more heated those messages appeared), the more convincing the debate would seem.

In fact, there was no scientific debate about vaccination. But social reinforcement on both sides increased the apparent legitimacy of the controversy.

The real question is, why would Russian bots be designed to spread general doubt instead of a particular message? Because spreading doubt is an eerily effective tactic for manipulating people’s behavior.

Fear, Uncertainty, and Doubt

Decades before the invention of social media, the strategy that Russian bots would one day deploy was known as “FUD” – “fear, uncertainty, and doubt.” It dates back at least to the 1970s, when it was commercially pioneered by IBM. IBM controlled the majority of the computer market and was the default choice for anyone looking to buy computer equipment. But it had competitors. So a commonly reported sales tactic was to instill doubt in the minds of potential customers about what might happen if they used competitors’ products.

An industry exposé detailed this practice: “The idea, of course, was to persuade buyers to go with safe IBM gear rather than competitors’ equipment. This implicit coercion was traditionally accomplished by promising that good things would happen to people who stuck with IBM, but dark shadows loomed over the future of competitors’ equipment or software. After 1991, the term [FUD] has become generalized to refer to any kind of disinformation campaign used as a competitive weapon.”

The technique was effective. And it did not stop with IBM.

Microsoft famously adopted the same strategy in the 1990s. Their twist on the idea was to put fake error codes directly into the Windows 3.1 operating system. These fake error codes (eerily presaging latter-day headlines about “fake news”) were unrelated to the performance of the computer. Rather, messages would simply pop up on the screen if the user had installed a competitor’s product (like DR-DOS) onto the computer.

One of the more famous error codes read:

Non-Fatal error detected: error #2726
Please contact Windows 3.1 beta support
Press ENTER to exit or C to continue

The message was a hoax. Simply pressing the “C” key would remove the error code and allow the computer to perform normally.

In 1992, Microsoft Senior Vice President Brad Silverberg wrote an internal memorandum about this strategy: “What the user is supposed to do is feel uncomfortable, and when he has bugs, suspect that the problem is DR-DOS and then go out to buy MS-DOS.”

Throughout the early 1990s, this “fear, uncertainty, and doubt” strategy succeeded in putting many of Microsoft’s competitors out of business, and establishing the worldwide dominance of Windows and MS-DOS.

It was very successful—but also illegal. In 1996, Caldera, Inc. brought suit against Microsoft for these anti-competitive practices, winning a settlement in excess of $250 million.

Reinforcing the Legitimacy of the Vaccine Debate

On social media, the spread of fear, uncertainty, and doubt is not a feature programmed into a product or used by a salesperson, but a social contagion that spreads through reinforcement.

The findings from DARPA’s Bot Challenge showed that the Russian bots were designed to reinforce doubt. They were trying to create a social contagion.

What makes the contagion of doubt so devilishly effective is that every response to the debate reinforces the debate. Controversy among neighbors and colleagues only helps to further the perceived legitimacy of the debate. The idea behind the Russians’ campaign was that the escalating debate would reinforce doubt about the safety of vaccination. Parents faced with this uncertainty would reason that the safest thing they could do would be not to vaccinate.

As Silverberg’s memo confesses, the goal of this kind of disinformation is not to convince people to believe a particular fake message. Rather, it is to spread the suspicion that any future problems someone might encounter were mysteriously caused by their decision to do something unsafe.

It has a paralyzing effect.

The strategy of IBM and Microsoft was to create uncertainty about whether the decision to use a competitor’s product might impact the future performance of a customer’s computer. The only thing customers could do to avoid this nebulous threat was to stick with their current manufacturer. For parents who worry about their children’s safety, the Russian bots’ strategy was to create uncertainty about the side effects of vaccines. Faced with this ominous threat, the only thing for parents to do would be to retreat to the safest choice.

The contagion of doubt leads to an epidemic of inaction.