Karl Popper, echoing David Hume, argued that no amount of empirical evidence is sufficient to establish that a law of nature is true. Some subsequent observation of the world might contravene the proposed law, demonstrating that it fails to hold universally after all. Popper held that the importance of empirical research in science arises from its ability to falsify, not confirm, hypotheses. Cognitive scientists of science have provided evidence that, Popper’s analyses notwithstanding, scientists often exhibit confirmation bias. They seek out and emphasize evidence that supports their hypotheses, and they dismiss contrary evidence as artifacts of incautious observations or experimental designs.
Ben Tappin, Leslie van der Leer, and Ryan McKay explore an alternative hypothesis that might explain scientists’ (and other humans’) proclivity to emphasize confirming evidence. They consider the possibility that humans have a desirability bias in addition to, or instead of, a confirmation bias. A desirability bias leads people, when updating their beliefs, to prefer and attend to new desirable evidence over new undesirable evidence. The problem with distinguishing these two hypotheses is that their predictions usually coincide. That is especially so with regard to arguments about the merits of scientific hypotheses, since scientists think that their hypotheses have merit but also want them to have merit and want to demonstrate that merit. To tease the two hypotheses apart, these researchers studied political belief revision instead.
Teasing Apart the Two Hypotheses
The authors undertook a study in advance of the recent American presidential election. Participants were first asked to state which candidate they supported. They then indicated which candidate they thought was more likely to win by setting a marker on a line between the two candidates’ names, which were situated at the opposite ends of the line. (Crucially, this method ensured that the probabilities of victory that participants accorded to each candidate were dependent upon one another.) After this, they received an account of what was presented to them as recent polling data about the upcoming election. In the test stage participants placed the marker a second time, in light of the new information that they had just received.
In two situations, the two hypotheses’ predictions diverge about how participants would update their beliefs about the candidates’ probabilities of winning: 1) when new information corroborates an earlier judgment but is undesirable (e.g., I expect Clinton to win and want Trump to win, and the new information favors Clinton), or 2) when it contradicts an earlier judgment but is desirable (e.g., I expect Clinton to win and want Trump to win, and the new information favors Trump).
A Bipartisan Desirability Bias
Tappin, van der Leer, and McKay’s findings revealed a substantial desirability bias. Participants were significantly more likely to incorporate the new information into their second assessment of the candidates’ probabilities of winning if that new information favored the election result that they desired. The effects of this bias were independent of whether the new information corroborated or contradicted participants’ initial beliefs. Moreover, once the desirability bias was taken into account, their findings provided only modest evidence for an independent confirmation bias.
Exactly how far these findings pertain to the conduct of scientists is unclear. Updating political beliefs in light of information presented as recent polling data is not, for example, the same task as producing or searching for new evidence. Still, as the authors note, the study concerned judgments about factual matters rather than political attitudes.
Finally, it is worth noting that participants exhibited the desirability bias regardless of whether they thought Trump or Clinton would win the election and regardless of which candidate they supported.