Customers are gullible. Republicans are gullible. People are gullible. These are common complaints. Politicians repeat the same empty promises election after election. Customers buy smartphones with useless features and expensive but unreliable cars.
Why do people sometimes seem so trusting as to be gullible? In the eighteenth century, the Scottish philosopher Thomas Reid suggested that humans have a natural propensity to be trusting:
"The wise and beneficent Author of nature, who intended that we should be social creatures, and that we should receive the greatest and most important part of our knowledge by the information of others, hath, for these purposes implanted in our natures two principles that tally with each other. The first of these principles is a propensity to speak truth [... The second principle] is a disposition to confide in the veracity of others, and to believe what they tell us."
Reid's reasoning rests on the assumption that humans have a natural propensity to speak the truth. In this cynical age, few may be inclined to agree with Reid's assessment. Yet, when we think about it, Reid has a point. In spite of all the lies and deception that so easily spring to mind, the majority of what we communicate is (at least approximately) truthful. Think of all the things you've said today. How many of these statements were lies? Maybe one or two--people do seem to engage in inconsequential distortions fairly regularly--but probably nothing like the majority, or even a sizeable minority.
Does that mean that Reid is right and that people are trusting simply because speakers tend to be trustworthy? To address that question, we can engage in a bit of evolutionary thinking. As in human societies, a lot of communication goes on in beehives and ant colonies. Food sources are indicated, dangers signaled. Can the ant trust her fellow ant to provide her with accurate information? In general: Yes. Ant workers don't have any reason to suspect foul play (from other ants of the same colony, that is). Overall, what matters to them is that the colony thrives. Leading other ants away from food or into danger would not serve their own genetic goals. Ants can trust each other's signals because their interests overlap nearly completely. If Reid had been talking about social insects, he may have been close to the mark.
What about humans? Human groups are not like ant colonies. While cooperation also played a crucial role in our evolution, it took a very different form than in social insects. Humans do not strive for the best interest of the group; they labor to further their own objectives. It so happens that cooperating--being nice, acting morally--is often the way through which evolution achieves this end. But we cannot let our guard down. If we are too nice, others will be there to take advantage of us. A natural propensity to trust would lead to abuse. Jean-Paul is a gullible early Homo sapiens, hunting in the African savannah 100,000 years ago. Gaspar, his hunting buddy, tells him that lions really enjoy being petted under the belly. Jean-Paul, being extremely trusting, takes Gaspar's word for it. How many descendants do you think Jean-Paul left behind? Our gullible would-be ancestors remained would-be ancestors, their overly trusting alleles pushed out of the gene pool.
So Reid's explanation does not jibe with the evolutionary perspective. How are we to explain, then, that most of the time people do tell the truth? An analogy may be helpful here. How can we explain that there hasn't been a military attack on US territory since World War II? Is that because everybody likes the US and is too nice to launch an assault? A more likely explanation is that the US military acts as a deterrent. It doesn't have to actively repel enemy attacks to be useful; simply being there is usually enough--si vis pacem para bellum: if you want peace, prepare for war.
How does the analogy carry over to human communication? We don't have an army protecting us in case someone tries to lie to us. But we do have mechanisms that are likely to spot lies, with potentially costly consequences for the liar. Just as countries usually refrain from attacking each other because they know they would face strong resistance, evolution has shaped us to refrain from lying too frequently or too brazenly, because the chances are high that we would get caught.
In other words, the reason we are mostly truthful is not that we have a propensity to tell the truth, but that listeners are vigilant. If we tell a lie, we may get caught and have to face the consequences. We won't be believed and we risk losing someone's trust, maybe for good. Dan Sperber has dubbed the abilities that allow us to evaluate what other people say "epistemic vigilance." It's not that we don't trust people; on the contrary, it's because we are vigilant that we can trust them. As both Lenin and Reagan were fond of saying: "Trust, but verify."
Epistemic vigilance takes many forms, which will be the topic of future posts. We calibrate our trust, believing a doctor more than a neighbor, a friend more than a stranger. We make sure that what we are told doesn't clash with our previous beliefs. We evaluate the arguments presented to convince us. A major difference between Reid's propensity to trust and epistemic vigilance is how flexible the latter is. Think of a poker game with your family. Here are people you would trust with your life and yet you find yourself lying through your teeth and not believing a word they say.
Vigilance towards communicated information was selected for in our species. Gullibility would have been as much of an adaptive burden as a fondness for poisonous food or a total lack of interest in sexual activities. Yet accusations of gullibility are thrown around time and again. How come? Some psychologists have suggested that vigilance mechanisms are easily perturbed. For instance, if you stop people from thinking, they may spontaneously accept everything they're told (in a way, that's also the idea behind most brainwashing attempts).
I am not persuaded. Instead, I simply think that, by and large, these accusations are mistaken, and that people are not gullible. Saying that someone is gullible is all too often a shortcut for saying that we disagree with them. Real gullibility entails accepting just about any information that comes our way. But even the people who believe the craziest stories--outlandish conspiracy theories and the like--don't believe everything they hear. It's quite the opposite in fact: they reject what most people tell them, namely that their beliefs are insane.
Saying that conspiracy theorists, for instance, are not gullible doesn't mean that their mechanisms of vigilance are working well. When you believe that the queen of England is a lizard, clearly, something has gone wrong. But the problem is not too much trust; it is a lack of trust--in most other people, in the government, etc.--combined with misplaced trust.
When people believe in something that prompts others to call them gullible, they have generally been told that they were wrong before. Conspiracy theorists are an extreme example, but this is true more generally. Republicans and Democrats are aware that plenty of people disagree with them; they just prefer to trust the people on their side. The main problem, I surmise, is not gullibility but its opposite, conservatism. Rather than changing their opinions too quickly, people take too long to adjust. Evolutionarily, that makes much more sense. When in doubt, it must have been safer for our ancestors to reject communicated information than to accept it. A measure of conservatism should be expected: we are more doubting Thomases than Pinocchios.
The idea that people are gullible is quite deeply anchored, and I don't expect to change many people's minds with this post (at least I hope not to convince everybody; after all, I've just predicted some measure of conservatism...). In a series of posts, I will look at different situations that are often taken as standard examples of gullibility--from voter behavior to con men's marks. In each case, I'll try to show either that the data don't support the accusation of gullibility, or that another interpretation should be favored.
The paper in which the idea of epistemic vigilance is developed:
Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic vigilance. Mind & Language, 25(4), 359-393.
The Thomas Reid quote comes from there:
Reid, T. (1764). An Inquiry into the Human Mind, on the Principles of Common Sense.
This post is part of a "we are not gullible" series.
Picture: Caravaggio, The Incredulity of Saint Thomas.