
Lies and Damned Lies: Why Is Anyone Ever Trusted?

Experimental evidence suggests that a truth 'default' and language co-evolved.

The buzz around Lance Armstrong’s confession had me thinking about the problem of truthfulness again this week. No surprise that Armstrong lied, a cynic might say. Why would anyone tell the truth when a lie better serves their interests? And given this, why would anyone ever believe anyone else’s word? Taken to its limit, such doubt suggests that talk would become so cheap that no one would believe anything anyone else said, ever. If we opened our mouths at all, what came out might as well be white noise or babble. This is roughly the line of reasoning that led economic theorists during much of the last century to treat what people say as irrelevant, and to count as relevant data only their costly choices: what they purchase, what salaries they agree to accept, and so on.

In the 1970s, before it was fashionable to do so, the future Nobel laureate Amartya Sen criticized his fellow economists for carrying this kind of reasoning too far. He described a scene in which two strangers, each of them the total opportunist assumed by the then-traditional economic theory, encounter one another on a street corner. One is trying to find the station where he plans to board a train; the other is looking for a post office at which he’ll mail a letter. The first asks the second if he knows the way to the station. The second does know, but selfishly points him in the opposite direction, towards the post office, saying, ‘It’s five blocks in that direction, and when you pass the post office on your way, would you mind posting this letter for me?’ This way, he saves himself a bit of a walk, never mind the exploitation of the stranger. In Sen’s story, though, the first individual is fully as opportunistic as the second, so he accepts the letter and promises to mail it, all the while intending to open it as soon as he’s out of sight to see whether it contains something of value, and otherwise to discard it.

Perhaps we don’t live in a society in which we’d trust a total stranger with an important letter, but we do often assume that when we ask a stranger for the time of day or for directions, they’re unlikely to mislead us out of sheer mischief or indifference. Truthfulness is most people’s default option, if only because inventing and keeping track of lies usually requires more mental energy. I would go further and suggest that human speech could not even have evolved had this not been the case. Evolution could have selected for capacities to better articulate information only if it simultaneously produced abilities to detect, and inclinations to punish, falsehood, and thus tendencies to make truth the default. Without these complementary inclinations, lying would have carried no cost, listeners would have learned to ignore speech entirely, and there would have been no selective advantage to the ability to utter meaningful sounds. Speech, lie detection, and truth bias evolved together.

Economists and other social scientists have conducted numerous experiments to investigate the impact of communication on decision-making. One purpose of these experiments has been to test the theoretical idea that in situations in which individuals have at least somewhat disparate interests, and thus reasons to mistrust one another, communicating intentions to cooperate should be useless: each party has an incentive to lie, and each accordingly has reason to dismiss the others’ word as so much empty or ‘cheap’ talk. Most of these experiments nonetheless show cooperation to be much greater after subjects exchange messages than when the experimental rules prevent them from doing so.

An interesting aspect of the findings from communication experiments by economists is that cooperation gets its biggest boost when subjects communicate face to face; its next biggest when they exchange words of their own composing, whether as text messages or through an oral-aural channel; and its smallest when subjects can only indicate a possible choice from a list, or can only choose whether or not to send a message that all participants know to have been pre-scripted by the experimenter.

With colleagues, I let subjects in a voluntary contributions experiment signal in advance “possible decisions,” that is, choices they were thinking of making but to which the experiment’s rules in no way bound them. We found that on average, the availability of this signaling option made no difference to the level of cooperation, which was modest and declined over time just as it does when no communication is possible. This makes perfect sense under the old economics orthodoxy, in which actions are dictated by self-interest alone and the signals people send may bear no relation to their plans. Under such assumptions, our subjects should have chosen their signals entirely at random, knowing that their rational fellow subjects would disbelieve them in any case.
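To make the incentive problem concrete, here is a minimal sketch, in Python, of the payoff structure of a standard linear voluntary contributions game. The endowment of 20 tokens, the group size of four, and the marginal per-capita return of 0.4 are illustrative assumptions, not the parameters of our actual experiment.

    # Sketch of a standard linear voluntary contributions (public goods) game.
    # All parameter values here are illustrative assumptions.

    ENDOWMENT = 20   # tokens each subject receives at the start of a round
    MPCR = 0.4       # marginal per-capita return: every token placed in the
                     # group account pays 0.4 tokens back to each group member

    def payoff(own_contribution, all_contributions):
        """Round earnings = tokens kept + own share of the group account."""
        kept = ENDOWMENT - own_contribution
        return kept + MPCR * sum(all_contributions)

    # Whatever the other three members contribute, contributing less pays more:
    others = [10, 10, 10]
    for c in (0, 10, 20):
        print(f"contribute {c:2d} -> earn {payoff(c, others + [c]):.1f}")
    # contribute  0 -> earn 32.0
    # contribute 10 -> earn 26.0
    # contribute 20 -> earn 20.0

Because each token contributed returns only 0.4 tokens to the contributor but 1.6 tokens to the group as a whole, self-interest dictates contributing nothing, even though everyone would earn more if all contributed fully. That tension is precisely what the non-binding signals were meant to overcome.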

When we studied the data to see whether the signals were random, however, we found them to be full of patterns. In essence, most subjects were trying to convey a desire to coordinate on cooperative behaviors, and in some groups they succeeded. But enough other groups contained at least one opportunist that members of those groups quickly lost faith in the signals being sent, and their cooperation fell apart even more rapidly than if no messages had been possible. This combination of higher cooperation in some groups and lower cooperation in others explains why average cooperation was about the same as in the absence of signaling.

Of particular interest was a pair of treatments in which subjects could not only send a non-binding signal of their possible choice, but could also send messages saying “I promise to contribute _ to the group account,” filling in the blank with their preferred value. In one of the two “promise treatments,” subjects could also elect, at some cost to themselves, to financially punish other group members at the end of each round. In the treatment without punishment, most promises were initially kept, but as subjects saw that some were getting away with lies, the percentage of unkept promises rose rapidly and the opportunity to send promise messages failed to sustain cooperation. In the promise-and-punishment treatment, by contrast, false promises were severely punished, so the percentage of unkept promises remained low and the ability to exchange promises led to high rates of cooperation.
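The effect of the punishment option on the arithmetic of promise-breaking can be sketched in the same hedged spirit. The punishment technology assumed below, where each punishment point costs the punisher one token and deducts three from the target, is a common design choice in such experiments rather than a detail taken from our paper.

    # Sketch of why costly punishment deters false promises. The punishment
    # technology (each point costs the punisher 1 token and deducts 3 from
    # the target) is an assumed, commonly used design, not necessarily ours.

    MPCR = 0.4           # marginal per-capita return on the group account
    PUNISH_IMPACT = 3    # tokens deducted from the target per punishment point

    def gain_from_broken_promise(promised, contributed, expected_points):
        """Net tokens gained by contributing less than promised, after the
        expected sanction from angry group members is factored in."""
        pocketed = promised - contributed             # tokens kept, not given
        lost_share = MPCR * (promised - contributed)  # own return on missing tokens
        sanction = PUNISH_IMPACT * expected_points
        return pocketed - lost_share - sanction

    # Promising 20 tokens but contributing nothing pays only if little
    # punishment is expected:
    for pts in (0, 2, 4, 6):
        print(f"expected punishment points: {pts} -> net gain "
              f"{gain_from_broken_promise(20, 0, pts):+.1f}")
    # 0 -> +12.0, 2 -> +6.0, 4 -> +0.0, 6 -> -6.0

Note that each punishment point also costs the punisher a token, so a strictly rational subject would never punish at all; it is exactly because real subjects punish anyway that promises stayed credible in this treatment.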

Something of the real world is captured here. People get angry when we lie at their expense, and we know that this will happen if our lies are uncovered. Given their anger, people are psychologically disposed to take punitive steps, including gossiping about us, even when strict rationality would dictate no more than future avoidance. Most people accordingly lie only when the prospective benefits seem large enough and the probability of being caught low enough. Given the effort that must go into remembering what lies we’ve told and figuring out how to avoid their detection, telling the truth, or at least a reasonable facsimile of it, is the cheaper alternative, and therefore the default action. In some societies, indeed, the advantage of truthfulness leads to the virtue of truth-telling being so centrally incorporated into the way people are socialized that they internalize the desire to be truthful and suffer psychological punishment at their own hands if they lie. Knowing that most people won’t lie when they have little to gain from it, we can usually follow a stranger’s directions to the train station after all. We can sometimes even cooperate on matters of importance to all of us!
