In the last post, I began writing about the evidence that humans are hard-wired to punish unkind and opportunistic actions by others.
One of the best-explored examples of this in behavioral and experimental economics is an interaction called the ultimatum game. The experimenter pairs two anonymous players, say A and B, and gives each one a single decision to make. First, A makes a proposal for dividing $10 between herself and B. Unlike the trust game discussed in earlier posts, the money goes to B as is, without multiplication. Then B decides whether to accept or to reject A’s proposal. A rejection means that neither A nor B earns anything. So this game is only about dividing a pie, not making a pie bigger by way of cooperation—though B can make the whole pie go “p-o-o-f.”
To analyze the game, economic theorists first ask themselves what two perfectly rational players concerned only with their own individual earnings would do. The logical answer is that A would propose to give B $1 and to keep the other $9. This gives B the choice: accept and get $1, or reject and get nothing. Since B is rational and cares only about what he earns, he accepts. If A had offered B $0, B would have had no reason to prefer accepting or rejecting the offer, so B might have tossed a coin, making A’s expected earnings $5 (a 50 percent chance of $10 plus a 50 percent chance of $0). If A had offered B $2, B would have accepted, giving A $8. And so forth. The offer of $1 thus assures A as much as possible—$9—so that’s what the rational, selfish A will offer.
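The backward-induction logic above can be written out in a few lines of Python. This is only an illustrative sketch of the textbook reasoning, assuming whole-dollar offers and the coin-toss tie-breaking described for a $0 offer:

```python
# The $10 pie from the ultimatum game described above.
PIE = 10

def b_accept_prob(offer):
    """A purely selfish, rational B accepts any positive offer;
    at $0 he is indifferent, modeled here as a 50/50 coin toss."""
    return 1.0 if offer > 0 else 0.5

def a_expected_payoff(offer):
    """A keeps the remainder of the pie if B accepts, and earns $0 otherwise."""
    return b_accept_prob(offer) * (PIE - offer)

# A picks the whole-dollar offer that maximizes her own expected earnings.
best_offer = max(range(PIE + 1), key=a_expected_payoff)
print(best_offer, a_expected_payoff(best_offer))  # 1 9.0
```

Running through the offers confirms the argument in the text: offering $0 yields an expected $5 because of the coin toss, while offering $1 yields a sure $9, which beats every higher offer.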
But when the experiment is conducted anywhere but in the most primitive societies, almost no A’s offer $1. A possible explanation is suggested by the fact that in those few cases in which $1 or $2 has been offered, a large fraction of the offers were rejected. Perhaps A’s, who are drawn from the same population as B’s, anticipate that low offers will be rejected, and so choose higher offers to play it safe. The large majority of offers have been either $4 or $5, and very few of those have been rejected. So, the A’s might be fair-minded, but enlightened self-interest is probably also at work.
It’s a bit harder to explain why B’s reject $1 or $2 in favor of zero, but that’s where the inclination to punish comes in. It’s that little bit of human psychology lacking in the rational economic actor that makes all the difference. If a B rejects $1, his loss of that dollar is not the only thing that happens; his choice causes A to lose $9, as well. Thus, B can inflict a $9 punishment on A at a cost to himself of only $1, and that’s a pretty good deal if B is steamed enough. Among the evidence that this accounts for rejected offers is the fact that when the right to play the A versus the B role is not assigned randomly but is instead based on success in performing some task or playing a prior game, fewer low offers are rejected. This suggests that B’s get angry when counterparts who had no more moral claim to play the A role than themselves arbitrarily ask for most of the pie, but B’s feel less entitled to be miffed if the counterpart earned her right to make the first decision in a fair competition.
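The punishment arithmetic above is worth making explicit. As a small illustrative sketch (the function name and framing are mine, not from any standard treatment), one can compute how many dollars of loss B inflicts on A per dollar B sacrifices by rejecting a given offer:

```python
# The $10 pie from the ultimatum game.
PIE = 10

def punishment_leverage(offer_to_b):
    """Dollars of loss inflicted on A per dollar B gives up by rejecting."""
    cost_to_b = offer_to_b          # what B walks away from
    loss_to_a = PIE - offer_to_b    # what A would have kept
    return loss_to_a / cost_to_b

print(punishment_leverage(1))  # 9.0 -> $9 of punishment per $1 of cost
print(punishment_leverage(2))  # 4.0
```

The leverage shrinks quickly as offers rise, which fits the pattern in the data: rejecting a $1 offer buys a lot of punishment cheaply, while rejecting a $4 or $5 offer buys very little.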
Still, you might be thinking, wouldn’t B be better off controlling his emotions and taking $1 or $2 whenever it’s made available? Note that the game won’t be repeated between this A and this B, and others won’t be told what B did, so B can’t benefit by hanging tough in the hope of being offered more in future games. Where did the irrational emotions driving punishment of opportunism in such circumstances come from? Shouldn’t we 21st century sophisticates try to rise to a higher plane of rationality?
A probable answer to the origins question is that the underpinnings of anger at being dealt a raw deal are part of our evolved natures. As for rising above such inclinations, the answer might be yes, in numerous specific cases, but no, in a more general sense. This is because the inclination to get mad, and thus the ability to derive satisfaction from getting even, may serve humans as a species far better than perfect rationality would.
Life is full of interactions in which people can be better off if they cooperate, but in which cooperation is threatened by temptations to gain at the expense of others. Among the factors that can safeguard cooperation in such circumstances are, on the one hand, moral virtue, something to which some are socialized more effectively than others, and on the other hand, the fear of punishment. Motivations for punishing opportunists and cheaters include rational ones, for example: make an example of them now and deter similar acts by others in the future. But in many cases there’s not an assured enough future of the specific cooperative interaction to make punishing (costly for both punisher and punished) a rational act. That’s where getting angry comes in handy. It’s because we get angry when we’re taken advantage of, and have an intuitive understanding that this is so for others as well, that we fear punishment even when it may not be worth the punisher’s while in strictly rational terms. Their satisfaction alone makes it worthwhile for them to get back at us. And anticipating this, we refrain from taking advantage of them, though we might otherwise have gotten away with it. Thus does one of the less pretty sides of human nature (the desire to get even) help to support one of the more beautiful ones (the capacity to cooperate with one another).
Despite its adaptiveness at the level of group and species, and the abundant evidence that such a tendency in fact exists, there continues to be debate about how the desire to get even or to punish cheaters could have enjoyed favorable selection in the course of human evolution. In any situation where rejecting the $1 or $2 offer does the individual no particular good, smarter or less emotional players would take the money, over time becoming materially better off and thus having at least a slight advantage in terms of survival and reproduction. The tendency to punish out of anger might therefore be expected to have died out, with only rational punishment, the kind that yields a strategic benefit in the future, surviving. That this does not seem to have been the case might be explained by the role that the good of the group may have played in human evolution via what biologists call group selection, a possibility discussed in my book The Good, The Bad, and The Economy and embraced in E.O. Wilson’s masterly book The Social Conquest of Earth, an engrossing read for all interested in who we are and how we got here.