Some economic theorists and evolutionary biologists believe that a predisposition to punish those who cheat on cooperative agreements was a critical contributor to the success of cooperation among members of our species from prehistory to the present. For this and other reasons, the study of punishment behaviors (the inflicting of monetary costs on another at some expense to oneself) has been one of the most active areas of research in experimental economics during the past decade.
To see why punishment, or at least the credible threat of punishment, might be important, we can consider any positive sum interaction—an interaction in which cooperation can make each of two or more parties better off—in which those involved act sequentially rather than simultaneously. The trust game I’ve discussed in several posts would do, and so does the gift exchange game used by Nobel Prize-winning economist George Akerlof, on the theory side, and by noted experimental economist Ernst Fehr, in the decision lab, to represent the relationship between employers and their employees.
The gift exchange game tries to capture the idea that employers may pay workers more than the minimum required to get them to accept the job, because the difference between actual and minimal pay is interpreted by the worker as a “gift” or a sign of generosity and trust on the part of the employer, and the worker accordingly feels obligated to reciprocate with more effort and loyalty. Both sides stand to benefit when the worker’s added productivity more than compensates the employer for the better pay package and the added pay more than compensates the employee for the extra effort.
The problem is that while the employer’s pay offer is difficult to revoke once put in writing and agreed to, supplying the additional effort, including reacting to unexpected events in the workplace with the employer’s interests in mind, remains under the employee’s control and can’t be enforced by the courts. That’s because it’s too costly for third parties to tell how much effort was really supplied, whereas how much the employer paid is verifiable, so an employer withholding pay while claiming that his employee shirked on the job would likely appear to be lying. Fehr and collaborators tested the idea in their decision lab and found that those in the worker role tended to provide more effort (i.e., give more tokens to the “employer”) when their counterpart committed to a higher wage, even though the effort was strictly voluntary.
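The payoff structure at work here can be sketched in a few lines of code. The payoff functions and every number below are illustrative assumptions chosen only to show the shape of the game; they are not the parameters Fehr and collaborators actually used in the lab.

```python
# Illustrative one-shot gift exchange game. All parameter values are
# hypothetical, chosen to exhibit the structure, not the lab design.

def employer_payoff(wage, effort, revenue_per_effort=10):
    # The employer benefits from the worker's effort but pays the wage
    # up front, whatever effort turns out to be.
    return revenue_per_effort * effort - wage

def worker_payoff(wage, effort, cost_per_effort=2):
    # The worker keeps the wage but bears a private cost of effort.
    return wage - cost_per_effort * effort

# A generous wage answered with high effort leaves both sides ahead of
# a minimal-wage, minimal-effort baseline...
print(employer_payoff(wage=40, effort=8), worker_payoff(wage=40, effort=8))  # 40 24
print(employer_payoff(wage=10, effort=1), worker_payoff(wage=10, effort=1))  # 0 8

# ...but for any given wage the worker's payoff is maximized at zero
# effort, which is why effort can't simply be taken for granted.
print(worker_payoff(wage=40, effort=0))  # 40
```

The point of the sketch is the last line: once the wage is locked in, shirking strictly dominates for a self-interested worker, so the mutual gains depend on reciprocity rather than enforcement.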
How could such reciprocity have evolved when it’s always in the immediate interest of the second-mover—here, the worker—to take the initial gift without reciprocating through extra effort? Part of the answer, to be sure, is that many such relationships are ongoing and that, even if an individual can’t continue to interact with the same partner, it’s worth her while to invest in a reputation for reciprocating because other desirable partners will then be drawn to her.
But “end game” situations also arise: times when it becomes clear that there’ll be no more rounds played with the present partner and when there are no witnesses to your current choice who can harm your reputation with others. The quintessential example is the traveler eating at a restaurant she’ll never visit again, who (if selfishly rational) should be happy to accept the good service a waiter customarily offers in the hope of receiving a nice tip, but can never be punished for stiffing him, since waiters in the next city will never learn that she doesn’t tip at her one-time restaurant stops.
In the course of our evolution, those who had the canniness to stiff their counterparts in each end game situation—to insist on their share when another band-member snagged a gazelle but to consume privately any small game they bagged on their own without witnesses—should have done better than the faithful reciprocators in their groups. If so, stronger forms of reciprocity that stand up to end game temptations should never have evolved.
That’s where the urge to punish may have entered the picture. Suppose that others, whether the directly injured party or a bystander, find out that you’ve cheated on reciprocal obligations in an end game situation. Let’s say that those getting this information have an opportunity to make you pay for your action, but that it costs them to do so. Rationality says they should simply cut their losses: paying to punish you makes no more sense for them than reciprocating made for you, since they have no expectation of interacting with you in the future. This being so, if people are rational and strictly self-interested, you have no need to worry about being punished for your end-game behavior. But when such situations occur in real life, flesh-and-blood human beings get angry, and many do incur costs to punish the violators of reciprocity obligations. The classic example here is expending the energy to bad-mouth them, possibly generating a reputational penalty after all.
As mentioned, Fehr and collaborators found that most experimental subjects reciprocated generous choices of counterparts with generous choices of their own even when pairs interacted one time only and when being punished was not a possibility. More important, for present purposes, is that when they added the option of punishing a counterpart who failed to reciprocate, reciprocity increased and those failing to reciprocate often got punished. Paying a cost to punish your past partner for failing to reciprocate made no pecuniary sense in these experiments, but flesh-and-blood subjects did it anyway, presumably because the emotional satisfaction of punishing was worth the money to them.
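The material logic of such a punishment stage can be sketched as well. Experiments of this kind often let the punisher spend tokens to reduce the target's payoff by some multiple of the amount spent; the 1-to-3 ratio and all the numbers below are assumptions for illustration, not the actual design of any particular study.

```python
# Illustrative costly-punishment stage appended to a one-shot game.
# The 1:3 cost-to-impact ratio and the payoffs are hypothetical.

def after_punishment(punisher_payoff, target_payoff, spent, impact_ratio=3):
    # The punisher gives up `spent` tokens in order to reduce the
    # target's payoff by impact_ratio * spent tokens.
    return punisher_payoff - spent, target_payoff - impact_ratio * spent

# A cheated first-mover who spends 5 tokens punishing a non-reciprocator:
cheated, cheater = after_punishment(punisher_payoff=10, target_payoff=50, spent=5)
print(cheated, cheater)  # 5 35

# Materially, punishing leaves the punisher strictly worse off (10 -> 5),
# so a purely self-interested player would never do it in a one-shot
# game -- yet many lab subjects punish anyway.
```

The sketch makes the puzzle concrete: every token spent on punishment is a dead loss to the punisher in a one-shot interaction, which is exactly why the observed willingness to punish calls for an explanation beyond narrow self-interest.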
Such results have been replicated hundreds of times in the decision labs of experimental economists in dozens of countries since the late 1990s. In later postings, I’ll describe some of the replications and extensions done by my collaborators and me, and I’ll discuss the evolving theoretical debate about how such tendencies might have emerged in our evolutionary history. Some of the experiments are also discussed in my book, The Good, The Bad and The Economy. If there are still some newbies among you who need an introduction to the evolution of human sociality, I continue to recommend Robert Wright’s The Moral Animal, the 1994 book that first turned me on to these ideas. And in case there are some among you comfortable with mathematics and ready for a look at the cutting edge of research on the subject, I recommend the 2011 book A Cooperative Species: Human Reciprocity and Its Evolution by the economists and game theorists Samuel Bowles and Herbert Gintis.