Survivor. American Idol. So You Think You Can Dance. The Voice. The Amazing Race. All of these shows have one thing in common: They are all winner-take-all competitions. If their popularity is any indication, we watch these shows because they seem to represent life pared down to its very essence—a competition for survival.
The logic underlying these competitions dovetails with the definition of rationality promoted by traditional economic theory: A rational agent is a self-interested agent. He or she always chooses what is most likely to maximize his or her happiness.
So if this is the case, why do the players, judges, and even most of the television audience hate the shows where the losers get kicked out of the competition?
Consider this film clip of a British television show called Golden Balls. The players' names are Sarah and Steve. The game they play is a version of the Prisoner's Dilemma: If both Sarah and Steve choose to cooperate (split the money), they each go home with half the prize money. If instead they both choose to defect ("steal" the money for themselves), then both go home with nothing. And if one person chooses to cooperate while the other chooses to defect, then the person who chose to split the money goes home with nothing while the person who chose to steal it goes home with the entire prize.
What should the players do?
Let’s assume they are both rational agents, and take a look at the game from inside Sarah’s head. If Steve chooses “steal”, then it doesn’t really matter whether Sarah chooses “split” or “steal”; she gets nothing either way. But if Steve chooses “split”, then her best choice is to choose “steal.” That way, she gets all the money rather than just half. She knows this, and she assumes Steve knows this, too. So it looks like choosing “steal” is Sarah’s best response. The same analysis holds for Steve, for the same reasons.
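Sarah's reasoning can be sketched in a few lines of code. This is a minimal illustration, not anything from the show itself; the prize amount of 100 is an arbitrary stand-in.

```python
# Payoffs in the split/steal game, with an illustrative prize of 100.
PRIZE = 100

def payoff(me, other):
    """Return my share of the prize given my choice and my opponent's choice."""
    if me == "split" and other == "split":
        return PRIZE / 2          # both cooperate: share the prize
    if me == "steal" and other == "split":
        return PRIZE              # I defect against a cooperator: take it all
    return 0                      # both steal, or I split while the other steals

# Sarah's analysis: compare her payoff for each possible choice by Steve.
for steve in ("split", "steal"):
    split_pay = payoff("split", steve)
    steal_pay = payoff("steal", steve)
    print(f"If Steve plays {steve}: split earns {split_pay}, steal earns {steal_pay}")
    assert steal_pay >= split_pay  # "steal" is never worse, whatever Steve does
```

Whatever Steve does, choosing "steal" never earns Sarah less than choosing "split", and sometimes earns her more. That is exactly the sense in which "steal" is her best response.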
As the clip shows, Steve chooses to split but Sarah chooses to steal. Sarah has played her best choice. Yet the audience boos at her, she weeps in shame, and Steve looks like he’s going to explode. It seems that everyone—including Sarah—expected cooperation rather than rational choice.
Is this what people really do?
Dozens of studies in experimental economics and psychology have shown that people approach transactions like these with a strong bias toward cooperation.
People reward those who cooperate with them far more generously, and punish those who behave selfishly far more severely, than rational choice theory predicts. In fact, they will frequently pay a penalty for the opportunity to punish those who failed to cooperate. Third parties who simply observe the game do the same thing: They will pay a fee to punish behavior that appears to be purely self-interested. Even six-month-old infants prefer cartoon characters who are cooperative over those who behave selfishly.
To accommodate results like these, economists have had to incorporate norms of fairness and reciprocity in their theories of economic behavior. Otherwise, we can’t predict what people are going to do. But let’s go deeper and ask why we seem to have such a strong bias toward cooperation.
Does it pay to cooperate?
Well, what if we had computers play against each other using a variety of strategies? They don't feel, they are not socialized, and they perfectly execute their programmed strategies. How would cooperation fare in such a tournament?
In 1980, Dr. Robert Axelrod did exactly that. He held a Prisoner’s Dilemma Tournament in which computer programs played games against each other and themselves repeatedly. Surprisingly, the best strategy was a very simple one: Tit-For-Tat. This strategy cooperates on the first move, and then does whatever its opponent has done on the previous move. So it has an initial bias towards cooperation, but switches strategies when its partner is a non-cooperator. As a result, it accrues the full benefits of cooperation when matched against a friendly opponent, but does not risk being taken advantage of when matched against an opponent who defects.
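The strategy is simple enough to sketch in a few lines. The code below is an illustrative toy, not Axelrod's actual tournament code, and it pits Tit-For-Tat against two assumed opponents (an unconditional cooperator and an unconditional defector) using the standard Prisoner's Dilemma point values.

```python
# Points I earn for (my move, opponent's move); "C" = cooperate, "D" = defect.
# These are the standard illustrative Prisoner's Dilemma values.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move; afterwards, copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_cooperate(opponent_history):
    return "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Return total scores for the two strategies over repeated play."""
    moves_a, moves_b = [], []          # each strategy sees the opponent's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(moves_b), strategy_b(moves_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, always_cooperate))  # (30, 30): full mutual cooperation
print(play(tit_for_tat, always_defect))     # (9, 14): exploited only on round one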
So it seems that cooperation can pay off—as long as your partner is another cooperator. Which brings us to another tantalizing explanation for our penchant for cooperation: Perhaps its roots lie in our evolutionary history.
As Axelrod and Hamilton pointed out in their classic 1981 paper on the game theory computer tournament, “The theory of evolution is based on the struggle for life and the survival of the fittest. Yet cooperation is common between members of the same species and even between members of different species.”
In 1971, Robert Trivers offered one very influential explanation for the ubiquity of cooperation in nature. He showed that altruism can be selected for when it benefits the organism performing the altruistic act as well as the recipient. But here is the catch: Cooperation can propagate through a population and become stable only if cheaters (those who don’t reciprocate) are excluded from future transactions. If cheating is tolerated, then cooperation vanishes.
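Trivers' catch can be made concrete with a toy model. In the sketch below, the cost and benefit values and the number of rounds are assumed numbers chosen for illustration: each exchange costs the giver 1 and benefits the receiver 3, and a "cheater" accepts help but never reciprocates.

```python
# Toy model of reciprocal altruism: each exchange costs the giver COST
# and delivers BENEFIT to the receiver. Values are illustrative assumptions.
COST, BENEFIT, ROUNDS = 1, 3, 20

def my_total_payoff(i_cheat, cheaters_excluded):
    """My total payoff from repeated exchanges with a reliable cooperator."""
    total = 0
    for round_number in range(ROUNDS):
        if round_number > 0 and i_cheat and cheaters_excluded:
            break                 # the cooperator refuses future deals with a known cheater
        total += BENEFIT          # I receive the cooperator's help
        if not i_cheat:
            total -= COST         # I pay the cost of reciprocating
    return total

for excluded in (False, True):
    coop = my_total_payoff(i_cheat=False, cheaters_excluded=excluded)
    cheat = my_total_payoff(i_cheat=True, cheaters_excluded=excluded)
    print(f"cheaters excluded={excluded}: cooperating earns {coop}, cheating earns {cheat}")
```

When cheaters keep getting served, cheating earns 60 to cooperation's 40, so defection spreads. When a single defection ends the relationship, cheating earns only 3, and cooperation becomes the winning strategy—which is Trivers' condition for cooperation to remain stable.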
Trivers also listed the following characteristics that favor the evolution of cooperation: long lifespans, low dispersal rates (people stick around), interdependence (needing other people's help), a high degree of parental care, mutual aid in combat and defense, and the absence of a rigid linear dominance hierarchy. As it turns out, this description exactly fits the hunter-gatherer lifestyle of early humans. So, according to Trivers, our species evolved under conditions that favor selection for cooperation.
And perhaps that is why we seem to be "born this way," with a strong bias toward cooperating. That is why we expect reciprocity, and why we will go out of our way to punish those who fail to reciprocate. We are wired to preserve cooperation because that is what allowed us to survive in the long run.