
Errors of cooperation

Ignored, dismissed, explained away

To anyone interested in strategic interpersonal behavior, I recommend Ken Binmore’s (2007) Game theory: A very short introduction. The Very Short Introductions series is designed to give interested non-specialists an authoritative survey of an academic field. Binmore is a noted game theorist, and he rises to the task. He writes lucidly and succeeds (mostly) in communicating without jargon, math, or other arcana. Binmore is a game-theoretic purist, and his exposition is designed to show that game theory, as created by John von Neumann and Oskar Morgenstern and refined by John Nash, John Harsanyi, and others, is a healthy paradigm. It derives what rational people will do under well-specified conditions, and it describes, he says, sufficiently well what they actually do.

The trouble lies in the word “sufficiently.” Take the prisoner’s dilemma, the theory’s most famous game. Two players independently choose between cooperation and defection. If both cooperate, they do better than if both defect, but a lone defector earns the best outcome and the suckered cooperator earns the worst. The rational choice, the Nash equilibrium, is to defect: a defector is better off no matter what the other player does. The dominance of defection should be easy to recognize. Only a numbskull, unable to subtract, would fail to see it. Yet in a typical one-shot anonymous prisoner’s dilemma, we find a rate of cooperation of almost 50%. How can this be?
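To see the dominance argument in numbers, here is a minimal sketch in Python with illustrative payoffs (temptation 5, reward 3, punishment 1, sucker’s payoff 0; these values are my assumption, not Binmore’s):

# Illustrative prisoner's dilemma payoffs (assumed here, not taken from Binmore):
# T = temptation, R = reward, P = punishment, S = sucker's payoff, with T > R > P > S.
T, R, P, S = 5, 3, 1, 0

# Payoff to a player for (own choice, other's choice); 'C' = cooperate, 'D' = defect.
payoff = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}

# Defection dominates: it pays more than cooperation no matter what the other player does.
for other in ('C', 'D'):
    assert payoff[('D', other)] > payoff[('C', other)]
print("Defection is the dominant strategy for these payoffs.")

Whatever the other player does, switching from cooperation to defection raises the payoff, which is all that the dominance argument requires.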

Binmore is incredulous in the face of cooperation. He asserts that the prisoner’s dilemma “represents a situation in which the dice are as loaded against the emergence of cooperation as they could possibly be” (p. 18). “Rational players don’t cooperate in the Prisoner’s Dilemma because the conditions for rational cooperation are absent” (p. 19). These assertions sound like wishful – and hence irrational – thinking because cooperation does occur.

In light of this inconvenient fact, Binmore pursues two strategies. The first is to minimize the occurrence of cooperation: he grants that some cooperation occurs at first, but holds that players quickly learn that defection dominates. “Inexperienced subjects do indeed cooperate a little more than half the time on average [!!], but the evidence is overwhelming in games like the Prisoner’s Dilemma that the rate of defection increases steadily as the subjects gain experience, until only about 10% of subjects are still cooperating after 10 trials or so” (p. 21). Binmore thus allows that some cooperation may occur as the result of human error. The error theory is not much of a theory, though, if it fails to explain how error comes about and how it is eliminated. Psychologists have learning theories that can do this (Krueger, Freestone, & DiDonato, 2012), but game theorists do not. Binmore defers to Nobel laureate Reinhard Selten, who proposed to make game theory stochastic, that is, to “build enough chance moves into the rules of our games to remove the possibility that players will find themselves trying to explain the inexplicable” (p. 21 in Binmore). Before examining what this means, marvel at the rhetorical deftness of this statement. It is the players who are said to struggle with inexplicable behavior, not the game theorists. But let’s remember: it is the theorists who need to explain behavior, not the people who do the behaving.

According to the Selten-Binmore theory, an act of cooperation is a random error. “The players are assumed to make random mistakes. Their hands tremble as they reach for the rational button and they press an irrational button by mistake” (p. 21). As I explained in Meanings of Error, there is, by definition, no such thing as random error when the theory says the correct response occurs with probability 1 and there is only one type of incorrect response. A hand cannot tremble selectively in one direction; that is tautologically true. Once you observe a response that, according to the theory, cannot occur, the theory is falsified. You can, however, model random error around any probability other than zero, and you can model a randomly trembling hand in a world with three response options. The minimum requirement for rationality is then to hit the rational button on a plurality of strikes and to hit either of the two wrong buttons with the same, lower, probability.
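A small simulation shows what a genuinely random tremble would look like. The error rate and the three-button setup are illustrative assumptions of mine, a sketch rather than Selten’s actual model:

import random

random.seed(1)
EPSILON = 0.1  # assumed probability of a tremble; not a value given by Binmore or Selten

def trembling_choice(buttons, rational):
    # Press the rational button, but with probability EPSILON hit one of the other buttons at random.
    if random.random() < EPSILON:
        return random.choice([b for b in buttons if b != rational])
    return rational

# Two-button world: every tremble lands on the single wrong button, so all "errors" point one way.
two = [trembling_choice(['C', 'D'], rational='D') for _ in range(10000)]
print("cooperation rate with two buttons:", two.count('C') / len(two))

# Three-button world: trembles spread evenly over the two wrong buttons, which is the pattern a
# random-error account requires: a plurality on the rational button, equal rates on the rest.
three = [trembling_choice(['wrong1', 'wrong2', 'rational'], rational='rational') for _ in range(10000)]
print("rates with three buttons:", {b: three.count(b) / len(three) for b in ('rational', 'wrong1', 'wrong2')})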

To many revisionist game theorists, the incidence of cooperation is so large that an atheoretical error theory will not do (aside from the fact that the modeling of cooperation as random error fails on its own terms). Revisionist theories introduce psychological assumptions to explain cooperation (Camerer, 2003; Colman, 2003; Krueger, DiDonato & Freestone, 2012). One class of revisionist theories assumes that individuals who cooperate have moral reasons to do so. Another class assumes that there are rational expectancy-times-value calculations that can turn cooperation into the rational choice. Binmore has little patience for either type of theory. His second strategy is to belittle them.

In the moral camp, his favored foil is Kant. Kant sought to ground morality in rationality. He would argue, says Binmore, that cooperation is a categorical imperative. For the sake of social well-being (i.e., collective wealth), the law can demand cooperation but not defection; therefore, an individual can perceive cooperation, but not defection, to be a moral duty. Binmore considers this type of argument silly because it ignores the individual’s temptation to defect. It leaves open the question of why people would want to do their duty. To Binmore, Kant’s and others’ attempts to explain cooperation in psychological terms are “fallacies that were invented in hopeless attempts to show that it is rational to cooperate in the Prisoner’s Dilemma [and that] are now mostly quoted as entertaining examples of what psychologists call magical reasoning, in which logic is twisted to secure some desired outcome” (p. 18). The drip of disdain is audible.

In the expectancy-times-value camp, Binmore belittles what he calls the “fallacy of the twins.” The twin idea is that people expect others who are known to be similar to them to make the same choice in a social dilemma as they do. But Binmore asserts that this expectation is false. “The twins fallacy wrongly assumes that Bob [twin #1] will make the same choices as Alice [twin #2] whatever strategy she chooses” (p. 159). However, this is not a wrong assumption. Unless Bob assumes that the probability of cooperation is exactly .5, he may, and should, assume that whatever he chooses to do will be more likely reciprocated than not. If one strategy is chosen by 60% of players, Bob’s probability of choosing it is .6 and so is Alice’s. Indeed, Binmore comes around to acknowledging that a correlation between choices across players may exist and that it may be rationally exploited, but then he concludes that “if they don’t choose independently, they aren’t playing a Prisoner’s Dilemma” (p. 159). They are instead playing a game of “Prisoner’s Delight” (p. 130), in which cooperation dominates. Nice trick. Now that a rational reason for cooperation has emerged, it is said to have changed the context so that it is no longer a threat to the theory. Remember that the theory defines the prisoner’s dilemma as a game in which the players choose independently.

In my view, the confusion stems from the meaning of “independence.” Suppose Alice cooperates with a probability of .8 and so does Bob. If their choices are independent, we expect mutual cooperation with p = .64, mutual defection with p = .04, and a mismatch of choices with p = .32. The probability that their choices match is thus .64 + .04 = .68. If, however, Bob just knows that Alice will respond as he does with p = .68, he can use this belief to compute the expected values of cooperation and defection and choose whichever is higher (Krueger et al., 2012).
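Sketching that calculation in Python, with the same illustrative payoffs as above (again my assumption, not the authors’) and the match probability of .68:

# Expected values when Bob believes Alice's choice will match his with probability p_match.
T, R, P, S = 5, 3, 1, 0   # illustrative payoffs: temptation, reward, punishment, sucker
p_match = 0.68            # probability of a match, as in the example above

ev_cooperate = p_match * R + (1 - p_match) * S  # match -> mutual cooperation; mismatch -> suckered
ev_defect = p_match * P + (1 - p_match) * T     # match -> mutual defection; mismatch -> temptation

print("EV(cooperate) =", ev_cooperate)  # 0.68 * 3 + 0.32 * 0 = 2.04
print("EV(defect)    =", ev_defect)     # 0.68 * 1 + 0.32 * 5 = 2.28

# Cooperation becomes the better bet once p_match exceeds (T - S) / ((T - S) + (R - P)).
threshold = (T - S) / ((T - S) + (R - P))
print("cooperate when p_match >", threshold)    # 5 / 7, about 0.71 for these payoffs

With these particular payoffs, a match probability of .68 still leaves defection slightly ahead; the point is only that the decision now turns on an expected-value comparison rather than on dominance, and a higher match probability (or a weaker temptation) tips it toward cooperation.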

Binmore’s two strategies add up to a contradiction. He suggests that people act rationally by getting to the Nash equilibrium and that their initial errors are random (trembling), but he also attacks theoretical explanations of non-Nash behavior. The attacks suggest that there is something systematic out there that these theories try to explain. Why else attack them?

The fact of human cooperation remains an inconvenient challenge to game theory, whether it regards cooperation as a random error or as a systematic mistake. The rhetoric of error belies claims by game theorists that they are not in the business of telling people what to do, but that instead they only derive equilibria. In his critique of Darwinism, David Stove (1995) observed that “you cannot call something an error without reprehending it” (p. 314). Translation: Game theorists reprehend cooperation. Stove goes on to say that “scientific theory cannot possibly reprehend, in any way at all, any actual facts. [...] Astronomy cannot criticize certain arrangements of stars or planets as erroneous, and no more can biology criticize certain organisms, or characteristics of them, as erroneous” (p. 319). Translation: Game theory cannot criticize people for cooperating. And “wherever Darwinism is in error, Darwinians simply call the organisms in question or their characteristics, an error!” (p. 320). Translation: Wherever game theory is in error, game theorists simply call the players in question, or their actions, an error. Stove claims that if strict Darwinism were true, we would not see the following behaviors (among others) in humans: accepting submission signals, adoption, the maternal resentment of baby snatching, feticide, contraception, homosexuality, the love of animals, altruism, fondness for alcoholic drinks, and respect for the wishes of the dead. But these behaviors occur with non-negligible frequency. Darwinism must declare either these behaviors or itself to be in error. And lo, altruism is one of the Darwinian errors. It is a game-theoretic error as well.

This post was about a specific disagreement with Binmore. I now repeat my initial recommendation of his book as a great introduction to game theory. I mean it.

Binmore, K. (2007). Game theory: A very short introduction. Oxford, UK: Oxford University Press.

Camerer, C. F. (2003). Behavioral game theory: Experiments in strategic interaction. Princeton, NJ: Princeton University Press.

Colman, A. M. (2003). Cooperation, psychological game theory, and limitations of rationality in social interaction. Behavioral and Brain Sciences, 26, 139-153.

Krueger, J. I., DiDonato, T. E., & Freestone, D. (2012). Social projection can solve social dilemmas. Psychological Inquiry, 23, 1-27.

Krueger, J. I., Freestone, D., & DiDonato, T. E. (2012). Twilight of a dilemma: A réplique. Psychological Inquiry, 23, 85-100.

Stove, D. (1995). Darwinian fairytales. New York, NY: Encounter Books.
