*Life is a sexually transmitted disease and the mortality rate is one hundred percent.*

~ R. D. Laing

*Man only plays when in the full meaning of the word he is a man, and he is only completely a man when he plays.*

~ Friedrich Schiller

You have heard of the *Prisoner’s Dilemma* or the *give-some game*, which is a simple version of it. Imagine Ali & Baba are given 2 dirham each. If they give their money to the other, the amount is doubled, but they can also keep it. To give is to cooperate and to keep is to defect. Ali’s payoffs are 6 dirham for unilateral defection, 4 dirham for mutual cooperation, 2 dirham for mutual defection, and nothing for unilateral cooperation. Game theory says that a rational person defects, realizing that defection pays more (here, 2 dirham more) no matter what the other does. Of course, when Ali & Baba end up with 2 dirham each, they deplore the fact that they could have done better had they only managed to coordinate on mutual cooperation. The Prisoner’s Dilemma is a headache because individual rationality subverts efficient outcomes; it ensures that rational people will remain poor.
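The dominance claim is easy to verify mechanically. A minimal sketch (with hypothetical variable names) lays out the four outcomes and checks that defection pays 2 dirham more whatever the other player does:

```python
# The give-some game from the text: cooperate (C) = give your 2 dirham,
# defect (D) = keep them. Payoffs follow the figures given above.
payoff = {  # (Ali's move, Baba's move) -> (Ali's payoff, Baba's payoff)
    ("C", "C"): (4, 4),  # mutual cooperation
    ("C", "D"): (0, 6),  # Ali's unilateral cooperation
    ("D", "C"): (6, 0),  # Ali's unilateral defection
    ("D", "D"): (2, 2),  # mutual defection
}

# Defection dominates: whatever Baba does, Ali earns 2 dirham more by defecting.
for baba in ("C", "D"):
    gain = payoff[("D", baba)][0] - payoff[("C", baba)][0]
    print(f"If Baba plays {baba}, Ali gains {gain} dirham by defecting")
```

Because the same comparison holds for Baba, mutual defection is the unique equilibrium of the one-shot game, even though both would prefer mutual cooperation.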

Lest you conclude that mutual defection is the tragic fate of the common player, game theory reassures you that mutual cooperation is possible if Ali & Baba meet again – and again – and again . . . Playing repeatedly, they can find an equilibrium of mutual cooperation, where each reciprocates the kindness of the other. The reason why this works is that each worries that the other might play the *GRIM* strategy if betrayed. GRIM responds to defection with defection and will never return to cooperation no matter what the first defector does. Ken Binmore, a resident sage of game theory, concludes that “the pair (GRIM, GRIM) is a Nash equilibrium for the indefinitely repeated Prisoner’s Dilemma” (2007, p. 73).

The operative word is “indefinitely.” Why is indefiniteness necessary for cooperation to equilibrate? The answer is *backward induction*. If the number of plays is finite, Ali (Baba) can infer that Baba (Ali) will defect on the last round because there is no longer any worry about a GRIM response. Once defection in the last round is assured, it is clear that the choice in the penultimate round must also be defection because “nothing they do today can affect what will happen tomorrow” (Binmore, p. 72). The same logic applies to the antepenultimate round and so on all the way back to round one. The term “indefinite” therefore cannot refer to not knowing how many rounds are played. As long as the number of rounds is known to be finite, backward induction demands defection; a player need not know the exact number, only that there is one.
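The unraveling argument can be sketched as a short loop, working backward from the last round (names here are mine, not game-theoretic conventions):

```python
# Backward induction over a finitely repeated Prisoner's Dilemma,
# using the payoffs from the text (C = cooperate, D = defect).
def stage_payoff(me, other):
    return {("C", "C"): 4, ("C", "D"): 0, ("D", "C"): 6, ("D", "D"): 2}[(me, other)]

def induce(T):
    """Work backward from the last of T rounds. Future play is already
    pinned down, so each round reduces to the one-shot comparison."""
    actions = []
    for rounds_left in range(1, T + 1):
        # Nothing done today can change the (already determined) future,
        # so only the stage game matters -- and D strictly dominates C.
        defect_dominates = all(
            stage_payoff("D", other) > stage_payoff("C", other)
            for other in ("C", "D")
        )
        actions.append("D" if defect_dominates else "C")
    return actions

print(induce(4))  # ['D', 'D', 'D', 'D']
```

The loop terminates for any finite `T`, which is precisely why the argument collapses cooperation for every known-finite horizon, however long.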

Why would anyone think that a Prisoner’s Dilemma will go on forever? Is there a special place in hell for game theorists where they sit and play the Prisoner’s Dilemma until the place freezes over? Only in such a place can they put the so-called *folk theorem* into practice. The folk theorem says that sustained mutual cooperation can be an equilibrium when play goes on indefinitely. Mortals are, however, just that. They die (before they can go to their special place in hell). In this life, all games are finite, and indefiniteness can only refer to the inability to know the number of rounds. When folks do cooperate – which they thankfully do with a healthy probability – they must do so for other reasons.

Game theorists try to finesse the issue by saying that after each round of play, there is a nonzero probability that another round will be played. This assumption retains the possibility of infinity, although the probability of play with a very very large number of rounds becomes very very very small. But this is just a mathematical trick. It founders on the rocks of reality. Suppose you applied this assumption to life itself. It is true that on each birthday, there is a nonzero probability that you will be able to light another candle next year. Hence, immortality is not impossible, it is just very very very unlikely. Think of it as a version of Hume’s complaint about induction. Even after seeing a huge number of white swans, you can’t be sure that there are no black swans. Knowing that all people who ever lived eventually died, you can’t be sure that those living now will also die. Eternal life and infinite play cannot be rejected on logical grounds, but the empirical case against them is overwhelming. We cannot use the idea of infinite play to explain the existence of cooperation in a finite world. Let us look elsewhere – for example, to psychology.
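For these particular payoffs, the continuation-probability trick can be made concrete. A sketch (my arithmetic, not Binmore’s): with continuation probability *d*, cooperating forever against GRIM is worth 4/(1-d), while grabbing 6 once and then suffering mutual defection is worth 6 + 2d/(1-d), so cooperation is sustainable exactly when d ≥ 1/2.

```python
# After each round, another round occurs with probability delta.
# GRIM-vs-GRIM cooperation beats a one-time grab of 6 followed by
# mutual defection when 4/(1-d) >= 6 + 2d/(1-d), i.e. d >= 1/2.
def cooperation_sustainable(delta):
    coop_value = 4 / (1 - delta)              # cooperate every round
    grab_value = 6 + 2 * delta / (1 - delta)  # defect once, then GRIM bites
    return coop_value >= grab_value

print(cooperation_sustainable(0.6))  # True
print(cooperation_sustainable(0.4))  # False

# Yet the chance of still being at the table at round n vanishes geometrically:
print(0.9 ** 100)  # about 2.7e-05 -- "very very very small"
```

The math is internally consistent; the complaint in the text is empirical, not formal: the geometric tail pretends that arbitrarily long play is possible, which for mortals it is not.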

Binmore’s disdain for psychology drips from the page. He rejects attempts to explain cooperation in psychological, as opposed to game-theoretic, terms as silly, misguided, and foolish. He writes that “*insofar as they are remembered, the many fallacies that were invented in the hopeless attempts to show that it is rational to cooperate in the Prisoner’s Dilemma are now mostly quoted as entertaining examples of what psychologists call magical reasoning, in which logic is twisted to secure some desired outcome*” (p. 19).

To be coherent, Binmore applies the same standards to explanations of trust. In the game-theoretic trust game, a trustor has the option to invest a sum of money (usually $10) in a trustee. If invested, the amount multiplies (typically by 3). The trustee may then either keep the money or reward trust by returning some of it to the trustor (Evans & Krueger, 2009). The trust game is like a one-sided Prisoner’s Dilemma, though not quite. Yet, the game-theoretic analysis is the same. A rational, self-interested trustee will not reciprocate in a one-shot game because reciprocation spells a loss of funds. Knowing this, a rational trustor will not trust in the first place. Endless play provides many equilibria, of which trust-reciprocity is one. Again, indefiniteness in the sense of infinity is a must, and we don’t have that. Binmore is at a loss to explain the prevalence of trust and reciprocity.
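The backward induction for the one-shot trust game takes two lines of reasoning, sketched below with the usual amounts cited in the text (the function names are mine):

```python
# One-shot trust game: the trustor may invest a $10 stake, which triples;
# the trustee then decides how much of the $30 to return.
def trustor_payoff(invested, returned, endowment=10):
    return endowment - invested + returned

def trustee_payoff(invested, returned, multiplier=3):
    return invested * multiplier - returned

# Step 1 (trustee's node): every dollar returned is a dollar lost,
# so a self-interested trustee returns nothing.
best_return = max(range(0, 31), key=lambda r: trustee_payoff(10, r))
print(best_return)  # 0

# Step 2 (trustor's node): anticipating this, not investing keeps $10,
# while investing ends with $0 -- so a rational trustor never trusts.
no_trust = trustor_payoff(invested=0, returned=0)
misplaced_trust = trustor_payoff(invested=10, returned=best_return)
print(no_trust > misplaced_trust)  # True
```

Note the asymmetry with the Prisoner’s Dilemma: only the trustee faces a temptation, which is what makes the game “one-sided, though not quite.”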

Binmore suggests that a trustee reciprocates because he “can’t afford to lose his reputation for honesty by cheating” (p. 83). The trustee does not want to be *GRIMmed*. If the fear of reputation loss – which I think is a powerful motivator – kicks in in a finite world, we have overcome the sci-fi world of the folk theorem, and we have entered the domain of psychology. Notice that the emotion of fear comes with an expectation that certain aversive events will happen with a certain probability. Rational gamers in the Prisoner’s Dilemma or the trust game need no probabilities; they only need to be able to count and subtract.

Back to Binmore. He recalls that “as a small child [he was] wondering why shopkeepers hand over the goods after being paid. Why don’t they just pocket the money?” (p. 76). Why indeed? If they thought they were playing the game of buy-and-sell, they could equilibrate folk theoretically. But they knew that they would not stay in business forever, and they expected young Kenneth to start shopping elsewhere even before that time. Sure, they would be rationally concerned about their reputation as honest shopkeepers, but they needed psychology for that.

The quest for a psychologically informed game theory continues. For those interested, we published a proposal in *Psychological Inquiry* (Krueger, DiDonato, & Freestone, 2012) and our colleagues offered more ideas in 9 open peer commentaries.

Binmore, K. (2007). *Game theory: A very short introduction*. New York, NY: Oxford University Press.

Evans, A. M., & Krueger, J. I. (2009). The psychology (and economics) of trust. *Social and Personality Psychology Compass: Intrapersonal Processes, 3*, 1003-1017.

Krueger, J. I., DiDonato, T. E., & Freestone, D. (2012). Social projection can solve social dilemmas. *Psychological Inquiry, 23*, 1-27.