As you know by now, many human interactions can be modeled as games, in which two or more players try to do well while saddled with uncertainty about the behavior of others. Some of these games merely require coordination for individuals and the group to succeed, whereas others are social dilemmas because the individual’s self-interest and the group’s collective interest cannot be served at once. Formal game theory tells us what rational individuals, who expect others to be rational too, do in these games. A famous result is that game theory underpredicts successful coordination and cooperation in games it tellingly labels noncooperative.

To address the disconnect between game theory and game behavior, psychologists and other social scientists have proposed a variety of hypotheses. Unfortunately, no psychological master theory has emerged to replace game theory. Despite its empirical failures, game theory holds several advantages. It is parsimonious in its assumptions regarding self-interested rationality; it is rigorous in its mathematical derivations; and it can be applied to all types of games. Most of its psychological competitors make more assumptions before getting off the ground (nonparsimony is, by and large, undesirable); some do not meet the requirements of logic or rationality; and all are applicable only to a subset of games.

The Volunteer’s Dilemma (VoD) is the game theorist’s daymare (Diekmann, 1985; see also Grossman et al. for an introduction). It has two Pareto-efficient Nash equilibria but no way of telling us how to reach either one of them. It also has a mixed-strategy equilibrium, which, alas, does not eliminate the possibility of catastrophe; and again, the theory has nothing to say about the psychology required to get there.

These are the payoffs in a 2-person VoD: If you volunteer, you get the psychic benefit R for having done the noble thing. R is greater than P, the penalty each player receives if everyone defects, and it is less than T, the Temptation payoff received by the lone defector. There is no dominant strategy. If you knew the other person’s choice, you would always do the opposite, which yields the two pure-strategy Nash equilibria. Mutual volunteering is not Pareto efficient; its combined payoff 2R is always less than T + R, because T > R. Since one player does not know what the other will do, a mixed strategy is all that is left. Each player volunteers with a probability that can be derived from the payoffs. The equilibrium probability ensures that the player is invulnerable to exploitation. The derivation of a mixed-strategy Nash equilibrium is a mathematical feat, but, as noted above, it leaves a residual probability of the mutual-defection catastrophe, and it is silent on how individuals make single decisions probabilistically.
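The indifference argument behind that equilibrium probability can be made concrete. Here is a minimal sketch in Python; the payoff labels follow the post, but the numeric values are hypothetical, chosen only to satisfy T > R > P:

```python
# Illustrative derivation of the mixed-strategy equilibrium in a 2-person VoD.
# T = lone-defector temptation, R = volunteer's reward, P = mutual-defection penalty.
# These numbers are hypothetical; only the ordering T > R > P matters.
T, R, P = 4.0, 3.0, 1.0

# A volunteer gets R no matter what the other player does. A defector gets
# T if the other volunteers (with probability q) and P otherwise. At the
# mixed equilibrium, each player is indifferent between the two strategies:
#   R = q*T + (1 - q)*P   =>   q = (R - P) / (T - P)
q = (R - P) / (T - P)
print(f"Equilibrium probability of volunteering: q = {q:.3f}")

# The catastrophe survives in equilibrium: both defect with probability (1-q)^2.
p_catastrophe = (1 - q) ** 2
print(f"Probability of mutual defection: {p_catastrophe:.3f}")
```

With these numbers each player volunteers with probability 2/3, yet mutual defection still occurs about 11% of the time; raising T or lowering R pushes the volunteering rate down further.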

Suppose you have decided that volunteering with a probability of .75 is the rational thing to do. How will you actually do this? Do you have a mental device that allows you to make a probabilistic choice if you can choose only once? I think there is no such thing. The best most statisticians can do is to observe a sequence of choices, determine the probability of V (vs. D) from the relative frequency of V, such that p(V) = f(V)/(f(V) + f(D)), and make sure that the events are serially independent. They do not know – as far as I know – how a single probabilistic event comes about within an individual. How would you?
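The statistician’s frequency estimate described above can be sketched as follows. The sequence of choices is simulated here, not real data, and the serial-independence check is deliberately crude:

```python
import random

random.seed(1)
# Hypothetical observed sequence: 'V' = volunteer, 'D' = defect.
# By construction, V is drawn about 75% of the time.
choices = [random.choice('VVVD') for _ in range(1000)]

# Relative-frequency estimate: p(V) = f(V) / (f(V) + f(D)).
f_V = choices.count('V')
f_D = choices.count('D')
p_V = f_V / (f_V + f_D)
print(f"Estimated p(V) = {p_V:.3f}")

# A crude look at serial independence: does the choice after a V differ
# from the choice after a D? Under independence, the two rates should be close.
after_V = [b for a, b in zip(choices, choices[1:]) if a == 'V']
after_D = [b for a, b in zip(choices, choices[1:]) if a == 'D']
print(f"p(V | previous V) = {after_V.count('V') / len(after_V):.3f}")
print(f"p(V | previous D) = {after_D.count('V') / len(after_D):.3f}")
```

Note that this only recovers a probability from many choices; it says nothing about how any one of those choices was generated, which is exactly the author’s point.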

There is an elegant, though seemingly weird, way around this. You can externalize the probabilistic basis of your decision (as the Diceman recommendeth). If your goal is to volunteer with a probability of .75, for example, toss a pair of coins and volunteer unless both come up heads. I speculate that ordinary people will frown upon the strategic admission of chance into their decision making. We are taught to be autonomous and strong decision makers. In this cultural climate, a Diceman-type decider would be falsely accused of not knowing what he is doing, when in fact he is being hyper-rational. I am looking forward to testing the hypothesis that a player who chooses to volunteer without probabilistic hedges is seen as more rational and moral than a player who plays the Nash odds.
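Taking the “both come up heads” rule literally: defecting only when two fair coins both land heads yields a volunteering probability of 1 − ¼ = .75. A quick simulation, purely to check the arithmetic:

```python
import random

def coin_flip_decision(rng=random):
    """Externalized randomization: toss two fair coins and defect only if
    both come up heads, so p(volunteer) = 1 - 1/4 = 0.75."""
    heads1 = rng.random() < 0.5
    heads2 = rng.random() < 0.5
    return 'D' if (heads1 and heads2) else 'V'

random.seed(0)
trials = [coin_flip_decision() for _ in range(100_000)]
rate = trials.count('V') / len(trials)
print(f"Observed volunteering rate: {rate:.4f}")  # close to 0.75
```

The device generalizes: any target probability with a finite binary expansion can be hit with enough coin tosses, and a die or a random-number table covers the rest.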

When we continue, we will look at what kind of medicine some of the psychological game theories have to offer to relieve the volunteer’s headache.

Diekmann, A. (1985). Volunteer's dilemma. Journal of Conflict Resolution, 29, 605-610.

“I became the Dalai Lama not on a volunteer basis.” ~ The Dalai Lama
