When I introduced the Volunteer’s Dilemma (VoD), I explained how game theory looks at it, noted that psychologists only agree that there must be a better explanation, and worried about their failure to find a convincing alternative.
Recall the basic scenario. In a 2-person game, you and I each have a choice between volunteering (V) and defecting (D). If you choose D, while I choose V, you get T clams while I get R, and T>R. If we both choose V, we both get R. If we both choose D, we both get P clams, where P<R. The VoD is trickiest when played once by strangers who cannot communicate with each other.
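For concreteness, the payoff structure can be sketched in a few lines of code (the function and variable names are mine, and the default numbers T = 5, R = 4, P = 0 are merely illustrative):

```python
# Payoff to "me" in a one-shot, two-person Volunteer's Dilemma (T > R > P).
# Volunteering (V) guarantees R; defecting (D) pays T against a volunteer
# and P against a fellow defector.
def payoff(me, other, T=5, R=4, P=0):
    if me == "V":
        return R
    return T if other == "V" else P

# All four cells of the matrix, from "my" point of view:
matrix = {(me, other): payoff(me, other)
          for me in ("V", "D") for other in ("V", "D")}
```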
Traditional social psychology denies that there is a dilemma. Darley & Latané (1968) showed that individual bystanders’ reluctance to intervene in an emergency increases with the number of bystanders present (see Fischer et al., 2011, for a review and meta-analysis). Game theory views this creeping inaction as a manifestation of a rational mixed-strategy equilibrium (Krueger & Massey, 2009). In contrast, mainstream social psychological opinion is that people hide in the group to evade responsibility. By dismissing the costs borne by the volunteers, social psychologists treat volunteering as if it were the dominating choice. This view turns a failure to volunteer into a moral failure. Defectors appear blameworthy: presumably, they selfishly betray others by succumbing to the lure of loafing.
The reading public likes this view because it is folk psychological. Folk psychologists seek causal explanations even of single events or behaviors. Focusing on the moral valuation of a behavior, they blame defectors and praise volunteers. From a game-theoretic perspective, such causal and moral attributions reduce to the fundamental attribution error, which is a form of outcome bias. Insofar as the fundamental attribution error privileges personal over situational causes, it is underdetermined (i.e., not even wrong) in the VoD. A player who decides to volunteer with a probability of, say, .75 would end up volunteering in 75% of the rounds if the game were played many times. But if there is only one round, this player only appears to have chosen volunteering if he volunteers and only appears to have chosen defection if he defects. In fact, he did no such thing. By acting probabilistically, he does not generate a particular behavior by a discrete choice. Instead, his choice is a strategy that generates each behavior with a particular probability. Note that this decision is also consistent with a situational attribution; the player can rationally claim that the particular payoff structure of the game dictated the probability value he used. In short, a probabilistic understanding of choice in the VoD (and elsewhere) undercuts social psychological and folk psychological efforts to understand strategic behavior in discrete causal terms.
If the loafing and attribution hypotheses fail, what else does psychology have to offer? I now briefly review some attempts to explain cooperation and coordination in other games (e.g., the prisoner’s dilemma or the hi-lo coordination game) and ask how they might apply to the VoD (see Krueger, DiDonato, & Freestone, 2012, and Colman, Pulford, & Lawrence, 2014, for more on these hypotheses). I consider two versions of the VoD, one that is “nice” (R = 4) and another that is “nasty” (R = 1); T = 5 and P = 0 in both. You recognize intuitively that choosing V is more appealing in the nice game than in the nasty game. On this score, your folk psychological sense is in tune with the game-theoretic mixed-strategy equilibrium: a rational player volunteers with a greater probability in the nice VoD than in the nasty VoD. But why is your intuition so well honed? Can other psychological theories explain it?
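The equilibrium calculation behind this claim can be sketched as follows (a toy computation; the function name is mine). In the symmetric mixed equilibrium, each player volunteers with the probability that leaves the other indifferent between V and D:

```python
# Indifference: q*T + (1-q)*P = R  =>  q = (R - P) / (T - P).
def equilibrium_p(T, R, P):
    return (R - P) / (T - P)

p_nice = equilibrium_p(T=5, R=4, P=0)    # 0.8
p_nasty = equilibrium_p(T=5, R=1, P=0)   # 0.2
```

The equilibrium volunteering probability is indeed higher in the nice game (.8) than in the nasty game (.2).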
The differences between payoffs may provoke emotional responses. The difference T-R is often considered the cost of volunteering. This interpretation implies that defection is the default and that a switch to volunteering extracts a price. Alternatively, T-R reflects temptation (hence the label T) or greed: assuming that you choose V, I can do better by choosing D. Conversely, the difference R-P is often considered the “psychic benefit” of volunteering. However, R-P can also be seen as an index of fear: assuming that you choose D, I can prevent disaster (and realize the socially efficient outcome at the same time) by choosing V. Put differently, volunteering combines resistance to temptation with risk avoidance, whereas defection combines greed with risk seeking. The ratio ([T-R]/[R-P]) captures the nastiness of the game; nice games have a low ratio. The emotion-regulation hypothesis matches the game-theoretic prediction that there is more volunteering in nice than in nasty games.
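As a quick check, the nastiness ratio can be computed for both games (a sketch; the function name is mine):

```python
# Nastiness: temptation (T - R) relative to fear (R - P).
def nastiness(T, R, P):
    return (T - R) / (R - P)

nastiness(5, 4, 0)   # nice game: 0.25
nastiness(5, 1, 0)   # nasty game: 4.0
```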
Some people not only consider the payoffs at stake for them personally; they are benevolent in that they also value the payoffs going to the other. Suppose I value your payoffs half as much as I value my own (as opposed to not at all). I then subjectively transform my payoffs such that the objectively provided payoff T becomes the subjectively effective payoff T+.5R; R (for mutual volunteering) becomes R+.5R; R (for unilateral volunteering) becomes R+.5T; and P becomes P+.5P. With these transformations, the difficulty ratio drops from 4 to 1.33 in the nasty game, and from .25 to .083 in the nice game. In both games, benevolence does not remove the temptation to defect for gain, but it increases the psychic benefit of volunteering (or the fear of being defected against). Benevolence makes volunteering more likely and attenuates the difference between nice and nasty games, while not changing the nature of the underlying dilemma.
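These figures can be reproduced in code, on one reading of the transformed ratio: the temptation term becomes the transformed T minus the transformed payoff for unilateral volunteering, and the fear term becomes the transformed payoff for mutual volunteering minus the transformed P (a sketch; the function names and this reading are mine):

```python
# Benevolence: I value your payoff with weight w on top of my own.
def benevolent(T, R, P, w=0.5):
    Tb = T + w * R        # I defect, you volunteer
    Rm = R + w * R        # both volunteer
    Ru = R + w * T        # I volunteer, you defect
    Pb = P + w * P        # both defect
    return Tb, Rm, Ru, Pb

def ratio_after_benevolence(T, R, P, w=0.5):
    Tb, Rm, Ru, Pb = benevolent(T, R, P, w)
    # temptation shrinks to Tb - Ru = (1 - w)(T - R);
    # fear grows to Rm - Pb = (1 + w)(R - P)
    return (Tb - Ru) / (Rm - Pb)

ratio_after_benevolence(5, 1, 0)   # nasty game: ~1.33 (down from 4)
ratio_after_benevolence(5, 4, 0)   # nice game: ~0.083 (down from 0.25)
```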
Some people also value fairness. They are inequality averse, which means they place a negative weight on self-other differences in payoffs. For people who weight inequality at .5, the nasty game yields T = 3 (i.e., T-.5[T-R]) and R (for unilateral volunteering) = -1, whereas the nice game yields T = 4.5 and R = 3.5. After transformation, the difficulty ratio remains 4 for the nasty game and 1/4 for the nice game. It is noteworthy that in the nasty game, the value transformation reduces the payoff for unilateral volunteering so much that defection becomes the dominating strategy. In other words, inequality aversion, despite being a moral preference, can lead to the worst outcomes.
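The dominance claim is easy to verify mechanically (a sketch; the function names are mine):

```python
# Inequality aversion: subtract weight a per unit of self-other payoff
# difference in the two unequal cells; the equal cells (R, P) are untouched.
def inequality_averse(T, R, P, a=0.5):
    Ti = T - a * (T - R)   # I defect, you volunteer
    Ru = R - a * (T - R)   # I volunteer, you defect
    return Ti, R, Ru, P

def defection_dominates(T, R, P, a=0.5):
    Ti, Rm, Ru, Pi = inequality_averse(T, R, P, a)
    # D dominates if it beats V both against a volunteer and against a defector
    return Ti > Rm and Pi > Ru

defection_dominates(5, 1, 0)   # nasty game: True
defection_dominates(5, 4, 0)   # nice game: False
```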
Taken together, the predictions derived from two simple social preference models are complex and dissatisfying. One needs to know which social preference is at play (benevolence or inequality aversion) and whether the objective payoffs describe a nice or a nasty game. If greater volunteering is what one wants (in line with folk psychological preferences), then benevolence is effective, whereas inequality aversion is not.
Social preference theories have an Achilles’ heel: they cannot guide choice without falling back on game theory or on other theories that address games without a dominating strategy. Game theory deals with such dilemmas by deriving probabilistic equilibria; other theories offer hypotheses about how people predict the likely behavior of others before they themselves commit to a choice.
A simple, even simple-minded, approach is to assume that players are agnostic about the likely behavior of others: they assume that the other player will volunteer with p = .5. In the nice game, the expected value of volunteering is 4 (i.e., R) and the expected value of defecting is 2.5 (i.e., [T+P]/2). In the nasty game, the corresponding values are 1 and 2.5. Hence, a player assuming p(V) = .5 will volunteer in a nice game and defect in a nasty game. Arguably, players vary in their optimism regarding the probability that others volunteer. From the payoff matrix, they can calculate the p value for which the expected values of volunteering and defecting are the same (R/T if P = 0), and then ask whether their own subjective estimate of p is greater or smaller than that value. If it is greater, they will defect (i.e., do the opposite of what they think the other is likely to do).
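The expected-value bookkeeping behind this paragraph can be sketched as follows (function names are mine):

```python
# Given a subjective probability p that the other volunteers:
# volunteering pays R no matter what; defecting pays T with p, P with 1-p.
def ev_defect(T, R, P, p):
    return p * T + (1 - p) * P

# Agnostic player, p = .5:
ev_defect(5, 4, 0, 0.5)   # nice game: 2.5 < R = 4, so volunteer
ev_defect(5, 1, 0, 0.5)   # nasty game: 2.5 > R = 1, so defect

# Indifference point (R/T when P = 0): defect if your estimate of p exceeds it.
def indifference_p(T, R, P):
    return (R - P) / (T - P)

indifference_p(5, 4, 0)   # nice game: 0.8
indifference_p(5, 1, 0)   # nasty game: 0.2
```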
The weakness of this approach is that it says nothing about where the subjective estimates of p come from. Presumably, estimates of p are lower in nasty games than in nice games, and this is a problem. On the one hand, people are more inclined to defect in a nasty game than in a nice game simply because of the less favorable payoffs. On the other hand, the lowering of the subjective estimate of p gives them a reason to volunteer. Yet, once they consider volunteering, their estimate of p goes up again, which in turn encourages them to switch back to defection, and so on.
The theory of social projection explicitly notes that people form estimates of the likely behavior of others by consulting their own (intended) behavior. Whereas social projection can account for cooperation in games such as the prisoner’s dilemma, it has trouble with the VoD, which requires negative coordination. When it is best to do the opposite of what you think the other is about to do, social projection can drive you mad. If pr is the probability that the other will choose as you choose, you will volunteer if pr > (T-R)/(T-P). In the nice game, you volunteer if pr > .2, which is easy to beat; in the nasty game, you volunteer if pr > .8, which is hard to beat. In the nice game, the sweet zone lies where projection is high enough to motivate volunteering, but low enough (.2 < pr < .5) to think that the other person might reap the temptation payoff (so your volunteering would not be in vain). In the nasty game, the sweet zone lies where projection is low enough to motivate defection, but high enough (.5 < pr < .8) to think that the other person will volunteer for you.
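The projection threshold follows from comparing expected values: volunteering pays R regardless, while defecting pays pr*P + (1 - pr)*T, so volunteering wins when pr exceeds (T - R)/(T - P) (a sketch; the function name is mine):

```python
# Volunteer iff R > pr*P + (1 - pr)*T, i.e., iff pr > (T - R) / (T - P).
def projection_threshold(T, R, P):
    return (T - R) / (T - P)

projection_threshold(5, 4, 0)   # nice game: 0.2
projection_threshold(5, 1, 0)   # nasty game: 0.8
```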
Maximizing joint payoffs and best replies
In many games, it is useful for the collective if individuals think on behalf of the collective. In the prisoner’s dilemma, for example, it is beneficial if both players seek to harvest the maximum joint payoff. Both cooperate because they realize that their individual cooperation is a necessary condition for mutual cooperation. This approach, sometimes called team reasoning, does not succeed in the VoD. The maximum (Pareto-efficient) joint payoff occurs when one person volunteers while the other defects. The motive to maximize the sum of payoffs cannot guide choice; it only puts the dilemma into sharper relief. Nice games and nasty games appear equally puzzling to the team reasoner.
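A few lines of code make the team reasoner’s predicament explicit: the maximum joint payoff is reached in two mirror-image cells, so the sum of payoffs cannot tell either player which role to take (variable names are mine; payoffs from the nice game):

```python
# Joint (summed) payoffs in the nice game (T = 5, R = 4, P = 0):
totals = {("V", "V"): 4 + 4,   # 8
          ("V", "D"): 4 + 5,   # 9
          ("D", "V"): 5 + 4,   # 9
          ("D", "D"): 0 + 0}   # 0

best = max(totals.values())
best_cells = {cell for cell, v in totals.items() if v == best}
# best_cells == {("V", "D"), ("D", "V")}: a tie between asymmetric outcomes
```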
Finally, there is the hypothesis that players choose whichever strategy is the best response to an opponent who can anticipate what they will do. Freiherr von Stackelberg is credited with this idea, and I am anxiously awaiting a proof that it is not circular. In the VoD, I would reason that you already know that I will volunteer, and that you will therefore defect; I therefore indeed volunteer. However, I might equally well reason that you already know that I will defect, and that you will therefore volunteer; I therefore indeed defect. It seems utterly indeterminate what I will choose, and the niceness of the game does not seem to matter either.
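This indeterminacy can be checked mechanically. Under Stackelberg reasoning, a choice is stable if it is a best reply to an opponent who best-replies to it, and in the VoD both V and D pass that test (a sketch; function names are mine, payoffs from the nice game):

```python
# My best reply: V pays R; D pays T against a volunteer, P against a defector.
def best_reply(other, T=5, R=4, P=0):
    return "V" if R > (T if other == "V" else P) else "D"

def stackelberg_consistent(my_choice):
    # You anticipate my_choice and best-reply; is my_choice then my best reply?
    return best_reply(best_reply(my_choice)) == my_choice

stackelberg_consistent("V")   # True: you would defect, and V answers D
stackelberg_consistent("D")   # True: you would volunteer, and D answers V
```

The same two fixed points exist for the nasty payoffs, since only the order T > R > P matters.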
Game theory, much battered, holds its own in the VoD. Various psychological hypotheses about how sensing and thinking individuals navigate the riddle show serious deficiencies, although some of them work quite well in other game-theoretic contexts. In the VoD, traditional theorizing on the bystander effect and causal attribution turns out to be moralistic and irrational. The emotion-regulation hypothesis does well, but it is less parsimonious than the game-theoretic account. Social preference models fail, both with regard to coherence and to their own moral pretensions. Expectation theories run in circles without helping players to successfully discoordinate. Joint-maximization (team reasoning) and von Stackelberg reasoning seem neither rational nor able to describe the behavioral data.
In the previous post, I anticipated my conclusion that game theory does well in the VoD. It does, however, demand something beyond our present capabilities: generating individual choices with a particular probability. We need to learn how to do this. Perhaps it is time to roll the dice.
I searched Google Images for a suitable picture of self-sacrifice. Instead, I found a dice picture (see above), which is just as well. It then occurred to me that volunteering shows its dark side in hostile intergroup contexts. Suicide bombers volunteer to sacrifice themselves for the ingroup, hoping to do damage to the outgroup. This should give us pause before getting all too oooh-and-ahhh about the moral beauty of volunteering. I entered the term “suicide bomber” into Google Images, and what I saw made me sick. Go there at your own risk if you must. Don’t tell them that I sent you. I did not. I am advising you not to go.
~ A colleague pointed out that the suicide bomber scenario is not a VoD because the payoff for unilateral volunteering is less than the payoff for universal defection. This may be so from the outside view, but, as is well known, groups seeking to recruit suicide volunteers emphasize the other-worldly benefits awaiting, or the benefits to the ingroup (especially family and clan) and the harm to the outgroup. If would-be bombers viewed their own death as the most negative outcome, the rank order of the payoffs would be T > P > R, making defection the dominating strategy.
Colman, A. M., Pulford, B. D., & Lawrence, C. L. (2014). Explaining strategic coordination: Cognitive hierarchy theory, strong Stackelberg reasoning, and team reasoning. Decision, in press.
Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 8, 377-383.
Fischer, P., Krueger, J. I., Greitemeyer, T., Vogrincic, C., Kastenmüller, A., Frey, D., Wicher, M., & Kainbacher, M. (2011). The bystander effect: A meta-analytic review on bystander intervention in dangerous and non-dangerous emergencies. Psychological Bulletin, 137, 517-537.
Krueger, J. I., DiDonato, T. E., & Freestone, D. (2012). Social projection can solve social dilemmas. Psychological Inquiry, 23, 1-27.
Krueger, J. I., & Massey, A. L. (2009). A rational reconstruction of misbehavior. Social Cognition, 27, 785-810.