He that does good to another does good also to himself.
~ Seneca of Cordoba, Roman stoic who looked beyond the distinction between egoistic and social preferences
The famous prisoner’s dilemma (PD) presents a choice between two strategies: cooperation and defection. Both players have this choice, and that is the trouble. If both cooperate, each gets a handy reward (R), which is higher than the penalty (P) for mutual defection. However, if one defects while the other cooperates, the former gets the temptation payoff (T), which is higher than R, and the latter gets the sucker’s payoff (S), which is lower than P. The classical solution to the PD is that two rational players will defect, noticing that each is better off defecting no matter what the other player does (T > R and P > S). Yet many people cooperate, and as they do, they collectively earn more than classic rationalists allow. One challenge of this finding is to explain cooperation; another is to make it even stronger.
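The payoff ordering and the dominance argument can be sketched in a few lines of code (the numerical payoffs are illustrative, not taken from any particular study):

```python
# Illustrative payoffs satisfying the PD ordering T > R > P > S.
T, R, P, S = 5, 3, 1, 0

# My payoff as a function of (my move, other's move); "C" = cooperate, "D" = defect.
payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

# Defection dominates: it pays more against either move by the other player.
assert payoff[("D", "C")] > payoff[("C", "C")]  # T > R: better to defect against a cooperator
assert payoff[("D", "D")] > payoff[("C", "D")]  # P > S: better to defect against a defector

# Yet mutual cooperation beats mutual defection: R > P.
assert payoff[("C", "C")] > payoff[("D", "D")]
```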
To explain cooperation, a popular approach is to assume that people have social preferences in addition to looking out for their own material interests. People care—if only somewhat—about the welfare of others and about what others think of them. One interesting social preference is called inequality (or ‘inequity’) aversion, which refers to a negative reaction (disutility) to any perceived difference between the payoff for the self and the payoff for the other player. Economist Ernst Fehr (e.g., Fehr & Camerer, 2007) allows different terms for an inequality in which one is ahead and for an inequality in which one is behind. He calls the former reaction compassion (the term guilt might also apply), and he calls the latter reaction envy.
Can compassion and envy explain cooperation in the PD? Let’s begin with the assumption that compassion and envy are equally strong. The effect of compassion is that the objective payoff T offered for unilateral defection is subjectively reduced to T - b(T - S), where b is the weight given to compassion. The effect of envy is that S is likewise reduced to S - a(T - S), where a is the weight given to envy. With S becoming more negative, envy cannot motivate more cooperation, only less.
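A minimal numerical sketch of this transformation, using illustrative payoffs and equal weights a = b:

```python
# Inequality aversion applied to the PD's unequal outcomes (illustrative numbers).
T, R, P, S = 5, 3, 1, 0
a = 0.4  # envy weight: disutility of being behind
b = 0.4  # compassion weight: disutility of being ahead

# Only the unequal outcomes are transformed; R and P are equal splits.
T_subj = T - b * (T - S)  # temptation discounted by compassion
S_subj = S - a * (T - S)  # sucker's payoff further depressed by envy

print(T_subj, S_subj)  # 3.0 -2.0: envy only makes the sucker's payoff worse
```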
What about compassion? If compassion is strong enough to make T smaller than R, the PD is transformed into an assurance game, that is, a game in which players prefer mutual cooperation over unilateral defection. This transformation is achieved when b > (T - R)/(T - S). Notice that the assurance game is still a dilemma. Although defection no longer dominates, players are still tempted to play it safe and defect in order to protect themselves from being suckered. Because of this, Fehr & Camerer can only say that “the average player prefers cooperation if she believes the opponent cooperates, and prefers defection if she believes the opponent defects” (p. 421). This is a big concession: Without a theory of expectations, social preference models cannot predict cooperation. Once inequality aversion has turned the objective PD into a subjective assurance game, players whose subjective probability of the other’s cooperation is greater than (T - S)/(T + R – P - S) will find cooperation more attractive than defection. The question is, where can they find these expectations?
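With illustrative payoffs, a few lines suffice to check when compassion turns the PD into an assurance game, and what expectation the threshold formula above then demands:

```python
T, R, P, S = 5, 3, 1, 0  # illustrative PD payoffs

# Compassion strong enough to break the dominance of defection:
b_threshold = (T - R) / (T - S)  # here (5 - 3) / (5 - 0) = 0.4
b = 0.5                          # exceeds the threshold
T_subj = T - b * (T - S)         # 2.5, now below R = 3: an assurance game
assert b > b_threshold and T_subj < R

# Probability of the other's cooperation above which cooperating pays:
p_star = (T - S) / (T + R - P - S)  # (5 - 0) / (5 + 3 - 1 - 0) = 5/7
print(round(p_star, 3))  # 0.714
```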
This is a dilemma for social preference theorists. One option is to grant their players information about their opponents. If they do, however, they can no longer claim to be studying one-shot anonymous games because the point of such games is to not have extra information. The other option is to say that (some) people are reciprocal altruists. They cooperate if they expect others to cooperate (and if they hate inequality). This option begs the question of expectation.
It is not only that many people (about half on average) cooperate in the PD; they are also sensitive to the finer points of the payoff structure. Consider a game in which T = 12, R = 11, P = 1, and S = 0. This game seems ‘nice’ because there is little room for greed (T – R = 1) or fear (P – S = 1). One might feel emboldened to cooperate because there is little to lose but much to gain if both act in the same way (R – P = 10). In contrast, a game in which T = 12, R = 7, P = 5, and S = 0 seems ‘nasty’ because the differences between payoffs representing greed or fear are comparatively large. Putting all four payoffs together, the niceness of the game is expressed by the index K = (R – P)/(T – S). K predicts the rate of cooperation in a game very well.
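The index is trivial to compute; with the payoffs above, it cleanly separates the nice game from the nasty one:

```python
def K(T, R, P, S):
    """Niceness index: the cooperation premium (R - P) relative to the payoff range (T - S)."""
    return (R - P) / (T - S)

print(K(T=12, R=11, P=1, S=0))  # nice game: 10/12, about 0.83
print(K(T=12, R=7, P=5, S=0))   # nasty game: 2/12, about 0.17
```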
In our initial look at the role of inequality aversion, we equated the pull of compassion with the pull of envy. Although increasing the weight of these social preferences can break the dominance of defection, it does not affect K if a = b. Inequality aversion per se therefore does nothing to explain why there is more cooperation in nice games than in nasty ones. One must assume that compassion weighs more heavily than envy. The K index for the transformed (i.e., subjective and effective) payoff matrix, (R - P)/([1 + a - b][T - S]), makes it clear that games become subjectively nicer as b grows relative to a.
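A quick check of this claim, using the nasty game’s payoffs as an illustration: equal weights leave K untouched, while a compassion surplus (b > a) raises it:

```python
def K_subjective(T, R, P, S, a, b):
    # Transformed payoffs: T' = T - b(T - S), S' = S - a(T - S); R and P are unchanged.
    T_subj = T - b * (T - S)
    S_subj = S - a * (T - S)
    return (R - P) / (T_subj - S_subj)  # equals (R - P) / ((1 + a - b)(T - S))

T, R, P, S = 12, 7, 5, 0
K_objective = (R - P) / (T - S)  # about 0.17

assert abs(K_subjective(T, R, P, S, a=0.3, b=0.3) - K_objective) < 1e-9  # a = b: K unchanged
assert K_subjective(T, R, P, S, a=0.2, b=0.5) > K_objective              # b > a: subjectively nicer
```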
Three problems remain. First, one must hope that compassion is indeed stronger than envy. Social psychological research, however, supports the opposite idea, namely that people find it easier to rationalize their own relative advantage over another than to tolerate their own disadvantage of the same size. Second, the distinction between compassion and envy refutes the notion of a unitary construct of inequality aversion. In its pure form, the construct of inequality aversion is independent of self-regard (selfishness) and other-regard (benevolence). The notion of compassion conflates inequality aversion with other-regard, and the notion of envy conflates it with self-regard. Third, even with the separation of compassion and envy and the added hope that the former is stronger than the latter, we only see that K increases as b grows relative to a. We still do not see why or how a larger K should lead to more cooperation.
My colleagues and I recently offered an answer to the third problem, thereby making the first two moot. We propose that people, and especially players in one-shot anonymous games, expect others to behave and choose as they themselves do. Their expected probability of seeing their own strategic choices reciprocated or matched tends to be greater than .5. There are good psychological and statistical reasons why this should be so, but they lie beyond the scope of this essay; see Krueger, DiDonato, & Freestone (2012) for more. The key implication for the present purpose is that as K goes up, that is, as the game becomes nicer, a lower probability of expected reciprocity suffices for the expected value of cooperation to exceed the expected value of defection.
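One way to see this implication: the reciprocity threshold (T - S)/(T + R - P - S) is algebraically equal to 1/(1 + K), so a nicer game demands a less optimistic expectation. A sketch with the two games from above:

```python
def reciprocity_threshold(T, R, P, S):
    """Matching probability above which EV(cooperate) exceeds EV(defect).
    Algebraically equal to 1 / (1 + K), with K = (R - P) / (T - S)."""
    return (T - S) / (T + R - P - S)

nice = reciprocity_threshold(T=12, R=11, P=1, S=0)   # 12/22, about 0.55
nasty = reciprocity_threshold(T=12, R=7, P=5, S=0)   # 12/14, about 0.86

# The nicer the game (larger K), the lower the expectation needed for cooperation.
assert nice < nasty
K_nice = (11 - 1) / (12 - 0)
assert abs(nice - 1 / (1 + K_nice)) < 1e-9  # threshold = 1 / (1 + K)
```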
The construction of a social preference model to explain social behavior is a costly undertaking. It requires the purchase of additional psychological constructs and mathematical parameters to represent these constructs. When such attempts fail to accomplish their stated mission, the price seems hardly worth it. Why not go with the expected reciprocity model, which we developed from the simple notion of social projection? When collectively desirable behavior, such as cooperation, can be accounted for by a single parameter for egocentric expectation, it seems that a good bargain is at hand.
What about the challenge of increasing social cooperation? Increasing compassion (or guilt) may help pave the way (by increasing K), but it is not enough. If people projected their own strategic choices more strongly onto others, cooperation would become more attractive. The challenge is therefore to increase projection. Projection is strong inasmuch as people lack concrete information about others. Ignorance, in this case, is bliss, but it is a bliss that is difficult to achieve. When information about others is available, the most helpful kind is that which makes important similarities (e.g., of group membership) salient. Projection regarding behavior can then be grafted onto this basic perception of similarity.
I will not tire of pointing out that the social projection model is not, and cannot be, designed to account for all of the data. It is meant to provide a minimally sufficient account of cooperation in social dilemmas. Other factors may come into play. Social projection has nothing to say, for example, about the possibility that social giving as such, even if done anonymously, may feel rewarding, and that the usual suspects in the brain light up to back it up. The reward intrinsic to giving can, if it is strong enough, even turn cooperation into the dominant strategy. If it comes with a dopamine rush, social giving takes us to the edge of a philosophical conundrum: What is it exactly that makes a preference social if it is the good old individual that feels the pleasure and endures the pain?
In this essay, I have argued that a person may experience the difference between T and S as guilt rather than compassion, the term Fehr & Camerer prefer. Both interpretations are possible; even a single individual might oscillate between the two. For the sake of model clarity, however, I prefer the term guilt because it is directly comparable to envy. Both terms presuppose the interpersonal comparison that is being made. The experience of compassion does not require this comparison and hence does not force the conclusion that the self has too much.
Fehr, E., & Camerer, C. F. (2007). Social neuroeconomics: the neural circuitry of social preferences. Trends in Cognitive Sciences, 11, 419-427.
Krueger, J. I., DiDonato, T. E., & Freestone, D. (2012). Social projection can solve social dilemmas. Psychological Inquiry, 23, 1-27.