One Among Many

The self in social context

Prosper With Projection

Our basic similarities (should) lead us into cooperation


What people choose to do depends, among other things, on their beliefs about one another.

~ Partha Dasgupta (2007, 57)

From projection to expected values

My goal in this essay is to show that social projection can contribute to cooperation in social dilemmas, or so-called non-cooperative games. The proposal is that social projection is not only an effect of choice but also a cause. That people project after they have made a choice has been well known since the seminal studies by Hal Kelley, Robyn Dawes, and David Messick during the 1970s. Cooperators believe that most others cooperate, and defectors believe that most others defect. There are many psychological reasons for projecting one’s behavior onto others, but from a minimal modeling point of view, we notice that projection after choice is consistent with the Bayesian idea that sampling information, or evidence, of any size affects belief. This effect is strong inasmuch as prior beliefs are weak, which means that it is at its maximum when people are ignorant about what others are likely to do. Then, even a single event, like one’s own choice, has a dramatic effect on expectation. This is a good way to think about social dilemmas or experimental games that are played only once with strangers. Here, one’s own behavior is all one has. There is no shadow of the future, no reputational concerns, no opportunity to reward or punish, no relevant memories, and so forth.
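To see the arithmetic, here is a minimal sketch of that Bayesian logic (my own illustration, not taken from the cited studies), assuming an ignorant Beta(1, 1) prior over the other person’s cooperation rate and treating one’s own choice as a single observation:

```python
from fractions import Fraction

def posterior_mean(coop, defect, prior_a=1, prior_b=1):
    """Posterior mean cooperation rate under a Beta(prior_a, prior_b)
    prior after observing `coop` cooperations and `defect` defections."""
    return Fraction(prior_a + coop, prior_a + prior_b + coop + defect)

print(posterior_mean(0, 0))  # 1/2: total ignorance
print(posterior_mean(1, 0))  # 2/3: after my own cooperation
print(posterior_mean(0, 1))  # 1/3: after my own defection
```

This is just Laplace’s rule of succession: with a weak prior, a single observation moves the expectation from 1/2 to 2/3.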


Why do so many people cooperate in these idealized conditions? Game theory says a rational person will defect. But to say that cooperators are irrational is not enlightening. It is a mere exercise in labeling – and blaming. Nor is it enlightening to say that cooperation stems from random error, or the so-called “trembling hand.” When the probability of cooperation is .5, this is one hell of a tremor. We need a positive theory of cooperation in non-cooperative games. What about social projection before choice?

The idea is simple: If, as Dawes showed in 1989, with Laplace nodding from the grave, social projection after choice satisfies Bayesian rationality, then it will also satisfy Bayesian rationality before choice, until someone proves that the latter is a fallacy. Some have tried, but I remain unconvinced.

Let’s illustrate projection before choice with a thought experiment. You find yourself in a prisoner’s dilemma with someone who is as much like you as we can make it. The other player is your clone, your twin, and your best friend, all rolled into one. In short, the other person is someone whose behavior, if sampled over a space of events, is highly correlated with yours. You know that this person ends up choosing what you choose most of the time because he or she is embedded in the same causal networks. The factors that drive his or her behavior are the same that drive yours.

Also, let’s take the payoff matrix seriously by assuming that the payoffs that typify the prisoner’s dilemma capture all the preferences the person has, be they self-regarding or social. There is no further transformation of the payoffs at the psychological level.

The recognition that the other player is more likely to end up choosing what you choose than choosing differently is represented by the probability p. As the other person becomes more similar to you, p approaches 1. It is 1 if the other person is you (or your mirror image). You see what happens at the limit. If you cooperate, the other person cooperates; if you defect, the other person defects. The prisoner’s dilemma devolves into a choice between mutual cooperation and mutual defection. A sane person chooses cooperation because it pays more than defection. No reference to social preferences is necessary.

If you relax the assumption of identity, that is, as you let p slide toward .5, all four payoffs come back into play. But from the perspective of projection-driven choice, we are not really dealing with a dilemma at all; we are dealing with a simpler decision task. Given the expectations players have generated, they can calculate the expected values of cooperation and defection and select whichever is greater.
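To make this concrete, here is a small sketch (my own numbers, using the conventional payoff labels T > R > P > S) of how a projecting player would compare the two expected values:

```python
def expected_values(p, T=4, R=3, P=2, S=1):
    """Expected payoffs for a player who believes the co-player will
    match his or her choice with probability p. Payoffs: T = unilateral
    defection, R = mutual cooperation, P = mutual defection,
    S = unilateral cooperation."""
    ev_coop = p * R + (1 - p) * S    # other matches me (R) or defects (S)
    ev_defect = p * P + (1 - p) * T  # other matches me (P) or cooperates (T)
    return ev_coop, ev_defect

for p in (0.5, 0.75, 1.0):
    print(p, expected_values(p))
# 0.5  -> (2.0, 3.0): defection pays
# 0.75 -> (2.5, 2.5): the tipping point
# 1.0  -> (3.0, 2.0): cooperation pays; the choice is between R and P
```

With these evenly spaced payoffs, cooperation has the higher expected value whenever p exceeds (T - S) / ((T - S) + (R - P)) = .75.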

Descriptive issues

There is little disagreement about the ability of this decision-making model to account for a good chunk of the empirical evidence. [a] Individual differences in projection predict the willingness to cooperate. [b] People cooperate more with those who are similar to them, for example, by shared category membership. [c] People cooperate more in “nice” than in “nasty” dilemmas. In nice games the extra money that can be made by defection is comparatively small, which means that a low level of projection is sufficient to render the expected value of cooperation larger than the expected value of defection (see Krueger, DiDonato, & Freestone, 2012, for a full review).
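As an illustration of point [c] (my own payoff numbers), the critical level of projection falls as the defection bonus shrinks:

```python
def critical_projection(T, R, P, S):
    """Projection level above which cooperation has the higher expected
    value: p* = (T - S) / ((T - S) + (R - P))."""
    return (T - S) / ((T - S) + (R - P))

# A "nice" dilemma: defection adds little (T - R = 1).
print(critical_projection(T=10, R=9, P=1, S=0))  # ~0.56
# A "nasty" dilemma: defection is very tempting (T - R = 8).
print(critical_projection(T=10, R=2, P=1, S=0))  # ~0.91
```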

Let me emphasize that competing attempts to explain cooperation in social dilemmas have had only limited success. Often, they work only for some types of game, but not others; imply the influence of social projection without acknowledging it; or are downright circular. An example of each is given in Note [1].

Normative issues

The objections that are brought against the projection-before-choice hypothesis attack its normative status. Most of these arguments boil down to two claims. The first claim is that if I cooperate because I think that my cooperation is related to your cooperation, I seem to be acting as if I had causal power over your behavior. But clearly, I don’t have such power, and only fools think they do. We both are acting independently in the moment. There is no causal path from my behavior to yours. And if there were such a path, wouldn’t there also have to be a causal path from your behavior to mine? And if so, how could either one of us act as a cause for the other? In short, the idea that one player’s choice directly affects the other player’s choice in a symmetrical and simultaneous game is incoherent.

Advocates of the social projection hypothesis never claimed that there are direct causal paths between players. Remember the Bayesian basis of the hypothesis. One’s own choice is a piece of evidence that can – indeed, should – be used to update one’s beliefs, and everyone agrees that probabilistic beliefs must inform choice. Who seriously argues that expected values must be computed without considering the probabilities of the payoffs?

The concern is that if I contemplate cooperation and find that its expected value is higher than the expected value of defection when I contemplate defection, I am generating two different probability estimates. Because there is only one actual probability of cooperation, I already know that one of my estimates must be false. The response to this argument is the same as the one that Dawes gave when justifying social projection after choice. He noted that cooperators and defectors have different information, namely their own behavior, which is different, and therefore their estimates must differ. The logic of before-choice projection is exactly the same, except that we have one player in two hypothetical states as opposed to two players in two different factual states. The Bayesian model makes no distinction between the two-player and the one-player situation. They are statistically equivalent.
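In the Beta-Bernoulli sketch from above, the two estimates are simply two conditional expectations, and nothing in probability theory forces them to coincide (again my own illustration):

```python
from fractions import Fraction

# Uniform Beta(1, 1) prior over the co-player's cooperation rate.
# Conditioning on one's own choice, actual or hypothetical, as a
# single observation yields two different, equally coherent estimates.
p_coop_if_i_cooperate = Fraction(1 + 1, 2 + 1)  # 2/3
p_coop_if_i_defect = Fraction(1 + 0, 2 + 1)     # 1/3
print(p_coop_if_i_cooperate, p_coop_if_i_defect)
```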

Since we have already rejected the idea that players can directly make each other cooperate, we know that it is only the statistical relationship between one’s own choice and the other’s choice that matters. Remember that two players are similar in more respects than they are dissimilar. They end up doing the same thing more often than not. In a world of binary choices it must be so unless all choices are random. People act similarly – that is, there usually exists a majority choice – because choices depend on the causal underbelly of all behavior, and much of this underbelly is shared [for a counterargument and a rebuttal, see Note 2]. Much of human behavior can be described by common-cause models. If I cooperate (or defect), I do so for mostly the same reasons that make you cooperate (or defect). The correlation between our choices is spurious in the statistical sense because there is no direct causal path (nor can there be one) in a simultaneous anonymous game. My cooperation does not cause you to cooperate, but it signals to me that you probably cooperate as well because your choice stems from many of the same causes and reasons.
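A toy simulation (my construction) of the common-cause idea: a shared background factor inclines both players the same way, so their independent choices correlate even though no causal path connects them:

```python
import random

def match_rate(n_pairs=100_000, seed=1):
    """Each pair shares a common cause; each player then chooses
    privately, with no path from one choice to the other."""
    rng = random.Random(seed)
    matches = 0
    for _ in range(n_pairs):
        pull = rng.random()          # shared antecedent condition
        a = rng.random() < pull      # does player A cooperate?
        b = rng.random() < pull      # does player B cooperate?
        matches += (a == b)
    return matches / n_pairs

print(match_rate())  # ~0.67, well above the 0.5 expected under independence
```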

The common-cause argument has provoked many reactions, some of which can only be described as “allergic.” Why? When critics respond with passion, it is instructive to ask what other ideas they would have to surrender if they were to accept the proposal at hand. Some of these ideas may be cherished and worthy of spirited defense. In the case of the opposition to the projection-before-choice hypothesis, the idea at stake is that of free choice, or put more bluntly, free will. I have already said that two players decide independently. By that I mean that each is left to deliberate privately and there is no opportunity to influence the other player. This is a practical definition of independence. A metaphysical definition of independence assumes, however, that each player is absolutely free to choose between cooperation and defection. This means that my decision is conditionally independent of the other’s decision. Whatever the other has chosen, cooperation or defection, my probability of ending up choosing cooperation is the same. This, however, would amount to the denial of any statistical association between my choice and his, which is contrary to logic and the brute empirical facts. The most likely outcome is that we both will have chosen what most people choose.

The common-cause model welcomes this fundamental interdependency of our choices. Unless we reject determinism, we must accept that choices, like any other behavior, depend in large part on antecedent conditions, and these antecedent conditions cannot be unique to each individual person. That is mathematically impossible. Once we let go of the fantasy of free will, we can treat our choices in social dilemmas as events that are revealed to us, rather than events that we have created ex nihilo. Even if god can create light out of nothing, we cannot create cooperation out of nothing. The lay perception that people have metaphysically free choice will probably persist, but it is a poor guide to scientific progress.

Colman’s last complaint

Andrew Colman and his collaborators have done a great deal of work to push psychological game theory forward. But they hold fast to the idea that the social projection hypothesis must be rejected on normative grounds. They have an article in press in Decision (Colman, Pulford, & Lawrence, 2014a), in which they seek to refute the projection hypothesis, among other things. I wrote a rejoinder to their paper (Krueger, 2014), to which they, in turn, added a rebuttal (Colman, Pulford, & Lawrence, 2014b). In this rebuttal, they present their final argument against projection. They write:

“According to the Bayesian logic behind social projection, a strategy that is actually played provides evidence of how a co-player is likely to choose, but until it is chosen and played it provides no basis for prediction, and once it has been played, it is too late to use the fact that it was chosen as a basis for choosing it. The following example clarifies this temporal structure objection. Suppose that you habitually donate 1% of your income to charity. If you have literally no other evidence about what others donate, you may reasonably (according to Bayesian inference) estimate that others probably also donate about 1%. If, on the other hand, you merely contemplate donating 10% but decide not to do so, perhaps because it seems far too much, then you are not justified in inferring that, if you had indeed donated 10%—although you did not—then others would probably also have donated 10%, thereby eliminating world poverty at a stroke. If you truly believed that, then you would have an overwhelming moral argument for donating 10%. [A] consistent interpretation of social projection theory suggests that you should expect others not to choose what you yourself have not chosen.”

The projection-before-choice hypothesis is crispest when applied to anonymous one-shot games. Any prospective refutation would be most compelling if applied to that context. By stepping into the complexity of the outside world, Colman et al.’s example introduces a host of additional assumptions that undermine its effectiveness. Their illustrative scenario assumes the existence of social or moral preferences, which play no role in projection. Further, the scenario hints at historical knowledge of one’s own and others’ behavior, which also plays no role in projection. Most importantly, Colman et al. implicitly make a direct causal argument (I could have eliminated “world poverty at a stroke.”). This is the type of magical causal reasoning that we have already rejected. In other words, Colman et al. knock down a straw man, or, as the British would say, an Aunt Sally. What they need to disarm is the common-cause argument, not the personal-cause argument. They are, in other words, barking up the wrong tree. The common-cause argument is this: “Had I donated more, I would have done so because the antecedent causes shaping my behavior made it so. This would be a counterfactual reality. The same causes would have affected others. Therefore, yes, had I been caused to act differently, so would have many others, and the world would be a better place. But alas, the causal net was not like that.” By misunderstanding the common-cause argument, Colman et al. succumb to the free will fallacy.

Begging the question

If Colman’s last complaint does not succeed as intended, let me try myself to refute the normative argument of the projection hypothesis. A theme running through various criticisms of the projection hypothesis is that it is ultimately circular and question-begging. I have not seen this argument in a particularly coherent or fully worked-out fashion, so I will not cite anyone. This is my own version of the challenge.

The goal of the projection hypothesis is to explain – with normative force – why cooperation occurs. To do so, it must assume that the probability of cooperation can indeed be greater than 0. If it were not so, I could not assume that the probability of cooperation is greater than 0 when I myself contemplate exercising my option to cooperate. In other words, to explain how cooperation can normatively occur, we must already assume that it descriptively occurs. If we believe that there cannot be any cooperation, then, by modus tollens on “ought implies can,” we cannot argue that there ought to be any cooperation.

How to rebut myself? We know that the idea that there is no, or cannot be, any cooperation is factually false. Ortho- or heterodox game theorists may think so, but that is only because they let their normative ideas dictate their beliefs about what should empirically be the case. It is possible to accuse the projection hypothesis of being a question-begging exercise, but to do so, one must commit this very fallacy. One must assume the truth of one’s own theory, which is that only defection is normative, to prove that a theory that says that cooperation can be normative is false.

Notes

[1] Limited range. Level-1 reasoning, a type of reasoning assumed by cognitive-hierarchy theory (Camerer, Ho, & Chong, 2004), describes individuals who think that others choose strategies randomly and then select their own best response. This reasoning can account for coordination in a Hi-Lo game (where both players gain a lot if they both play Hi, gain little if they both play Lo, and gain nothing if their choices mismatch), but, like classic game theory, it predicts uniform defection in the prisoner’s dilemma (see the sketch following this note).

Begging projection. According to the theory of team reasoning, some individuals reason from the point of view of the group or team. They cooperate because their own cooperation is necessary for mutual cooperation to be achieved. They do not cooperate, however, if they believe that the other is not a team reasoner. Whereas I see this as saying that team reasoning requires the projection of one’s own team-reasoning attitude onto the other, Colman et al. (2014a, 2014b) deny any such implication.

Circularity. Strong Stackelberg reasoning makes the distinctive assumption “that players choose strategies as if they believed that their co-players could anticipate their choices and invariably choose best replies to them, and that they maximize their own payoffs accordingly” (Colman et al., 2014a, p. 10 in ms.). The circularity of this argument is clear. A player’s choice features as both that which causes the other player to choose a certain strategy (if only in as-if fashion) and the consequence of the other player’s choice. Incidentally, strong Stackelberg reasoning also counterfactually predicts uniform defection in the prisoner’s dilemma.
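Returning to the first example (level-1 reasoning), here is the promised sketch (my own, with illustrative payoffs) of why a best response to an assumed random co-player coordinates in Hi-Lo yet still defects in the prisoner’s dilemma:

```python
def level1_best_response(payoffs):
    """Level-1 reasoning: assume the co-player picks each strategy with
    equal probability, then choose the strategy with the higher expected
    payoff. payoffs[mine][other] is my payoff for that pair of choices."""
    def ev(mine):
        return sum(payoffs[mine].values()) / len(payoffs[mine])
    return max(payoffs, key=ev)

# Hi-Lo: matching on Hi pays 2, matching on Lo pays 1, mismatching pays 0.
hi_lo = {"Hi": {"Hi": 2, "Lo": 0}, "Lo": {"Hi": 0, "Lo": 1}}
print(level1_best_response(hi_lo))  # Hi (EV 1.0 beats 0.5)

# Prisoner's dilemma (T=4, R=3, P=2, S=1): defection wins under any belief.
pd = {"C": {"C": 3, "D": 1}, "D": {"C": 4, "D": 2}}
print(level1_best_response(pd))  # D (EV 3.0 beats 2.0)
```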

[2] Does the finding that the overall probability of cooperation is .5 refute the idea that most people align with a majority? The value of .5 comes from a review of a great many studies (Sally, 1995). There is tremendous variation across studies. One critical variable is the difficulty of the game (i.e., its niceness vs. nastiness) as captured by the payoff structure. At the level of the specific game and its context, the argument that individuals can be predicted from knowing the majority (and oneself) holds.

Endbar: Why is cooperation moral? Or is it?

Ethics is in origin the art of recommending to others the sacrifices required for cooperation with oneself.

~ Bertrand Russell

Among students of social dilemmas, it is agreed that cooperation is the moral choice. In the prisoner’s dilemma and related games (e.g., public-goods games, resource management games), mutual cooperation yields the most efficient result, that is, it maximizes total wealth. Making a private contribution to public welfare is moral in the deontological or Kantian sense. This interpretation of morality gets little play in the current academic literature, however. Three other views of morality dominate (see Krueger et al., 2012, for a summary). One of these views stems from the notion of reciprocal altruism. Biology and cultural norms converge on the imperative to repay favors in kind. In anonymous one-shot games, the problem is how to get this imperative off the ground. Individuals can only pre-respond (prespond) when anticipating cooperation, which raises the question of where these expectations come from.

The second view stems from notions of sympathy and benevolence. Individuals will cooperate, according to this view, because they are not entirely selfish. They also have some regard for the welfare of others. When the payoffs are evenly spaced from unilateral defection to mutual cooperation to mutual defection to unilateral cooperation, the weight placed on the outcomes of the other must be greater than ½ of the weight placed on one’s own payoffs before cooperation can become the dominant choice. With different payoffs, this minimum weight of benevolence also varies. When the morality of a person is judged only by whether he or she cooperated, a person with a stable weight on benevolence will appear to be moral in some dilemmas and immoral in others.
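A quick check of that threshold (my own arithmetic), with evenly spaced payoffs T=4, R=3, P=2, S=1 and a utility equal to one’s own payoff plus a benevolence weight w times the other’s payoff:

```python
def utility(own, other, w):
    """Own payoff plus benevolence weight w times the other's payoff."""
    return own + w * other

for w in (0.4, 0.5, 0.6):
    # If the other cooperates: C yields (3, 3); D yields (4, 1).
    gain_if_other_coops = utility(3, 3, w) - utility(4, 1, w)
    # If the other defects: C yields (1, 4); D yields (2, 2).
    gain_if_other_defects = utility(1, 4, w) - utility(2, 2, w)
    print(w, round(gain_if_other_coops, 2), round(gain_if_other_defects, 2))
# Both differences equal 2w - 1, so cooperation dominates exactly when w > 1/2.
```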

The third view is that people recognize that if they switch from cooperation to defection, the cost they impose on the other is greater than their own gain. In the case of evenly spaced payoffs, for example, the other’s cost is twice one’s own gain. They may therefore stick with cooperation, perhaps invoking the Golden Rule. They cooperate with the other because they would not want the other to defect against them. Alas, the other person may defect and thereby impose a cost that is greater than the gain that he or she pockets. Even among moral individuals, concern for one’s own potential losses should be no less than concern for the losses of others.

The epigraph suggests that Bertrand Russell felt that much of what passes as morality is anchored in egocentrism. The projection hypothesis allows us, for once, to go beyond Russell, and note that socially desirable states can be attained in groups comprising perfectly egocentric and self-regarding individuals. No manipulation of others for one’s own ends is necessary.

References

Camerer, C. F., Ho, T.-H., & Chong, J.-K. (2004). A cognitive hierarchy model of games. Quarterly Journal of Economics, 119, 861-898. doi:10.1162/0033553041502225

Colman, A. M., Pulford, B. D., & Lawrence, C. L. (2014a). Explaining strategic coordination: Cognitive hierarchy theory, strong Stackelberg reasoning, and team reasoning. Decision, 1.

Colman, A. M., Pulford, B. D., & Lawrence, C. L. (2014b). Multi-heuristic strategy choice: Response to Krueger. Decision, 1.

Dasgupta, P. (2007). Economics: A very short introduction. Oxford, UK: Oxford University Press.

Dawes, R. M. (1989). Statistical criteria for establishing a truly false consensus effect. Journal of Experimental Social Psychology, 25, 1-17. doi:10.1016/0022-1031(89)90036-X

Krueger, J. I. (2014). Heuristic game theory. Decision, 1.

Krueger, J. I., DiDonato, T. E., & Freestone, D. (2012). Social projection can solve social dilemmas. Psychological Inquiry, 23, 1-27. doi:10.1080/1047840X.2012.641167

Sally, D. (1995). Conversation and cooperation in social dilemmas. Rationality and Society, 7, 58–92. doi:10.1177/1043463195007001004

Joachim Krueger, Ph.D., is a social psychologist at Brown University who believes that rational thinking and socially responsible behavior are attainable goals.
