One Among Many

The self in social context

Don't Diss the Demon

and do good

The only thing that will redeem mankind is cooperation.

~ Bertrand Russell

Two of the most famous problems in strategic decision-making are the Prisoner’s Dilemma and Newcomb’s Problem. Both demand a choice between a strategy of cooperation and a strategy of defection. In both games, many individuals cooperate. Yet, game theory asserts that defection is rational and that cooperation is not. An alternative theory says cooperation can be rational and that it benefits the cooperating individual.

The alternative theory recognizes an undeniable fact: most people respond as most others do most of the time. If it weren’t so, every conceivable human choice between two options would be made with a probability of .5, which is an unreasonable assumption. You can therefore expect others to do as you do. As a thought experiment, suppose a person plays a Prisoner’s Dilemma with an identical twin, or better yet, with his mirror image. Whatever the person does, the mirror image does too, so that the Prisoner’s Dilemma devolves into a choice between mutual cooperation and mutual defection. A sane person will cooperate because the payoff for mutual cooperation is better than the payoff for mutual defection. Binmore (2007) complains that this imaginary situation turns the Prisoner’s Dilemma into a Prisoner’s Delight, which is a different game, one in which cooperation dominates. To show that sane people cooperate in a Prisoner’s Delight does nothing, Binmore argues, to prove that cooperation can be rational in the Prisoner’s Dilemma, because it is, after all, a different game.

The probability that the second person will do what the first person does is hardly ever 1. What if it is .99, or .98, or .97, and so on? As the probability falls, a player’s response can no longer be automatic cooperation; it must become a matter of calculation. What is the expected value of cooperation, and is it greater than the expected value of defection? If so, the rational player will cooperate. It is easy to identify the probability p of receiving a matching response from the other player at which the two expected values are the same: p = 1 / (1 + (mutual cooperation – mutual defection) / (unilateral defection – unilateral cooperation)). Because the payoff for unilateral defection exceeds the payoff for mutual cooperation, and the payoff for mutual defection exceeds the payoff for unilateral cooperation, this threshold always lies above .5. Whatever the exact payoffs are, once p falls to .5, defection is the rational choice.
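A minimal numerical sketch can make the calculation concrete. The Python below uses hypothetical payoffs (3 for mutual cooperation, 1 for mutual defection, 5 for unilateral defection, 0 for unilateral cooperation); these numbers are illustrative assumptions, not values from the article.

```python
# Hypothetical Prisoner's Dilemma payoffs (illustrative only):
# R = mutual cooperation, P = mutual defection,
# T = unilateral defection, S = unilateral cooperation.
R, P, T, S = 3.0, 1.0, 5.0, 0.0

def expected_values(p):
    """Expected payoffs of cooperating and defecting when the other
    player matches my choice with probability p."""
    ev_cooperate = p * R + (1 - p) * S
    ev_defect = p * P + (1 - p) * T
    return ev_cooperate, ev_defect

# Threshold at which the two expected values are equal:
# p* = 1 / (1 + (R - P) / (T - S))
p_star = 1.0 / (1.0 + (R - P) / (T - S))
print(f"threshold p* = {p_star:.2f}")  # about .71 with these payoffs

for p in (0.5, p_star, 0.9):
    ev_c, ev_d = expected_values(p)
    print(f"p = {p:.2f}: EV(cooperate) = {ev_c:.2f}, EV(defect) = {ev_d:.2f}")
```

With these payoffs, cooperation pays only when the chance of a matching response exceeds roughly .71; as noted above, no Prisoner’s Dilemma payoffs push the threshold to .5 or below.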

However, a p of .5 is unrealistically low. Because majority responses exist virtually everywhere, p > .5 virtually everywhere. The underlying cause is the social and biological make-up that humans share. Two randomly selected individuals share more similarities than dissimilarities, and this extends to their behaviors in strategic games. Therefore, each Prisoner’s Dilemma is, at least in part, a Prisoner’s Delight.

To Binmore (2007) and other game theorists, this alternative theory is witchcraft, or worse, witchcraft that doesn’t work. He declares that “wishful thinking is never rational” (p. 158), and that players “choose their strategies independently” (p. 159, emphasis in the original). With regard to voting – which is an act of social cooperation – he asserts that “there may be large numbers of people who think and feel like you, but their decisions on whether to go out and vote won’t change if you stay home and watch television” (p. 161).

The problem with Binmore’s point of view is that it draws a distinction between traits (thoughts and feelings) and behaviors. It grants similarity, and thus allows social projection, in the former domain while denying it in the latter. But if most of our traits are shared with others, so are most of our behaviors (which come from those traits). The fallacy of Binmore’s distinction is to assume that behaviors are special because they purportedly come from decisions that are made freely, i.e., independently of others. Free will must be assumed to support this theory of rationality. If, however, everything in nature is either interdependent (and thus correlated) or truly random (in a quantum kind of sense), there is no room for truly free and independent choice, which is the condition necessary to suck the delight out of the dilemma.

Binmore recognizes the existence of correlated responses among certain types of birds. “If the nestlings were identical twins, both players could therefore count on their opponent choosing exactly the same strategy as themselves” (pp. 130-131). If nestlings, then why not humans? The statistical logic is the same. But hey, only humans have free will, right?

Newcomb’s Problem resembles a one-sided Prisoner’s Dilemma (it is not quite that, but it is close enough). The player has a choice between opening only one of two boxes and opening both boxes. A clever demon (Omniscient Jones, god, der Weltgeist) predicted what the player would do, and the player knows it. If the demon predicted that the player would open only one box, he put 1M dirham (the demon is Moroccan) in that box. If he predicted that the player would open both boxes, he left it empty. The other box always contains 1,000 dirham. Legend has it that the demon is awfully good at these predictions. For the player, one-boxing amounts to an act of cooperation, trusting that the demon expected this choice. Two-boxing amounts to an act of defection, made on the assumption that whatever the demon predicted, the player will be better off by 1,000 dirham. Of course, Binmore and the Orthodox Church of Game Theory declare two-boxing to be rational and one-boxing to be witchcraft. The player, they declaim, has the privilege of independent choice.
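A rough expected-value comparison (a sketch, not a calculation from the article) shows why the demon’s accuracy matters. Suppose his prediction matches the player’s actual choice with probability q:

```python
# Newcomb's Problem payoffs in dirham.
BIG = 1_000_000   # placed in the opaque box if one-boxing was predicted
SMALL = 1_000     # always sitting in the transparent box

def expected_payoffs(q):
    """Expected payoffs when the demon's prediction matches the player's
    actual choice with probability q (an assumed accuracy, not a causal claim)."""
    ev_one_box = q * BIG            # with prob. 1 - q the opaque box is empty
    ev_two_box = q * SMALL + (1 - q) * (BIG + SMALL)
    return ev_one_box, ev_two_box

for q in (0.5, 0.9, 0.99):
    one, two = expected_payoffs(q)
    print(f"q = {q:.2f}: one-boxing = {one:>9,.0f}, two-boxing = {two:>9,.0f}")
```

The two expectations cross just above q = .5; for an “awfully good” predictor, one-boxing is by far the better bet. This, of course, is exactly the calculation the orthodox view refuses to take at face value.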

Everyone agrees that the player cannot make the demon put in 1M dirham just by choosing to open only one box. Yet, the magic still works for the player because both his “choice” and the demon’s prediction are descended from the same latent cause. Nature – i.e., the state of the universe at time 1 – is the common cause of what the player does and what the demon predicts. Action and prediction are correlated spuriously in the eyes of statistical purists, but no matter, they are correlated. Whatever the player ends up doing is statistically related to the demon’s prediction. The fact that the player feels he is making an independent choice – much as Binmore feels players in Newcomb’s Game make independent choices – is perfectly beside the point.

Game theorists are blind to the common-cause structure of the game. They think that players (and witchcraft scientists) assume that the correlation between the player’s choice and the demon’s prediction reflects some mystical path of influence from the player to the demon (back in time, even) and from the demon to his own prediction. For this to work, the player’s choice would have to be the first cause, setting everything in motion. But it is not (as I have now said ad nauseam). The apparently counterintuitive common-cause model can be hammered home with the following cognitive adjustment: Instead of assuming that the demon makes a prediction based on what he has figured out about the player’s impending choice, assume that the demon causes both the player’s behavior and his own prediction thereof. This way, we pay our respects to the determinism of nature and go on record as disavowing the untenable doctrine of free will . . . and Newcomb no longer has a problem.
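A toy simulation (made-up numbers, offered only as a sketch of the common-cause structure) drives the point home: a latent state of nature disposes both the player’s choice and the demon’s prediction, and the two agree far more often than chance even though neither causes the other.

```python
import random

def simulate(n=100_000, fidelity=0.95, seed=1):
    """Common-cause sketch: a latent disposition produces both the player's
    choice and the demon's prediction, each with probability `fidelity`.
    There is no causal path from choice to prediction or back."""
    rng = random.Random(seed)
    agree = 0
    for _ in range(n):
        disposition = rng.random() < 0.5                       # state of nature
        choice = disposition if rng.random() < fidelity else not disposition
        prediction = disposition if rng.random() < fidelity else not disposition
        agree += (choice == prediction)
    return agree / n

print(f"choice and prediction agree in {simulate():.1%} of runs")  # about 90%
```

With a fidelity of .95, choice and prediction agree in roughly 90 percent of the runs, although the only link between them runs through the shared cause.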

Binmore, K. (2007). Game theory: A very short introduction. New York, NY: Oxford University Press.

 

Joachim Krueger, Ph.D., is a social psychologist at Brown University who believes that rational thinking and socially responsible behavior are attainable goals.
