According to Nowak (2012) and his endlessly-helpful mathematical models, once one assumes that cooperation can be sustained via one's reputation, one ends up with the conclusion that cooperation can, indeed, be sustained (solely) by reputation, even if the same two individuals in a population never interact with each other more than once. As the popular Joan Jett song "Bad Reputation" suggests, however, there's likely something profoundly incomplete about this picture: why would Joan give her reputation the finger in this now-famous rock anthem, and why would millions of fans eagerly sing along, if reputation were that powerful a force? Answering that question involves digging deeper into the assumptions behind Nowak's model and finding where they go wrong. In this case, not only are some of the assumptions Nowak does make a poor fit to reality but, perhaps more importantly, so are some of the assumptions he doesn't make.
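To make the claim concrete, here is a minimal toy sketch of indirect reciprocity via "image scoring" (an illustrative simplification, not Nowak's actual model; all numbers, and the assumption that refusing a bad-standing partner leaves one's own standing intact, are hypothetical choices of mine):

```python
import random

def simulate(n_disc=15, n_defect=5, rounds=400, b=3, c=1, seed=1):
    """Toy one-shot donor games: discriminators help only partners
    currently in good standing; defectors never help anyone."""
    n = n_disc + n_defect
    is_disc = [True] * n_disc + [False] * n_defect
    good = [True] * n          # everyone starts with a good reputation
    payoff = [0.0] * n
    rng = random.Random(seed)
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n), 2)  # pairs never repeat predictably
        if is_disc[donor] and good[recipient]:
            payoff[donor] -= c       # helping is costly to the donor...
            payoff[recipient] += b   # ...but worth more to the recipient
            good[donor] = True
        elif is_disc[donor]:
            pass                     # justified refusal keeps good standing (assumed)
        else:
            good[donor] = False      # unprovoked defection earns a bad name
    disc = sum(p for p, d in zip(payoff, is_disc) if d) / n_disc
    defect = sum(p for p, d in zip(payoff, is_disc) if not d) / n_defect
    return disc, defect

disc, defect = simulate()
print(disc, defect)  # discriminators come out ahead once defectors are known
```

Under these (generous) assumptions, reputation alone does sustain cooperation: defectors get a brief free ride while their reputations are still clean, then get frozen out.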
The initial point is that Joan needed to advertise her reputation. Reputations do not follow their owners around like a badge; they're not the type of thing that can be accurately assessed on sight. Accordingly, if one does not have access to information about someone's reputation, then that reputation, good or bad, is entirely useless for deciding how to treat them. This problem is clearly not unsolvable, though. According to Sigmund (2012), the simple way around it involves direct observation: if I observe a person being mean to you, I can avoid that person without having to suffer the costs of their meanness firsthand. Simple enough, sure, but there are many problems with this suggestion too, some more obvious than others. The first is that a substantial amount - if not the vast majority - of (informative and relevant) human interactions are not visible to many people beyond those already directly involved. Affairs can be hidden, thieves can go undetected, and promises can be made in private, among other things (like, say, browsing histories being deleted...). That concern alone would not stop reputations derived from indirect information from being useful, but it would weaken their influence substantially if few people ever have access to them.
The problems don't end there, though; not by a long shot. On top of the information often not being available in the first place, there's also the untouched matter of whether the information is even accurate. Potential inaccuracies can come in three forms: passive misunderstandings, active misinformation, and diagnosticity. Taking these in order, consider a case where you see, from across the room, your friend get punched in the nose by a stranger. From this information, you might decide that it's best to steer clear of that stranger. This seems like a smart move, except for what you didn't see: a moment prior, your friend, being a bit drunk, had told the stranger's wife to leave her husband at the bar and come home with him instead. So what does this example show us? That even if you've directly observed an interaction, you probably didn't observe one or more previous interactions leading up to it, and those might well have mattered. To put this in the language of game theorists: did you just witness a cooperator punishing a defector, a defector harming a cooperator, or some other combination? From your lone observation, there's no sure way to tell.
But what if your friend told you that the other person had attacked them without provocation? Most reputational information would seem to spread this way, given that most human interaction is not observed by most other people. We could call this the "taking someone else's word for it" model of reputation. The problems here should be clear to anyone who has ever had friends: it's possible your friend misinterpreted the situation, or that your friend had some ulterior motive for actively manipulating your perception of that person's reputation. To again put this in game theorists' language: if cooperators can be manipulated into punishing other cooperators, whether through misperception or misinformation, that throws another sizable wrench into the gears of the reputation model. If reputations can be easily manipulated, cooperation becomes relatively more costly (as cooperators fail to reap some of cooperation's benefits) and defection relatively cheaper (as defectors escape some of its costs). Talk is cheap, and indirect reciprocity models seem to require a lot of it.
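The wrench can also be made concrete. Here is a variant of a toy image-scoring sketch (again illustrative and hypothetical, not a model from the literature) in which each act's reputational record is flipped with some probability - standing in for both honest misperception and deliberate misinformation:

```python
import random

def simulate(eps, n_disc=15, n_defect=5, rounds=3000, b=3, c=1, seed=7):
    """Toy donor game where the record of each act is flipped with
    probability eps: helpers get smeared, defectors get credited."""
    n = n_disc + n_defect
    is_disc = [True] * n_disc + [False] * n_defect
    good = [True] * n
    payoff = [0.0] * n
    rng = random.Random(seed)
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n), 2)
        helped = is_disc[donor] and good[recipient]
        if helped:
            payoff[donor] -= c
            payoff[recipient] += b
        # with probability eps, observers record the wrong thing
        seen_as_helpful = helped if rng.random() >= eps else not helped
        good[donor] = seen_as_helpful
    disc = sum(p for p, d in zip(payoff, is_disc) if d) / n_disc
    defect = sum(p for p, d in zip(payoff, is_disc) if not d) / n_defect
    return disc - defect  # the cooperators' edge over the defectors

print(simulate(0.0), simulate(0.5))  # the edge shrinks as records get noisier
```

When the record is pure noise (eps near 0.5), reputations no longer track behavior at all, and the cooperators' advantage evaporates - the defectors get helped about as often as anyone else while paying nothing.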
This brings us to the final accuracy point: diagnosticity. Let's say that, hypothetically, the stranger did attack your friend without provocation, and this was observed accurately. What have you learned from this encounter? Perhaps you might infer that the stranger is likely to be an all-around nasty person, but there's no way to tell precisely how predictive that incident is of the stranger's later behavior, either towards your friend or towards you. Just because the stranger might make a bad social asset for someone else does not mean they'll make a bad social asset for you, in much the same way that my not giving a homeless person change doesn't mean my friends can't count on my assistance when in need. Further, having a "bad" reputation among one group can even result in my having a good relationship with a different group; the enemy of my enemy is my friend, as the saying goes. In fact, that last point is probably what Joan Jett was advertising in her iconic song: not that she has a bad reputation with everyone, just that she has a bad reputation among those other people. The video for her song would lead us to believe those other people are also, more or less, without morals, only taking a liking to Joan when she has something to offer them.
References: Nowak, M. (2012). Evolving cooperation. Journal of Theoretical Biology, 299, 1-8.
Sigmund, K. (2012). Moral assessment in indirect reciprocity. Journal of Theoretical Biology, 299, 25-30.
Copyright Jesse Marczyk