

Should You Give A Damn About Your Reputation? (Part 1)

Some problems with current thinking on indirect reciprocity

According to Nowak (2012) and his endlessly-helpful mathematical models, once one assumes that cooperation can be sustained via one's reputation, one ends up with the conclusion that cooperation can, indeed, be sustained (solely) by reputation, even if the same two individuals in a population never interact with each other more than once. As evidenced by the popular Joan Jett song "Bad Reputation," however, there's likely something profoundly incomplete about this picture: why would Joan give her reputation the finger in this now-famous rock anthem, and why would millions of fans eagerly sing along, if reputation were that powerful a force? Answering that question will involve digging deeper into the assumptions that went into Nowak's model and finding where they go wrong. In this case, some of the assumptions Nowak does make are a poor fit to reality, but, perhaps more importantly, so is what he leaves out: the assumptions he doesn't make.
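
To make the logic of these models a little more concrete, here is a minimal simulation sketch in the image-scoring spirit of the indirect reciprocity literature: donors decide whether to help strangers they will never meet again based only on the recipient's publicly visible score. The strategy set, payoff values, and score bookkeeping below are my own illustrative assumptions rather than parameters taken from Nowak (2012).

```python
import random

# Minimal sketch of indirect reciprocity via public "image scores".
# All parameter values and the three-strategy setup are illustrative
# assumptions for this post, not figures from Nowak (2012).

BENEFIT, COST = 4.0, 1.0   # recipient's gain / donor's cost when help is given
ROUNDS, POP = 20000, 100   # one-shot donor-recipient encounters; population size

# Each individual's strategy is a threshold k: help a recipient whose image
# score is at least k. k = -6 helps everyone (unconditional cooperator),
# k = 6 helps no one (unconditional defector), k = 0 is a discriminator.
strategies = [random.choice([-6, 0, 6]) for _ in range(POP)]
scores = [0] * POP         # public image score, visible to every donor
payoffs = [0.0] * POP

for _ in range(ROUNDS):
    donor, recipient = random.sample(range(POP), 2)
    if scores[recipient] >= strategies[donor]:
        payoffs[donor] -= COST
        payoffs[recipient] += BENEFIT
        scores[donor] = min(scores[donor] + 1, 5)   # observed helping
    else:
        scores[donor] = max(scores[donor] - 1, -5)  # observed refusing

# With perfectly public and accurate scores, discriminators tend to do at
# least as well as anyone else: defectors quickly acquire low scores and
# stop receiving help from anyone but the unconditional cooperators.
for k, label in [(-6, "always cooperate"), (0, "discriminate"), (6, "always defect")]:
    group = [payoffs[i] for i in range(POP) if strategies[i] == k]
    if group:
        print(f"{label:16s} mean payoff: {sum(group) / len(group):6.2f}")
```

Run enough rounds and the discriminators typically end up ahead of the defectors, which is exactly the kind of result these models lean on; the rest of this post concerns what has to be true in the real world for that bookkeeping to work.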

My reply to some current thinking about reputation can’t be expressed as succinctly.

The first thing worth pointing out here is probably that Joan Jett was wrong, even if she wasn't lying: she most certainly did give a damn about her reputation. In fact, some part of her gave so much of a damn about her reputation that she ended up writing a song about it, even if that wasn't her conscious intent. More precisely, if she didn't care about her reputation on any level, advertising that fact to others would be rather strange; it's not as if that advertisement would provide Joan herself with any additional information. However, if that advertisement had an effect on the way that other people viewed her - updating her reputation among the listeners - her penning of the lyrics is immediately more understandable. She wants other people to think she doesn't care about her (bad) reputation; she's not trying to remind herself. There are a number of key insights that come from this understanding, many of which speak to the assumptions of these models of cooperation.

The initial point is that Joan needed to advertise her reputation. Reputations do not follow their owners around like a badge; they're not the type of thing that can be accurately assessed on sight. Accordingly, if one does not have access to information about someone's reputation, then their reputation, good or bad, is of no use in deciding how to treat them. This problem is clearly not unsolvable, though. According to Sigmund (2012), the simple way around it involves direct observation: if I observe a person being mean to you, I can avoid that person without having to suffer the costs of their meanness firsthand. Simple enough, sure, but there are many problems with this suggestion too, some of which are more obvious than others. The first of these problems is that a substantial amount - if not the vast majority - of (informative and relevant) human interactions are not visible to many people beyond the parties directly involved. Affairs can be hidden, thieves can go undetected, and promises can be made in private, among other things (like, say, browsing histories being deleted...). Now, that concern alone would not stop reputations derived from indirect information from being useful, but it would substantially weaken their influence if few people ever have access to the relevant information.

See how you don’t care about anyone pictured here? The feeling’s mutual.

There's a second, related concern that weakens that influence further, though: provided an interaction is observed by other parties, those most likely to be doing the observing in the first place are people who have probably already interacted directly with one or more of the parties involved; a natural result of people not spending their time around each other at random. People only have a limited amount of time to spend around others, and, since they can't be in two places at once, they naturally end up spending a good deal of that time with friends (for a variety of good reasons that we need not get into now). So, if the people who can make the most use of reputational information (strangers) are the least likely to be observing anything that will tell them much about it, this would make indirect reciprocity a rather weak force. Indeed, as I've covered previously, research has found that people can make use of indirectly-acquired reputation information, and do make use of it when that's all they have. Once they have information from direct interactions, however, the indirect variety of reputational information ceases to have an effect on their behavior. It's your local (in the social sense; not necessarily the physical-distance sense) reputation that's most valuable. Your reputation more globally - among those you're unlikely to ever interact much with - would be far less important.

The problems don't end there, though; not by a long shot. On top of the information often not being available, and often not mattering much to those who do have it, there's also the untouched matter of whether the information is even accurate. Potential problems here come in three forms: passive misunderstandings, active misinformation, and diagnosticity. Taking these in order, consider a case where, from across the room, you see a stranger punch your friend in the nose. From this information, you might decide that it's best to steer clear of that stranger. This seems like a smart move, except for what you didn't see: a moment prior, your friend, being a bit drunk, had told the stranger's wife to leave her husband at the bar and come home with him instead. So, what does this example show us? That even if you've directly observed an interaction, you probably didn't observe one or more previous interactions that led up to the current one, and those might well have mattered. To put this in the language of game theorists, did you just witness a cooperator punishing a defector, a defector harming a cooperator, or some other combination? From your lone observation, there's no sure way to tell.

But what if your friend told you that the other person had attacked them without provocation? Most reputational information would seem to spread this way, given that most human interaction is not observed by most other people. We could call this the "taking someone else's word for it" model of reputation. The problems here should be clear to anyone who has ever had friends: it's possible your friend misinterpreted the situation, or that your friend had some ulterior motive for actively manipulating your perception of that person's reputation. To rephrase this again in game theorists' language, if cooperators can be manipulated into punishing other cooperators, either through misperception or misinformation, this throws another sizable wrench into the gears of the reputation model. If reputations can be easily manipulated, cooperation becomes relatively less profitable: a cooperator may fail to reap some of cooperation's benefits, while a defector may offset some of defection's costs. Talk is cheap, and indirect reciprocity models seem to require a lot of it.
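
To get a feel for how much work accurate gossip does in these models, one can take the same toy simulation from earlier and let some fraction of helpful acts get recorded as refusals, whether through honest misperception or self-serving misinformation. Again, the error rate and payoffs are assumptions chosen purely for illustration.

```python
import random

# The same toy image-scoring setup as before, except that a cooperative act
# is sometimes recorded as a refusal (misperception or misinformation).
# ERROR and the payoff values are illustrative assumptions only.

BENEFIT, COST = 4.0, 1.0
ROUNDS, POP = 20000, 100
ERROR = 0.2               # chance a helpful act gets logged as a refusal

strategies = [random.choice([-6, 0, 6]) for _ in range(POP)]
scores = [0] * POP
payoffs = [0.0] * POP
misdirected = 0           # refusals aimed at recipients who are not defectors

for _ in range(ROUNDS):
    donor, recipient = random.sample(range(POP), 2)
    if scores[recipient] >= strategies[donor]:
        payoffs[donor] -= COST
        payoffs[recipient] += BENEFIT
        if random.random() > ERROR:
            scores[donor] = min(scores[donor] + 1, 5)   # correctly seen helping
        else:
            scores[donor] = max(scores[donor] - 1, -5)  # wrongly "seen" refusing
    else:
        if strategies[recipient] != 6:
            misdirected += 1   # a cooperator or discriminator was denied help
        scores[donor] = max(scores[donor] - 1, -5)

print(f"refusals aimed at non-defectors: {misdirected}")
for k, label in [(-6, "always cooperate"), (0, "discriminate"), (6, "always defect")]:
    group = [payoffs[i] for i in range(POP) if strategies[i] == k]
    if group:
        print(f"{label:16s} mean payoff: {sum(group) / len(group):6.2f}")
```

Even a modest error rate means discriminators begin withholding help from people who did nothing wrong, which tends to narrow the payoff gap between the nicer strategies and the defectors; the more the scorekeeping runs on cheap talk, the worse this problem gets.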

This brings us to the final accuracy point: diagnosticity. Let's say that, hypothetically, the stranger did attack your friend without provocation, and this was observed accurately. What have you learned from this encounter? Perhaps you might infer that the stranger is likely to be an all-around nasty person, but there's no way to tell precisely how predictive that incident is of the stranger's later behavior, either towards your friend or towards you. Just because the stranger might make a bad social asset for someone else, it does not mean they'll make a bad social asset for you, in much the same way that my not giving a homeless person change doesn't mean my friends can't count on my assistance when in need. Further, having a "bad" reputation among one group can even result in my having a good relationship with a different group; the enemy of my enemy is my friend, as the saying goes. In fact, that last point is probably what Joan Jett was advertising in her iconic song: not that she has a bad reputation with everyone, just that she has a bad reputation among those other people. The video for her song would lead us to believe those other people are also, more or less, without morals, only taking a liking to Joan when she has something to offer them.

The type of people who really don’t give a damn about their reputation.

While this is not an exhaustive list of ways in which many current assumptions of reputation models are lacking (there are, for instance, also cases where cooperating with one individual necessitates defecting on another), these shortcomings still pose severe problems that such models need to overcome. Just to recap: information flow is limited, that flow is generally biased away from the people who need it the most, there's no guarantee that the information is accurate when it is received, and that information, even if received and accurate, is not necessarily predictive of future behavior. The information might not exist, might not be accurate, or might not matter. Despite these shortcomings, however, what other people think of you does seem to matter; it's just that the reasons it matters need to be, in some respects, fundamentally rethought. Those reasons will be the subject of the next post.

References: Nowak, M. (2012). Evolving cooperation. Journal of Theoretical Biology, 299, 1-8.

Sigmund, K. (2012). Moral assessment in indirect reciprocity. Journal of Theoretical Biology, 299, 25-30.

Copyright Jesse Marczyk
