In my last post, I outlined a number of theoretical problems that stand in the way of reputation being a substantial force for maintaining cooperation via indirect reciprocity. Just to recap them quickly: (1) reputational information is unlikely to be spread much via direct observation, (2) when it is spread, it's most likely to flow towards people who already have a substantial number of direct interactions with the bearer of the reputation, and (3) reputational information, whether observed visually or transmitted through language, might often be inaccurate (due to manipulation or misperception) or non-diagnostic of an individual's future behavior, either in general or towards the observer. Now all of this is not to say that reputational information would be entirely useless in predicting the future behavior of others; just that it seems an unlikely force for sustaining cooperation in reality, despite what some philosophical intuitions written in the language of math might say. My goal today is to try to rescue reputation as a force to be reckoned with.
In all fairness, I did only say that I would try...
The first - and, I think, the most important - step is to fundamentally rethink what this reputational information is being used to assess. The most common current thinking about what third-party reputation information is used to assess would seem to be the obvious: you want to know about the character of that third party, because that knowledge might predict how that third party will act towards you. On top of assuming away the above problems, then, one would also need to add the assumption that interactions between you and the third party are relatively probable. Let's return to the example of your friend getting punched by a stranger at a bar one night. Assume that you accurately observed all the relevant parts of the incident, and that the stranger's behavior there was also predictive of how he would behave towards you (that is, he would attack you unprovoked). If you weren't going to interact with that stranger anyway, regardless of whether you received that information, then while the information might be true, it's not valuable.
But what if part of what people are trying to assess isn't how that third party will behave towards them, but rather how that third party will behave towards their social allies? To clarify this point, let's take a simple example with three people: A, B, and X. Persons A and B will represent you and your friend, respectively; person X will represent the third party. Now let's say that A and B have a healthy, mutually-cooperative relationship. Both A and B benefit from this relationship and have extensive histories with each other. Persons B and X also have a relationship and extensive histories with one another, but this one is not nearly as cooperative; in fact, person X is downright exploitative of B. Given that A and X are otherwise unlikely to ever interact with each other directly, why would A care about what X does?
The answer to this question - or at least part of that answer - involves A and X interacting indirectly. This requires the addition of a simple assumption, however: the benefits that person B delivers to person A are contingent on person B's state. To make this a little less abstract, let's just use money. Person B has $10 and can invest that money with A. For every dollar that B invests, both players end up making two. If B invests all his money, then, both he and person A end up with $20. In the next round, B has his $10, but before he gets a chance to invest it with A, person X comes along and robs B of half of it. Now, person B only has $5 left to invest with A, netting them both $10. In essence, person X has now become person A's problem, even though the two never interacted. All this assumption does, then, is make clear the fact that people are interacting in a broader social context, rather than in a series of prisoner's dilemmas where your payoff only depends on your own, personal interactions.
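To make the arithmetic concrete, here's a minimal sketch of the investment game in Python; the function name and structure are my own illustrative assumptions, not anything from the example itself:

```python
# A toy model of the investment example: whatever X leaves B, B invests
# with A, and each invested dollar yields $2 for each player.

def joint_payoff(endowment, stolen):
    """Return the payoff each of A and B receives this round."""
    invested = endowment - stolen  # what B has left to invest with A
    return invested * 2            # each dollar invested yields $2 apiece

# Round 1: X hasn't interfered, so B invests his full $10.
print(joint_payoff(10, 0))  # 20 -- both A and B end up with $20

# Round 2: X robs B of half his $10 before B can invest.
print(joint_payoff(10, 5))  # 10 -- both A and B net only $10
```

The point the sketch makes is simply that `stolen` lowers A's payoff even though A never meets X directly.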
Now if only there was a good metaphor for that idea...
With the addition of this assumption, we're able to circumvent many of the initial problems that reputational models faced. Taking them in reverse order, we are able to get around the direct-interaction issue, since your social payoffs now co-vary to some extent with your friends', making direct interaction no longer a necessary condition. It also allows us to circumvent the diagnosticity issue: there's less of a concern about how a third party might interact with you differently than with your friend, because it's the third party's behavior towards your friend that you're trying to alter. It also, to some extent, allows us to get around the accuracy issue: if your friend was attacked and lies to you about why they were attacked, it matters less, as one of your primary concerns is simply making sure that your friend isn't hurt, regardless of whether your friend was in the right or not. This takes some of the sting out of the issues of misperception or misinformation.
That said, it does not take all the sting out. In the previous example, person A has a vested interest in making sure B is not exploited, which gives person B some leverage. Let's alter the example a bit and say that person B can only invest $5 with person A during any given round; in that case, if X steals $5 from B's initial $10, it wouldn't affect person A at all. Since person B would rather not be exploited, they might wish to enlist A's help, but find person A less than eager to pitch in. This leaves person B with three options: first, B might just suck it up and suffer the exploitation. Alternatively, B might consider withholding cooperation from A until A is willing to help out, similar to B going on strike. If person B opts for this route, then all concerns for accuracy are gone; person A's helping out is merely a precondition of maintaining B's cooperation. This strategy is risky for B, however, as it might look like exploitation from A's point of view. As this makes B a costlier interaction partner, person A might consider taking his business elsewhere, so to speak. This would leave B still exploited and out a cooperative partner.
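The altered arithmetic can be sketched the same way; only the $5 investment cap comes from the example above, while the function itself is a hypothetical illustration:

```python
# The altered example: B can invest at most $5 per round with A, and
# each invested dollar still yields $2 for each player.

def joint_payoff_capped(endowment, stolen, cap=5):
    """Return the payoff each of A and B receives this round."""
    invested = min(endowment - stolen, cap)  # the cap binds B's investment
    return invested * 2

# X steals nothing: B invests the $5 cap, so each player gets $10.
print(joint_payoff_capped(10, 0))  # 10

# X steals $5: B still has $5 left to invest, so A's payoff is unchanged.
print(joint_payoff_capped(10, 5))  # 10
```

Because the cap binds either way, the theft is invisible to A, which is why A has no direct incentive to help B.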
There is another potential way around the issue, though: person B might attempt to persuade A that person X really was interfering in such a way that made B unable to invest; that is, person B might try to convince A that X had really stolen $8 instead of $5. If person B is successful in this task, it might still make him look like a costlier social investment, but not because he is himself attempting to exploit A. Person B looks like he really does want to cooperate, but is being prevented from doing so by another. In other words, B looks more like a true friend to A, rather than just a fair-weather one or an exploiter (Tooby & Cosmides, 1996). In this case, something like manifesting depression might work well for B to recruit support to deal with X (Hagen, 2003). Even if such behavior doesn't directly stop X from interfering in B's life, though, it might also prompt A to increase their investment in B to help maintain the relationship despite those losses. Either way, whether through avoiding costs or gaining benefits, B can leverage their value with A in these interactions and maintain their reputation as a cooperator.
"I'll only show back up to work after you help me kill my cheating wife"
Finally, let's step out of the simple interaction into the bigger picture. I also mentioned last time that, sometimes, cooperating with one individual necessitates defecting on another. If persons A and B ally against person X while person Y is cooperating with X, person Y may now also incur some of the punishment A and B direct at X, either directly or indirectly. Again, to make this less abstract, consider that you recently found out your friend holds a very unpopular social opinion (say, that women shouldn't be allowed to vote) that you do not. Other people's scorn for your friend now makes your association with him all the more harmful for you: by benefiting him, you can, by proxy, be seen either to be helping him promote his views, or be inferred to hold those same views yourself. In either case, being his friend has now become that much costlier, and the value of the relationship might need to be reassessed in that light, even if his views might otherwise have little impact on your relationship directly. Knowing that someone has a good or bad reputation more generally can be seen as useful information in this light, as it might tell you all sorts of things about how costly an association with them might eventually prove to be.
References: Hagen, E.H. (2003). The bargaining model of depression. In: Genetic and Cultural Evolution of Cooperation, P. Hammerstein (ed.). MIT Press, 95-123.
Tooby, J., & Cosmides, L. (1996). Friendship and the banker's paradox: Other pathways to the evolution of adaptations for altruism. Proceedings of the British Academy, 88, 119-143.
Copyright: Jesse Marczyk