Getting Others to Do Their Fair Share, Fairly

New study shows people incur a cost to discipline free-riders.

Key points

  • Criticizing or punishing cooperators instead of free-riders undermines cooperation.
  • Punishing based on faulty observations can therefore go awry.
  • Rational cooperators should be motivated to seek out better information in order to enforce cooperative norms more effectively.
Source: Representation of Spanish justice on a mural in Gràcia, Barcelona/Wikimedia Commons

Behavioral and experimental economists have found much support over the years for the idea that most people prefer to cooperate with others when facing a shared challenge, but that cooperation holds up only if a sufficient number in any given group or society stand ready to provide gentle criticism and, if needed, stronger punishment of rule violators, in order to curb temptations to do less than one’s share. Experiments strongly suggest that “cooperators” are sufficiently motivated, whether by anger, aversion to inequality, or some other force, to be willing to part with some of their own material rewards in order to discipline “free-riders.”

As studies of the matter have ventured into ever greater detail, one concern raised has been the possibility that the information available to people about who has and who hasn’t violated a rule might be “noisy,” in the sense of sometimes containing reporting errors. Several experiments in recent years have used the paradigm of a voluntary contribution experiment, or public goods game, with punishment opportunities, modified so that the reports participants receive about the contributions of other group members are erroneous a known fraction of the time. In the standard version, group members benefit when all contribute, a strictly selfish individual has an incentive to contribute nothing, and the option of punishing tends to be taken up by the cooperatively inclined, who use it to reduce the earnings of less cooperative counterparts, an action that tends to stabilize contributions at high levels.
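For readers who like to see the arithmetic, here is a minimal sketch of the incentive structure just described, written in Python. The endowment, the return on the group project, and the group size are illustrative assumptions for the sake of the example, not the values used in any particular experiment.

```python
# Minimal sketch of a standard linear public goods game.
# All parameter values below are illustrative assumptions.

ENDOWMENT = 20   # tokens each member starts with (assumed)
MPCR = 0.4       # each member's return per token in the group project (assumed)

def payoff(own_contribution, others_contributions):
    """Earnings = tokens kept + one's share of the multiplied group project."""
    total = own_contribution + sum(others_contributions)
    return (ENDOWMENT - own_contribution) + MPCR * total

# In a group of four, everyone is better off if all contribute everything...
print(payoff(20, [20, 20, 20]))   # 32.0 tokens each under full cooperation
print(payoff(0, [0, 0, 0]))       # 20.0 tokens each under universal free-riding

# ...but whatever the others do, a strictly selfish member earns more by
# contributing nothing, since each token contributed returns only MPCR < 1 to its owner.
print(payoff(0, [20, 20, 20]))    # 44.0 tokens: free-riding on cooperative partners
print(payoff(20, [0, 0, 0]))      # 8.0 tokens: contributing alone
```

With these assumed numbers, each member earns 32 tokens if all contribute versus 20 if none do, yet every individual earns 12 tokens more by contributing nothing, which is why some enforcement mechanism is needed.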

In the new variants of the experiment that include the possibility of error, participants know that a report will be accurate a certain percentage of the time (for example, 50% of the time), but they have no way to tell whether any specific report is accurate or erroneous. Will a group member still punish another member upon seeing them reported as having failed to contribute, even knowing the report may be wrong? If so, will such punishment still be of any help in deterring “free-riding” (failing to contribute)?

If everyone involved were hyper-rational about the matter yet able to bind themselves to punish whenever they see an apparent violation, then the answer to those questions would depend on how likely errors are, as well as on the severity and costliness of punishment. It’s easy to imagine a situation in which errors occur with a very low probability, say 5%, so that a report of not contributing is accurate with 95% likelihood. Given this, simply going ahead and punishing whenever a member is reported as not contributing may be effective in deterring free-riding. But as the probability of error approaches 50%, a member becomes no more likely to escape punishment by contributing than by not contributing, so punishment cannot be an effective deterrent to selfish free-riding.
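To make that logic concrete, the following sketch compares the expected punishment facing a contributor and a free-rider when the would-be punisher simply punishes every “did not contribute” report. The punishment size and the net cost of contributing are assumed numbers chosen for illustration, not values taken from the experiments themselves.

```python
# Deterrence under noisy reports, assuming a punisher who punishes
# every "did not contribute" report. Parameter values are assumptions.

PUNISHMENT = 30                 # earnings reduction imposed on whoever is punished (assumed)
NET_COST_OF_CONTRIBUTING = 12   # tokens a member gives up by contributing (assumed)

def expected_penalty(contributed, error_prob):
    """Chance of being reported as a non-contributor, times the penalty."""
    p_bad_report = error_prob if contributed else (1 - error_prob)
    return p_bad_report * PUNISHMENT

for error_prob in (0.05, 0.50):
    deterrence_gap = expected_penalty(False, error_prob) - expected_penalty(True, error_prob)
    deterred = deterrence_gap > NET_COST_OF_CONTRIBUTING
    print(f"error {error_prob:.0%}: free-riding adds {deterrence_gap:.1f} in expected "
          f"punishment vs. the {NET_COST_OF_CONTRIBUTING} tokens it saves -> deterred: {deterred}")
```

With a 5% error rate, the extra expected punishment from free-riding (27 tokens in this sketch) easily outweighs the 12 tokens saved; at a 50% error rate the gap is zero, and punishment loses its deterrent force.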

A prominent finding from experiments using these games modified to include a known fraction of erroneous reports is that those reported as failing to contribute to the group project continued to be targeted with punishment (which reduces their earnings at the voluntarily incurred expense of the punisher). One group of researchers described this as “punishment despite reasonable doubt.” Whereas the availability of a punishment option had often led most or all group members to contribute all or most of their experimental tokens to the group’s joint project, with everyone benefiting, the revised circumstance of potentially erroneous punishment resulted in a lower rate of contributions. With less contributed and more money lost to the punishment process, the advantage of having a punishment option available was no longer apparent.

A natural next step has been to see what happens if group members can exert some control over the reliability of the reports they receive. In real life, your degree of confidence in your perception that a member of your circle is or isn’t violating a rule is often a function of your own efforts: whether you take the time to observe them on the job, ask their co-workers to share their impressions, look more carefully at their contribution, and so on. My collaborators, Andreas Nicklisch and Christian Thoeni, and I took this next step by running a version of the experiment that gives group members imperfect observations of each other’s contributions but also offers them the option of paying to see more accurate reports.

Reassuringly, we found it relatively rare for a participant to punish a fellow group member based on information they knew to be imperfect when more accurate information could be obtained at a modest cost. More interestingly, while group members could obtain, for a low price, reports good enough to make punishment an effective deterrent (specifically, reports that lower the error probability from 50% to 25%), almost all those who chose to pay did not settle for “adequate” reports; they paid twice as much for perfect information, lowering the error probability from 50% to 0% rather than only to 25%. This strongly suggests that they wanted not only information accurate enough to deter others from free-riding, but also to avoid punishing wrongfully if possible. They cared not just about the efficacy of any reprimanding or punishing that took place, but about its fairness.
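Extending the earlier arithmetic to the three report qualities in our experiment (50% error by default, 25% error for a low price, 0% error for twice that price) helps show why paying extra for perfect reports signals a concern for fairness and not just for deterrence. The punishment size is again an assumed number for illustration.

```python
# Comparing the three report qualities: what each buys in deterrence
# and in protection against wrongful punishment. Punishment size is assumed.

PUNISHMENT = 30                 # assumed earnings reduction per act of punishment
NET_COST_OF_CONTRIBUTING = 12   # assumed, as in the previous sketch

for error_prob, label in [(0.50, "free report"),
                          (0.25, "better report, low price"),
                          (0.00, "perfect report, twice the price")]:
    wrongful = error_prob * PUNISHMENT        # expected loss of a contributor punished by mistake
    gap = (1 - 2 * error_prob) * PUNISHMENT   # extra expected penalty borne by a free-rider
    print(f"{label:>30}: deterrence gap {gap:.1f} "
          f"(deters: {gap > NET_COST_OF_CONTRIBUTING}), "
          f"expected wrongful punishment {wrongful:.1f}")
```

With these assumed numbers, the cheaper 25%-error report already makes free-riding unprofitable, yet a contributor still expects to lose 7.5 tokens to mistaken punishment; only the costlier perfect report drives wrongful punishment to zero, and that is the option most paying participants chose.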

Readers should take note that in the decision experiments discussed here, it’s usually not privately profitable for individuals to incur a cost to punish those failing to contribute. Participants intent on achieving a mutually beneficial cooperative equilibrium in their group incur the costs of contributing, of punishing, and (in the latest experiment) of paying for better information, all out of one or another kind of “prosocial” motivation, discussed elsewhere in this blog and in the related literature cited by our work. Those that care want to be fair. That is, individuals with a cooperative disposition, ones willing to contribute to the group welfare provided that others also do so, care enough about doing their parts that they’re not only willing to incur some cost to punish free-riders, but they also bear the cost of avoiding unfair punishing. That they pay for full information rather than merely “good enough” information shows this desire to be fair. The demand for justice or fairness, identified through the economic device of “willingness to pay,” seems widespread in societies in which the modal individual has internalized strong norms of cooperation or of carrying their share of the burden in order to achieve better outcomes for the group as a whole.

References

Andreas Nicklisch, Louis Putterman and Christian Thoeni, Trigger-Happy or Precisionist? On Demand for Monitoring in Peer-Based Public Goods Provision, Journal of Public Economics, 2021.
