In moral psychology, one of the best-known ways of parsing how outcomes come about involves the categories of actions and omissions. Actions are intuitively understandable: they are behaviors that bring about certain consequences directly. By contrast, omissions are failures to act that result in certain consequences. As a quick example, a man who steals your wallet commits an act; a man who finds your lost wallet, keeps it for himself, and says nothing to you commits an omission. Though actions and omissions might result in precisely the same consequences (in this case, you end up with less money and the man ends up with more), they do not tend to be judged the same way. Specifically, actions tend to be judged as more morally wrong than comparable omissions and as more deserving of punishment. While this state of affairs might seem perfectly normal to you or me, a deeper understanding of it requires us to take a step back and consider why it is, in fact, rather strange.
And so long as I omit the intellectual source of that strategy, I sound more creative.
From an evolutionary standpoint, this action-omission distinction is strange for a clear reason: evolution is a consequentialist process. If I’m worse off because you stole from me or because you failed to return my wallet when you could have, I’m still worse off. Organisms should be expected to avoid costs, regardless of their origin. Importantly, costs need not be conceptualized only as what one might typically envision them to be, like the infliction of physical damage or the theft of resources; they can also be understood as failures to deliver benefits. Consider a new mother: though she might not kill her child directly, if she fails to provision the infant with food, the infant will die all the same. From the perspective of the child, the mother’s failure to provide food could well be considered a cost inflicted by negligence. So, if someone could avoid harming me – or could provide me with some benefit – but does not, why should it matter whether that outcome obtained because of an action or an omission?
The first part of that answer concerns a concept I mentioned in my last post: the welfare tradeoff ratio (WTR). Omissions are, generally speaking, less indicative of one’s underlying WTR than acts are. Let’s consider the wallet example again: when a wallet is stolen, the act expresses that the thief is willing to make me suffer a cost so that he can benefit; when the wallet is found and not returned, this represents a failure to deliver a benefit to me at some cost to the finder (the time required to track me down and the money in my wallet he would forgo). While the former expresses a negative WTR, the latter simply fails to express an overtly positive one. To the extent that moral punishment is designed to recalibrate WTRs, then, acts provide us with more accurate estimates of WTRs, and might subsequently tend to recruit those cognitive moral systems to a greater degree. Unfortunately, this explanation is not yet entirely satisfying, owing to the consequentialist facts of the matter: it can be as good, from my perspective, to increase the thief’s WTR toward me as it is to increase the omitter’s. Doing either means I end up with more money than I otherwise would, which is a useful outcome. Costs and benefits, in this world, are tallied on the same scoreboard.
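The inference being described can be made concrete with a standard formalization of the WTR: an actor with WTR w toward me prefers an option when his own net benefit plus w times my benefit is positive. Observing a choice that cost me something therefore puts an upper bound on w. The sketch below applies that rule to the wallet example; all of the specific numbers (the wallet’s value, the thief’s risk, the finder’s time cost) are illustrative assumptions, not figures from the post.

```python
# Minimal sketch of WTR inference. An actor with WTR w toward me
# chooses an option when: net_benefit_self + w * benefit_to_me > 0.

def implied_wtr_upper_bound(net_benefit_self, cost_to_me):
    """The actor chose an option that netted him net_benefit_self while
    costing me cost_to_me (> 0). His choice reveals:
    net_benefit_self - w * cost_to_me > 0, i.e. w < the ratio below."""
    return net_benefit_self / cost_to_me

# Theft: the thief gains the wallet's 100 units but runs, say, a 20-unit
# expected risk of retaliation; I lose 100. His act reveals w < 0.8.
theft_bound = implied_wtr_upper_bound(100 - 20, 100)

# Omission: keeping a found wallet. Returning it would mean forgoing the
# 100 units plus ~10 units of time, and would deliver 100 to me.
# Not returning it only reveals w < 1.1 -- a failure to express an
# overtly positive WTR, not clear evidence of a negative one.
omission_bound = implied_wtr_upper_bound(100 + 10, 100)

print(theft_bound, omission_bound)
```

Under these assumed numbers, the act pins the thief’s WTR below a tighter, lower bound than the omission does for the finder, which is one way of cashing out the claim that acts provide more accurate WTR estimates.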
The second part of the answer, then, needs to invoke the costs inherent in enacting this modification of WTRs through moral punishment. Just as it’s good for me if others hold a high WTR with respect to me, it’s similarly good for others if I hold a high WTR with respect to them. This means that people, unsurprisingly, are often less than accommodating when it comes to giving up their welfare for another without the proper persuasion; persuasion which happens to take time and energy to enact, and which comes with certain risks of retaliation. Accordingly, we ought to expect mechanisms that enact moral condemnation strategically: when the costs of doing so are sufficiently low or the benefits of doing so are sufficiently high. After all, every living person right now could, in principle, increase their WTR toward you, but trying to morally condemn every living person for not doing so is unlikely to be a productive strategy. Not only would such a strategy have the condemner undertaking many endeavors unlikely to succeed relative to the invested effort, but someone increasing their WTR toward you requires that they lower their WTR toward someone else, and those someone elses would typically not be tickled by the prospect.
“You want my friend’s investment? Then come and take it, tough guy”
Given the costs involved in indiscriminately condemning non-maximal WTRs, we can narrow our consideration of the action-omission distinction to the following question: what is it about punishing omissions that tends to be less productive than punishing actions? One possible explanation comes from DeScioli, Bruening, & Kurzban (2011). The trio posit that omissions are judged less harshly than actions because omissions tend to leave less overt evidence of wrongdoing. Since punishment costs tend to decrease as the number of punishers increases, if third-party punishers make use of evidence in deciding whether or not to become involved, then material evidence should make punishment easier to enact. Unfortunately, the design the researchers used in their experiments does not appear to speak definitively to their hypothesis. Specifically, they found the effect they were looking for – namely, a reduction of the action-omission effect – but they only managed to do so by reframing an omission (failing to turn a train or stop a demolition) as an action (pressing a button that failed to turn a train or stop a demolition). It is not clear that such a manipulation varied only the evidence available without fundamentally altering other morally relevant factors.
There is another experiment that did manage to substantially reduce the action-omission effect without introducing such a confound, however: Haidt & Baron (1996). In this paper, the authors presented subjects with a story about a person selling his car. The seller knows there is a 1/3 chance the car contains a manufacturing defect that will cause it to fall apart soon; the potential defect is specific to the year the car was made. When the buyer inquires about the year of the manufacturing defect, the seller either (a) lies about it or (b) fails to correct the buyer, who had suddenly exclaimed that he remembered which year it was, though he was incorrect. When subjects were asked how wrong it was for the seller to do (or fail to do) what he did, the action-omission effect was observed when the buyer was not personally known to the seller. However, if the seller happened to be good friends with the buyer, the size of the effect was reduced by almost half. In other words, when the buyer and seller were good friends, it mattered less whether the seller cheated the buyer through action or omission; both were deemed relatively unacceptable (and, interestingly, both were deemed more wrong overall as well). However, when the buyer and seller were all but strangers, people rated the cheat via omission as relatively less wrong than the action. Moral judgments in close relationships appeared to become more consequentialist.
If evidence were the deciding factor in the action-omission distinction, then the closeness of the relationship between the actor or omitter and the target should not be expected to have any effect on moral judgments (as the nature of the relationship does not itself generate any additional observable evidence). While this finding does not rule out a role for evidence in the action-omission distinction altogether, it does suggest that evidence concerns alone are insufficient for understanding the distinction. The nature of the relationship between actor and victim is, however, predicted to have an effect under the WTR model. We expect our friends, especially our close friends, to hold relatively high WTRs with respect to us; we might even expect them to go out of their way to suffer costs to help us if necessary. Indications that they are unwilling to do so – whether through action or omission – represent betrayals of that friendship. Further, when a friend behaves in a manner indicating a negative WTR toward us, the gulf between the expected (highly positive) and actual (negative) WTR is far greater than if a stranger behaved comparably (as we might expect a neutral starting point for strangers).
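The arithmetic behind that gulf is simple enough to sketch. If condemnation functions to close the gap between the WTR we expected someone to hold and the WTR their behavior revealed, then the same bad behavior from a friend demands far more recalibration than from a stranger. The specific values below (a friend expected at 0.8, a stranger at 0.0, a revealed WTR of -0.2) are illustrative assumptions only.

```python
# Sketch of the expected-vs-revealed WTR gap described above.

def recalibration_gap(expected_wtr, revealed_wtr):
    """Size of the discrepancy a punisher would need to correct."""
    return expected_wtr - revealed_wtr

# Same revealed WTR (-0.2), different expectations:
friend_gap = recalibration_gap(expected_wtr=0.8, revealed_wtr=-0.2)
stranger_gap = recalibration_gap(expected_wtr=0.0, revealed_wtr=-0.2)

print(friend_gap, stranger_gap)  # the friend's betrayal is the larger gap
```

Under these assumed numbers the friend’s gap is five times the stranger’s, which tracks the pattern in Haidt & Baron (1996): the same cheat is judged more harshly, via act or omission, when it comes from a friend.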
“I hate when girls lie online about having a torso!”
Though this analysis by no means provides a complete explanation of the action-omission distinction, it does point us in the right direction. It would seem that actions actively advertise WTRs, whereas omissions do not necessarily do likewise. Morally condemning everyone who fails to display a positive WTR does not make much sense, as the costs involved are high enough to preclude efficiency. Further, those who simply fail to express a positive WTR toward you might be less liable to inflict future costs than those who express a negative one (i.e., the man who fails to return your wallet is not necessarily as liable to hurt you in the future as the one who directly steals from you). Selectively directing condemnation at those who display appreciably low or negative WTRs, then, appears to be a more viable strategy: it could help direct condemnation toward where it’s liable to do the most good. This basic premise should hold especially given a close relationship with the perpetrator: such relationships entail more frequent contact and, accordingly, more opportunities for one’s WTR toward you to matter.
References: DeScioli, P., Bruening, R., & Kurzban, R. (2011). The omission effect in moral cognition: Toward a functional explanation. Evolution and Human Behavior, 32, 204-215.
Haidt, J., & Baron, J. (1996). Social roles and the moral judgment of acts and omissions. European Journal of Social Psychology, 26, 201-218.