What makes humans moral beings? This is the question that leads off the abstract of a paper by Baumard et al (2013), and certainly one worth considering. However, before one can begin to answer that question, one should have a pretty good idea in mind as to what precisely one means by the term ‘moral’. On that front, there appears to be little in the way of consensus: some have equated morality with things like empathy, altruism, impartiality, condemnation, conscience, welfare gains, or fairness. While all of these can be features of moral judgments, none of these intuitions about what morality is tends to differentiate it from the non-moral domain. For instance, mammary glands are adaptations for altruism, but not necessarily adaptations for morality; people can empathize with the plight of sick individuals without feeling that the issue is a moral one. If one wishes to have a productive discussion of what makes humans moral beings, it would seem beneficial to begin from some solid conceptualization of what morality is and what it has evolved to do. If you don’t start from that point, there’s a good chance you’ll end up talking about a different topic than morality.
The current paper up for examination by Baumard et al (2013) is a bit of an offender in that regard: their account explicitly mentions that a definition for the term is hard to agree upon, and they use the word “moral” to mean “fair”. To understand this issue, first consider the model that the authors put forth: their account attempts to explain moral sentiments by suggesting that selection pressures might have been expected to shape people to seek out the best possible social deals they could get. In simple terms, the idea contains the following points: (1) people are generally better off cooperating than not, but (2) some individuals are better cooperative partners than others. Since (3) people only have a limited budget of time and energy to spend on these cooperative interactions and can’t cooperate with everyone, we should expect that (4) so long as people have a choice as to whom they cooperate with, people will tend to choose to spend their limited time with the most productive partners. The result is that overly-selfish or unfair individuals will not be selected as partners, resulting in selection pressures generating cognitive mechanisms concerned with fairness or altruism. Their model, in other words, centers around managing the costs and benefits from cooperative interactions. People are moral (fair) because it leads to their being preferred as interaction partners.
Now that all sounds well and good – and I would agree with each of the points in the line of thought – but it doesn’t sound a whole lot like a discussion about what makes people moral. One way of conceptualizing the idea is to think about a simple context: shopping. If I’m in the market for, say, a new pair of shoes, I have a number of different stores I might buy my shoes from and a number of potential shoes in each store. Shopping around for the shoe that I like the most at a reasonable price fits all the above criteria in some sense, but shoe-shopping is not itself often a moral task. That a shoe I like is priced higher than I am willing to pay does not necessarily mean I will say that such pricing is wrong the way I might say stealing is wrong. Baumard et al (2013) recognize this issue, noting that a challenge is explaining why people have not just selfish motives, but also moral motives that lead them to respect other people’s interests per se.
Now, again, this would be an excellent time to have some kind of working definition of what precisely morality is, because, if one doesn’t, it might seem a bit peculiar to contrast moral and selfish motivations – which the authors do – as if the two are opposite ends of some spectrum. I say that because Baumard et al (2013) go on to discuss how people who have truly moral concerns for the welfare of others might be chosen as cooperative partners more often because they’re more altruistic, building up a reputation as good cooperators, and this is, I think, supposed to explain why we have said moral concerns. So the first problem here is that the authors are no longer explaining morality per se, but rather altruistic behaviors. As I mentioned in the first paragraph, mechanisms for altruism need not be moral mechanisms. The second problem I see is that, provided their reasoning about reputation is accurate (and I think it is), it seems perfectly plausible for non-moral mechanisms to make that judgment as well: I could simply be selfishly interested in being altruistic (that is to say, I would care about your interests out of my own interests, the same way people might not murder each other because they’re afraid of going to jail or possibly being killed in the process themselves). The authors never address that point, which bodes poorly for their preferred explanation.
More troublingly for the partner-choice model of morality, it doesn’t seem to explain why people punish others for acts deemed immoral. The only type of punishment it seems to account for would be, essentially, revenge, where an individual punishes another to secure their own self-interest and defend against future aggression; it might also be able to explain why someone might not wish to continue working in an unfair relationship. This would leave the model unable to explain any kind of moral condemnation from third parties (those not initially involved in the dispute). It would seem to have little to say about why, for instance, an American might care about the woes suffered by North Korean citizens under the current dictatorship. As far as I can tell, this is because the partner-choice account of morality is a conscience-centric account, and conscience does not explain condemnation; that I might wish to cooperate with ‘fair’ people doesn’t explain why I think someone should be punished for behaving unfairly towards a stranger. The model at least posits that moral condemnation ought to be proportional to the offense (i.e. an eye for an eye), seeking to restore fairness, but not only is this insight not a unique prediction, it’s also contradicted by some data on drunk driving I covered before (that is, unless a drunk man hitting a woman with his car is more “unfair” than a drunk woman hitting a man).
Though I don’t have time to cover every issue I see with the paper in depth (in large part owing to its length), the main issue I see with the account is that Baumard et al (2013) never really define what it is they mean by morality in the first place. As a result, the authors appear to just substitute “altruism” or “fairness” for morality instead. Now if they want to explain either of those topics, they’re more than welcome to; it’s just that calling them morality instead of what they actually mean (fairness) tends to generate quite a bit of well-deserved confusion. In the interests of progress, then, let’s return to the concern I raised about the opening question. When we are asking about what makes people moral, we need to start by considering what morality is. The short answer to that question is that morality is, roughly, a perception: at a basic level, it’s the ability to perceive acts in, or states of, the world along a dimension of “right” or “wrong” in much the same way we might perceive sensations as painful or pleasurable. This spectrum seems to range from the morally-praiseworthy at one end to the morally-condemnable at the other, with a neutral point somewhere in the middle.
Framed in this light, we can see a few rather large problems with conflating morality with things like fairness. The first of these is that perceiving an outcome as immoral would require that one first perceives it as unfair and then as immoral, as neither the reverse ordering nor one in which both perceptions arise simultaneously makes any sense. If one can have a perception of fairness divorced from a moral perception, then, it seems that one could use that perception to do the behavioral heavy lifting when it comes to partner choice. Again, people could be selfishly fair. The second problem becomes apparent when we consider whether perceptions of immorality can be generated in response to acts that do not appear to deal with fairness or altruism. As sexual and solitary behaviors (like incest or drug use) are moralized with some frequency, the fairness account seems to be lacking. In fact, there are even issues where altruistic behavior has been morally condemned by others, which is precisely the opposite of what the Baumard et al (2013) model would seem to predict.
Instead of titling their paper “A mutualistic approach to morality”, the authors might have been better served with the title “A mutualistic approach to fairness”. Then again, this would only go so far when it comes to remedying the issue, as Baumard et al (2013) never really define what they mean by “fair” either. Since people seem to disagree on that issue with some frequency, we’re still left with more than a bit of a puzzle. Is it fair that very few people in the world hold so much wealth? Would it be fair for that wealth to be taken from them and given to others? People likely have different answers to those questions.
Now the authors argue that this isn’t really that large of a problem for their account, as people might, for instance, disagree as to the truth of a matter while all holding the same concept of truth. Accordingly, Baumard et al (2013) posit that people can disagree about what is fair even if they hold the same concept of fairness. The problem with that analogy, as far as I see it, is that people don’t seem to have competing senses of the word “truth” while they do have different senses of the word “fair”: fairness based on outcome (everyone gets the same amount), based on effort (everyone gets in proportion to what they put in), based on need (those who need the most get the most), and perhaps others still. Which of these concepts people favor is likely going to be context-specific. However, I don’t know that the same can be said of different senses of the word “true”. Are there multiple senses in which something might or might not be true? Are these senses favored contextually? Perhaps there are different senses of the word, but none come to mind as readily.
Baumard et al (2013) might also suggest that by “fair” they actually mean “mutually beneficial” (writing, “Ultimately, the mutualistic approach considers that all moral decisions should be grounded in consideration of mutual advantage”), but we’d still be left with the same basic set of problems. Bouncing interchangeably between three different terms (moral, fair, and mutually-beneficial) is likely to generate more confusion than clarity. It is better to ensure one has a clear idea of what one is trying to explain before one sets out to explain it.
References: Baumard, N., André, J.-B., & Sperber, D. (2013). A mutualistic approach to morality: The evolution of fairness by partner choice. Behavioral and Brain Sciences, 36, 59-122.