I now know why you cry, but I could never do it. ~ Terminator 2
A generation ago, the brilliant Israeli satirist Ephraim Kishon wrote a story about how to review a book without having read it. Today I do just that. I have some relevant information, though: I heard the lecture based on the book. Noted neurophilosopher Patricia Churchland spoke at my university. Her thesis is that morality arises from biology, the brain, and its neurons. This is not a particularly novel or radical thesis, but it is one that traditionally trained philosophers abhor. Speaking to a roomful of psychologists, neuroscientists, and neurologists, however, Churchland preached to the choir (does this count as a pun?). Applause assured.
Churchland explained that morality comes from the combination of high sociality, oxytocin, and a large cortex. High sociality means living in groups and loving it (and needing it). Oxytocin is the attachment peptide that allows males and females to bond with each other and not eat their kids. A large cortex means that lots of information can be manipulated, integrated, and transformed to guide behavior, especially social behavior.
High sociality and oxytocin are necessary but not sufficient for the development of a moral sense (Kant dissenting). Lots of nonhumans have them without getting moral credit. Many species of insect are social, and prairie voles are awash in oxytocin. That leaves the large cortex. If you take it away, morality vanishes. Hence, the large cortex is necessary. But is it sufficient? We do not know because there are no creatures with large brains but no sociality and no oxytocin. Such creatures are the stuff of fantasy. Sci-fi robots, androids, or terminators are scary because we imagine them to be hyper-intelligent and amoral – they don’t care. Sure, there is Mr. Spock and Terminator 2, but they are programmed not to hurt people. They are not moral. They have no choice but to do no harm (ouch, I slipped into the free-will lingo, which I was determined – or was I? – not to use).
Churchland’s claim is that given sociality and oxytocin, the addition of a large neocortex opens the door to morality – and to us puzzling over it. Is that it? I think that’s it. Morality has biological roots. And how could that not be true? What human capacity or activity does not have biological roots? Any naturalist must say: None. Music, art, play, and politics all arise from biological sources. Why should morality alone be different? Again, the choir speaking here. Perhaps Churchland has an argument with certain philosophers who conceptualize morality as residing in an entirely separate realm. She hints at this when saying she never understood Plato.
Nevertheless, her argument (at least in the talk) stops where it gets interesting, namely at the question of why moral systems have come to be so demanding. Perhaps biology is not enough. Take reciprocity. Most of us reciprocate favors and nice behavior most of the time. This is good for us and for them because it sets mutually rewarding relations in motion. Why does society need to set up a norm of reciprocity and tell us what we must do when we are prepared to do it anyway? The answer is that there is a countervailing motive of narrow self-interest. If you do me a favor and I don’t reciprocate, I win, at least in the short run, because reciprocation is costly. The moral norm of reciprocity guards against a surrender to short-term self-interest. It protects me from discounting the future too much. In that sense, the norm of reciprocity can be said to be a matter of rationality rather than morality. A similar perspective can be taken on the Golden Rule (“Don’t do unto others . . .”).
There’s the collision course between biology and philosophy. If biological evolution has optimized the moral system(s) over thousands of generations, why does the heavy hand of social normative control still appear to be necessary? Of course, a holistically coherent argument would be to say that what we view as social norms imposed by society on the human animal are themselves manifestations of biological forces. An appealing argument, though (and?) entirely irrefutable.
Why do we have a commandment that we shall honor our parents? This is not a Jewish invention. The Confucians’ stress on filial duty would make any Jewish mother blanch. Do we not honor our parents anyway because we are attached, thanks to oxytocin? The oxytocin wears off, the parents get old, and our own kids demand love, attention, and investment. The oxytocin flows again, but looks downstream, generationwise. Again, there is a conflict. Separating from the parents and not caring for them in their twilight years has advantages, but it seems morally wrong.
Churchland’s theory does not appear to discriminate between trust and trustworthiness, although there are striking differences. Trustworthiness is a form of reciprocity, and it is further regulated by the norm of reciprocity (see above). Trust, on the other hand, is not normative (Bicchieri, Xiao, & Muldoon, 2011). There can be no commandment “Thou shalt always (never) trust.” Trust is a dilemma. Too much or too little compromises sociality and self-interest. A reflexive trustor will be exploited; an unswerving distrustor will not be included in the group. It’s like bluffing in poker. If you always (never) bluff, you will die poor. The trick is to strike the right balance and to know whom and when to trust. In this sense, trusting too seems to be a matter of rationality rather than morality. Of course, most people don’t see it that way. In a 2008 study, my colleagues Adam Massey and Theresa DiDonato and I found that the more a person trusts, the more moral credit she receives. We suspect that this finding came about because respondents acknowledged the risk the trustors were taking. Yet, these same individuals would probably not teach their children to trust all strangers.
Trust and trustworthiness also differ emotionally. The decision to trust is the acceptance of uncertainty between the hope of being rewarded and the fear of being betrayed. The decision not to reciprocate can result in guilt or a gratifying thrill, depending on temperament. Again, the latter emotions have a moral tone, whereas the former do not. And who needs the bigger brain, the trustor or the trustee? Arguably, the trustor needs a bigger brain because he has to cope with uncertainty. He needs to figure out what the trustee is likely to do. The trustee, in contrast, only needs to divide the spoils (or not), which can be accomplished with a simpler theory of mind (he still needs to figure out if the trustor intended to trust or if his action was accidental or forced). Nonetheless, it seems that trust, which is not a moral matter, requires a bigger brain than trustworthiness, which is.
Surya the orangutan (see picture at top) loves to hang out with Roscoe the hound. Churchland showed this picture (and others), noting that orangutans are generally not known for their sociality. They prefer to forage on their own. Yet, they have the oxytocin circuitry waiting to kick in when the right creature comes along. I think that just looking at the two of them raises oxytocin levels in humans. Surya’s lesson is that oxytocin + cortex is not enough. The social instincts are contextual, and often we don’t know what it is in the context that activates a latently available neural program.
I hate explaining my puns. It makes me feel like Jay Leno, who tells a joke and then explains why it is (supposed to be) funny. I have to make an exception for the title of this post. The German word for ‘frustration’ is ‘Frustration.’ Ah, so close. Germans love the abbreviation ‘Frust’ (read: froost). Look at the title again and remember that Churchland’s book is called Braintrust.
Bicchieri, C., Xiao, E., & Muldoon, R. (2011). Trustworthiness is a social norm but trusting is not. Politics, Philosophy & Economics, 10, 170-182.
Krueger, J. I., Massey, A. L., & DiDonato, T. E. (2008). A matter of trust: From social preferences to the strategic adherence of social norms. Negotiation & Conflict Management Research, 1, 31-52.