Morality and Tribalism: The Problem with Utilitarianism

An answer to Joshua Greene’s Moral Tribes

The problem with Utilitarianism is the same problem that one finds with any optimization function—what if the other tribe doesn’t share the same optimization function? In his book Moral Tribes, Joshua Greene argues for a dual-process description of human morality—an emotional (*1) process that has wired-in behaviors driving members of a group into the “cooperate-cooperate” corner of the Prisoner’s Dilemma (*2) and that provides controls against members of the group defecting, and a cognitive (*3) process that maximizes some goal. Greene argues that the goal maximized by the cognitive process is a form of happiness maximization—that is, Utilitarianism.

Greene correctly points out that the emotional (Pavlovian) system is very good at controlling within-tribe behavior, but that it also produces out-tribe xenophobia, which is obviously not conducive to between-tribe morality. Greene then suggests that we need to use the more cognitive system to mediate between these tribes, and that some sort of happiness-maximization function is the only possible answer. (*4)

The problem is that you have to assume that the other tribe shares your definition of the optimization function. If one tribe says that all people (Sneetches) with stars on their bellies (from Dr. Seuss’s The Sneetches) are real people, and all Sneetches without stars are not, then you can’t convince them to let those non-star-bellied Sneetches go.

Let’s take Greene’s slavery example. If you believe that non-star-bellied Sneetches do not count in the grand sum total of happiness, then you can increase happiness by making the non-star-bellied Sneetches slaves to serve the star-bellied Sneetches. If I could build a robot to do all the hard labor, so I could take a life of ease, would that be OK? I’m pretty sure that Greene doesn’t object to the use of automatic vacuum cleaners (like the Roomba) to clean floors. But what if the robot were sentient? Is it OK to use horses to draw carriages? In the American Civil War and in World War II, we were faced with cultures that removed certain populations from the sum of total happiness. How do we handle those tribes?
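To make the bookkeeping explicit, here is a toy calculation of my own (the groups, policies, and happiness numbers are invented for illustration, not taken from Greene): the same maximize-total-happiness rule endorses opposite policies depending on whose happiness is included in the sum.

# Toy utilitarian bookkeeping (illustrative numbers, not Greene's):
# the same "maximize total happiness" rule, applied with two different
# definitions of who counts as a person.

happiness = {
    # hypothetical happiness of each group under each policy
    "enslave the star-less": {"star-bellied": 10, "star-less": -8},
    "free the star-less":    {"star-bellied": 6,  "star-less": 5},
}

def total_happiness(policy, who_counts):
    """Sum happiness over only the groups this tribe counts as people."""
    return sum(h for group, h in happiness[policy].items() if group in who_counts)

# A tribe that counts everyone concludes that freeing the star-less is better...
everyone = {"star-bellied", "star-less"}
assert total_happiness("free the star-less", everyone) > total_happiness("enslave the star-less", everyone)

# ...while a tribe that counts only star-bellied Sneetches reaches the opposite
# conclusion from exactly the same maximization rule.
only_starred = {"star-bellied"}
assert total_happiness("enslave the star-less", only_starred) > total_happiness("free the star-less", only_starred)

No amount of happiness arithmetic settles the disagreement, because the disagreement is about which entries belong in the sum.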

Historically, there have been two solutions to the cross-tribe problem. The first is some mechanism to integrate the tribes into one (e.g., integration in the US in the latter half of the 20th century); the second is applying the same punishment for defection (called “altruistic punishment” because one sacrifices some of one’s own resources to punish defection) at the group level instead of the individual level. I would argue that both of these are more realistic options for the inter-tribe problem than Utilitarianism.


*1 In The Mind within the Brain, I call this action-selection system the Pavlovian system—it is a set of species-specific action repertoires that are released in appropriate situations. The Pavlovian system learns the situations in which to release these actions. Categorizations of these action repertoires are the emotions (lust, anger, fear, love, disgust, etc.).

*2 The Prisoner’s Dilemma is a basic game that gets at the deep question of when we cooperate and when we cheat each other. The quick description is that each player has two choices (cooperate or defect). Both players cooperating is better for each than both players defecting, but if one player defects and the other cooperates, the defector does better than both cooperating and the cooperator does worse.
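As a concrete sketch (the specific payoff numbers are my own, chosen only to satisfy the ordering just described), a minimal payoff table might look like this:

# Toy Prisoner's Dilemma payoffs (illustrative numbers; any values with
# temptation > mutual cooperation > mutual defection > sucker's payoff work).
PAYOFFS = {
    # (my choice, other's choice): my payoff
    ("cooperate", "cooperate"): 3,  # reward for mutual cooperation
    ("cooperate", "defect"):    0,  # sucker's payoff
    ("defect",    "cooperate"): 5,  # temptation to defect
    ("defect",    "defect"):    1,  # punishment for mutual defection
}

# Whatever the other player does, defecting pays me more than cooperating...
assert PAYOFFS[("defect", "cooperate")] > PAYOFFS[("cooperate", "cooperate")]
assert PAYOFFS[("defect", "defect")] > PAYOFFS[("cooperate", "defect")]
# ...and yet both players cooperating beats both players defecting,
# which is why the "cooperate-cooperate" corner is worth reaching.
assert PAYOFFS[("cooperate", "cooperate")] > PAYOFFS[("defect", "defect")]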

*3 In The Mind within the Brain, I call this action-selection system the Deliberative system—an ability to search through potential future outcomes and evaluate those imagined outcomes to determine what would be best. It is slow and cumbersome, but very flexible.

*4 I am going to sidestep the usual issues of “How do you define happiness?” (as Greene notes, we can get pretty self-consistent estimates simply by asking, and besides, we only need to know the change in happiness to decide whether doing something is good—it’s good if it increases total happiness) and “How do you handle the suggestion of trading your one kid for a dozen unknown kids across the world?” (this is a conflict between Greene’s emotional and cognitive moralities).
