A Response to Sam Harris's Writings on Moral Truth Pt 2 of 3
Why Sam Harris is correct about so much concerning morality except moral truth.
Posted March 3, 2015
[This is part 2 of a 3-part blog post response to Sam Harris's book, The Moral Landscape. This portion of the response will make much more sense to you if you first read Part 1.]
First Comprehensive Articulation of my Noncognitive Theory of Morality: Real Utilitarianism
After graduate school my research focus shifted toward the study of personality test validity. Yet I never lost interest in the nature of morality, and when I was invited in 1995 to give a lecture the following year to the Religious and Philosophical Forum at the Penn State Schuylkill Campus, I saw this as an opportunity to articulate my evolving views on morality. I was invited to give my talk based upon the Socioanalytic Theory of Moral Development paper, but what I presented was my own recent thinking about moral goodness, a position I called Real Utilitarianism. I posted a preview of my 1996 lecture to the Religious and Philosophical Forum in my personal web space in 1995 and have revised it on several occasions. The current version is available at http://www.personal.psu.edu/~j5j/virtues/morality.html; I will summarize the main points here and then compare my view to the view presented by Harris in The Moral Landscape.
The central feature of Real Utilitarianism is the idea that the only way to determine whether something is "good" is to consider what it is good for, that is, its utility or usefulness. If I am asked whether a hammer is good (or if hammering is good), there is no coherent way to answer the question. But if I am asked whether hammering with a hammer is good for joining pieces of wood together with nails, the answer is "yes." On the other hand, hammering with a hammer is not good for joining pieces of wood together with screws. A screwdriver is good for that. Real Utilitarianism says the same for behaviors we usually describe as part of the moral domain, such as stealing, lying, and killing. Real Utilitarianism claims that no behaviors—whether amoral or moral—are intrinsically good or bad in an absolute sense. Rather, behaviors are good for bringing about a specific, limited range of effects and not good for bringing about other effects. Stealing might be good for acquiring things without exchanging something of equal value. But stealing is not good for maintaining an honest reputation or for staying out of prison.
Like the classic utilitarianism of John Stuart Mill, Real Utilitarianism is a form of consequentialism, which claims that the goodness of an act can only be judged in terms of its consequences, in other words, what the act is good for. The difference between the two is that Mill's utilitarianism considers only one consequence for judging an act's goodness: the total amount of pleasure and pain (or happiness and unhappiness) experienced by all people as a result of the act. Real Utilitarianism appreciates human happiness as an important, special kind of consequence of actions, but does not limit itself to this single consequence. Real Utilitarianism holds that the goodness of an act—in the most general sense of the word—can be understood only in terms of the consequences the act is good for producing. These consequences may or may not have an impact on human happiness. If a particular act has a widespread effect on human happiness across the planet, Real Utilitarianism looks a lot like classic Utilitarianism. In my essay, however, I argue that most of our behaviors have an impact far short of the happiness of humanity, but the goodness of these behaviors can still be evaluated in terms of what they are good for. Nobody on the planet cares whether I water a square foot of earth in my backyard every day. Still, I might call this behavior "good" in the sense that it is good for making mushrooms grow and looking at mushrooms makes me happy. The focus in Real Utilitarianism is on a behavior's pure utility—its ability to cause consequences regardless of whose happiness is affected—which inspired the tongue-in-cheek label "Real" Utilitarianism. A more serious and accurate label for my position might be "Wholly Generic Utilitarianism."
The 1995 Real Utilitarianism essay suggests that most of the time we do not realize that the goodness or badness we perceive in activities is based on utility (what the activity is good for). Rather, we automatically perceive activities to be "good" when accompanied by positive emotions, and "bad" when accompanied by negative emotions. Unless we've studied evolutionary psychology, we remain unaware that all of the basic moral emotions (empathy, shame, embarrassment, guilt, outrage, disgust) evolved as signals about what is good for or not good for creating consequences that impact upon survival and reproduction in social animals. The compelling immediacy of our moral emotional reactions is what leads us to see certain phenomena as obvious moral "truths." Yet the feeling of certainty that we are in possession of truth is just that: a feeling (as documented by Robert Burton in his book, On Being Certain: Believing You Are Right Even When You're Not).
After the initial exposition of Real Utilitarianism in the Religious and Philosophical Forum talk, I continued to develop my position by comparing it to other accounts of morality and to current research on moral judgment and behavior. One of the first things I noticed while perusing the philosophy of morality was that my conception of moral goodness was similar to the ancient Greek concept of virtue, aretē. Aretē (ἀρετή) means excellence in fulfilling a purpose. A sharp knife has aretē because its purpose is to cut; a dull knife, in contrast, lacks aretē. (This is apparently also similar to Robert S. Hartman's notion of goodness, although a reading of his essay "The Science of Value" did not indicate familiarity with what the ancient Greeks had written about aretē.)
The ancient Chinese also seemed to hold a similar view, as their word for virtue, Te (德), refers to an inner potency, a power to make something happen, or an ability to cause certain consequences. The title of what I regard as one of the wisest books in existence, the Tao Te Ching, is translated as The Way and Its Power. Thinking of virtue as the power to create certain consequences might strike many of us as odd, but remnants of this kind of thinking can be seen in archaic uses of the word virtue such as the healing virtue of an herb. All of this is consistent with the central thesis of Real Utilitarianism, that goodness can be meaningfully understood only in terms of what something is good for, that is, what it has the power to accomplish.
In 2000 I coauthored an article with Mike Cawley and Jim Martin on the connection between virtue and personality. Thanks largely to the writing of Gordon W. Allport, scientific personality psychologists have been eager to distinguish a value-free conception of personality from the value-laden concept of character. Such a distinction was apparently motivated by a desire to separate personality psychology from its roots in moral philosophy and to establish it as an empirical science. This motivation is understandable, especially since much of the existing literature on virtue at that time was theological. However, there is nothing unscientific in observing that the personality or character traits that we refer to as virtues are good for accomplishing certain ends. Virtues (sometimes called character strengths) are behavioral tools for solving problems of social living. They are as real as (and just as important as) the physical tools that have played an important role in human evolution.
Most Recent Articulation of my Noncognitive Theory of Morality: The Evolution of Moral Rules from Natural Laws
My idea that virtues can be thought of as behavioral tools, similar to physical tools, was reinforced by reading Lewis Wolpert's 2006 book, Six Impossible Things Before Breakfast: The Evolutionary Origins of Belief. In his book, Wolpert proposed that a critical skill for hominid survival was accurate discernment of natural, cause-effect laws relevant to tool manufacture/use. Understanding, for example, that a particular kind of stone was good for chipping the edges of other stones would have allowed for the production of good scrapers, cutters, and spearheads. Accurate "good-for thinking" (that is, correctly understanding cause-effect relationships) allowed tool users to manipulate the environment to their advantage. It seemed to me that the usefulness of "good-for thinking" might apply equally to one's own social behavior as to the manufacture and use of physical tools. It might have been advantageous for our ancestors to recognize that moral behaviors (e.g., extending sympathy, expressing moral outrage, making appeasement gestures) caused useful reactions (reciprocity, restitution, forgiveness) in conspecifics. This became the thesis of a poster I presented at the 2007 meeting of the Human Behavior and Evolution Society, The Evolution of Moral Rules from Natural Laws.
A portion of my 2007 HBES poster revisited the importance of autonomy vis-à-vis rule-attunement and social sensitivity in Hogan's three-phase model of moral development. We had ended our 1978 chapter on the three-phase model by arguing that truly moral conduct is the product of free choice, not an unconscious reflex, and that free choice requires complete self-awareness (autonomy). However, we are never fully conscious of our motives; this means that authentic moral conduct is more an ideal than a reality. In most cases, respect for authority, rules and tradition (high rule-attunement) and empathy for others (social sensitivity) are sufficient motives for moral behavior. High levels of autonomy are neither common nor necessary for moral behavior to occur.
Of what real importance, then, is autonomy? My 2007 HBES paper suggests that autonomy (making thoughtful, deliberate choices based on careful consideration of the actual, likely outcomes of one's behavior) has both costs and benefits. On the cost side, autonomous decisions are time-consuming compared to the automatic, reflexive feelings of respect for tradition (rule-attunement) or compassion for people (social sympathy). This is a disadvantage if you need to make a quick decision. It can also make you look cold, uncaring, and calculating to patriots who are passionate about upholding a group's traditions and supporting its leaders and to humanitarians who are passionate about nurturing and helping those in need. (Research by Haidt and his colleagues indicates that among political groups, conservatives are the most emotionally invested in group loyalty and leadership, while liberals are the most emotionally invested in care, protection from harm, and fairness. Libertarians are relatively unemotional, unempathic, and utilitarian in their decision-making. They are also seen as generally disagreeable.)
Although autonomous, deliberate moral judgments have disadvantageous costs, one advantage that they might have over the faster, emotional judgments is that they are better equipped to deal with the increasing complexities of the modern world. The older, emotion-based forms of moral judgment evolved during a time when our ancestors lived in small groups where everyone knew each other well. Also, technology was simple. Although these older methods of moral judgment may still function perfectly well today in our face-to-face dealings within our small circles of acquaintances, we are not emotionally equipped to deal with moral dilemmas that involve global-scale issues such as mass poverty and disease. Moral confusion is intensified by technological developments. Modern warfare allows killing at a distance on a scale unfathomable to our ancestors. We struggle with issues of appropriate communication and privacy with an Internet that can connect us to millions of people we do not know. Developments in food and medical technologies have improved the quality of life for many but have also raised issues about humane treatment of livestock, the safety of additives and genetic modification, and the prolongation of life at any cost. And modern economies have created degrees of resource inequality that were impossible in hunting-gathering groups, raising questions about economic fairness.
The complexities of modern life can tempt people to retreat to their familiar, emotion-based judgments. This may provide comforting cognitive closure to questions about who is to blame for the rise in unwed teenage motherhood or the conflict in the Middle East. But when groups of people retreat in this fashion to different positions based on different emotions, the result can be gridlock and failure to resolve the issues. This is when autonomy has a chance to play a role in moral assessment. Autonomy is the moral-psychological process that consciously recognizes the "good-for" nature of behavior. It insists on asking what consequences are the most important to us (reduced teenage motherhood; peace in the Middle East) and then determining which behaviors are most likely to lead to those consequences. Autonomy admits that, along the way, these utilitarian behaviors can create other side-effect consequences that are emotionally repugnant to us. But if the value or importance of the final result outweighs the importance of the side effects, then the end justifies the means.
Autonomy, then, is an arbiter of conflicting emotions and motivations rather than a motive itself. Just because it is a cognitive rather than emotional process, however, does not mean it is designed to seek "moral truth." Moral judgments such as "life is sacrosanct" reflect our feelings about issues, not objective facts about issues. The only truth discoverable by autonomy concerns the behaviors most likely to bring about certain desirable consequences, once we establish which consequences are most desirable to us. Because autonomy represents a method for achieving desirable consequences rather than a moral feeling itself, it cannot function alone as a guide to moral behavior. Discernibly moral behavior depends on the combination of autonomy with rule-attunement and/or social sensitivity.
In a 1973 Psychological Bulletin article, "Moral Conduct and Moral Character," Hogan considers the characterological consequences of combinations of high and low levels of rule-attunement and social sensitivity for school-age children. Students who are low in both qualities are likely to be delinquents, and those high in both qualities are likely to be considered morally mature. A student who is highly rule-attuned but socially insensitive is what Jean Piaget called the petit saint (little saint), who ignores peers while groveling before adults in authority. A student who has low rule-attunement but high social sensitivity is what Piaget called the chic type, who flouts adult rules but experiences strong solidarity with peers. But what does the presence or absence of autonomy mean in combination with rule-attunement and social sensitivity?
In STMD, Hogan, Emler, and I outline three patterns of non-autonomous moral conduct: moral realism, moral zealotry, and moral enthusiasm. A moral realist is a former petit saint who, even as an adult, never developed an awareness of the purpose of rule-following. The moral realist's over-accommodation to authority and institutionalized rules leads to rule-following as an end in itself, even when such behavior is self-destructive or harmful to others. Moral zealots are former chic types who enjoy aggressive confrontations such as protest and even terrorism in the name of social justice, unaware that they are partially motivated by hostility toward authority. Moral enthusiasts, despite their conventionally moral behavior and do-good intentions, lack the perspective that comes with autonomy. Consequently they become swept up in popular moral causes, failing to discern the relative importance of different social issues or the actual consequences of their behavior; this lack of awareness diminishes their effectiveness.
What autonomy adds to rule attunement and social sensitivity is thoughtful, deliberate reflection about the likely consequences of one's behavior. Autonomy by itself is passionless and has no motivating force. In fact, an autonomous person who lacked rule attunement and social sensitivity could be a sociopath, considering the welfare of others only when useful to personal gain. On the other hand, when a person is motivated by rule attunement or social sensitivity (or both), autonomy can help the person to achieve the desired aims of these motives (maintaining the established order; promoting social solidarity) by carefully considering the actual likely consequences of different courses of action.
Even the strongest proponents of the emotion-based view of morality such as Joshua Greene and Jon Haidt recognize that moral judgments are not entirely driven by gut feelings. Greene and Haidt follow what they call a "dual process" view of moral judgment in which people make spontaneous initial judgments based on feelings but can elaborate upon or even change their judgments through further rational, deliberate cognitive processes. Although I did not recognize it at the time of the 2007 HBES paper, autonomy from the Hogan model is similar, if not identical, to the rational, cognitive portion of Greene and Haidt's dual-process model.
Although Greene, Haidt, and I all recognize a role for rational cognition in moral judgment and behavior, we remain noncognitivists because we assert that there are no ultimate moral truths to be discovered by rational cognition (autonomy). Rationality cannot determine which behaviors are actually good or bad in the same way that we can determine the actual boiling point of water or whether a is actually greater than c if a > b and b > c. Empirical and logical truths exist independently of human reasoning, and human reason can discover some of these truths. We can determine whether the statement "Water boils at 100 °C at sea level" is true or false. But moral truths do not exist, so reason cannot determine whether the statement "Obedience is good" is true or false. Reason can determine only what obedience, disobedience, helping, harming, etc., are good for; that is, the natural cause-effect relations between these behaviors and their outcomes.
The notion that behaviors are not inherently good or bad and that we can only evaluate what behaviors are good or bad for is likely to violate our intuitions. It seems obvious to many people that slavery, torture, genocide, and other behaviors that harm people are just plain bad, period, paragraph, end of story. "Harming people is bad" seems like a moral truth to most of us. "Treating people fairly is good" also seems like a moral truth. But that is simply because most of us have enough empathy for others that we feel bad when they are harmed and feel good when they are treated fairly. Unless we've studied evolutionary psychology, we don't understand why we feel good about protecting others from harm and treating them fairly. (It is because these evolved emotional tendencies motivated our ancestors to engage in behaviors that instrumentally contributed to their own survival and reproduction).
Evolutionary psychologists also tell us that remaining unaware that our altruistic emotions are also self-serving helps us to be more persuasive and influential. If my instinctive, reflexive caring and concern for others moves me to spontaneously help and protect them, this is likely to persuade them to treat me well in return. Presumably this is because they perceive my caring as genuine and authentic rather than a contrived display to curry favor. They might even attribute to me a stable, reliable disposition to be helpful, making me a person worth forming a relationship with by helping in return. Remaining unaware that my feelings of care and concern that lead to spontaneous impulses to help others were designed by evolution to get them to behave favorably toward me serves me well. In contrast, if I self-consciously pretend to care about others in order to manipulate them (consider the friendly, helpful demeanor of a used car salesperson), they may be less inclined to treat me favorably. Still, moral behaviors arising from emotional reflexes involve as much self-serving manipulation of others as calculated efforts to do good; we are just seldom aware of this. Hogan was fond of quoting Malcolm X on this issue. Malcolm X said, "Doing good is a hustle, too."
There is one particular set of moral behaviors, however, in which the attempt to manipulate others is more obvious: moral pronouncements and moral exhortations. A moral pronouncement is a declaration of what is good, e.g., "Sharing what you have with others is good!" Moral pronouncements are meant to persuade others to do what you say is good and avoid doing what you say is bad. They are indirect requests, building on our shared understanding that we ought to do what is good and avoid doing what is bad. Moral exhortations are more direct, e.g., "Share what you have with others [because sharing is good]!"
In both my 1996 and 2007 papers, I hypothesized that the effectiveness of moral pronouncements and exhortations is enhanced if the "goodness" of the demanded behavior is presented as a moral truth and not just an instrumental cause that will bring about a desirable effect for the person engaging in the behavior. If this hypothesis is true, then telling someone that sharing is good is more likely to get them to share than explaining the personal benefit to them (that others are more liable to like them and return favors if they share) or to society (everyone will get along better if everyone shares). I don't know whether anyone has tested this hypothesis, although a recent study by Kreps and Monin (2014) found that people are more likely to see an argument as moralizing if it presents a behavior as "simply the right thing to do" rather than as something that will bring about a desirable result.
When I was nearing the end of the first draft of this essay, I took a break to read a book that has been on my reading list since it was published, Joshua Greene's Moral Tribes (Penguin Press, 2013). Greene is a consequentialist and classic Utilitarian who marshals impressive experimental evidence and good arguments for adopting a utilitarian stance. As a classic Utilitarian, he denies the reality of moral truths, including rights and duties. Nonetheless, he has no problem using the language of rights as a rhetorical device, to express heartfelt, nonnegotiable feelings about a moral issue. If using a certain kind of language gets better results than using a different kind of language, a pragmatic utilitarian will use the language that actually brings about the desired consequences.
In their general discussion of their research on the language of moral truth and the language of utilitarianism, Kreps and Monin draw a conclusion that might have been unintentionally ironic. They reviewed their finding that observers perceive a person who uses the language of rights and duties as more moralizing than a person who uses utilitarian language of costs and benefits and then discussed an implication for leaders who want to manage how they are perceived. Given that other research has revealed that people who moralize are perceived as particularly authentic, Kreps and Monin advise leaders who want to create an impression of authenticity to communicate in the language of moral truths rather than the language of practical consequences.
[Stay tuned for Part III, "Evaluation of the Thesis of The Moral Landscape from My Noncognitivist Viewpoint," which uses the background in Parts I and II to demonstrate what is wrong with the main thesis of The Moral Landscape.]