
A Response to Sam Harris's Writings on Moral Truth Pt 1 of 3

Why Sam Harris is correct about so much concerning morality, except moral truth.

I began reading Sam Harris's The Moral Landscape: How Science Can Determine Human Values some years ago with a sense of both curiosity and trepidation. Harris's previous book, The End of Faith: Religion, Terror, and the Future of Reason, had already convinced me that Sam Harris is capable of brilliant, rational analysis. And now he had written a book on one of my longest-standing interests: the nature of morality. One of my favorite writers has written a book on one of my favorite subjects; what could be more wonderful?

The one detail that worried me was that I knew from a pre-release description of the book that Harris would be arguing for the existence of something I do not believe exists: moral truth. Those who do believe in moral truths claim that moral pronouncements about what is moral/immoral (good/bad; right/wrong) and also statements about what we ought or ought not to do can be evaluated as true or false, just as logical, mathematical, and empirical pronouncements can be evaluated as true or false. From this viewpoint, a statement such as "Homosexual marriage is immoral, therefore we ought not allow people of the same sex to marry" can be evaluated as true or false, just as "If a>b and b>c then c>a" or "2+2=5" or "Water boils at 50° F" can be evaluated as true or false. Harris further argues for a method of determining whether a moral pronouncement is true or false: True moral pronouncements are those that increase happiness/flourishing/well-being, while false moral pronouncements are those that decrease happiness/flourishing/well-being. And science can show us what is truly moral by revealing what increases or diminishes well-being.

I do not have a problem with talking about the goodness or badness of thoughts, feelings, and behavior in terms of their impact on well-being. I think that is indeed a most sensible way to evaluate our thoughts, feelings, and behavior. The problem I have is with Harris's attempt to frame this evaluation within the concept of moral truth, because I have been convinced for at least the past 30 years that moral pronouncements cannot be judged in terms of truth or falsity. I think that the quality of being true or false does not apply to moral pronouncements any more than the quality of being red or green applies to odors. An odor does not have a color; therefore any attempt to describe the color of an odor is inappropriate. Similarly, I believe that statements about what is moral/immoral are not truth-apt—they are neither true nor false, and any attempt to describe them as true or false is inappropriate. If I am correct about this, then Harris's quest to document moral truths is as likely to succeed as a quest to document the colors of odors.

I will explain my own reasons for denying the truth-aptness of moral statements below. Before I begin, however, I want to note that my position is far from original. My position is a version of what ethical philosophers call noncognitivism. Philosophers have laid out arguments for noncognitivism (see http://plato.stanford.edu/entries/moral-cognitivism/) that are much more sophisticated than my own. I arrived at my own noncognitivist position as a psychologist who has studied and interpreted what people are actually doing when they make moral pronouncements. In a nutshell, my conclusion was that a moral pronouncement represents an expression of positive feelings and approval (or negative feelings and disapproval) designed (consciously or not) to persuade others to follow a course of action desired by the person making the pronouncement. The proximate mechanisms underlying moral pronouncements are automatic emotional reactions (moral sentiments), and the ultimate explanation for these moral sentiments is the same as for emotions in general: they evolved through natural selection because these moral emotions favored the survival and transmission of genes.

Although Harris describes as a "worthy endeavor" (p. 49) efforts such as mine to describe and explain morality from a psychological and evolutionary perspective, he says that such efforts are irrelevant to two other projects that interest him more: (1) determining how we ought to think and behave in the name of morality through clearer thinking about the nature of moral truth and (2) convincing people who think and behave in silly and harmful ways in the name of morality to change their ways by presenting them with moral truths. I enthusiastically support the goal of persuading silly and harmful people to behave in more sensible and beneficent ways, but not by reasoning with them about moral truths. My reluctance to influence others' behavior by presenting them with "moral truths" stems not just from my disbelief in moral truths (although that would be enough). Even if I could concoct moral statements that sounded reasonable and truth-like, I do not think such statements would change many people's minds. Psychological research shows that our primary moral judgments are emotional and intuitive. Rational discourse can rearrange some of the details of moral judgments, but not our commitments to what we feel is fundamentally right and wrong.

Harris is certainly familiar with the research I am talking about because in his book he discusses two research programs that have reached the same conclusion, those of Jonathan Haidt and Joshua Greene. Other researchers have found the same results. One has to wonder why he dismisses their work as irrelevant to improving the human condition. Personally, I think our only hope of changing the behavior of silly and harmful people is to understand through psychological research how their thoughts and feelings about morality give rise to their behavior. It seems to me that effective interventions must be based on the way the mind actually works. I have a theory about why people like Harris try to use moral truths to influence people who are silly or harmful, and I will present that theory later. (The short version is that our own strong feelings trick us into thinking we possess moral truths, and we think we can better persuade people when we have truth on our side—as opposed to simply having a strong moral conviction.) But first I want to describe the ideas and evidence that led me to noncognitivism, and then elaborate on my particular version of noncognitivism and why it denies the existence of moral truths.

Some readers may find it odd that I describe events in my personal life that I believe led me to a noncognitivist view of morality. They might rather simply hear my arguments for the noncognitivist position I hold today, so they can judge whether these arguments are stronger or weaker than the arguments Harris makes for his position on morality. I include the development of my thinking for two reasons. The first reason is simply to show that I did not start thinking about the nature of morality yesterday; my response to Harris is based on over 40 years of study of morality. Second, I believe that any person's current thinking is better understood by providing a timeline of the experiences leading up to the present. My experience tells me that scientists are not mere logico-empirical machines, inferring truths from observation and logic. As human beings, we are subject to the same social, emotional, and motivational influences that affect all people: we are personally attracted to or repulsed by teachers, we have different aesthetic tastes about ideas, and we have hopes, desires, and preferences that can bias our thoughts and perceptions. I therefore begin with some biographical background (a reconstruction that admittedly might itself be biased) as a context for understanding my present thinking about morality.

How I Arrived at Noncognitivism – Undergraduate Experiences

I remember first grappling with the concept of goodness in my freshman writing course, when I wrote a playful imaginary dialog between Socrates and a young man I called Frey. Perhaps unfairly, I set up Socrates as an apologist for those who say that denial of physical pleasure is good because the denial of pleasure leads to immortality of the soul. Frey goads Socrates into admitting that the prospect of immortality makes him happy, although he cannot be absolutely certain that he can achieve immortality by denying himself pleasure. Frey argues that it is not only natural to seek physical pleasure, but also that he is certain, based on experience, that pleasure will bring him happiness. Frey proposes that what is good depends upon a person's nature, what makes that person happy. For Socrates, it is natural to feel happy while seeking the truth that he believes will set his soul free forever. For Frey, it is natural to feel happy while seeking carnal pleasure. Thus, in my Socratic dialog on goodness, I propose that goodness is to be understood in terms of a natural (biologically given) feeling (happiness), but that individual differences among our natures will mean that different things make different people happy, so what is good for one person might not be good for another.

Fast-forward two years, when a question about morality suddenly popped into my head: "Philosophers have proposed a variety of ethical systems that prescribe how we ought to behave. What would the differential evolutionary consequences (survival and reproductive success) be for people following these ethical systems?"

To investigate this question, I negotiated an independent study project under the supervision of Dale B. Harris, a faculty member of the psychology department. There was no way of actually testing how following Kant's categorical imperative or Kierkegaard's teleological suspension of the ethical would affect biological survival, so the paper I wrote for the project was an entirely speculative thought experiment. Harris assigned a number of books for me to read, and I contemplated how following different ethical prescriptions might be adaptive or non-adaptive. He also had me read biologist C. H. Waddington's The Ethical Animal, a book that addressed precisely the same issue that I was studying.

My positive experience in the independent study project led me to take an upper-level, extremely rigorous course in humanistic psychology from Harris. One of the books we read for the course, G. Marian Kinget's On Being Human, contained an extensive chapter on ethics, and the epilogue of the book addressed the question, "What is a good life?" Amongst the answers explored in the epilogue, I was most taken by a definition of good attributed to Robert S. Hartman, namely, that a good object is one that fulfills its concept (i.e., does well what it was designed to do). A good knife cuts well, a good spade digs well, and a good yardstick measures accurately. Although it is easier to understand the goodness of artifacts that were designed for one purpose, it seemed to me that the goodness of human beings could, in theory, be understood in terms of how well they fulfilled what they were designed to do by natural selection.

At the end of my undergraduate studies, then, I had arrived at a noncognitive view of morality. People, as far as I could tell, judged goodness and badness in terms of their emotional reactions to events. We call events that make us happy "good," and events that make us unhappy, "bad." To the degree that different events make different people happy or unhappy, goodness is relative to the person. An evolutionary perspective gives us a deeper understanding of the evaluation of events as good or bad. Natural selection designed our brains to experience positive emotions when events favor the successful functioning of biological processes that were designed to promote survival and reproduction. In my graduate work I began to spell out what some of those biological processes might be.

Further Development of My Noncognitivism – Graduate School Experiences

I enrolled in the psychology graduate program at Johns Hopkins with the intention of studying psychological factors that affect the conduct of science, under the supervision of the chair of the department. That did not work out, so I switched to another advisor, Robert Hogan, at the end of my first year. Hogan was a personality psychologist who had spent the first 10 years of his career criticizing the dominant theory of moral development at that time, the cognitive stage theory of Lawrence Kohlberg. I knew nothing about personality psychology, but I did believe in biologically based individual differences in our natures, and I found that Hogan held an evolutionary view of personality and moral development. That was enough for me to become one of his students.

According to Hogan, Kohlberg's model of moral development suffered from a number of weaknesses that his alternative model was able to overcome. Kohlberg had proposed that individuals progress through stages of cognitive-moral development. Each stage is cognitively more sophisticated than the previous stage, enabling people to resolve moral dilemmas more intelligently as they mature. Kohlberg's stages form a progression from inferior to superior moral reasoning. Those who reach the most advanced stage, Stage 6, allegedly can reason according to universal moral truths.

Hogan and his colleagues drew attention to what they considered to be weaknesses in Kohlberg's model. One was that women typically score at Stage 3, while men typically score at Stage 4, implying that men tend to be more morally mature than women. That implication is inconsistent with documented male-female differences in criminal behavior and violence. Another problem with the model is that higher levels of moral development are associated with liberal political values. Although many liberals have argued that they are indeed more intelligent and more morally advanced than conservatives, this view might be a self-serving rationalization. But the most significant problem with Kohlberg's stage model is that the stages do not predict actual moral or immoral behavior. And the reason for this is simple: the stage model merely represents the complexity and sophistication of a person's thinking, without considering the feelings that motivate a person toward moral or immoral behavior.

Hogan's model ties moral development to the emotions and motivations of personality dispositions rather than to cognitive stages. Specifically, the model posits three dispositions—rule-attunement, social sensitivity, and autonomy—that emerge roughly in early childhood, middle childhood, and adolescence, respectively. Hogan derived his notion of the three dispositions explicitly from the three elements of morality described by Émile Durkheim in his book Moral Education (discipline, attachment, and autonomy). But whereas Durkheim assumed that these three qualities were a product of education, Hogan viewed their development as a product of genetic factors and social experiences. Furthermore, he considered the evolutionary origins of the dispositions. High levels of these personality dispositions motivate adaptive behaviors that help a person deal with the pressing challenges and demands of each stage of life. Failure to achieve sufficient rule-attunement, social sensitivity, and autonomy results in maladaptive and anti-social behavior.

The major challenge of early childhood concerns the development of bonding with caretakers and the internalization of the caretakers' rules. As long as a caretaker is reasonably responsive to a child's needs, that child's natural need for approval and a safe, predictable world will result in what developmentalists call secure attachment—an unquestioning love toward the caretaker and respect for his or her rules. This respect manifests as what Piaget called moral realism, a tendency in young children to regard moral rules as absolute truths on a par with natural laws rather than as social conventions.

Whereas Piaget and Kohlberg regarded moral realism as a product of the child's cognitive immaturity, a defect to be overcome by intellectual development, Hogan describes obedience and respect for one's caretakers as a vital, adaptive tool that enables a child to rapidly acquire the knowledge needed to survive in a particular culture and physical environment. Poorly attached children with low rule-attunement have trouble learning the skills they need and later find themselves at odds with legitimate authority figures such as teachers and leaders. Rule-attunement as a personality attribute can be assessed with the Socialization (So) scale on the California Psychological Inventory. The So scale is a powerful predictor of delinquent, anti-social, and criminal behavior at the low end versus honesty, integrity, and good citizenship at the high end.

In middle childhood, when children are old enough to begin spending significant amounts of time playing with other children, they discover that rules held to be absolute in their own family are not necessarily seen as absolute in other children's families. The major challenges of this phase of life are learning how to comprehend and respect others' perspectives and how to cooperate with others when their perspectives differ from one's own. The ability to grasp and take into account others' perspectives Hogan calls social sensitivity or empathy. Sharing, taking turns, playing fairly, and compromising all arise from empathy. These social skills not only facilitate play in children, but also represent essential adaptive proficiencies for cooperative endeavors in adulthood. Failure to develop social sensitivity leaves a person at a serious disadvantage in life.

Whereas children with high rule-attunement have a strong respect for the letter of the law, children who develop social sensitivity (empathy) begin to understand the spirit of the law—how rules promote social harmony. Instead of following all rules blindly out of love for their parents, people with high social sensitivity follow rules that help them get along with the peers whom they care about. Hogan constructed an Empathy Scale that has been shown to be a strong predictor of prosocial behavior. Women are, on average, more empathic than men and therefore tend to show more compassion and caring towards others. Psychologist Carol Gilligan criticized Kohlberg's model for favoring a masculine fairness/justice orientation while neglecting this prototypically feminine expression of morality.

Getting along with authority and the rules of one's culture is the first lesson of moral development. Getting along with one's peers is the second lesson. The third lesson in Hogan's model, autonomy, is learned during late adolescence and early adulthood. The lesson here is getting along with yourself—formulating an identity that strikes an appropriate balance between satisfying your own personal needs and contributing to the welfare of society. Although we need to take into account what authority figures, peers, and cultural rules tell us, if we slavishly do only what everyone else says we should do, we are unlikely to satisfy the unique constellation of values that each of us holds at the core of our being. Autonomy involves reviewing and reflecting on what our parents and friends have told us is good and then deciding what is good for both others and ourselves. Successful achievement of such awareness allows a person to choose a vocational role that is both personally satisfying and valuable to society. Failure to achieve this awareness leads either to self-gratification at the expense of others (which can ultimately lead to social isolation or imprisonment) or to self-denial to fulfill others' expectations (which can ultimately lead to resentment, dissatisfaction, and depression).

My very first publication, A Socioanalytic Theory of Moral Development (STMD; coauthored in 1978 with Hogan and colleague Nick Emler), represents Hogan's final statement on his three-phase model of moral development. Although the model is Hogan's, not mine, I was very much on board with the noncognitive aspects of the model. The model denies the existence of the timeless, absolute moral truths that Kohlberg claimed were accessible to individuals in Stage 6 of his model. In STMD, we argue that proponents of this sort of moral absolutism are motivated by a fear of moral relativism and a desire to have an unshakeable ground for criticizing relativism. The problem, we note, is that thousands of years of philosophical debate have yet to produce complete agreement on what is morally good. Unlike the physical sciences, where we do have agreement on things like the freezing point of water and the melting point of lead, in the moral domain the basic claim of relativists is correct: what is considered morally good differs across time and cultures.

This isn't to say that there is not substantial (if incomplete) agreement around the world about behaviors such as lying, cheating, stealing, torture, slavery, and murder. The reason our intuitions tell us that these behaviors are not good is not that they are bad in some objectively real sense, apart from the functioning of human societies. Rather, these behaviors are not good for harmonious relationships within small human groups. (Yet behaviors considered immoral within a group can be considered good when directed toward people outside of the group.) The phrase "good for" is crucial for my particular noncognitive understanding of morality. No behavior is good or bad in and of itself. Rather, certain behaviors can be good for accomplishing certain aims, or bad for accomplishing those aims. Because cooperation in small groups was essential to our ancestors' survival, behaviors that were good for accomplishing that aim came to be felt as good. Our moral emotions (guilt, pride, sympathy, moral outrage, etc.) evolved in our ancestors as signals of whether social interactions were good for or not good for the effective functioning of our group.

My view of goodness as functionality (what a behavior is good for), assessed by our emotions, was cinched at the end of my graduate school career by a single footnote in a chapter titled "The Emotions" by James Averill, published in a 1980 book edited by Ervin Staub, Personality: Basic Aspects and Current Research. The text leading to the footnote reads, "there is a division within psychology between the study of cognitive-intellectual functions on the one hand, and non-cognitive (emotional-motivational) functions on the other, and that an emphasis on the latter is one of the major features of personality psychology. . . . the distinction between cognitive and emotional processes represents an historically important division of labor¹ . . ." And then the footnote reads, "¹This division within contemporary psychology reflects a much older division between mental and moral philosophy. Mental philosophy was concerned primarily with questions of epistemology, i.e., the origins and nature of knowledge, while moral philosophy was concerned primarily with questions of motivation, will, emotion, and the like. Stated more colloquially, mental philosophy had to do with truth or falsity, and moral philosophy had to do with goodness or badness. Thus, one might ask of a perception, memory, or problem solution, Is it true (veridical) or false? But one does not usually ask of an emotion or act of will whether it is true or false, although it may be judged right or wrong in a moral sense" (pp. 134-135).

For the record, the remainder of Averill's chapter on the emotions argues against a strict dichotomy between intellectual and emotional functions, contending that emotions are interpretations of experience based on cognitive appraisals of the situation. Nonetheless, I remained struck by three facts within Averill's opening footnote: (1) epistemology and axiology are historically separate domains in philosophy; (2) cognitive psychology is an outgrowth of the former, and personality psychology, the latter; and (3) the objects of study in epistemology/cognitive psychology are truth-apt, so it makes sense to ask if a perception or memory is true, while the objects of study in moral philosophy and personality are not truth-apt, so it does not make sense to ask if a motive, emotion, or act of will is true (although these aspects of character or personality can be evaluated as good or bad).

It was only years later that I realized that a particular viewpoint in moral philosophy called emotivism, championed by A. J. Ayer and C. L. Stevenson, explicitly claimed that moral pronouncements are expressions of emotional approval or disapproval rather than truth-apt propositions. This and other discoveries about morality had to await the research I undertook during the tenure and promotion phase of my career.

[End of Part I. Part II will begin with my first comprehensive articulation of my noncognitive theory of morality, which I call "Real Utilitarianism."]
