
How Science Can Help Us Be More Rational About Morality

What Sam Harris gets wrong about the evolution of moral systems.

We are, as a species, remarkably preoccupied with making rules about how people ought to behave. This moralistic tendency is not an inherently good or bad quality; it's simply a fact of human nature. But it would be nice if people could be more rational in the way they create moral rules. Although human rationality is foundational in some domains (like science and technology), it often falls by the wayside when it comes to the production of morality.

Research by psychologists like Jonathan Haidt and Robert Kurzban suggests that our brains generate moral beliefs in a passion-fuelled, intuitive manner. Only after we've experienced these beliefs do we produce after-the-fact rationalizations for why we think they are really based on logic rather than emotion. For example, let's say that you sense, emotionally and intuitively, that Behavior X is a disgusting act, and that anybody who engages in it ought to be punished. If asked to justify your belief, you might come up with a post hoc rationalization about the negative consequences of Behavior X (e.g., it's bad for society, for one's health, or for the environment; or God will punish us for it), even if you lack good evidence that this causal relationship really exists. That's not to say that we're utterly irrational when it comes to our moralizing; after all, some acts really do harm our health or the environment (for example), and many people weigh such considerations in their moral reasoning. Nevertheless, it's clear that when we think moralistically, we often put passion before reason. Or, as Haidt puts it, it's often a case of "the emotional dog and its rational tail".

Not that there's anything wrong in general with passion and intuition. Many of the most satisfying things in life (art, love, sex, food, etc.) would be impossible to enjoy dispassionately. But because the moral beliefs we espouse have grave consequences for both other people and ourselves, morality should not ultimately be just a matter of impulsive intuition, aesthetic judgment or personal taste. Moral rules are, after all, efforts to control the behavior of others (as well as ourselves), and in any culture that values personal freedom, this kind of imposition shouldn't be taken lightly. Cross-culturally, moral beliefs determine who is celebrated, who is ostracized, who is worshipped as a hero and who is put to death. They govern how we think about ourselves--what shames us, what we take pride in--and how others judge us. And moral beliefs don't operate only at an individual level. Whether a society as a whole can provide for its citizens, and compete successfully against other groups, may come down to the content of its moral system.

So how can we become more rational moral thinkers? The problem mentioned above--that of putting passion before reason--is unfortunately not the only, or even the most formidable, obstacle we face. A more fundamental problem is that of deciding how to even define "rationality" in a moral context. To determine whether a moral rule is rational or not, we need to decide what the goal of that rule should be: Who is the rule supposed to benefit? Sam Harris addresses this question in The Moral Landscape: How Science Can Determine Human Values. He argues that the goal of a rational moral system should be to promote the general well-being of conscious creatures. The problem with this idea is that it overlooks a principal way in which people were designed, by evolution, to use moral rules.

To understand what Harris' approach is missing, consider the arguments made by Richard Alexander in The Biology of Moral Systems. Alexander notes that people strive for goals that would have promoted their individual genetic fitness (survival and reproduction) in ancestral environments, and that an important way in which they do so is by cooperating in groups of people with whom they share common interests. By cooperating in groups, individuals can achieve their goals better than they could by acting alone, so it's in the individual's interest to cooperate. (Cooperation also presents individuals with dilemmas like the "free rider problem", but we can leave these aside for now.) While cooperating in groups, people use moral rules to influence the behavior of group members in ways that will promote group success. This, Alexander argues, is a primary evolved function of moral rule-making: it enables individuals to pursue the interests they share with others in their group more effectively. For example, if people are cooperatively building a dam to protect their village from a flood, they might use rules like "all adult villagers should work on the dam for a minimum of X hours per day", "those who contribute above this minimum should be honoured", and "those who contribute below this minimum should be shunned". (Note that the promotion of shared interests is not the only evolved function of moralizing. Another important function is to signal to other people--honestly or not--that you have an altruistic or otherwise upstanding disposition. But that's a topic for another post.)

If people use moral rules to better pursue their group interests, then it starts to become clear why Harris' proposal--that rational morality ought to promote the well-being of conscious creatures--will not generally apply. People use morality to pursue their own coalitional interests, not the interests of people in general, let alone conscious life in general. From this perspective, people judge the rationality of a moral rule not by how much it benefits conscious beings, but by how much it benefits their interest group. Now sometimes, the interests of the group may overlap with those of conscious creatures in general. For example, building the dam mentioned above would not obviously harm any conscious entity, and it would benefit the villagers, so it would seem consistent with the goal of promoting the welfare of conscious life. Another example where group interests overlap with those of conscious beings in general would be a group's effort to eradicate a disease like smallpox. However, situations such as these--where everybody has an interest in the same goal, and nobody has an interest in a conflicting goal--do not pose moral dilemmas, because they don't involve conflicts of interest between competing human coalitions.

In situations that do involve coalitional conflict, moral dilemmas cannot be solved by applying the "welfare of conscious creatures" rule. A primary reason why people cooperate in groups is so that they can compete more effectively against external groups, and moral disputes tend to arise out of these coalitional conflicts. In these contexts, you can't resolve moral debates by identifying the solution that would benefit all conscious beings, not only because this will often be difficult if not impossible, but also because that's not the goal that either side in the conflict will actually be fighting for. Consider, for instance, a conflict between loggers and hikers about whether the loggers should be allowed to cut down trees in a particular forest. The hikers might argue that this deforestation is morally wrong because it would deprive families of opportunities to enjoy nature, whereas the loggers might argue that it is morally good because it would create jobs for the support of families. Even if identifying the solution most beneficial to conscious life were possible in this situation, it wouldn't be the goal that either coalition would really be seeking. The loggers would be seeking the solution that most benefited loggers, and the hikers would be seeking the solution that most benefited hikers.

Although it may seem cynical to see morality as a strategy that individuals use to pursue their coalitional interests, this perspective actually points to the most effective way to overcome coalitional moral conflicts: by appealing to the interests of a larger group to which two competing coalitions belong. Richard Wilkinson and Kate Pickett use this strategy in their book The Spirit Level, which focuses on the effects of economic inequality in developed nations. Economic inequality creates coalitional conflict within nations, because it advantages some citizens (the upper class) and disadvantages others (the lower class). The upper class tends to argue that inequality is morally good (e.g., "it's the result of rewarding people who work harder than others"), whereas the lower class tends to say it's bad (e.g., "it's the result of unequal opportunities"). Wilkinson and Pickett make an effort to transcend this coalitional conflict by focusing on inequality's impact on the larger group to which both coalitions belong: they present evidence that developed nations with higher economic inequality score worse on many different indicators of national performance. Their analysis has not been without its critics, and debates about the virtues of reducing inequality will, of course, continue. Still, Wilkinson and Pickett have the right idea about how to be rational about morality, because they attempt to assess the moral value of a practice by demonstrating its statistical relationship with measures of group performance and well-being. In doing so, not only do they appeal to our evolved tendency to make moral judgments in terms of our own coalitional interests, but they also show how an appeal to a higher-level coalitional interest (the national interest) can help transcend conflicts between lower-level coalitional interests (socioeconomic classes).

Of course, by focusing on inequality's effects on whole countries, as opposed to just classes within countries, Wilkinson and Pickett don't overcome the coalitional logic of moral rationality; they simply raise it to a higher coalitional level. I doubt that we will ever be able to eliminate people's tendency to base their moral judgments on their own coalitional interests, unless we figure out how to re-engineer the human genome towards this end. What we can do, however, when we observe conflicts of interest between competing moral communities, is to look for higher-level interests that these coalitions have in common, and that could potentially give them reasons to cooperate.

Copyright Michael E. Price 2011. All rights reserved.
