What principles should guide human moral striving? The most common evolution-minded answer has been that morality should resemble some form of utilitarianism, that is, should strive to maximize well-being for the greatest number of beings. For example, neuroscientist Joshua Greene [1] says that morality should "make life as happy as possible overall, giving equal weight to everyone’s happiness." Author Sam Harris [2] suggests that morality should promote the general well-being of conscious creatures. And Peter Singer [3], one of the most influential living moral philosophers, has long advocated the view that we should aim to maximize the welfare not just of other people but of non-human animals as well.
I’m sympathetic to the spirit of these utilitarian ideals, and can understand the reasoning behind them. Unchecked selfishness can become terribly destructive, and utilitarianism can help counteract it. Any workable moral code must demand that we consider the interests of others and not just those of ourselves and our in-group. And I understand how utilitarianism could provide fairly clear guidance on some specific moral issues—for example, how a Singer-style focus on animal welfare could lead one to vegetarianism.
Nevertheless, I wonder how feasible or desirable a utilitarian moral system could ever be in general. One problem with utilitarianism is that it’s often impossible to assess which course of action would maximize general welfare, so moral goodness is often impossible to discern. An even bigger problem is that in requiring individuals to prioritize the interests of conscious beings in general above those of their own self and in-group, utilitarianism positions itself squarely in opposition to human competitiveness, and thus sidesteps some challenges that any truly useful moral system must confront.
Natural selection is a competitive process, and it designed humans to compete with each other for status, mates, and resources. Human nature is also intensely cooperative, but a primary reason why people evolved to cooperate in groups is to more effectively compete against other groups [4]. Altruism can certainly be extended beyond one's self, in-group, and species. However, there’s no getting around the fact that people are fundamentally and powerfully motivated to compete, and any moral system that doesn't directly confront this reality is, I believe, on a quixotic quest.
Instead of overlooking or bemoaning human competitiveness, the ideal moral system would attempt to manage it productively. Competitiveness doesn’t have to be ugly; there’s nothing wrong with wanting to succeed or with wanting your team to win, and competition can lead to extraordinary achievement—especially when it is fair and nonviolent. A major goal of morality should be to manage competition towards such productive ends. Unfortunately, utilitarianism seems to offer too little guidance in this regard.
In addition to not confronting competitiveness directly enough, utilitarianism has, I think, some other downsides—for example, too little regard for individual autonomy—that prevent it from generating the kinds of moral judgments that would allow society to evolve in the best possible direction. What would a more promising morality look like? I’ll explore this issue in future posts.
1. Greene, J. (2013). Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Penguin.
2. Harris, S. (2011). The Moral Landscape: How Science Can Determine Human Values. Simon and Schuster.
3. Singer, P. (1981). The Expanding Circle: Ethics and Sociobiology. Farrar, Straus and Giroux.
4. Alexander, R. (1987). The Biology of Moral Systems. Aldine De Gruyter.
Copyright Michael E. Price 2014. All rights reserved.