I noted in my last post that many science-minded moral philosophers suggest that morality should aspire towards utilitarian goals (maximizing welfare for the greatest number of people/beings). I also noted that although utilitarianism sounds great in principle, in practice it's unrealistic and particularly unhelpful for managing human competitiveness.
Still, I very much admire the idea that we could identify fundamental principles that should guide us morally, and that we could understand these principles in scientific (specifically, evolutionary) terms. So in this post, I’m going to take a first step towards thinking about what an optimally-designed, evolution-minded moral system would look like, and how it could correct some of the shortcomings of utilitarianism.
To take this first step, consider that you can’t design something effectively unless you have some idea of what its function is. I mean function in the sense of a biological adaptation, mechanical device, or useful institution: what is it supposed to accomplish, and for whom? A heart should pump blood for an organism; a hammer should drive nails for a carpenter; a criminal justice system should provide protection for a citizenry. What should morality do for us? That seems like a fundamental question, and it seems reasonable to expect any moral system to have a function: to serve some useful purpose for individuals and the groups they live in.
But does utilitarianism have a function, and if so, who does it serve? All it stipulates is that we should strive to maximize welfare for the greatest number of people/beings. I can see how the realization of this goal (insofar as it could realistically be achieved) could serve the interests of those whose welfare is maximized. The problem is, utilitarianism does not necessarily provide benefits for the people who "use" it. Let’s say you behave in a utilitarian manner by selling everything you own and donating the proceeds to charity, and that you end up homeless and ruined as a result, but that the happiness your altruism creates for others is nevertheless greater in magnitude than the misery it creates for you. Utilitarianism will have fulfilled its function, but not for you, the person who utilized it.
So utilitarianism may be useful, but not necessarily for the people who use it. This doesn’t sound like a very well-designed functional device to me. Useful things should be useful to the people who use them, and if they’re not, they will rarely get used. This is the most basic reason why utilitarianism is so unrealistic.
So that’s the starting point for designing an optimal moral system: make sure it will be useful to the people who use it. And how can morality be useful? It can help individuals and groups to compete with other individuals/groups in pursuit of their evolved interests (for mates, family, status, resources, etc.), and to cooperate with others so that they may compete more successfully. If a moral system doesn't actually help people pursue their interests, then no matter how nice it sounds in principle, the vast majority of people will never be motivated to use it.
I would argue, then, that morality should enable the people who use it to more effectively compete—and cooperate, in order to compete—as individuals and in groups. That’s a very general function, but in future posts I’ll go into greater detail about which specific moral principles would fulfill it.
Copyright Michael E. Price 2014. All rights reserved.