Though philosophers and religious authorities have debated the question of morality (i.e., what constitutes good and evil) for centuries, Buddhism offers a relatively straightforward answer: good, or right, is that which increases the joy or decreases the suffering of conscious creatures and therefore should always be done; evil, or wrong, is that which does the opposite and therefore should not. Said another way, good is that which protects conscious creatures from harm, and evil that which subjects them to it. Good is also that which is just and fair, evil that which is unjust and unfair.
Interestingly, research has begun to show that belief in good and evil as conceptualized in Buddhism may be far more universal than previously thought. In an Internet study by psychologist Marc Hauser and colleagues, 5,000 subjects in 120 countries were presented with three moral scenarios and one control scenario and asked both to render moral judgments and to justify them. One scenario, for example, described the following conundrum: Denise is riding a train when she hears the engineer suddenly shout that the brakes have failed. The engineer then faints from the shock. On the track ahead stand five people who are unable to get off the track in time to avoid being hit by the train. Denise sees a sidetrack leading off to the right onto which she can steer the train, but one person stands on that track as well. She can turn the train, killing the one person, or do nothing and allow the five people to be killed instead. Was it morally permissible, Hauser and his colleagues wanted to know, for Denise to switch the train to the sidetrack?
Results showed that an astounding 89 percent of subjects agreed that Denise should steer the train onto the sidetrack. In fact, the subjects agreed about which actions were moral and which weren't in most of the scenarios, delineating in the process a set of moral principles that seem to be shared by members of all cultures: that it’s less morally permissible to intentionally harm someone than to allow them to be harmed, that it’s less morally permissible to invent a way to cause harm than to cause harm with an existing threat, and that it’s less morally permissible to cause harm directly than to cause it indirectly. Yet the vast majority of subjects couldn't name these principles as the justification for the judgments they rendered in each scenario.
Other research suggests this may, in fact, be the norm. When we take moral action, we seem to rely not so much on moral reasoning as on moral intuition and then work backward to rationalize the judgments we've already made. (Which isn't to say our moral intuition can’t and shouldn't be influenced by reason, but rather that our moral intuition remains the primary driver of our moral decision making.) Where does our moral intuition come from? The answer isn't yet clear. We know only that its rudiments seem to be present far earlier than we previously thought: research shows that children as young as three years old (an age at which it’s been demonstrated they have no ability to articulate or even understand the concepts of right, wrong, or justice) have a negative emotional reaction to being given fewer stickers than their peers when they make an equal contribution to cleaning up a roomful of toys.
All of which raises the intriguing possibility that the Buddhist conception of good as any action intended to prevent harm or provide help (that is, halt suffering or bring joy) may actually be rooted not in the authority of a god or a religion or a philosophy but in the psychological and perhaps even neurological processes of the human mind.
Yet though we all seem to agree, according to Hauser's study, that causing harm is wrong, in many cases we just as clearly disagree—often dramatically, violently, and tragically—about what harm is. In fact, the things people consider harmful vary not only from culture to culture, but also within a single culture over time and from person to person within a culture. Consider, for example, how strongly people across the globe disagree about whether harm is caused by premarital sex. What’s more, even when people do agree about what constitutes harm, they often disagree about which harms are worse than others. We can see this in the strong disagreement about whether aborting a fetus on balance causes more harm than forcing a woman to carry an unwanted child to term.
How then are we to figure out what’s right and wrong when confronted with the kind of complex moral conundrums we encounter in the real world? A belief in moral relativism, meaning that right and wrong are determined by local culture and custom, would seem to ignore the fact that people can disagree about what defines harm without disagreeing that harm defines evil. And yet a belief in moral absolutism—that some particular actions are always right or always wrong in all situations and therefore that context is irrelevant—ignores the fact that often the choice we make to prevent one kind of harm also represents a choice to cause another. That is to say, our choices are rarely between right and wrong, but are almost always between wrong and less wrong (meaning harmful and less harmful). And unlike the definition of wrong itself, what’s wrong and less wrong does change depending on the context. Killing someone, for example, would be considered wrong according to the Buddhist definition, but it may be "less wrong" if doing so is the only way to prevent the deaths of five others, or if it's done to end the life of a person suffering agonizing pain from a terminal disease.
Unfortunately, our ability to calculate all the various harms we both prevent and produce with any moral choice dwindles rapidly as we move away in both time and place from the situation in which we make it. How can we ever know our action to prevent harm today won’t cause more harm tomorrow, or that our action to prevent harm here won’t produce harm there? The answer, of course, is that we can’t. In fact, given our propensity to make moral decisions based on intuition as well as our inability to foresee all the consequences of our decisions when we make them, we might argue that the only thing of which we should be certain (besides that we should always seek to minimize suffering and maximize joy) is that our best moral judgment will always be, to some degree, flawed.
We can perhaps be helped by logic, which tells us, among other things, that we should assign greater weight to proven harms (like stealing) than to unproven harms (like cursing God's name). We could also argue that if we ever find ourselves convinced beyond any doubt that our action is absolutely right, good, and just, we should consider that we’re probably looking at our choice and the context in which we make it too simplistically. If we’re not making our moral choices with some degree of difficulty and even regret, we’re probably not thinking about them carefully enough. Finally, we could say that if we find ourselves unable to identify a person or group who won’t in some way be harmed by our choice, or if we can but we don’t care about the harm our choice will inflict upon them (if we don’t genuinely lament having to pay the price of one harm to buy avoidance of another), we must indeed consider ourselves at risk of becoming monsters. What makes people monsters, in other words, isn't their belief that the ends justify the means. The ends must always justify the means. What makes people monsters is that they don’t agonize over the means they feel forced to choose.
To those who might be tempted to cite our flawed judgment as a reason to avoid choosing at all, Buddhism would counter that not choosing is even worse than choosing something "less wrong." For as Edmund Burke reminds us, all that’s necessary for the triumph of evil is for good men to do nothing. In fact, from a Buddhist perspective, allowing an injustice to occur when we have the power to prevent it is to become complicit in committing that injustice ourselves. Justice, in the Buddhist worldview, exists only because human beings make the effort to stand against injustice.
Note: This post was excerpted from my book, The Undefeated Mind. Readers interested in the references that support the ideas listed above are invited to refer to Chapter 5, “Stand Alone.”
Dr. Lickerman's book, The Undefeated Mind: On the Science of Constructing an Indestructible Self, is available now. Please read the sample chapter and visit Amazon or Barnes & Noble to order your copy today!