4 Keys to Understanding Our Weird, Inconsistent Morality
Research helps explain how we think about right and wrong.
Posted December 26, 2017
What factors do we pay attention to when we make moral judgments? For most of us, it depends.
First, outcomes definitely matter. Research shows that even babies prefer those who are nice to others over those who are neutral or mean.
Moreover, babies prefer those who behave positively toward nice individuals, and they avoid those who behave positively toward mean individuals. Put simply, babies prefer those who are nice to the nice and mean to the mean.
From an early age, then, we judge the moral behavior of others and use this information when deciding who we like.
But for adults, it’s not just outcomes that matter. When making moral verdicts, we pay close attention to intentions, too.
Did the Chairman Mean to Harm the Environment?
Cross-cultural research suggests that it is a general principle of morality, a “cognitive universal,” that people consider both intentions and outcomes.
But people think about intentions and outcomes differently depending on the situation.
For example, there’s the Knobe effect. Here is the famous scenario from the original paper:
“The vice-president of a company went to the chairman of the board and said, ‘We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.’
The chairman of the board answered, ‘I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.’ They started the new program.
Sure enough, the environment was harmed.”
When asked whether the chairman intended to harm the environment, 82% of respondents said yes.
But something strange happened when a single word was changed.
In a different version of the story, researchers replaced the word “harm” with “help.” Every other part of the story was the same except that word. Researchers then asked participants whether the chairman intended to help the environment.
77% said the chairman did not intend to help.
What does this mean? The outcome of an action (harmful or helpful) leads us to retroactively change our perception of facts (in this case, whether a person meant to do something or not).
If something bad happens as a side effect, we think the person did it intentionally. But if something good happens as a side effect, we don’t think the person did it intentionally. Why not?
One explanation comes from the philosopher Richard Holton. Holton states that the best way to explain the Knobe effect is to identify whether a person violates or conforms to a norm. For example, if a person does something knowing that a side-effect of the action will violate a norm, we view it as intentional. But if a norm is upheld as a side effect, it is not viewed as intentional.
We tend to assume that people uphold norms thoughtlessly, but that violating a norm takes conscious intent.
Free Will and the Asian Disease Problem
Moreover, it's not just intent. We’re inconsistent about our attribution of free will, too.
In a series of experiments, researchers presented participants with an adapted version of the Asian Disease Problem. In the scenario, 600,000 people are about to die from an impending disease.
The participants then read about a person who must decide between two options: the “risky” option and the “safe” option.
The risky option offered a one-third chance of saving everyone and a two-thirds chance that everyone would die. The safe option would save one-third of the people for sure, but the other two-thirds would definitely die. Statistically, the two options are equivalent: each saves 200,000 of the 600,000 lives on average.
Researchers told participants to imagine that they, or the person in the scenario, chose the risky option.
Half the participants were told that the decision maker in the scenario succeeded in saving everyone. The other half were told that the decision maker failed and all 600,000 people died.
They were then asked how much free will the decision maker had in making the choice.
Overall, participants assigned more free will to the person whose decision led to all 600,000 people dying.
The norm violation idea from Holton makes sense here, too. If a person succeeds in helping others, they have upheld a norm. But if a person fails at helping others, they have violated a norm.
In sum, people selectively assign free will to others depending on the outcomes of their actions. People assign greater intent and free will when bad things happen.
Blame And Punishment
Recent research suggests that two cognitive processes clash when we make moral verdicts: one evaluates intentions, the other evaluates outcomes.
The intention process asks: Did the person mean to do it, or was it an accident?
The outcome process asks: What actually happened, and who caused it to happen?
Friction between these processes leads us to assign blame and punishment differently.
Suppose a driver unintentionally runs a red light. The driver crashes into another person, who dies as a result.
Under the two-process model, we experience a conflict between weighing the driver’s intent and weighing the outcome of the driver’s action.
We know the driver didn’t mean to harm anyone. People wouldn’t assign much blame. But many people would still want the driver to be punished in some way.
Yet people’s intuitions differ for cases where a person intends to cause harm but is unsuccessful.
Imagine a driver wants to hit another person but misses. Nothing bad actually happens.
Here, people are more willing to assign blame. The person wanted to do something bad, after all. But people would be less willing to punish the second driver, who did not cause harm, compared to the first driver, who did.
In other words, people think those who cause accidental harm should be punished but not blamed as strongly, while those who attempt harm but fail should be blamed but not punished as severely. Our compulsion to punish rests mostly on whether something bad actually occurred; our compulsion to blame rests mostly on the person’s intent.
Put simply, punishment tracks outcomes, while blame tracks intentions.
Thinking Doers and Vulnerable Feelers
Moral judgment is not as straightforward as looking at outcomes and intentions, though. Another factor is mind perception.
According to moral dyad theory, for an act to be perceived as moral or immoral, it must involve two parties: a moral agent (a “thinking doer”) and a moral patient (a “vulnerable feeler”).
But it is not as simple as pinpointing an agent and a patient and from there concluding that a moral violation has occurred. The process can run in the opposite direction.
Put simply, when we think something bad has happened, we are driven to identify both a moral agent and a moral patient. For example, when we see harm and suffering, we see moral patients. To complete the moral dyad, we are compelled to find a moral agent. “Who is responsible for this suffering?”
In other words, when people see someone suffering, moral dyad theory says they will attempt to find an agent, a “thinking doer.”
Moreover, people will try to find moral patients when confronted with agents who seem intuitively immoral, even if specific victims are not immediately obvious. Examples include a greedy businessperson, a negligent engineer, or a disingenuous politician. “This person is obviously bad; there must be victims somewhere.”
Moralized consensual crimes such as marijuana use or prostitution may also elicit an attempt to identify a moral patient. “Maybe it’s not hurting them, but society is being harmed!”
Put simply, then, when individuals perceive harm, they seek to complete the moral dyad by identifying a victim and a perpetrator.
What Even Is Morality?
The researchers behind moral dyad theory state that morality does not consist of “mystical forces that exist apart from humanity, but simply what emerges through the interactions of agents and patients. To create evil, just intentionally cause another mind to suffer (e.g., kick a dog), and to create good, just intentionally prevent another mind from suffering (e.g., stop a dog from being kicked).”
A greater willingness to assign blame rather than praise in situations where the mental states of agents and patients differ accords with the view of neuroscientist Joshua Greene, who states, “Built into our moral brains are automated psychological programs that enable and facilitate cooperation.” This moral machinery operates implicitly, allowing humans to arrive at moral verdicts with little reflective thinking.
Additionally, the social psychologist Jonathan Haidt has described moral systems as “interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate self-interest and make cooperative societies possible.” Both Greene and Haidt emphasize the tribal roots of human morality. Cooperation enabled our ancestors to survive.
How To Make Moral Decisions
In fact, Greene offers guidance on when to rely on our automated moral machinery and when to be more reflective about moral judgments. Plainly, when we are dealing with members of our tribe, our in-group, relying on gut feelings is fine; odds are they will lead us to do the right thing. But when dealing with strangers, the out-group, our automated machinery is untrustworthy. Here, we should override our automated processes and use reflective thinking to do the right thing.
In-group = Use moral emotions. Out-group = Use moral deliberation.
The role of cooperation could be one reason why individuals are more willing to blame than to praise. The willingness to condemn may be guided by the aim of changing a person’s bad behavior, and it could serve as a warning signal to others to straighten up. The desire to discourage bad behavior is more powerful than the urge to encourage positive behavior.
One implication is that people closely scrutinize cases where something good has occurred before giving moral praise. And people are quicker to rush to moral judgment and assign moral blame when something bad has happened.
In the eyes of others, it’s easy to be bad, and hard to be good.
You can follow Rob on Twitter here: @robkhenderson.
References
Cushman, F., Sheketoff, R., Wharton, S., & Carey, S. (2013). The development of intent-based moral judgment. Cognition, 127(1), 6–21.
Feldman, G., Wong, K. F. E., & Baumeister, R. F. (2016). Bad is freer than good: Positive–negative asymmetry in attributions of free will. Consciousness and Cognition, 42, 26–40.
Gray, K., Waytz, A., & Young, L. (2012). The moral dyad: A fundamental template unifying moral judgment. Psychological Inquiry, 23(2), 206–215.
Gray, K., & Wegner, D. M. (2010). Blaming God for our pain: Human suffering and the divine mind. Personality and Social Psychology Review, 14(1), 7–16.
Greene, J. (2014). Moral tribes: Emotion, reason, and the gap between us and them. Penguin.
Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Vintage.
Hamlin, J. K., Wynn, K., & Bloom, P. (2007). Social evaluation by preverbal infants. Nature, 450(7169), 557–559.
Hamlin, J. K., Wynn, K., Bloom, P., & Mahajan, N. (2011). How infants and toddlers react to antisocial others. Proceedings of the National Academy of Sciences, 108(50), 19931–19936.
Holton, R. (2010). Norms and the Knobe effect. Analysis, 70(3), 417–424.
Saxe, R. (2016). Moral status of accidents. Proceedings of the National Academy of Sciences, 113(17), 4555–4557.
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458.
Wegner, D. M., & Gray, K. (2017). The mind club: Who thinks, what feels, and why it matters. Penguin.