
Where Does Our Moral Sense Come From?

Rigid versus flexible thinking about right and wrong

Imagine the following situation:

A runaway trolley is heading down the tracks toward five workers who will all be killed if it proceeds on its present course. Adam is on the footbridge over the tracks, in between the approaching trolley and the five workers. Next to him on this footbridge is a stranger who happens to be very large. The only way to save the lives of the five workers is to push this stranger off the bridge and onto the tracks below where his large body will stop the trolley. The stranger will die if Adam does this, but the five workers will be saved. (Everett, Pizarro, & Crockett, 2016, p. 774)

What do you think Adam should do?

For most people, it’s intuitively obvious that Adam shouldn’t push the man off the bridge. When asked why, they typically respond: “Because killing is wrong.” If you point out that by killing one person you save the lives of five other people, they still maintain that killing is wrong, no matter the consequences. The quick response and the subsequent resistance to logical persuasion suggest that people are making this moral judgment on the basis of intuition, not reason.

We humans have evolved a set of intuitions—rapid streams of information processing—that have guided us through our social interactions for hundreds of thousands of years. We quickly judge whether we like other people and whether they like us. Furthermore, we effortlessly intuit the emotional states of others and adjust our behavior accordingly.

We also have an innate moral code: Do not kill, lie, steal, or poach another’s mate. These injunctions weren’t just handed down to us on stone tablets. Rather, they’re inscribed in our DNA. We’re incensed when others violate these rules. And we feel guilty when we break one of them, even if nobody else knows.

Psychologists study people’s moral judgments by presenting them with dilemmas. The trolley problem presented above has been a standby of moral judgment research for decades. Although people generally agree it’s wrong to push the man to his death, a sizeable minority (a quarter to a third) make the opposite choice. They reason that it’s better for one person to die in order to save five other people.

It’s important to note that these people have the same innate moral code as the rest of us. They admit they feel the tug of intuition not to push the man off the bridge. But they allow their rational thought processes to prevail. In this case, they argue, killing the man is justified. In other words, they let their reason and not their emotions guide them.

But when it comes to moral judgments like these, most of us aren’t swayed by argumentation. Our innate moral sense is absolute. It allows no exceptions, regardless of extenuating circumstances. Yet why should this be? Throughout history, humans have repeatedly faced situations in which they had to violate their innate moral code. They kill attackers to save themselves or family members. They lie so as not to hurt the feelings of loved ones. It seems that a flexible moral sense would be more adaptive.

In a recent paper, University of Oxford psychologist Jim Everett and his colleagues laid out a theory to explain the rigidity of our innate moral sense. They propose that our intuitive morality has been shaped not by the day-to-day dilemmas we face but rather by the opinions of other people. More specifically, Everett and his colleagues hypothesize that statements and behaviors consistent with an absolute moral code are signals of trustworthiness.

For example, if we clearly demonstrate that we believe killing is wrong, people will be more willing to trust us, and hence to cooperate with us. Since cooperation is the key to survival, those with an absolute moral code were more likely to pass on their genes than those with flexible moral intuitions.

To test this hypothesis, the researchers recruited participants on Amazon Mechanical Turk, a crowdsourcing platform that’s frequently used by social scientists to collect survey data. Respondents first read the trolley problem, and then they were told about two other people who’d just completed the task.

Person A said Adam shouldn’t push the large man to save the five workers because killing people is just wrong, even if it has good consequences. Person B said Adam should push the large man because it’s better to save five lives rather than one. (Of course, Persons A and B were fictitious.)

The respondent was asked to rate the perceived morality and trustworthiness of Person A and Person B. As expected, Person A (who said Adam shouldn’t push the man) was rated as more moral and trustworthy than Person B (who said Adam should kill one to save five). But the researchers didn’t just take their word for it—they quite literally had the participants put their money where their mouth is.

Each participant was given thirty cents and asked to play the trust game with Person A and then again with Person B. In the trust game, you choose to give the other person some amount of your money (from 0% to 100%). The experimenter triples your donation and gives it to the other person, who then decides how much to share with you. If you trust the other person to share evenly with you, then you should donate the whole amount. If you think the other person will keep everything, you should donate nothing.
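To make the game’s arithmetic concrete, here is a minimal sketch in Python. The trust_game function and the specific splits below are illustrative assumptions, not materials from the experiment.

```python
# A minimal sketch of trust game payoffs. The function name and the
# example splits are illustrative assumptions, not taken from the study.

def trust_game(endowment, sent_fraction, returned_fraction):
    """Return (investor_payoff, trustee_payoff)."""
    sent = endowment * sent_fraction        # amount the investor hands over
    tripled = 3 * sent                      # the experimenter triples the transfer
    returned = tripled * returned_fraction  # amount the trustee sends back
    return endowment - sent + returned, tripled - returned

# Full trust, even split: send all 30 cents; the trustee returns half of 90.
print(trust_game(30, 1.0, 0.5))  # (45.0, 45.0) -- both end up better off
# No trust: send nothing.
print(trust_game(30, 0.0, 0.0))  # (30.0, 0.0)
# Misplaced trust: send everything; the trustee keeps it all.
print(trust_game(30, 1.0, 0.0))  # (0.0, 90.0)
```

Under these assumed splits, mutual trust leaves both players with 45 cents, while misplaced trust leaves the investor with nothing, which is why the amount sent can serve as an index of trust.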

The trust game is considered an accurate behavioral measure of level of trust. And as predicted, participants offered more money to Person A than Person B. In other words, the person who says killing is wrong no matter what is judged to be more trustworthy.

Before the experiment ended, the participants were asked what they thought Adam should do. That way, the researchers could check whether participants simply placed more trust in the person who shared their own judgment. Those who said Adam should not push the man did indeed place more trust in Person A, who agreed with them. But those who said it was better to kill one to save five were just as likely to trust Person A, who disagreed with them, as they were to trust Person B, who held the same view.

Lawrence Kohlberg, who pioneered research on moral reasoning in the middle of the twentieth century, distinguished between conventional and post-conventional thinking in adults. In his view, conventional adults held absolute moral values, whereas post-conventional adults were more flexible in their moral thinking. Kohlberg admired Gandhi as the epitome of post-conventional morality, the goal that the intelligent, well-educated person should strive for.

The research Everett and his colleagues report suggests a different way of thinking about moral reasoning. Flexible morality may be a better guide to making difficult decisions in a complex world. But if you want other people to trust and cooperate with you, it’s better to let them know your moral values are absolute.

Reference

Everett, J. A. C., Pizarro, D. A., & Crockett, M. J. (2016). Inference of trustworthiness from intuitive moral judgments. Journal of Experimental Psychology: General, 145, 772-787.

David Ludden is the author of The Psychology of Language: An Integrated Approach (SAGE Publications).
