Weighing moral choices is more flexible than we thought

It’s no secret that people can’t always explain their moral choices. In a phenomenon dubbed “moral dumbfounding,” for example, people will ardently insist on the immorality of sex between consenting siblings, even when they can’t articulate a valid reason why. On the other hand, sometimes people can explain their moral choices...even when they aren’t their own.

In a paper published this week in the open access journal PLOS ONE and covered in Nature News, researchers Lars Hall, Peter Johansson, and Thomas Strandberg cleverly tricked people into accepting moral opinions that weren’t their own, and into happily justifying them as well.

The paper reports two striking results. First, over two thirds of participants didn’t notice when a moral position they had just endorsed was presented back to them as their own, but with its meaning reversed. The authors call this “choice blindness.” Second, having failed to notice the switch, over half of participants went on to justify the position they originally rejected. So, for example, someone who originally judged legal prostitution morally inappropriate would moments later be presented with a rating suggesting that they accepted legal prostitution, and would not only fail to notice, but go on to explain why prostitution should be legal.

Fooling people into accepting the opposite of their original choices is a neat trick. The researchers pulled it off with some help from stage magic. Participants first filled out a questionnaire by indicating how much they agreed with various moral statements. For example, one statement was: “It is morally defensible to purchase sexual services in democratic societies where prostitution is legal and regulated by the government,” which was rated from 1 (“completely disagree”) to 9 (“completely agree”).

Participants thought they were writing on a normal piece of paper, but in fact wrote on a smaller scrap that detached from the full page when it was folded back over a clipboard, where a sticky patch caught the scrap. When the page was flipped back, the ratings remained, but two of the statements revealed beneath were reversed. A participant who had given the statement above a 7, moderately agreeing with legalized prostitution, now saw a rating of 7 for the opposite statement: “It is morally reprehensible to purchase sexual services in democratic societies where prostitution is legal and regulated by the government.” And when asked to explain their rating, many participants went on to provide unequivocal support for the new statement.

Hall, Johansson, & Strandberg (2012), PLOS ONE

What can we learn from this creative sleight of clipboard?

The authors caution that surveys and other measures employed in psychological research may not reflect people’s “real” preferences and beliefs – after all, choice blindness suggests that it doesn’t take much to shift them. 

But another lesson might be that “real” preferences and beliefs are not only malleable, but based on evidence both inside and outside the head. Suppose you come across an old e-mail to a friend in which you advocate buying only dolphin-safe tuna. Or suppose you find your pantry stocked with only dolphin-safe brands. Is it unreasonable to take this as evidence for what you believe? Participants in this experiment were in a similar position, with little reason to doubt the authenticity of evidence for “their” stated opinions.

Put more strongly, it may be that individuals don’t have stable moral commitments, not because the commitments sitting in their heads keep shifting, but because there aren’t moral commitments sitting in their heads in the first place. Many of the moral claims we explicitly advocate may be inferred only as needed, from a complex blend of the thoughts in our heads, observation of our own and others’ behavior, and the situation we find ourselves in. This perspective changes how we normally think of moral commitments, and may be unsettling to those who wish to ground moral psychology in something broad, universal, and tangible. But it also helps make sense of how broad moral principles could possibly interface with the messiness of everyday life, which so often involves exceptions to those principles, or conflicts between different principles we simultaneously endorse.

About the Author

Tania Lombrozo, Ph.D.

Tania Lombrozo, Ph.D. is an assistant professor of psychology at the University of California, Berkeley and a member of the Institute for Cognitive and Brain Sciences.
