How Language Can Polarize Us
Research shows how word choices can drive us apart
Posted May 27, 2019
Picture this: you’re told you have lung cancer. In the exact same concerned tone, the doctor says either, “You have a 70% chance of living if you have this surgery,” or, “You have a 30% chance of dying if you have this surgery.” What choice would you make?
Researchers have studied this question with hospital patients (who had a variety of conditions, not lung cancer), with students, and with doctors. Every group – even the doctors, with all their years of medical training – was much more likely to choose surgery when it was framed in terms of how likely they were to live.
How we frame issues matters a great deal. Advertisers, politicians, and think tanks are well aware of this. They know that our word choices communicate particular assumptions.
Consider an example offered by linguist George Lakoff: the term “tax relief.” These two simple words imply multiple assumptions: a) taxes are negative, b) reducing taxes will help people (much the way that a pain medication provides pain relief) and c) it is moral to lower taxes. So “tax relief” isn’t just a neutral choice of words – it’s a whole frame that may sway our emotions and the decisions we make.
To take another example, metaphors have long defined our views of ourselves. We’ve understood our brains as functioning just like the technology of the day (hydraulic machines, clocks, electrical circuits, and most recently computers and online networks). Each metaphor offers us some insights, but also limits the range of what we imagine as possible. And each metaphor, it later turns out, is largely inaccurate. (No, our brains aren’t just like steam engines.)
The words we use, and the frames those words create, are an important factor in rising polarization. If different segments of society understand the world through different language – and thus through different frames – that can readily drive us apart.
The individual impact of our choice of words may not be so great, but the cumulative impact, over time, certainly can be. In fact, our entire social norms might shift as words or ideas that were formerly seen as mainstream become marginalized and unacceptable, or vice versa. This possibility was suggested by a recent Polish study, which found that simply being repeatedly exposed to hateful messages can make them feel less distressing and more normal.
In other contexts, the exact opposite might occur, with desensitization helping to reduce hate. Whatever we’re exposed to frequently enough comes to feel less novel and distressing, and more predictable and safe. This may be one reason why the right types of contact with people we're prejudiced against can reduce our prejudices over time.
This might suggest that prejudiced language should be pushed to the margins. But declaring that certain words or ideas are offensive and should be changed can lead to its own set of problems. Let's consider a few. First, there is the important question of where to draw the lines around acceptable speech. If we choose words consciously, trying to understand why some phrases are truer to our values than others, we have nothing less than a chance to question and rethink the metaphors we live by. But if choosing wording becomes a mechanical exercise in avoiding offense, we aren’t achieving much.
Furthermore, if I’m told by someone else that I need to change my words, but I don’t understand or agree with the reasons why, it will be hard for me to welcome the change. It might seem like something I’m comfortable with is suddenly being taken away from me. If we don’t understand why we’re doing something, whatever it is, it can feel like an empty routine – at best boring, at worst soul-crushing. Moreover, a great deal of research highlights that most of us place a high value on feeling in control (even when we aren’t).
The pace and style of the current battles over words is turbo-charged by the Internet, which significantly changes the ways we communicate. Online communication makes it easier for us to continually engage with content that speaks our language – reinforcing the frames we agree with and distancing us further from those we don't.
In researching my new book I did a fair bit of reading about how people come to hold extremist views. The factors involved are many, and there is no single path into a life of hate. One factor that came up repeatedly, though, was a perceived loss of control and of powers people feel entitled to. A belief that forced, unnatural-sounding words and frames are taking over can certainly feed that perception.
This presents a major challenge for anyone seeking to counter online recruitment into hate, because it means that censorship can backfire. For would-be recruits, the very fact that an idea – like the conspiracy theory of a “white genocide” – is marginalized or censored, can make that idea seem more plausible and appealing.
Relatedly, in many accounts I read, people described being drawn to hateful content because it offered language and explanations that felt like their own authentic voice, unlike the “politically correct” words forced on them.
From my experience of carefully choosing wording while writing my book, I can say that the words we use are never actually “our own.” The only way I could make a word my own would be if I literally just invented it: waselflug. Otherwise each of us is expressing ideas and assumptions that are built on other existing ideas and assumptions. Our language and thoughts are an elaborate architecture that would not be what they are without the previous usage decisions of people we’ve never met.
None of this means, however, that change is impossible. Formerly hateful people have changed, and different framings of the issues can certainly help spur this on. In the book I even describe a case where this change started through Twitter conversations with then-enemies (later friends)!