Two Reasons Political Arguments Aren't More Productive
Come, let us reason together (part 8).
Posted Jun 29, 2019
When we argue about politics on social media, it's easy to form the impression that people never change their minds. This impression is strengthened by the fact that we can find ourselves having the same unproductive arguments with the same people month after month.
Some say changing minds is not the goal. The goal is to assert our identities, get cheers from our in-groups, and stigmatize our out-groups.
To be sure, we do all those things. But are those our main goals? Or do we settle for them because we have given up on getting what we really want? We've argued all our lives with parents, siblings, spouses, neighbors, friends, and strangers. And the reason we engage in most of those arguments is to try to persuade the other person to our point of view. And, failing that, to learn a thing or two.
Not every argument in our lives has gone well. But most of us have tasted the fruit of a productive argument now and then. In productive arguments, we learn things. We find common ground. We coordinate vocabulary. We clarify our differences. We correct misperceptions and dispel bad assumptions. And, even if we still disagree at the end of the argument, we understand better where the other person is coming from, and feel better understood. Sometimes people even change their minds—at least a little.
Why wouldn't we want these same things from our political arguments on social media?
Unfortunately, even if we do want more productive arguments, two main obstacles stand in our way:
1. Too much inferential distance for the time available
2. High-stakes tribalism
Too Much Inferential Distance for the Time Available
When we want to convince someone of a claim, we make points. And if we want our points to be persuasive, it helps a great deal if the other person understands our points and sees how they are relevant to the claim.
If I want to convince you that it's time to go, I can point to a clock. And there's a good chance you would understand my gesture and see the relevance almost instantly.
But what if Andrew Wiles wants to convince me that Fermat's Last Theorem is true? He would have to make dozens of points, using terms like "Ribet's theorem," "modularity lifting," "semistable elliptic curves," and "Galois representations." And to explain each of those concepts, he would have to use many equally opaque terms. And this process would go down many levels. And understanding the vocabulary is only the beginning. He's also going to have to guide me through a set of inferences that rely on these categories. Abstractions are built upon abstractions, inferences are built upon inferences. If I am capable of understanding his proof at all, it might take years.
When I try to convince you it's time to go, the inferential distance is small. When Wiles tries to convince me that Fermat's Last Theorem is true, the inferential distance is huge. Political arguments tend to be somewhere in-between. And the inferential distance is often larger than we realize.
If one person has read only Piketty, another has read only von Mises, a third has read only Hayek, and a fourth has read only Marx, they will have very different ideas about how economies and social systems work. They will use much the same language, but they will use their words in slightly different ways. They will have different background assumptions and different mental models. One makes a point and thinks it obviously supports his claim. But the statement triggers very different inferences, and very different hopes and fears, in the others. They are communicating across a large inferential distance and might not realize how great it is.
These distances can be crossed. If they could sit down with each other for hours, they could make a good start at finding common ground and refining their differences. They could negotiate the vocabulary, explain their mental models, explain how they see things playing out, and elicit and reveal their values, hopes, and fears. And, if they had even more time, they could read each other's touchstone texts, synthesize the lessons from each, and come back together for even more productive conversations.
It's hard to cross such distances with tweets. So we wind up in arguments where one person says "taxation is theft," and "government is the problem." Another says "property is theft," and "we need the government to solve collective action problems." And they part ways no closer to mutual understanding than when they started. And they might do the very same dance with each other a week or two later.
High-Stakes Tribalism
The second obstacle is tribalism. Tribalism is endemic to politics. But it is intensified in modern political discourse by large inferential distances, the incentive structures built into social media platforms, and the tendency to gravitate toward high-stakes issues.
"Tribalism" isn't a perfect label for the phenomenon. Actual human tribal groups do come into conflict with other tribes, and their patterns of conflict sometimes resemble the patterns we see in political arguments. But tribal governance also features many systems and structures that allow tribes to cooperate with one another, and we rarely have those in mind when we label an argument "tribalistic."
What I mean by "tribalism," and what I think most people mean, is a collection of behaviors that deepen our divisions when we engage in political discussions from an in-group/out-group perspective.
We all identify with (or are identified with) various social identity categories. The main classes of identity categories these days are "left"/"right," party affiliation, race, sex, gender, sexual orientation, class, religion, ideology, nationality, ethnicity, ability, and so forth. Different identities will be more or less salient depending on the issue.
Identities can be relevant to policy for many reasons. It could be that a policy is proposed in order to address the grievances of one group against another group, or against the system as a whole. It might be that there is a concern that a policy will have a disparate impact on people in different groups. Or, in the case of ideological identity, it might be that the different groups have different ideas about which direction the larger group should go.
A person's identity is one of the clues we use to help us interpret their words, anticipate their inferences, and discern their motives. This can be good. Interpretation is hard, and if we are trying to understand where another person is coming from, sometimes we need all the clues we can get.
But this tendency also makes it possible to play dirty tricks in political contexts. If we can associate a rival identity with bad motives and irrational beliefs, we can win arguments without having to worry too much about how strong our case actually is. When identity is salient to an issue, our arguments can take on a tribalistic flavor.
Here are some of the tactics groups engage in when identity is salient and stakes are high:
1. Stigmatizing: If we want the moral high ground over an out-group, we can expand the scope of negative labels (such as "racist," "socialist," "witchcraft," or "blasphemy") so they cover more and more people in the out-group. (This is sometimes called the "non-central fallacy/strategy.")
2. Nut-picking: If we want to stigmatize an entire out-group, we can selectively share the worst examples from that group while ignoring the best, giving the impression that the worst members are typical of the whole.
3. Offense-mining: If we want to create the impression that the out-group is aggressive, we can take offense whenever possible, even when no offense was intended. At the individual level, the misreading is often sincere, arising from the expectation that members of the out-group are generally ill-motivated. There is a feedback loop here: the more readily we take unreasonable offense, the more evidence we seem to accumulate that they are out to offend us. Offense-taking can also be deliberate, when one is itching for a fight. This is classic honor-culture stuff. In a tavern patronized by rival clans, it's best to avoid talking about a rival clan member's sister, even with the purest of intentions, unless you're looking for a fight.
You can find these behaviors all across the spectrum on social media. Social justice groups do these things. Anti-social-justice groups do these things. Republicans do them. Democrats do them. Almost any time there's an in-group and an out-group, and stakes are high, these behaviors are likely to be in the neighborhood.
These practices make our arguments less productive (especially in the short run). People spend more time defending against stigma and counter-stigmatizing than they do trying to understand the legitimate points being made on the other side.
Inferential distance and tribalism can also reinforce each other. If the inferential distance is large, it can be hard to follow the other person's train of thought. And if the conversation takes place in the context of an identifiable in-group and out-group, it's easy to assume their argument isn't making sense because they are in the grip of an irrational ideology or are playing games with their words. So we don't work as hard to arrive at a charitable interpretation. And when groups stop engaging in productive dialogue, their mental models and vocabularies tend to drift apart, which increases inferential distance.
Zoom In, Pan Out
At the ground level, our political arguments on social media sometimes look hopeless. Individual arguments are rarely productive, and no one seems to change their mind.
But progress is often evident at a higher level. Public opinion and Overton windows shift over time. People develop more nuanced views on many issues. Groups separated by large inferential distances start to adopt each other's vocabularies and think in terms of each other's concepts.
One can wonder, though, whether this process would go more smoothly if our lower-level arguments were more productive.