
Must Polarization Be Irrational?
What Bayesianism says about polarization.
Posted September 26, 2020
A key feature of political polarization is that it is predictable. For example, when my friend Becca and I went our separate ways—I, to a liberal university; she, to a conservative college—we could both predict that I'd become more liberal and that she'd become more conservative.
What should we make of the fact that polarization is predictable in this way? Today, I'll describe a theoretical result suggesting that such predictable polarization must be irrational; tomorrow I'll explain why the possibility of ambiguous evidence changes this verdict.
First, we must distinguish two senses of "rationality." Practical rationality is doing the best that you can to fulfill your goals, given the options available to you. Epistemic rationality is doing the best that you can to believe the truth, given the evidence available to you.
It’s practically rational to believe that climate change is a hoax if you know that doing otherwise will lead you to be ostracized by your friends and family. It’s not epistemically rational to do so unless your evidence—including the opinions of those you trust—makes it likely that climate change is a hoax.
My claim is about epistemic rationality, not practical rationality. Given how important our political beliefs are to our social identities, it’s not surprising that it’s in our interest to have liberal beliefs if our friends are liberal, and to have conservative beliefs if our friends are conservative. Thus, it should be uncontroversial that predictable polarization can be practically rational—as people like Ezra Klein and Dan Kahan claim.
The more surprising claim of this series is that polarization can be epistemically rational: Due to ambiguous evidence, liberals and conservatives who are doing the best they can to believe the truth will nevertheless tend to become more confident in their opposing beliefs.
To defend this claim, we need to say more precisely what "epistemic rationality" consists in.
The standard theory is what we can call unambiguous Bayesianism. It says that your rational degrees of confidence at any given time can always be represented by a probability function, and that new evidence is always unambiguous, in the sense that you can always know exactly how confident to be in light of that evidence.
A simple example: Suppose there’s a fair lottery with 10 tickets. You hold 3 of them, Beth holds 2, and Charlie holds 5. Given that information, how confident should you be in the various outcomes? That's easy: You should be 30% confident you’ll win, 20% confident Beth will, and 50% confident Charlie will.
Now suppose I give you some unambiguous evidence: I tell you whether or not Charlie won. Again, you'll know exactly what to do with this information: if I tell you he won, you should be 100% confident he won; if I tell you he lost, the winning ticket must be one of the remaining 5, 3 of which belong to you—so you should be 3/5 = 60% confident that you won and 40% confident that Beth did.
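To make that update concrete, here's a minimal sketch in Python (my own illustration, not part of the original argument): it stores the prior over winners and conditions on the unambiguous answer "Charlie lost" by ruling out his tickets and renormalizing.

```python
# Prior confidence in each outcome of the 10-ticket lottery.
prior = {"You": 0.3, "Beth": 0.2, "Charlie": 0.5}

def condition(credences, still_possible):
    """Bayesian conditioning: rule out excluded outcomes, renormalize the rest."""
    total = sum(p for name, p in credences.items() if name in still_possible)
    return {name: (p / total if name in still_possible else 0.0)
            for name, p in credences.items()}

# Unambiguous evidence: "Charlie lost."
print(condition(prior, {"You", "Beth"}))
# -> {'You': 0.6, 'Beth': 0.4, 'Charlie': 0.0}
```

The 60/40 split is just the old 3:2 ratio of your tickets to Beth's, rescaled to the 5 tickets still in play.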
In effect, unambiguous Bayesianism assimilates every case of information-gain to a situation like our lottery, wherein you always know what probabilities to have both before and after the evidence comes in.
This has a surprising consequence:
Fact 1. Unambiguous Bayesianism implies that, no matter what evidence you might get, predictable polarization is always irrational.
(The Technical Appendix contains all formal statements and proofs.)
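For a rough gloss of the formula at work (this is just the law of total probability, not the Appendix's official statement): if the possible unambiguous answers to a question are $E_1, \ldots, E_n$, then for any hypothesis $H$ your expected posterior equals your prior:

$$\sum_{i=1}^{n} P(E_i)\,P(H \mid E_i) \;=\; P(H).$$

Whatever you might learn, you can't expect learning it to raise (or lower) your confidence in $H$.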
In particular, consider me back in 2010, thinking about the political attitudes I'd have in 2020. Unambiguous Bayesianism implies that no matter what evidence I might get—the fact that I was headed to a liberal university, for instance—I shouldn't have expected it to be rational for me to become any more liberal than I was then.
Moreover, Fact 1 also implies that if Becca and I shared opinions in 2010, then we couldn't have expected rational forces to lead me to become more liberal than her.
Why is Fact 1 true—and what does it mean?
Why it’s true: Return to the simple lottery case. Suppose you are only allowed to ask questions which you know I’ll give a clear answer to. You’re currently 30% confident that you won. Is there anything you can ask me that you expect will make you more confident of this? No.
You could ask me, “Did I win?”—but although there’s a 30% chance I’ll say "Yes" and your confidence will jump to 100%, there’s a 70% chance I’ll say "No" and it’ll drop to 0%. Notice that (0.3)(1) + (0.7)(0) = 30%.
You could instead ask me something that’s more likely to give you confirming evidence, such as “Did Beth or I win?” In that case, it’s 50% likely that I’ll say "Yes"—but if I do your confidence will only jump to 60% (since there’ll still be a 40% chance that Beth won); and if I say "No," your confidence will drop to 0%. And again, (0.5)(0.6) + (0.5)(0) = 30%.
This is no coincidence. Fact 1 implies that if you can only ask questions with unambiguous answers, there’s no question you can ask that you can expect to make you more confident that you won. And recall: Unambiguous Bayesianism assimilates every scenario to one like this.
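Here's a short Python sketch (my own check, using the same lottery) that makes this vivid: for every question of the form "Did one of these people win?", your expected post-answer confidence that you won comes out to your prior of 30%.

```python
from itertools import chain, combinations

prior = {"You": 0.3, "Beth": 0.2, "Charlie": 0.5}

def confidence_you_won(live_outcomes):
    """Your confidence that you won, after learning the winner is among live_outcomes."""
    if "You" not in live_outcomes:
        return 0.0
    return prior["You"] / sum(prior[name] for name in live_outcomes)

def expected_posterior(question):
    """Expected confidence that you won, averaged over the "Yes" and "No" answers."""
    yes = set(question)
    no = set(prior) - yes
    p_yes = sum(prior[name] for name in yes)
    return p_yes * confidence_you_won(yes) + (1 - p_yes) * confidence_you_won(no)

# Every question "Did one of these people win?" leaves your expectation at 0.3.
for question in chain.from_iterable(combinations(prior, r) for r in (1, 2)):
    print(question, round(expected_posterior(question), 10))
```

The printout is 0.3 on every line—exactly the "coincidence" that isn't one.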
What it means: Fact 1 implies that if unambiguous Bayesianism is the right theory of epistemic rationality, then the polarization we observe in politics must be irrational.
After all, a core feature of this polarization is that it is possible to see it coming. When my friend Becca and I went our separate ways in 2010, I expected that her opinions would get more conservative, and mine would get more liberal. Unambiguous Bayesianism implies, therefore, that I must chalk such predictable polarization up to irrationality.
But, as I’ve argued, there’s strong reason to think I can’t chalk it up to irrationality—for if I’m to hold onto my political beliefs now, I can’t think they were formed irrationally.
This—now stated more precisely—is the puzzle of predictable polarization with which I began this series.
In tomorrow's post, I'll explain why the possibility of ambiguous evidence offers a solution.