Anthony C. Lopez Ph.D.

Evolutionary Politics

Why Science and Politics Don't Mix

There is no fairy-tale relationship between politics and science. Here's why.

Posted Apr 23, 2019

What is the ideal relationship between policymakers and scientists? Here’s the fairy tale: Scientists produce generalizable knowledge about the world and policymakers efficiently use that information to make welfare-maximizing decisions on behalf of a political community. Indeed, a recent article in Foreign Affairs, describing the role of the intelligence community, asserts a “fundamental reality: government leaders make better decisions when they have better information.” Scientists and policymakers would seem to be natural allies. So, why aren’t they?

First, I want to acknowledge what should be obvious. Scientists are not actually ignored; policymakers often do seek out scientific evidence and put it to good use. At the same time, scientists sometimes get things wrong, just like everyone else. In other words, this explanation for why science and policy don't mix is not meant to reflexively vilify policymakers any more than it is meant to blindly elevate scientists.

We are political animals, and when we need to make a decision on behalf of our groups, the way we seek out, process, and act upon information is anything but coolly rational. There are at least three dynamics that complicate the otherwise ideal relationship between policymakers and scientists. First, psychological biases defy accurate information processing. Second, political interests compel elites to neglect or politicize unfavorable evidence. Third, humans tend to reject evidence-based arguments in favor of moral imperatives, which means that political decision-making is sometimes more a matter of faith than facts.

Errors and Interests

Social and cognitive psychologists have discovered that humans (and therefore policymakers) are subject to a range of biases that predispose us toward errors of judgment that can have detrimental effects on decision-making and policy. For example, we know that, either in general or under specific circumstances, individuals are subject to optimistic overconfidence (falsely overestimating one’s probability of success), confirmation bias (discounting information that is inconsistent with one’s beliefs), and the general effort to reduce uncertainty and cognitive dissonance in one’s environment. All these biases afflict policymakers and undermine the rational collection and evaluation of evidence.

In addition to these biases, sometimes policymakers don't hear the evidence simply because they have an interest in not hearing it—and not because of weak science or ineffective science advocacy. This is evident not only in policymakers' relationships with scientists but also, unsurprisingly, in their relationships with the intelligence community.

For example, in his analysis of intelligence failures, Joshua Rovner identifies three ways in which the relationship between the policy community and the intelligence community can break down. First, policymakers may simply neglect intelligence and trust their gut or instinct. This is often, but not always, due to the operation of biases already mentioned (e.g., confirmation bias). Second, and counterintuitively, decision makers may sometimes be excessively deferential to intelligence, which can lead to poor decisions when the evidence itself is faulty. Third, when policymakers commit to extreme policy positions, they may resort to manipulating the intelligence itself in order to follow through on promises and satisfy key constituents. This, Rovner argues, is at least partially responsible for the politicization of American intelligence in the run-up to the war with Iraq in 2003.

Faith Trumps Facts

A third factor that undermines a harmonious relationship between science and policy is that political interest is often framed in terms of values, not facts. For example, Ginges and Atran argue that people's support for policy decisions—such as whether to go to war—is a product of deontological reasoning, meaning that we "follow a rule-bound logic of moral appropriateness," regardless of the material benefits of the policy. This means that policy is often guided predominantly by moral values rather than scientific evidence of cause and effect. When policy threatens values, scientists will need to do more than join advocacy networks and do "better science." Instead, they will need to change hearts and target values—a task for which they are poorly equipped. Did Americans land a man on the moon because aerospace engineers finally mastered the ins and outs of policy advocacy? Hardly. The space program was an instrument of the prevailing ideological conflicts of its time. Values propelled science, and not the reverse.

Source: WikiImages/Pixabay

Lessons Learned?

If scientists wish to be heard and have an impact, then scientific advocacy must give greater consideration to the ways in which bias and interest work in political communities. Elites have their own interests and therefore should not be conceived of as merely the "ear" for which advocacy coalitions compete. The final lesson is that values trump facts. There is a reason why the Golden Rule is more memorable than the Pythagorean theorem. We are a social species that prefers to cast political problems in moral, rather than causal, terms.

Does this mean we are doomed to a world of gut-following politicians who routinely cast aside forlorn scientists? This is probably as untrue of the future as it is of the past. Rather, the dynamics of errors, interests, and values are tendencies that we would do well not to ignore. Scientists should indeed find ways to be better advocates when their passions and values commit them to that end. History suggests that when scientists blend scientific evidence with moral force, they can have great impact. The political success of the Montreal Protocol is illustrative in that regard.

There is no reason scientists cannot or should not develop new ways to enter the policy landscape creatively and effectively. However, scientists would also do well to think a bit more carefully about what they actually have a comparative advantage in delivering, and to give greater weight to psychological bias, political interest, and sacred values when entering the battlefield of coalitional politics.