A few months ago, I was asked to do an interview on BBC Radio about the outbreak of meningitis in Nigeria. At one point, the interviewer asked me why the world seems perpetually unprepared for these outbreaks. Indeed, in this case, it was not unfathomable that this strain of meningitis would end up in Nigeria — it was in fact quite common in neighboring Niger and we’ve seen epidemics spread like this so many times now that it’s hard to claim ignorance.
As I thought more about the question in the days following the interview, I realized something important: the world’s failure to anticipate these outbreaks is not unique. In fact, it stems from a psychological feature we tend to attribute mostly to individuals: a tendency to think only in the short term and to attend to risks only when they seem urgent. We all know that most individuals operate this way. Every time you put off working on presentation slides until midnight the night before they’re due, you are enacting this tendency.
But these psychological snafus are not simply the problem of individual people. They are actually problems of entire organizations, institutions, and even large governments. It turns out that large, organized groups suffer from many of the same psychological tendencies that cause individuals to make the wrong decision, to focus on things that are less important, and to ultimately harm themselves and others.
In our book Denying to the Grave, we focused on how individuals, such as patients and healthcare providers, sometimes fail to act in accordance with the best evidence we have on healthcare and medicine. We discussed the psychological principles that cause these errors in judgment and decision-making. As it turns out, these same errors and principles are completely relevant on a macro level as well. Ever heard the term “evidence-based policy”? Ever wonder why many policies you see discussed in the political arena are not at all in line with what you know to be true from the evidence? This disconnect has a lot to do with the same psychological principles that cause individuals to ignore or deny the evidence about the best healthcare decisions for themselves and their loved ones.
Recently, some members of the well-known British “behavioral insights” unit have acknowledged that traditional “nudge” and behavioral science techniques sometimes seem to assume that members of the general public are at fault and that governments need to use these behavioral insights to help people help themselves. But more often than not, there are problems in the ways governments make decisions too. As Michael Hallsworth points out, governments fall into many of the same “irrationality” traps that individuals do.
So how does this happen? And, perhaps more importantly, what should we do?
First let’s take a look at some of the key traps and problems faced at this “macro” level.
Underestimating large risk, overestimating small risk
Do you know anyone who’s afraid of planes but speeds down the New Jersey Turnpike every day, tailgating semi-trucks and texting while driving? Maybe this even describes you. The fact is, this pattern of thought — flying is dangerous but everyday activities like driving are not — is a prime example of the way in which people tend to overestimate small risks and underestimate large ones. While the risk of getting into a car accident and suffering serious injury or even death is far greater than the risk of being injured or killed in a plane crash, many more people are afraid of flying than of driving. Why? Because mundane, familiar risks often seem smaller to us when compared with unfamiliar, unusual, and frequently sensationalized risks.
But issues with risk perception do not stop at the individual level. Many instances of irrational policy decisions are related to misperception of risk. A key example, and one that’s particularly relevant right now given the current U.S. presidential agenda, is the perceived threat of terrorism in the U.S.
The risk of being killed by a terrorist in the U.S. is about 1 in 3.6 billion. Between 2005 and 2015, 94 people inside the United States were killed by jihadists, while during the same period 301,797 people in the U.S. were shot dead. We’ve all seen these statistics. It’s true that in the larger global context terrorism is a huge problem; it continues to destabilize many countries, primarily in the Middle East and Africa, and claims many lives in those regions. But a U.S. citizen in 2017 should be more worried about being bitten by a tick carrying Lyme disease than about being a victim of Islamic extremist terrorism. And the U.S. government should in turn not be making budgetary and policy decisions based on a perception of risk that is irrationally heightened.
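A quick back-of-the-envelope calculation makes the disparity concrete. The sketch below simply takes the two death tolls quoted above (94 and 301,797, for 2005–2015) and computes their ratio; it is an illustration of the figures already cited in the text, not new data.

```python
# Figures as quoted in the text (U.S., 2005-2015):
jihadist_attack_deaths = 94       # people killed by jihadists inside the U.S.
gun_deaths = 301_797              # people in the U.S. shot dead in the same period

# Ratio of the two tolls over the same ten-year window
ratio = gun_deaths / jihadist_attack_deaths
print(f"Gun deaths outnumbered jihadist-attack deaths roughly {ratio:,.0f} to 1")
```

On these numbers, gun deaths outnumber deaths from jihadist attacks by more than three thousand to one, which is the scale mismatch the argument turns on.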
The flipside of the terrorism statistic is also pertinent here. While guns pose a major safety threat to American citizens, once again both individuals and the U.S. government as an institution have misperceived the real risk, and the misperception cuts across political parties. Conservatives tend to make the mistaken claim that guns protect people, when in reality having a gun in the home exposes the people living there to more danger than not having one in the first place. We discuss this at length in Denying to the Grave.
Perhaps more interesting here, however, is a misperception of risk that occurs more frequently on the liberal side. Policy discussions around guns tend to occur in the aftermath of mass shootings in the U.S. While these events are horrific and every measure must be taken to prevent them, the misperception of their likelihood has caused a misguided focus on assault rifles in U.S. gun policy. The real gun-related crisis in this country has very little to do with assault rifles and mass shootings. If policymakers were acting “rationally” and responding to the greatest risk, they would focus on handguns, suicide, and domestic abuse. We should be focusing on means restriction if we can’t get rid of guns, and on fixing our mental health system so that people who are suicidal are not left with nowhere to turn. This is a perfect example of skewed risk perception not only on the part of individuals but at the “macro” policy level, representing the entire political system in the U.S.
Thinking only in the short term

Planning for the future is considered a hallmark of the most advanced parts of the human brain. Delaying gratification and even making short-term concessions for long-term gains are considered highly evolved ways of thinking that involve complex brain processes that most animal species cannot master. While it’s true that some non-human primates have the ability to plan somewhat for the future, human beings are uniquely capable of formulating and executing highly complex, long-term strategies.
At the same time, we’re prone to a kind of short-sightedness that can be destructive. While we have the ability and often succeed at long-term planning, we often fail to take full advantage of these capabilities and wind up responding only when there’s a crisis right in front of us. Anyone who’s ever crammed for an exam they knew about for months has experienced this. In those moments, we often kick ourselves for not being more prepared, for not sticking to a well-laid plan we undoubtedly made but neglected perhaps because other, more intriguing opportunities came up in the meantime.
In the case of cramming for an exam, the consequences are usually not so dire. We might feel like a crisis is looming, but in reality getting a low grade on a statistics exam in college is probably not the end of the world. But when this human inclination rears its head on a much larger, global scale, the stakes can be very high. Few people in the global health world will ever forget the disastrously delayed response to the Ebola crisis in West Africa that began in 2014. It took the global community about a year to respond appropriately. By that point, the epidemic was in full swing and many people had already died. Health workers were deployed internationally, states of emergency were declared, and pharmaceutical companies raced to make a vaccine, but many people were left asking: what took so long?
The question we’re interested in here is not really why it took us so long to respond once the outbreak had already begun — much has been written about this, and a full examination of some of the major gaps in leadership at the World Health Organization (WHO) has also taken place. Our question is much broader: why is the world so unprepared for outbreaks of this nature in general? Why are international protocols for outbreak response not better developed? Why are global health systems so ill-equipped for these events? Why do we know so little about how to communicate risk without incurring widespread panic during a serious outbreak?
We’d contend that it has something to do with the inclination toward short-sightedness on a large scale. It’s also a matter of skewed risk perception — it’s much harder to mobilize the global community and to mobilize funding around what seem like theoretical risks and much easier to mobilize once the risk is right in front of us. But this is a serious mistake, as the poor response to the Ebola outbreak has hopefully shown. The only way to get a handle on an outbreak like this in the future is to invest time and money continuously, especially during “quiet,” non-crisis periods, in developing better responses to outbreaks and crises. Again, this is the rational thing to do, but it continues to be extraordinarily difficult to mobilize governments and organizations around potential future threats.
Social pressure to reduce irrational policymaking
Social pressure and social contagion are often thought of as negatives — these fascinating social dynamics can lead to large groups of people adopting unhealthy and even devastating behaviors and may result in everything from obesity to “outbreaks” of suicidal behavior.
But in some cases, social pressure can result in adoption of important, prosocial behaviors that are beneficial for everyone. For example, it has been shown that making people believe that most of their neighbors use less energy than they do can result in significant cuts to home energy use.
So how can we apply these findings on a larger scale? And does social pressure work for large entities like entire national governments?
To take the second question first: the answer is a simple and resounding “yes.” There are numerous, real-life examples of macro-level social pressure resulting in changes to government behavior and policy. A good example comes from Latin America, where a number of countries have created an informal public, peer-based network of approaches to poverty reduction that has resulted in serious progress in the fight against poverty throughout the region. Indeed, during the formulation of the Sustainable Development Goals (SDGs), Duncan Green wrote a provocative post about how best to leverage this phenomenon of inter-governmental “social pressure” to ensure real progress on the ambitious goals.
Perhaps there is a way to leverage this notion to encourage more rational, evidence-based policymaking. Perhaps it’s a matter of encouraging informal, or even creating formal, benchmarks for evidence-based policymaking that can result in public, cross-country comparisons. In this kind of environment, country governments may feel a slight but powerful “nudge” to make more rational, evidence-based decisions.
What else can we do to deal with this problem? First and foremost, we need to stop pretending policymakers do or ever will think like scientists. As scientists, we need to understand what that realization means for our way of approaching things. We need to learn how to get on policymakers’ timetables. That means we have to try to provide our best evidence, even if we feel like something is not entirely complete. We need to be more comfortable making recommendations based on incomplete data.
Second, we need to do more research on how decisions are made in the policy context and on what kinds of evidence and communication are most persuasive. We cannot assume that good data on their own will always win the day — this has been shown to be untrue too many times. We shouldn’t just think about helping policymakers understand the technical details of the evidence; we must also use our understanding of how they think about science to craft strategies that will lead them to rational, evidence-based decisions.
In the end, scientists need to operate in an integrated manner with policymakers, not in a completely parallel process. It’s still true of course that the science needs to be protected so that it’s valid, but research that has strong policy implications needs to be much more aligned with actual political processes. Otherwise, we’ll be endlessly stuck waiting for some eventual conversion to an ideal state of completely rational politics, which most of us can see is very far from reality indeed.