
Verified by Psychology Today


The "Pandemic Textbook" Must Include Decision-Making

Good pandemic management requires goal-directed least-worst decision-making.

by Professor Laurence Alison and Neil Shortland Ph.D.

As the death toll from the COVID-19 pandemic passes 100,000 in the United Kingdom, politicians, experts, scientists, and the general public are all asking "what went wrong" in the UK. There are many reasons the United Kingdom's ill fate is surprising, but perhaps most significant is that it has fared badly despite being one of the leading scientific countries in the world (number 4, in fact). Evidence of this can be seen in the University of Oxford's vaccine development, while early COVID-19 modeling from Imperial College London was influential enough to sway not only British COVID-19 policy but policy in the United States too. Further, the University of Liverpool has been at the forefront of SMART Testing, examining how testing could enable release from quarantine and protect vulnerable populations.

When asked about the government's "legacy of poor decisions," Prime Minister Boris Johnson said ministers, while they followed the scientific advice and did everything they could, had "no easy solutions." Communities Secretary Robert Jenrick stressed there was “no textbook” in how to respond to a pandemic, and that the government “took the right decisions at the right time.”

That, in our view, is the issue. You can have all the scientific expertise at your disposal, but it doesn't really help unless you know how to make decisions on the back of it. Jenrick's notion of a specific textbook for dealing with pandemics is, in our view, a frankly ridiculous assertion. If all you need is a set of "if-then" policies that you simply enact against a tightly prescribed policy book, you aren't really engaging your brain. The idea that syllogistic reasoning of the sort "all pandemics require a five-month lockdown; this is a pandemic; thus we require a five-month lockdown" will resolve critical incidents reveals just how unprepared the UK has been for dealing with them. Critical incidents require decision-makers to be creative and adaptive, to have moral courage, and to be able to deal with uncertainty while remaining decisive, calculating least-worst outcomes with clearly articulated goals and endpoints in mind.

As scientists who study decision-making, we realized early on that what the COVID-19 pandemic required was rapid "least-worst" goal-directed decision-making. Least-worst decisions involve outcomes in which every course of action has negative consequences, and the decision-maker needs to differentiate which of the many courses of action offers the "best" (or least-worst) solution.

COVID-19 has presented a litany of clear least-worst decisions at all levels of decision-making. At the governmental level, ministers had to decide whether to "slow the spread" by enforcing lockdowns, knowing this would harm the economy, and whether to protect children by closing schools, knowing this would harm educational attainment (particularly among minority and low socio-economic status children). There are countless others, of course, and they evolve rapidly; a decision taken eight months ago will have an impact now. Jonathan Van-Tam, England's Deputy Chief Medical Officer, has described this as akin to building a boat whilst you are actually rowing it.

We have studied least-worst decisions in a range of domains, from humanitarian responses to the decisions soldiers face on the battlefield. What we often find is that, when presented with "no good options," decision-makers stall and fail to make not just the right decision (which in many cases is unknowable), but any decision at all. This phenomenon is referred to as decision inertia, and it manifests as inaction at a time when decisive action is needed most.

Making least-worst decisions is not easy, but our own work has focused on the processes that expert decision-makers go through when facing a least-worst choice. We refer to this as the STAR model. Through countless observations in our own work, and by reference to many others studying elite decision-makers, it summarizes the research on what elite critical incident decision-makers tend towards (compared to those who veer towards inertia and indecision).

Stories: Expert decision-makers identify a plausible number of possible "stories" that diagnose the events they see unfolding, and the possible directions those events could take. Poorer decision-makers often either cement their diagnosis around one and only one plausible explanation or, at the other extreme, develop a proliferation of so many models that they become unwieldy.

Time: An expert asks themselves the question, "Do I need to decide now?" If the answer is yes, they commit to making a decision. If no, then, wisely, they will seek more information to further clarify what it is that they are dealing with. Less successful decision-makers fail to account for time and timings.

Adaptation: Experts are able to adapt their perception of the situation as new information emerges, and they have many ways of testing their hypotheses. Poorer decision-making occurs when these questions and alternatives are not considered.

Revision: Experts are able to revise their course of action based on new perceptions of the situation — even when that revision might not be popular. Moral courage may play a role here.

The idea that one should have a textbook for making decisions in every eventuality is, of course, absurd. Instead, we need to support the individuals tasked with making critical incident decisions, and that, as with all complex skills, requires training, feedback, and learning. We argue that training in goal-directed least-worst decision-making needs to be "little and often," on the principle that, when learning a complex skill, frequency matters more than duration.

For example, in 2016, the United Kingdom ran a three-day simulation exercise carried out by NHS England to estimate the impact of a hypothetical H2N2 influenza pandemic on the United Kingdom. The Department of Health and Social Care (DHSC) (known as the Department of Health at the time) and 12 other government departments, as well as NHS Wales, NHS England (NHSE), Public Health England (PHE), local public services, several prisons, and staff from the Scottish, Welsh and Northern Ireland governments took part in the exercise.

At the peak of the scenario, an estimated 50 percent of the population had been infected, with close to 400,000 deaths; furthermore, in this case, while the vaccine had been made and purchased, it had not yet been delivered. We expect a great deal was learned from that scenario, but quite evidently, a great deal relevant to dealing with COVID was not. We suspect that this has much to do with favouring the duration and scale of exercises (their length and magnitude) over frequency and efficiency (shorter, more economically viable, but regular and varied exercises). This is a bit like spending three days taking penalty kicks in soccer on a real match pitch versus playing as part of a team for 60 minutes per week. The former may allow you a one-off opportunity to intensely practice for a specific environment, but the latter enables you to play soccer as part of a team. Of course, we are not saying ministers should do a 60-minute-per-week critical incident scenario, but neither should they exclusively do one three-day pandemic-specific event.

So, despite this relevant, recent, and more extreme training scenario, known as Exercise Cygnus, the United Kingdom Government was still slow to respond to rising case and death numbers, failed to get an adequate contact-tracing and isolation system running, and was slow to control its borders.

This is not a UK-specific problem. The United States has equally invested millions in large-scale pandemic scenarios (e.g., Crimson Contagion). It speaks to another chapter in the textbook of least-worst decision-making: how we train people to make least-worst decisions when they face one in the real world.

Decision-making, like any skill, is a matter of practice, and while some (e.g., Bill Gates) have long predicted that a global pandemic would arise, the specific decisions that will be faced are not always as predictable. This is why we argue that training least-worst decision-makers should look a lot more like training for any other skill (golf, violin, chess): frequent, deliberate, and with constant feedback. Too often, we rely on scale and immersion rather than frequency, resulting in an inability to apply the lessons learned when the scenario becomes real life.

While the axiom of "follow the science" has become an increasingly contested concept (or even more of a "catchphrase"), we cannot deny the immense importance of science in responding to the COVID-19 challenge. Today, as the government reflects on the decisions made during the COVID-19 pandemic, we hope that we can expand this integration of science to focus not just on public health and epidemiology, but also on decision-making science.

There is a science to making least-worst decisions and how to train people to do it well when it matters. It is our hope that in the continuing response to this pandemic, and any ensuing, we integrate this science to ensure that leaders are able to overcome the tendency to delay and dither because in many cases the costs of inaction far outweigh the costs of “incorrect” action, in this pandemic and the next.
