
UK Covid-19 Inquiry – Report on Module 1


“The resilience and preparedness of the United Kingdom”



Part 2

In the first part of our review of the Module 1 Report of the Covid-19 Inquiry, we looked at the lessons that could be learnt around “risk assessment”. In this second part we pick up on several other key points identified in the Report that could have wider application.


SYSTEMS

The Report concludes that “The institutions and structures responsible for emergency planning were labyrinthine in their complexity.” The system ought to have been simple, clear and purposeful. It should have been organised in a rational and coherent form.


A look at the organogram shows this clearly.



The system was inefficient with too many entities, groups, sub-groups, committees and sub-committees involved with preparedness and resilience. Yet there were also gaps where no-one took responsibility, and an overall lack of leadership with no ongoing oversight of one of the most significant risks. The effect was that preparedness and resilience were not being scrutinised at the highest levels of decision-making. 


It is hard to imagine that many other organisations have structures of similar complexity. For Government, admittedly, the sheer range of issues to be considered and the number of teams with an interest make some complexity inevitable. But other large organisations should also review who makes decisions in emergency situations and who is merely “co-ordinating”, how an “emergency” is defined (when is it just a “normal” emergency?), what level of local autonomy needs to be maintained, and how leadership is manifested.


A further issue surrounds surveillance and alerts. There is a balance to be struck between overreacting to every signal and being too slow to respond: more elaborate surveillance may lead to more ‘false alarms’, so surveillance itself carries the risk of ‘crying wolf’.


Critical to an effective alert system are transparency and the flow of information. One expert witness on infectious disease surveillance commented: “The earlier your alert, the more likely you can actually respond to it, because the response will be much, much smaller, and a much smaller response can be mounted more often.”


EFFECTIVE STRATEGY

The only explicit pandemic strategy dated from 2011. The Report concludes it “was outdated and lacked adaptability.” It was virtually abandoned on its first encounter with the pandemic.


Although it stated that it was a strategy for an influenza pandemic, the intent was for it to be sufficiently flexible and adaptable for use in the event of other pandemics. Sir Patrick Vallance suggested: “[I]t’s not about trying to end up with highly specific responses in the back pocket all ready for every single eventuality. That’s not possible. But there are generic capabilities which are important across the piece.”


A more broadly based and comprehensive strategy, which assessed a range of potential scenarios and set out a range of measured countermeasures, would have been better able to prevent the spread of a dangerous disease rather than merely mitigate its effects.


An effective strategy should cover:

  • analysis of the costs and benefits of a range of interventions; 

  • modelling of the impacts of responses over the short, medium and long term; 

  • explicit recognition of the trade-offs; 

  • assessment of the impact of the responses on different groups; and 

  • consideration of the totality of the intervention and its potential side effects.


Former Chancellor George Osborne made the point that “there’s no point having a contingency plan you can’t pay for” and that you need your “finances to flex in a crisis”.


There appears not to have been any formal system, nor any direct ministerial oversight, for ensuring that a document as important as the 2011 Strategy was subject to this sort of review.


When the pandemic struck, the UK government did not adapt the 2011 Strategy. The doctrine that underpinned it (ie to respond to the emergency rather than prevent it from happening) was effectively abandoned. Mr Hancock described it as “woefully inadequate”.

For futurists, this is shocking.  The whole point of a futures exercise is to provide a range of contingency plans that can be picked up, adapted and implemented rapidly. All organisations should be aiming for this. 


LEARNING EXERCISES 

There was a failure to learn sufficiently from the past and from the range of exercises that had been carried out. Over a dozen exercises, some covering only individual nations of the UK, were carried out between 2003 and 2019.


The most significant exercise, of considerable scale (involving 950 people over three days), was Exercise Cygnus in 2016. It concluded that “[T]he UK’s preparedness and response, in terms of its plans, policies and capability, is currently not sufficient to cope with the extreme demands of a severe pandemic that will have a nation-wide impact across all sectors”.

The lessons that were learned were not sufficiently shared and debated. In many cases, learning and recommendations, while nominally recorded in documentation, were simply not acted upon or were forgotten. As the Report puts it, “[Reports] became an end in themselves rather than a means of learning lessons for the future”, while “planning guidance was insufficiently robust and flexible, and policy documentation was outdated, unnecessarily bureaucratic and infected by jargon.”


Causes of inaction 

A number of workstreams for pandemic preparedness were paused due to a reallocation of resources to Operation Yellowhammer (planning for a ‘no-deal’ Brexit). The UK government’s preparedness and resilience system was, quite evidently, under constant strain. It was reliant on stopping work on preparing for one potential emergency to concentrate on another. Yet the trend is for there to be more complex and concurrent risks. 

A second cause was the failure of institutions to identify and accurately describe the underlying problems, compounded by the use of jargon and euphemism to disguise, for example, tasks that had not been completed.


A third cause of inaction was the lack of institutional memory. This is often caused by frequent and rapid changes in personnel – “churn in the system” or a “revolving door” of decision-makers – and, as a consequence, a loss of experience and knowledge. It is not a problem unique to government; it is one faced by all major institutions.


GROUPTHINK, CERTAINTY AND CHALLENGES

Many witnesses who gave evidence to the Inquiry blamed ‘groupthink’. There was often a lack of sufficient freedom and autonomy to express dissenting views and a lack of significant external oversight and challenge. 


The Report notes: “It is necessary to understand how and why it happens and what can be done to remedy it. The dynamics of being part of a group may explicitly or implicitly encourage consensus and discourage internal challenge to consider alternatives. This may result in irrational or poor decision-making. However, consensus by itself is not necessarily a bad thing, provided there is adequate discussion before a consensus is reached and provided it remains open to being challenged.”


But in many ways the consensus was wrong. We’ve already discussed planning only for an influenza pandemic and not planning to mitigate or suppress its spread. 

One fundamental error was a failure to address uncertainty. An integral part of any advice given by scientists should be an acknowledgement of its inherent uncertainty. Yet many of the scientific advisory groups provided advice to decision-makers as a “consensus” view.


Michael Gove admitted: “[W]e seek certainty but it’s often elusive, and it would be better if politicians and decision-makers were to say, ‘Tell me about the debate, what is the lead option within the academic community here, but what also are the alternatives?’”

Ending the culture of consensus and orthodoxy is necessary so that decision-makers have access to the full range of opinion and are exposed to the uncertainties in the advice they are given.


Oliver Letwin, reflecting on the fact that Ministers were amateurs compared with their scientific advisers, said: “I think that I should have said to myself, in retrospect, not, ‘Are all these experts wrong?’ but, ‘Have they asked the right questions?’ Because that is something an amateur can do. Perhaps only an amateur can do that. In a sense you have to be outside to the system, I think, to a degree, to be able to ask that question.”

How expert advisers are used is also important. They felt that their agendas were “filled” with tasks set by ministers and officials. As a consequence, they did not have the time to consider “the unexpected”. 


Chris Whitty suggested an “80/20 rule”, where most of the time (80%) should be spent on things the government has asked about, but a significant minority (20%) should be spent on things the government has not asked about. Scientific advisers ought to be “licensed dissidents” with a general remit to provide scientific opinion and challenge throughout the system.


Red teams

The Report looks favourably on the concept of “red teams” – external experts with experience in a range of relevant backgrounds, including scientific, economic and social disciplines, with the licence to challenge the orthodoxy. These groups of people sit outside the advisory and decision-making structures.


First, they are less susceptible to any ‘groupthink’, optimism bias, received wisdom or assumed orthodoxy that can grow within those institutions over time. Secondly, their independence makes them better placed to ask difficult questions and challenge biases, without the defensiveness or fear of personal or professional repercussions that can exist among colleagues working within the same institution.


The distinct structure of independent red teams makes them well suited to carrying out a number of different roles to help improve decision-making, such as: 

  • assessing the strength of the evidence base; 

  • challenging assumptions and beliefs; 

  • considering the perspectives of those enacting or impacted by a plan; 

  • identifying flaws in logic; 

  • widening the scope of enquiry; 

  • stress-testing advice and plans; 

  • identifying how the current approach might fail; and 

  • identifying different options and alternatives.


The use of red teams should also stimulate a change in culture, as it will be known that decisions may have to be justified in a ‘red teaming’ exercise. Access to a wide range of high-quality, competing advice and regular, external, independent input will make the system better equipped to prepare for and build resilience to a pandemic in the future.


SAMI THOUGHTS

The Report does an excellent job of highlighting many of the failures and challenges of the preparedness and resilience system. There are many lessons here that all organisations can learn from. From SAMI’s standpoint, though, there are a number of key issues that the Report does not address strongly enough.


Prediction

The Report repeatedly emphasises that the planning was for an influenza pandemic – which was not what the country faced. We feel it is wrong to over-emphasise the accuracy (or otherwise) of predictions. An influenza pandemic was a perfectly reasonable scenario, probably the most likely. The failure lay in not being open-minded enough to consider other scenarios as well. But, as Sir Patrick Vallance said, it is not possible to plan for every eventuality.


Monitoring and flexibility

Contributing to that failure is a lack of responsiveness to a changing world. It may be that Module 2 of the Inquiry addresses this issue, but our view is that a plan should not be a fixed, unchanging edifice. Instead, it should identify important “trigger points” that indicate a change of direction in the environment, supported by an extensive monitoring system tracking those issues, so that once a trigger is observed a renewed effort considers whether actions should change – whether contingency plans should be enacted. We call this “Adaptive Planning”.


In the case of the Covid-19 pandemic, trigger points would have included noting that the Wuhan outbreak was not influenza but a different respiratory disease, and the nature of its spread (infection and death rates); further trigger points would have been its spread outside China and, certainly, its arrival on the ski slopes of Europe.


Fatalism vs mitigation

There is a philosophical challenge at the heart of scenario planning that the Report picks up on – and indeed it was a difference of perspective between Oliver Letwin and Matt Hancock.  Do we regard a scenario as a view of the future and try to work out what we would do should that future come about? Or do we regard it as a potential future which we should try to avoid or mitigate?


Perhaps the problem in the Covid-19 case was that the pandemic scenario put a figure on the number of deaths. That led to developing strategies to cope with that number of deaths – body bags, gaps in the labour force, avoiding widespread fear. Had the scenario instead been described only in terms of the disease characteristics – infection rates, death rates, asymptomatic transmission – the focus would have been on how to limit the spread and so minimise the number of deaths.


This is an important consideration to build into your scenario planning. Generally, it will be beneficial to look at both perspectives. 


Risk-agnostic resilience

As was pointed out earlier, it is not possible to have detailed plans to cope with every eventuality. But there are things one can do to build resilience to a range of shocks. 


Largely these come under the heading of capability and capacity: 

  • Where can we find resources to use to address the challenge?

  • Are there skills we will need to be able to access? Where will we find them? Can we build them?

  • Is our decision-making system sufficiently agile to react rapidly enough?

  • Can we build strengths into the system (eg better health) beforehand?

  • Do we understand the trade-offs we may have to make?

  • Are there standard responses (eg suspending elective surgery) that we can draw on? Possibly a menu of options or set of building blocks?


Clearly the Covid-19 pandemic was an immense challenge, far beyond the scenarios most organisations may have to face. Many of the Report’s recommendations are specific to Government and to pandemics. But many of the other issues highlighted do apply more widely and represent important lessons we can all learn from.


Written by Huw Williams, SAMI Principal


The views expressed are those of the author(s) and not necessarily of SAMI Consulting.

Achieve more by understanding what the future may bring. We bring skills developed over thirty years of international and national projects to create actionable, transformative strategy. Futures, foresight and scenario planning to make robust decisions in uncertain times.


Find out more at www.samiconsulting.co.uk.


Image by WOKANDAPIX from Pixabay


