‘The science’ of prediction
In recent human memory, there has been no pandemic approaching the scale of Covid-19. This meant there was extremely little evidence for policymakers to draw on when designing their initial policy responses. They also had no idea how bad outcomes might get for their populations: a huge amount of uncertainty to work under.
Policymakers looked to best estimates from disease modellers to provide at least some information they could begin to work with. The Ferguson et al. report (published on Monday 16th March 2020) predicted 510,000 deaths in the UK and 2.2 million in the US before the end of October 2020 in the absence of any “control measures or spontaneous changes in individual behaviour”. Most of these deaths were predicted to occur before the end of August 2020, within around five months. The report also predicted that control measures would “need to be maintained until a vaccine becomes available (potentially 18 months or more)”, making it clear early on that whatever was implemented would be around for a substantial period of time, through self-fulfilling prophecy if nothing else.
‘The science’
This prediction modelling quickly became ‘the science’ for politicians like Boris Johnson and others around the world, who wanted to make clear that they would follow ‘the science’ by surrounding themselves with doctors and scientists at regular televised public updates.
Now, to be clear, predictive modelling is of course a form of science. Particularly in the earliest stages of the pandemic I can certainly understand why it played such a huge role in dictating policy - we didn’t really know what the disease was capable of, or the extent to which containment measures could have any effect on a new respiratory virus like this one. Predictive modelling, arguably, also has some role to play in nearly all policymaking: policymakers are almost always looking forward at how to deal with new (or worsening), pressing problems. They don’t have time to wait for a long randomised controlled trial (RCT) or real-world pilot test of their policies before they make an initial implementation decision.
However, like any other form of science, predictive modelling is not without limitations. Like any other science, it cannot stand alone as a definitive answer to a policy problem. It should be considered as part of a body of evidence. And like any other science, the evidence it provides evolves over time, because the assumptions it makes are crucial to the answers it gives. Basically, there is no ‘the science’, particularly when something new comes along. There is only some current consensus based on current evidence, replaced by a new (sometimes drastically different – the world is no longer flat) consensus reached on the updated evidence base.
The limitations of trying to predict the future
What differentiates prediction from other forms of science is the fact that it is future-centred. In essence, any predictive modelling is a slightly fancy mathematical form of trying to predict the future. And, as you probably know yourself, trying to predict the future is no easy task. It’s only feasible to predict with fairly high accuracy outcomes in the most simple of ‘systems’ and in the immediate future (if I drop a glass on the concrete floor, it will likely break), but it becomes increasingly difficult the more complex the system, and the further out in time you try to predict.
Weather forecasting, for instance, one of the most common and oldest forms of prediction modelling that we are all familiar with, can currently predict about 10 days into the future fairly accurately, having added about a day of predictive power each decade since the 1980s (although, if you live in the UK, it still feels like they can barely predict 10 minutes into the future). Evidence from just before the pandemic estimated that the theoretical maximum weather forecasters will ever be able to manage, though, is only around 14 to 15 days. Effectively, this is due to the ‘butterfly effect’ (see also chaos theory): even extremely small changes early in a chain of complex events can have substantial impacts by the end. Uncertainty compounds exponentially into the future, making it inherently impossible, beyond some point in time, to predict with any accuracy.
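The butterfly effect is easy to see in even the simplest chaotic system. The sketch below (purely illustrative - the logistic map, a textbook chaotic system, not an actual weather model) starts two simulations that differ by one part in a million and shows the gap between them growing until the two trajectories bear no relation to each other:

```python
# Illustrative only: the logistic map x -> r*x*(1-x) with r = 4 is a
# classic chaotic system. A one-in-a-million difference in the starting
# value compounds until the two runs are completely unrelated.

def logistic_trajectory(x0, r=4.0, steps=30):
    """Iterate the logistic map, returning the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # differs by 0.000001

for step in (0, 5, 15, 30):
    print(f"step {step:2d}: difference = {abs(a[step] - b[step]):.6f}")
```

The difference roughly doubles each step, so a microscopic initial error dominates the whole prediction within a few dozen steps - the same basic mechanism that caps weather forecasts at around two weeks.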
The choices of inputs for predictive models – what goes into the model, or, importantly, what does not go into the model – also matter a lot, perhaps more than in most other scientific methods. In particular, any system that involves human behaviours, like trying to predict the social interactions that might help/hinder the spread of a virus, is by its nature highly complex and involves a huge number of interacting predictor variables. This includes trying to predict the weather and whether we’re likely to want to be in/outdoors, for instance. If someone tells you that predicting a human system isn’t difficult/impossible, just ask to see their bank balance. They should be multi-billionaires (at least) by now with their amazing ability to accurately predict the stock market.
The reality of prediction in a pandemic
Beyond the very early stages of the pandemic, when pretty much every model parameter had to be effectively guesstimated, there have arguably been big gaps in terms of what went into Covid prediction models in general. For example, Professor Nigel Gilbert, a pioneer in the use of agent-based models (which aim to model complexity) in the social sciences, has highlighted that the epidemiological models almost completely neglected (or, very crudely modelled, at best) human behaviour, and, especially, had “almost no modelling of people’s reaction to the spread of Covid”.
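To see why that omission matters, here is a deliberately toy sketch (all parameters invented for illustration, not drawn from any published epidemiological model): a discrete-time SIR model where the contact rate falls as people observe rising infections - the kind of voluntary behavioural feedback Gilbert says was largely missing. Compare the peak with and without that feedback:

```python
# Hypothetical toy model, parameters chosen only for illustration.
# Discrete-time SIR where a 'fear' term scales contacts down as the
# infected share grows, mimicking voluntary behavioural response.

def sir_peak(days=200, beta0=0.3, gamma=0.1, fear=0.0):
    """Return the peak infected fraction; fear > 0 adds behavioural feedback."""
    s, i, r = 0.999, 0.001, 0.0
    peak = i
    for _ in range(days):
        beta = beta0 / (1 + fear * i)  # contacts drop as prevalence rises
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

print(f"peak infected, no behaviour change:       {sir_peak(fear=0):.3f}")
print(f"peak infected, strong voluntary response: {sir_peak(fear=50):.3f}")
```

A model that ignores the feedback projects a far larger peak than one that includes it - which is exactly why neglecting voluntary behaviour can make worst-case projections, and the apparent effect of imposed measures, look bigger than they were.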
It was this voluntary response which we, and others in the mounting empirical literature, hypothesise made certain policies more effective than others: for instance, targeting more ‘compulsory’ social contacts, in schools and workplaces, rather than imposing early lockdowns when people were already choosing to voluntarily limit the social contacts they had more control over.
What should we learn?
In sum, while prediction definitely has a place in scenario planning, policymakers (and everyone else) should learn to take all predictions of the future with a pinch of salt. Ultimately, in a highly complex system it’s not possible to predict with complete accuracy over a period of time that would actually be useful. Any scientific papers/media reporting on these studies should probably come with a clear warning of their major limitations, something like: “to be clear, this study is based on an attempt to predict the future over [X] days in a [very] complex system so is [very] likely to be wrong”. These papers should also very clearly, in lay language, detail the assumptions they are making and what is in (/not in) their model so that this can be debated more coherently and democratically, given the reality of the uncertainty.
Okay, so ‘the science’ was highly uncertain at the start of the pandemic. Surely, though, it’s better now and we know that these policies that were implemented did indeed ‘work’? We can at least be certain looking backwards at what has already happened, right? Hmm, not so fast…
P.S. For more on predicting the future and the psychology behind why its prevalence persists in society/the media (very basically, we humans really prefer certainty, even if it’s not ‘true’), I would highly recommend ‘Future Babble: Why Expert Predictions Fail and Why We Believe Them Anyway’ by Dan Gardner.