Here's why we need Covid models, even if they are controversial | Adam Kucharski

  • 11/11/2020

What is the current level of Covid-19 transmission in the UK? This might sound like a straightforward question, but take a moment to think about what it’s actually asking. Are we interested in how many people will catch the virus from those currently infected? If so, this transmission hasn’t finished yet, which means we’d have to try to predict it somehow. Or maybe the question implies we look backwards, and work out how much transmission there was when people who are currently infected originally picked up the virus. If so, what data should we use to do this? And how confident can we be in the results?

These are the sorts of questions that epidemiologists around the world have been grappling with since January, against a background of changing transmission and uncertain knowledge. During the pandemic, scientists have been routinely criticised for warning that rising infections could lead to large numbers of hospital admissions and deaths, or for revising their estimates as the epidemic changes path. In Europe, a summer of low Covid-19 case numbers also amplified scepticism about control measures and the possibility of another epidemic.

But the challenge is that Covid-19 data – like all epidemic data – is incomplete and delayed. In late March, after the UK government introduced strict lockdown measures, it was initially difficult to tell whether these dramatic, disruptive changes had tipped the rising epidemic into decline. Because of stretched testing capacity in the UK, fewer and fewer infections were showing up in the data as the epidemic grew. What’s more, the Covid-19 cases reported in late March resulted from infections that had initially happened a week or two earlier, before lockdown. This delay was even longer for intensive care admissions and deaths, which reflected transmission in the groups most at risk; the impact of control measures wouldn’t show up in these data sources until weeks later.
Given the delays in Covid-19 data, my colleagues ran a survey of social contacts in late March to try to estimate whether interactions had changed enough to bring transmission down. They found that contacts had declined by almost 75% relative to normal patterns, which implied the average number of new infections caused by a typical case (ie the reproduction number, R) had fallen to somewhere between 0.4 and 0.9. Although they couldn’t confirm the exact value of R, they were confident it was below the crucial value of 1. These estimates would gradually become clearer in April, as signals of the decline appeared in delayed data sources, such as the numbers of cases and hospital admissions.

If data is imperfect and answers are needed quickly, it can help to have the “wisdom of the crowd”. Since the start of the pandemic, academics like me have regularly contributed analysis – in an independent, unpaid capacity – to government advisory groups, such as the Scientific Pandemic Influenza Group on Modelling (SPI-M) in the UK. One of the strengths of bringing together multiple modelling teams is that it’s possible to combine different outlooks and datasets that can tackle a problem in complementary ways. This “ensemble” approach can produce insights that are more reliable than any one individual model. Such methods have long been common in weather forecasting: to predict the path of a hurricane, meteorological agencies use a large number of models, each of which may generate a slightly different result. Some might overshoot the true path, while others undershoot; combined estimates can therefore give a better idea of the uncertainty involved and likely overall trajectory. Similar ensemble approaches have been successful in modelling influenza and Ebola epidemics, and now Covid-19.
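The logic behind the contact-survey estimate can be sketched in a few lines of code. This is a deliberately crude illustration, not the method the survey team actually used: it assumes R scales in simple proportion to contact rates, and the pre-lockdown R0 values below are assumptions for illustration, not figures from the article.

```python
def scaled_r(r0: float, contact_reduction: float) -> float:
    """Crude proportional-scaling estimate: if contacts fall by a given
    fraction and transmission scales with contacts, R falls by the same
    fraction. Real estimates account for who mixes with whom, and more."""
    return r0 * (1 - contact_reduction)

# Hypothetical pre-lockdown R0 values (assumed, not from the article),
# combined with the reported ~75% drop in contacts.
for r0 in (2.0, 2.5, 3.0):
    print(f"R0 = {r0:.1f} -> post-lockdown R ~ {scaled_r(r0, 0.75):.2f}")
```

Even this toy calculation lands in roughly the 0.4–0.9 range the survey implied, which is why a large, well-measured drop in contacts can give confidence that R is below 1 before case data catches up.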
When weekly R estimates are published in the UK, the numbers are the statistical “consensus” of analysis from multiple groups, spanning a range of different methods and data sources, from social behaviour to hospital admissions. If teams agree, it can provide more confidence in the consensus estimates; if they disagree, then the resulting discussions often reveal something useful about the underlying data or epidemic dynamics. Several research teams – both in the UK and elsewhere – also display their estimates on public dashboards, making it possible to compare how different approaches influence results.

When dealing with uncertainty, though, it’s important to distinguish between specific results and overall conclusions. Models might disagree on the exact number of hospital admissions there could be if transmission remains the same, for example, but still agree on the implications of these results – such as an imminently overwhelmed health system. In the middle of an epidemic, drawing a broad conclusion now can be far more useful than a precise estimate later. Calls to wait for more data overlook the urgency of a disease such as Covid-19. “All scientific work is liable to be upset or modified by advancing knowledge,” as Austin Bradford Hill, who established the link between smoking and lung cancer in the 1950s, once put it. “That does not confer upon us a freedom to ignore the knowledge we already have, or to postpone the action that it appears to demand at a given time.”

Unlike a hurricane, governments can change the path of a Covid-19 epidemic through control measures. This is why disease modellers typically look at “what if?” scenarios, such as what could happen if more restrictions are (or aren’t) introduced, rather than trying to guess the future decisions of policymakers. But when R is already near 1, things are particularly tricky, because epidemic trajectories can be very sensitive to small changes.
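The idea of a statistical consensus across teams can be illustrated with a toy ensemble. The team names and R values below are invented for illustration; the real SPI-M consensus process combines full estimate distributions, not single numbers, but the principle of summarising agreement and spread is the same.

```python
import statistics

# Toy ensemble: one central R estimate per modelling approach.
# Values are illustrative, not real SPI-M submissions.
team_estimates = {
    "contact-survey model": 1.2,
    "hospital-admissions model": 1.4,
    "case-counts model": 1.1,
    "deaths-based model": 1.3,
}

values = sorted(team_estimates.values())
consensus = statistics.median(values)   # robust central summary
low, high = values[0], values[-1]       # spread across approaches
print(f"consensus R ~ {consensus:.2f} (range {low:.1f} to {high:.1f})")
```

If all four toy estimates clustered tightly, that would add confidence; a wide range would instead prompt the kind of discussion the article describes about what the underlying data sources are each capturing.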
A slight reduction in transmission from newly cautious behaviour – one that might have had little noticeable effect when R was larger – could tip the epidemic into decline, just as virus-friendly winter conditions could push the situation back into growth.

When estimating the effect of control measures, there’s also the question of school and work rhythms. For example, when did UK schools close in March? The obvious answer is Friday 20th. But schools are always closed at weekends. From an epidemiological point of view, we wouldn’t have expected the closures to have any effect on transmission until Monday 23rd, three days later. Given the patchiness of Covid-19 data in March 2020, it’s unlikely we’ll ever have a definitive answer about exactly how and when transmission changed.

Fortunately, the autumn picture is clearer. As well as routine studies such as those run by the Office for National Statistics (ONS) and Imperial College London, which track infections even if people currently feel well, there is the Zoe symptom app and wider population testing for those with Covid-19. This means we have far more warning about what’s coming; current hospital admissions and deaths are the result of a growing Covid-19 epidemic that was flagged in these four data sources in September. Likewise, the signals we are now seeing mostly reflect changes in transmission that happened in October. Following a tightening of measures, R estimates have fallen below 1 in Northern Ireland and are near 1 in Scotland and Wales. In England, last week’s consensus R estimates were still above 1 in many areas, but there are indications in the ONS study that the speed of epidemic growth slowed during late October, while smartphone mobility data suggests that the 5 November measures have substantially changed behaviour.

Which brings us back to the question: what is the current level of coronavirus transmission in the UK? In a few weeks, the answer will be obvious.
But we can’t rely on the promise of future certainty when there is an epidemic to deal with right now.

• Adam Kucharski is an associate professor in infectious disease epidemiology at the London School of Hygiene & Tropical Medicine and author of The Rules of Contagion: Why Things Spread – and Why They Stop
