The mathematics of disease: A discussion about epidemic modeling in the pandemic era

BLOG: Heidelberg Laureate Forum

Hot Topic panel discussion at the 8th HLF: Amrish Baidjoe, Sheetal Silal, Sebastian Funk, Julia Fitzner and Martin Enserink.

“All models are wrong, but some models are useful,” epidemiologists sometimes say. Their old aphorism has certainly been proven right during the COVID-19 pandemic. Mathematical models of the spread of the coronavirus have often been off, sometimes spectacularly so, and have contradicted each other, which has not helped instill confidence in their value.

But they have also provided valuable insights about the nature of the virus, where the pandemic might be headed, and what the impact of control measures could be. Policy makers have used them as guides during momentous decisions, such as whether to close borders or shut down entire societies. Rarely have these sets of mathematical equations had so much direct impact on the lives of so many as in the past 18 months—and never have so many people become at least somewhat familiar with them.

All of that made the Hot Topic of the 8th Heidelberg Laureate Forum an excellent opportunity to discuss models of infectious disease with scientists who produce them for a living and people who use them. Fittingly for the uncertain phase in the pandemic, it was a hybrid session, with two speakers live on the stage in Heidelberg and two others joining remotely. That didn’t make their meeting any less lively. At the end of the 90-minute session, all four participants were available for additional questions, each in their own ‘fishbowl group,’ an opportunity the young scientists attending the forum made good use of.

In his kick-off to the session, outbreak modeler Sebastian Funk gave a quick overview of the field. Funk, a professor of Infectious Disease Dynamics at the London School of Hygiene & Tropical Medicine and a Senior Research Fellow at the Wellcome Trust, defined outbreak models as “a tool to combine data (what we know) with theory (what we think) to learn about what we don’t know.” He went on to describe the basics of so-called SIR models, which break down a population into three groups—Susceptible, Infected, and Recovered—with people moving from one group to another.
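The mechanics Funk described can be made concrete in a few lines of code. The sketch below is a minimal SIR model with a simple forward-Euler time step; the transmission rate `beta` and recovery rate `gamma` are illustrative values chosen for this example, not figures from the talk.

```python
# Minimal SIR model with a forward-Euler time step.
# beta (transmission rate) and gamma (recovery rate) are
# illustrative assumptions, not parameters from the session.

def sir(s0, i0, r0, beta=0.3, gamma=0.1, days=160):
    """Return daily S, I, R trajectories for a closed population."""
    S, I, R = [float(s0)], [float(i0)], [float(r0)]
    n = s0 + i0 + r0
    for _ in range(days):
        new_infections = beta * S[-1] * I[-1] / n  # S -> I
        new_recoveries = gamma * I[-1]             # I -> R
        S.append(S[-1] - new_infections)
        I.append(I[-1] + new_infections - new_recoveries)
        R.append(R[-1] + new_recoveries)
    return S, I, R

S, I, R = sir(s0=999_000, i0=1_000, r0=0)
print(f"Peak infections: {max(I):,.0f} on day {I.index(max(I))}")
```

With these parameters the model produces the familiar epidemic curve: infections rise, peak once enough people have left the susceptible group, and then decline.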

Such models can forecast what we all want to know: how an epidemic will unfold. But there are other uses as well, Funk explained. Early on in the COVID-19 pandemic, when we didn’t know much more than that Wuhan was seeing a massive outbreak of a new coronavirus, he and his colleagues combined data from Wuhan and reports of infected travelers elsewhere in the world in a model that helped estimate SARS-CoV-2’s reproductive number (R), a measure of infectiousness. The study, published in The Lancet Infectious Diseases on March 11, 2020, made clear that there was an important risk that the virus would spread elsewhere once it was introduced—in other words, that a pandemic might be inevitable.
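One back-of-the-envelope way such an estimate can be approached (a sketch only, not the method of the Lancet study, which drew on much richer travel data) is to fit an exponential growth rate to early case counts and convert it to R under an assumed generation interval:

```python
import math

def growth_rate(cases):
    """Least-squares slope of log(daily cases): the exponential growth rate r."""
    logs = [math.log(c) for c in cases]
    n = len(logs)
    xbar = (n - 1) / 2
    ybar = sum(logs) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(logs))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

# Hypothetical early case counts, invented for illustration:
cases = [10, 12, 14, 16, 19, 23, 27, 32]
r = growth_rate(cases)
T = 5.0                 # assumed mean generation interval in days (illustrative)
R = math.exp(r * T)     # R under a fixed generation-interval assumption
print(f"growth rate r = {r:.3f}/day, implied R ~ {R:.2f}")
```

Real estimates account for the full generation-interval distribution, reporting delays, and under-ascertainment, which is why published values come with wide uncertainty ranges.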

Another way to use models, Funk explained, is to explore the consequences of various scenarios. Once COVID-19 vaccines arrived on the scene, for example, it was unclear how fast they could be rolled out. So researchers in the United Kingdom made projections based on various assumptions, to give policy makers an idea of how fast the epidemic might be brought under control as people lined up for their shots.
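A toy version of that kind of scenario exercise can be built by adding vaccination to an SIR structure. Every number below (population size, rates, rollout speeds) is invented for illustration and is not drawn from the UK projections:

```python
def peak_infections(n=1_000_000, i0=1_000, beta=0.25, gamma=0.1,
                    vax_per_day=0, days=365):
    """SIR with a fixed number of susceptibles vaccinated per day (moved S -> R).
    All parameter values are illustrative assumptions."""
    s, i, r = float(n - i0), float(i0), 0.0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        vac = min(vax_per_day, max(s - new_inf, 0.0))  # can't exceed remaining S
        s -= new_inf + vac
        i += new_inf - new_rec
        r += new_rec + vac
        peak = max(peak, i)
    return peak

for rate in (0, 5_000, 20_000):
    print(f"{rate:>6} doses/day -> peak infections ~ {peak_infections(vax_per_day=rate):,.0f}")
```

Run across a range of rollout speeds, the qualitative pattern is clear: faster rollouts produce lower epidemic peaks, which is the kind of comparison policy makers were shown.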

The second speaker, Sheetal Silal—also a modeler—emphasized that in addition to computer savvy and mathematical skills, outbreak modelers need something else: a good eye for what she called “context, culture and diversity.” Silal is director of the Modelling and Simulation Hub, Africa (MASHA) and an associate professor in the department of statistical sciences at the University of Cape Town.

To model a disease, she said, “you need to do quite a bit more than you can learn from a textbook” —including reading policy documents and monitoring and evaluation reports; trying to understand how well control measures were implemented in the past; and finding out what the local health system looks like and how people use it. Take insecticide-treated bednets, which have been distributed on a massive scale in Africa to help control malaria. To model their impact, it’s important to understand whether people actually use the nets, and use them correctly, Silal explained.
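The kind of adjustment Silal described can be illustrated with a one-line calculation. All numbers below are made-up assumptions for the sake of the example, not field estimates:

```python
# How net coverage and correct use could discount transmission.
beta = 0.30           # baseline transmission rate (assumed)
coverage = 0.8        # fraction of households that received a net (assumed)
used_correctly = 0.6  # fraction of recipients using nets properly (assumed)
efficacy = 0.5        # transmission reduction when a net is used correctly (assumed)

beta_effective = beta * (1 - coverage * used_correctly * efficacy)
print(round(beta_effective, 3))  # -> 0.228
```

If a modeler instead assumed everyone used the nets correctly, the same calculation would give 0.3 × (1 − 0.8 × 0.5) = 0.18, a noticeably larger estimated impact. That gap between assumed and actual use is exactly the contextual knowledge Silal says cannot be learned from a textbook.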

She stressed another word: empathy. Modelers need to empathize with the people whose diseases the models describe because “they are the ultimate end user,” she said. But they also have to empathize with the policy makers who use models to make decisions. Models are useful only if their outcomes are presented “objectively, honestly, and with humility,” she said, and in a way that non-experts can understand. That’s why Silal emphasized the importance of developing tools such as interactive dashboards, maps, and crystal-clear graphics.

Julia Fitzner, the third speaker, is a ‘consumer’ of outbreak models—and she knows first-hand that they can sometimes create confusion. Fitzner sits at the apex of the global health system, the headquarters of the World Health Organization (WHO) in Geneva, where she is the Team Lead for data and analysis in the Global Influenza Programme. She was involved in the response to many outbreaks and epidemics, including yellow fever, SARS, the influenza pandemic of 2009, and the coronavirus pandemic.

Fitzner started off her talk by giving the audience a taste of the staggering amount of data that is being collected day in, day out about COVID-19, such as cases, hospitalizations and deaths in every country, control measures, vaccine coverage, and genomic data about the virus.

Those data find their way into models that often produce very different outcomes, “which is then really difficult to grasp,” Fitzner said. In order to understand why models reach the conclusions that they do, “we need to understand the data, the assumptions, and the process of how the results have been achieved,” she said. But such discussions often do not take place. That is one reason the WHO recently opened a new hub for pandemic and epidemic intelligence in Berlin. The agency aims for “better data, better analytics, better decisions,” Fitzner said.

The fourth and final speaker, Amrish Baidjoe, set out to play devil’s advocate, as the title of his talk (“Epidemic modelling: helpful tool or distraction? A humanitarian perspective”) made clear. Baidjoe is a field epidemiologist and microbiologist who has worked on disease outbreaks in refugee camps and other crisis situations. He recently became director of the LuxOR Operational Research Unit of Médecins Sans Frontières (MSF), also known as Doctors without Borders.

Baidjoe was blunt: From a humanitarian perspective, models often aren’t very helpful. He presented a 2020 study that looked at the potential for COVID-19 spread in the big refugee camps for Rohingya people in Bangladesh, which concluded that an epidemic might be really bad and that “novel strategies” were needed to fight it. “None of this was actually new to us or contributed anything to what we were doing,” Baidjoe said. (He also took offense at the fact that none of the authors were from Bangladesh, even though that country has the necessary expertise, and that none of them had worked in the camps.) Similarly, a model that tried to predict cholera outbreaks using climate data and artificial intelligence does not help organizations like MSF, because it still can’t pinpoint exactly where such outbreaks will happen, Baidjoe said.

The discussion that followed brought various other issues to the fore, including trust in epidemiological models. With some having been so wrong, Silal said she could understand why people might get confused or lose confidence. Better communication might help, she said: “What we have learned during the pandemic is that modelers almost need a communication team behind them.” More resources could help people understand, for example, the difference between predictions, projections, and scenarios.

Funk agreed but said modelers receive little training in communication. “I learned the hard way—by being misunderstood enough that you learn to add enough notes of caution,” he said. “It’s on us to become better at communication, to also explain the limitations and assumptions that go into the models.”

Asked by one of the young scientists in the audience which areas of computer science are most helpful for epidemiologists, Silal mentioned the skill to process large sets of data, such as mobile phone data, and the ability to optimize computer code so that it runs more quickly and can be altered with ease. She added that another practical skill very much in demand is producing epidemiological dashboards and other tools to present data clearly.

Funk agreed and said he hopes those skills will become more widely used in low- and middle-income countries as well. “Modelling capacity is hugely concentrated in rich countries,” he said. “There is a huge scope for more general tools and documentation of those tools, to boost capacity around the world.”

Funk and Silal both said that modelers face the problem that many of the things they do are not valued in the academic system, where careers are built on publishing papers, preferably in high-impact journals. Advising politicians, going on television to explain what the next pandemic winter might look like, or producing an easy-to-use online dashboard are generally not rewarded. Although she has been busier than ever, Silal said she published very little over the past year and a half.

Studio in the New University in Heidelberg during the 8th HLF.

“The academic model is an obstacle and it really needs to change,” Funk said. “The incentive is to publish that one high-impact paper and move on to the next thing. It’s really not sustainable.” Fitzner added that the system in fact does harm when scientists withhold information that should be shared immediately just so they can first publish a paper.

At the end of the session, the panelists discussed what it takes for a scientist to become an epidemic modeler. “Not much more than an open mind, a willingness to listen,” Funk said, “being able to learn what the key concepts are in another field of science.” Fitzner said that—beyond a good grasp of data, math, and computing—modelers need to be willing to think about the context, and be open and transparent, in order to be able to build trust.

Baidjoe stressed humility as well. “You need a proper understanding that whatever dataset you are working on is a small subset of reality,” he said. “You need to think in limitations.”

Silal, who had the last word, added the word integrity. “When you’re operating in the face of the media, in the face of powerful members of the government, all you have is your integrity,” she concluded. “That should really be at the forefront of any modeling endeavor.”

Watch the Hot Topic panel discussion on the HLF YouTube channel.

Posted by

Martin Enserink is International news editor at Science magazine. His writing focuses on infectious diseases, global health, research policy, and scientific integrity. He has covered outbreaks and epidemics around the world and won multiple journalism awards.

1 comment

  1. Quote:

    “You need a proper understanding that whatever dataset you are working on is a small subset of reality,”

    Yes, a trusted data-driven epidemiological model begins with data collection, an event timeline, and understanding and assessing the context in which the data is generated.

    The standardization of data collection and data annotation should improve all further steps in epidemiological modeling.
