How should we talk about Climate Models?
This year’s Hot Topic, Climate Change, was a wild ride: an intense three and a half hours of compact talks, each with a different perspective on the problem. The reliability of our climate models was one of the threads woven through the whole afternoon. For me as a science communicator, it was an interesting exercise to listen to the speakers talk about this, and to think about what it meant for communicating climate change to the public – an issue that had its own slot near the end of the Climate Change marathon, with the talk by Jennifer Marlon of the Yale Program on Climate Change Communication.
Models are always approximations, simplifications. Tim Palmer showed this very clearly in his talk, separating the resolved scales of such models (such as energy flows between large regions) from the unresolved scales, whose properties are then described summarily, using model parameters introduced for the purpose.
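To make that split concrete, here is a minimal sketch of my own (an illustration, not anything shown in the talk): a fine-grained 1-D field is coarse-grained onto a few blocks. The block means play the role of the resolved scales – what the coarse model actually carries – while the within-block variability is "unresolved" and gets summarized by a single parameter per block, here its standard deviation. The field itself and the block size are arbitrary choices for illustration.

```python
import math

# An arbitrary fine-scale field: a smooth large-scale signal plus
# small-scale wiggles the coarse grid cannot resolve.
fine = [math.sin(0.1 * i) + 0.2 * math.sin(2.7 * i) for i in range(100)]
block = 25  # coarse-grid cell size (illustrative)

resolved, unresolved_param = [], []
for start in range(0, len(fine), block):
    cell = fine[start:start + block]
    mean = sum(cell) / len(cell)
    var = sum((v - mean) ** 2 for v in cell) / len(cell)
    resolved.append(mean)                     # the resolved scale: the block average
    unresolved_param.append(math.sqrt(var))   # the unresolved scales, summarized

print("resolved means:", [round(m, 3) for m in resolved])
print("subgrid parameters:", [round(s, 3) for s in unresolved_param])
```

Real parameterizations are of course far more sophisticated, but the structure is the same: the model evolves the block-level quantities, and everything below the grid scale enters only through summary parameters.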
How not to talk about models
It follows immediately how we should not talk about models. We should not – but hey, it’s more fun to do it the other way around, no? Imagine you’re a spin doctor, and your task is to sow doubt about climate change. If you want to manipulate your audience, assuming that they do not know much about models, the basic simplification property shared by all models is a godsend. Models are never 100% accurate; find the discrepancies (they are guaranteed to be there), and use them to call the model “unreliable”. Models are by definition simplified; list the simplifications and ask the question (whether rhetorical or not) whether such a model can even hope to come close to reality.
Conversely, we must be careful not to hype models. Models only show part of the picture. If there are parts we do not sufficiently understand, we as scientists must be open about that. A model of a physical system does not have the same absolute level of certainty as a mathematical proof.
How also not to talk about models
So far, so good. The problem: If we talk to the public about models in the same way we talk among ourselves, as scientists, there are bound to be misunderstandings. As Paul Edwards, who participated in part two of the Hot Topic, pointed out: when scientists talk about models in terms of uncertainty, many people understand something different than the scientific meaning of the word. In science, all measurements have uncertainties, and a measurement with small uncertainties can be very reliable and precise indeed. The mere fact that there are uncertainties does not invalidate the model or measurement as a whole. In everyday terms, uncertainty means something much more drastic: that we don’t really know, that it could be anything, that we can’t really tell.
Manfred Milinski from the Max Planck Institute for Evolutionary Biology, whose talk in the session had included some rather depressing psychological experiments on human behaviour (depressing that is if you had hoped humans would be smart enough in crisis situations like this), added that understanding probability was a general problem. Certainty, yes; impossibility, yes; everything in between: problematic.
How to talk about models – possibly
Wrong if we do, wrong if we don’t? At some level, there is a good, and scientifically sound, way of talking about models. Go deeper. Talk about what makes a model reliable, and what makes our understanding robust. We heard about this in the first half of the Hot Topic, where Chris Budd (University of Bath) talked about the mathematics of climate change, and in the second part, where Paul Edwards (Stanford University), took us on a tour of historical climate change predictions.
The upshot is that at least the general trend of warming due to CO2 emissions has been described quite consistently even by the simpler physical models. Have there been deviations? Of course, and if you’re the spin doctor from a few paragraphs back, you would find plenty of ammunition to mislead your audience. But the basic physics linking greenhouse gases and temperature changes, even in simple models (Chris Budd has some that high school students can understand!), makes for a general trend that fits the data quite well, and has done so for the past decades. For physicists, this should not come as a surprise. At its core, this is an energy budget problem: if more energy comes in than is radiated off into space, that energy will accumulate, and temperatures (which, after all, are a measure of average energy per degree of freedom) will rise.
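The energy-budget argument fits in a few lines of code. What follows is a toy zero-dimensional balance of my own – an illustration of the "energy in versus energy out" reasoning, not one of the models from the talks. Equilibrium requires absorbed sunlight to balance outgoing thermal radiation, (1 − albedo) · S/4 = ε · σ · T⁴; the effective emissivity ε below stands in, very crudely, for the greenhouse effect, and the value 0.61 is chosen simply because it reproduces roughly the observed mean surface temperature.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3      # fraction of sunlight reflected back to space

def equilibrium_temperature(emissivity):
    """Temperature at which absorbed sunlight equals emitted radiation."""
    absorbed = (1 - ALBEDO) * S / 4                    # averaged over the sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25   # solve eps*sigma*T^4 = absorbed

print(f"no greenhouse (eps = 1.0):    {equilibrium_temperature(1.0):.1f} K")
print(f"with greenhouse (eps = 0.61): {equilibrium_temperature(0.61):.1f} K")
```

Without a greenhouse effect the balance lands near 255 K, well below freezing; lowering the effective emissivity – letting less thermal radiation escape – pushes the equilibrium up toward the roughly 288 K we actually observe. That is the trend the simple models capture.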
Does that mean we currently know, and can predict, everything about our changing climate? Of course not, and as soon as we aim to understand and predict regional changes, to pick an obvious example, the models become significantly more complicated. Tim Palmer (University of Oxford) told us about the limitations of such models, the need for using supercomputers at a scale comparable to that of cosmological or particle physics simulations, and various ways of checking up on, and improving upon, such models by looking not at isolated simulations, but at whole ensembles with slightly different initial conditions.
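The ensemble idea can be demonstrated on a much smaller system than a climate model. Here is a sketch using the Lorenz-63 equations, the classic chaotic toy model (my choice of illustration; the actual ensemble systems Palmer described are vastly more elaborate): several ensemble members start from almost identical initial conditions, and the spread that develops between them is a measure of how much the initial uncertainty matters. The simple Euler integrator and the parameter values are illustrative choices.

```python
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz-63 equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def run_ensemble(n_members=5, n_steps=2000, perturbation=1e-3):
    """Integrate an ensemble whose members differ only by a tiny
    perturbation of the initial x-coordinate."""
    states = [(1.0 + i * perturbation, 1.0, 1.0) for i in range(n_members)]
    for _ in range(n_steps):
        states = [lorenz_step(*s) for s in states]
    return states

states = run_ensemble()
xs = [s[0] for s in states]
spread = max(xs) - min(xs)
print(f"ensemble x-spread after integration: {spread:.3f}")
```

The members begin within a few thousandths of each other and end up scattered across the attractor: individual trajectories are unpredictable, but the ensemble as a whole tells you how confident (or not) to be in a forecast.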
In talking sensibly about models, we must separate those issues of more complex models from the underlying general physics. Specific details are likely to change as our models improve. Regional predictions will become more reliable as simulations become more detailed. But the basic physics of anthropogenic climate change isn’t all that complicated, can be understood with comparatively simple models, and has been modeled quite consistently, compared with the data, over the past decades. When it comes to anthropogenic climate change, the upshot is encapsulated in the ten-word version Jennifer Marlon showed us: It’s real. It’s bad. It’s us. Scientists agree. There’s hope.
Time to do something about it.
A few links for further reading: