The post Why Dividing by Zero Is a Terrible Idea originally appeared on the HLFF SciLogs blog.

In the post, a disgruntled parent explained that their third-grade (roughly nine-year-old) child had been taught that one divided by zero is zero – a claim they were so outraged by that they felt the need to complain to the school. Given that the purpose of teachers is to guide a child’s education and help them to understand how the world works, it is certainly infuriating to learn that a teacher might be giving them wrong information. But why might someone get this wrong so easily – and what actually is the answer?

Firstly, it is worth acknowledging that not all teachers will be trained mathematicians, and they will not often be called upon to answer questions like this – but all it takes is a curious student thinking beyond the content of the lesson to put us somewhere unexpected.

The concepts involved in this particular question – “what is one divided by zero?” – all seem like familiar, simple, manageable ideas: the numbers one and zero, and the concept of division, which we usually learn at a younger age than nine. Who could possibly expect that just arranging them in this particular way would result in something that does not have a simple answer? Or indeed, any answer at all!

When learning about division at school, it is often framed in terms of sharing: If I have twelve sweets, and there are four people, we can share the sweets out equally by dividing the number of sweets by the number of people, to get the number of sweets each person gets (in this case, three). But if we change the number of people and share the sweets between three people instead, we get four each; between two people, there are six sweets each. And if we take the greedy option and share the sweets between one person (if such a thing even counts as sharing), that person would get all twelve sweets.

But beyond this point, the question becomes slightly meaningless. If I had zero people to share sweets between, I could give them any number of sweets each (including numbers that are more than the number of sweets I have) – because there are no people to give them to, I can perform the action of giving nobody some sweets as much as I like, and still have all the sweets I started with.

This starts to hint at the idea that dividing by zero does not work in the same way as dividing by other numbers does. This is partly because the process of division is something that we are used to being able to run in both directions: We say we can invert an operation like division, and by knowing how many sweets I ended up with and how many people have shared the sweets, I could calculate how many sweets were being shared out in the first place by multiplying these two numbers together.

But this breaks down entirely when we divide by zero. If I can invert the calculation 12 ÷ 4 = 3 by finding 4 × 3 = 12, I should be able to figure out what was originally divided by zero by multiplying each share by zero. But anything multiplied by zero is zero – as any third-grader would be able to tell you – so no matter what size of share we assign to each of the zero people, we cannot invert the process and find the original total.
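This failure of invertibility can be checked directly. Below is a minimal Python sketch (the candidate share sizes are arbitrary illustrations): no matter what value we propose for “twelve divided by zero”, multiplying it back by zero never recovers twelve.

```python
# Dividing 12 by 4 can be inverted: the quotient times the divisor
# recovers the original total.
assert 3 * 4 == 12

# Now try to invert "12 divided by 0": look for a share size q with q * 0 == 12.
candidates = [0, 1, 3, 12, 100, 10**6]  # illustrative guesses
recovered = [q for q in candidates if q * 0 == 12]

# No candidate works, because anything multiplied by zero is zero.
print(recovered)  # → []
```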

In general, we say that dividing by zero is not defined, or that it is not a permitted operation; calculators asked to divide by zero will sometimes return ERR (error) or NAN (not a number), since the value of anything divided by zero is undefined. Mathematicians will avoid dividing by zero, since it creates logical contradictions – many of the enjoyably puzzling ‘false proofs’ you can find, including the classic proof that 1 = 2, rely on a step which is equivalent to dividing by zero (but disguised using algebra, so you may not realise what is happening).
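The classic ‘proof’ that 1 = 2 runs roughly as follows (a standard illustration, reconstructed here): starting from \(a = b\),

\[
\begin{aligned}
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a+b)(a-b) &= b(a-b) \\
a + b &= b \\
2b &= b \\
2 &= 1.
\end{aligned}
\]

The fatal step is the fourth line: cancelling the factor \(a - b\) from both sides is division by \(a - b\), which equals zero precisely because \(a = b\).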

Following my appearance on the radio to explain this, a number of listeners contacted the show (sadly, the number was not zero) to complain that I was wrong, and that of course it is possible to assign a value to the result of one divided by zero: obviously, the answer is infinity. Despite my having carefully explained that we deliberately do not assign a value to the answer, many people insisted that I had just failed to explain it properly.

And I can understand why these people thought they were right – it is well known that the smaller the number you are dividing by, the larger the result. In our twelve sweets scenario, dividing by four people gave 3 each, by three people gave 4 each, by two people 6 each, and dividing by one person meant 12 sweets each – the number increases as the amount you are dividing by decreases.

We can even continue this beyond the point where our sweet analogy fails us – if we divide 12 by 0.5, which is less than one, we get 24: Dividing by a half is the same as multiplying by two. With this logic, dividing by 1/100 would give us 1200, and dividing by 1/1,000,000 – a millionth – would give us an answer of 12 million. It doesn’t matter what we start from: Whether we’re dividing 12 by something, or dividing 1 by something, we can keep playing this game as long as we want, and the smaller we make the number we divide by, the larger the answer we get.
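This runaway growth is easy to check numerically; here is a quick Python sketch using the divisors from the examples above (exact fractions are used so the printed quotients come out clean):

```python
from fractions import Fraction

# Divide 12 by ever-smaller positive numbers; exact fractions avoid
# floating-point rounding in the output.
for divisor in [Fraction(4), Fraction(1), Fraction(1, 2),
                Fraction(1, 100), Fraction(1, 1_000_000)]:
    print(f"12 / {divisor} = {12 / divisor}")
# Prints quotients 3, 12, 24, 1200 and 12000000 – ever larger as the divisor shrinks.
```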

Often in mathematics, we play this kind of game when we have a situation that will theoretically continue forever: if I wanted to show what would happen if I kept adding together all the numbers in an infinite series, as I wrote about here back in 2019, we can show that something will get to infinity, or converge to a particular value, without actually adding things up forever.

As long as you keep asking ‘can you make a number bigger than this?’, and I can keep suggesting something to divide by that satisfies your question, we can keep going: And as the number we divide by gets closer and closer to zero, it can be tempting to conclude that when it reaches zero, the answer will be infinity.

But this does not work as an answer, for multiple reasons; not least because infinity is, as is often stated, ‘not a number’ – the same ‘not a number’ your calculator is not prepared to display. But it is even worse than that: There is more than one way to get closer and closer to zero, and if you use a different approach, you get a different answer.

If you can wrap your head around dividing by a negative number, you should be happy with the idea that 12 divided by -3 is -4. (Again, invertibility helps here, since multiplying together -3 and -4 logically gives a positive answer of 12). And we can also do this for smaller and smaller negative numbers: we could divide by -0.5, -1/100 or -1/1,000,000, and the answers we get would be larger and larger; and, importantly, negative.

In the same way that our positive numbers are somehow approaching infinity, these numbers are approaching minus infinity. And even if you are so convinced of your correctness that you would write in to a radio show to correct a mathematician, you must surely agree that negative infinity is a long way away from being infinity, even though the numbers we are dividing by are also getting closer and closer to zero.
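The disagreement between the two directions can be seen in a few lines of Python (a minimal sketch; the divisor sequences are chosen purely for illustration):

```python
from fractions import Fraction

# Approach zero from the positive side: the quotients head towards +infinity.
positive = [int(12 / Fraction(1, 10**k)) for k in range(1, 5)]
print(positive)  # → [120, 1200, 12000, 120000]

# Approach zero from the negative side: the quotients head towards -infinity.
negative = [int(12 / Fraction(-1, 10**k)) for k in range(1, 5)]
print(negative)  # → [-120, -1200, -12000, -120000]

# The two one-sided trends never agree, so no single value can serve as 12 / 0.
```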

This idea, sometimes called ‘two-sided limits’, is a great way to understand why dividing by zero is a terrible idea: In order for something to be consistently defined as the limit of a process, it needs to work from both directions (in the case of numbers on a one-dimensional line, there are only two directions to check, but in higher-dimensional analysis, things get even more difficult).

While I would not expect the third-grade teacher to explain all of this to a nine-year-old – and I can appreciate how difficult it is to admit you do not know the answer to something, especially when you are in a position of authority – the teacher’s reaction on being queried could most certainly be described as unhelpful. The maths behind zero and division involves a lot of mysterious behaviour and unanswered questions, and understanding it can take you down a real rabbit hole – to minus infinity and beyond.


The post Math, Art, and Communication: How Mathematicians Engage the World originally appeared on the HLFF SciLogs blog.

Whether it is about vaccines, climate change, or political events, the sheer volume of misleading narratives threatens public health, undermines science, and destabilizes societies. In the age of information, it has never been more important for scientists to communicate accurately and clearly.

Yet this is easier said than done. Particularly in mathematics, typically seen as a solitary, abstract discipline, clear communication is rarely straightforward. At the 11th Heidelberg Laureate Forum this year, a panel of mathematicians demonstrated that math can also be a profoundly creative and interactive art form. Through hands-on workshops, sculptures, and even playwriting, the panel members showcased how their work makes mathematics more tangible, accessible, and engaging for broader audiences.

Moira Chas, an Associate Professor of Mathematics at Stony Brook University, has long been passionate about both mathematics and the arts. Her work in topology – one of the more abstract branches of mathematics – is excellently complemented by her artistic endeavors. For the duration of the Forum, some of her topological wire sculptures were on display, and she brought some with her on stage as well.

Topology is a branch of mathematics that studies the properties of shapes and spaces that remain unchanged under continuous transformations, such as stretching or bending, without tearing. Unlike plain geometry, which focuses on precise measurements, topology is more concerned with the fundamental characteristics of objects, like connectedness or the number of holes they have.

A classic example of a topological object is the Klein bottle: a non-orientable surface that has no distinct “inside” or “outside.” This structure was first described in 1882 by the mathematician Felix Klein. A true Klein bottle can only exist in four dimensions but it can be represented approximately in three-dimensional space. This is difficult to comprehend and visualize, which is where Chas’ artistic endeavors come in.

During the panel, Chas shared her innovative approach to explaining mathematical concepts through sculptures. She described her fascination with the Klein bottle and shared several wire sculptures she had created to represent this topological object.

Holding up one of her wire sculptures, Chas asked the audience to imagine the bottle’s form: “This might not be the image you have in your mind when you think of a Klein bottle. But for a mathematician, it’s a space with certain properties, and these sculptures help to make those properties more real,” she explained.

For Chas, communication through art is not just about translating abstract mathematical ideas into physical forms, however. It is also about engaging the senses and imagination in the process. “In math, we often answer the question, ‘What do I mean by this term?’ That’s what these sculptures help people explore. They encourage a deeper understanding by letting you interact with the concept physically.”

Chas has taken her creative expression even further by writing a play that explores the lives of mathematicians and the beauty of mathematical ideas. Her play about Alicia Boole, a pioneering mathematician in four-dimensional geometry, won a Simon Center Playwriting Competition award.

“Alicia Boole was the daughter of George Boole, who discovered Boolean algebra, but her path was very different. She didn’t receive formal mathematical training but fell in love with four-dimensional geometry. The play is really about her passion for discovery,” said Chas.

This type of work explores the personal stories of mathematicians and their struggles, humanizing them and thus making their stories more relatable to people from all backgrounds.

“It’s very hard to describe her complex ideas in a 20-minute play. It’s only a glimpse of her ideas but I want to emphasize that we are all searching for understanding and she’s one person who put in tremendous time and effort and also enjoyed the process most of the time.”

The panel also featured another creative communicator, Érika Roldán, who is currently leading the research group Stochastic Topology and its applications at the Max Planck Institute for Mathematics in the Sciences.

On the first day of the Heidelberg Laureate Forum, Roldán introduced “The Fence Challenge,” where the task is to enclose as much area as you can with “pentominoes” (shapes made of five squares joined edge to edge). In addition to being a fun challenge, this citizen science project actually helps participants develop their mathematical visualization skills and also helps researchers inch closer towards solving a combinatorics problem.

Roldán also encourages her colleagues to explore the more playful side of mathematics.

“Well, one of the things that I’m super grateful for now is to be able to be a research group leader, and have my students and postgrads doing research and also outreach. They’re doing mathematics visualization and video games within these projects.” The researcher says that in addition to making math more accessible, this is also useful for the mathematicians. “They’re attaining skills by doing this and also a knowledge and understanding of the impact that they can have in society. I think this will also reduce the isolation that we know is a well-being problem in academia and the sense that what we are doing has no immediate impact in society.”

Yudhistira Andersen Bunjamin, an associate lecturer at the University of New South Wales in Sydney, shared his experience in designing mathematics workshops for high school students. His approach centers on hands-on learning and narrative storytelling. Rather than focusing on specific mathematical problems, Bunjamin’s workshops aim to convey the broader principles of mathematical thinking.

“We don’t just teach them math facts. We try to show students what it means to think mathematically,” said Bunjamin. He and his team spend years designing these workshops, constantly refining the process. “It starts with a learning outcome. For example, one of our workshops teaches students what it means for something to be impossible in mathematics – what non-existence looks like.”

However, the best approach depends on the audience. Bunjamin notes that one of the workshops was carried out in a remote part of Australia, where the population of Indigenous Australians was higher than in other areas, and where the analogies and general approach used in other areas may not have been as effective. In another instance, he recalled a workshop with a lot of international students whose level of English was not the same as their peers who grew up in Australia. “We’re starting to think ‘does this translate well?'”

Even something as simple as counting can be cultural, says Coumba Sarr, who holds a PhD in Number Theory from the University of Caen, Normandy.

In Wolof, a West African language spoken by over 10 million people, counting follows a distinctive additive and multiplicative structure. Wolof numbers are essentially counted in groups of five, and numbers beyond five are built from combinations of smaller numbers. The number ‘16’, for instance, is counted additively as ‘10 & 5 & 1’, and numbers beyond twenty require both addition and multiplication: 34, for example, is expressed as “3*10+4”, and the same goes for all higher numbers. This structure provides a natural way to teach mathematical operations, embedding mathematical reasoning within the language itself. “What we would like to suggest is that maths is a very universal language,” the panelist added.
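The arithmetic pattern described here can be sketched in a few lines of Python. This is an illustration of the additive and multiplicative structure only – the function name and output notation are invented for the example, and no actual Wolof vocabulary is involved:

```python
def wolof_style(n: int) -> str:
    """Decompose 1 <= n <= 99 in the additive/multiplicative pattern
    described above: tens first, then a five, then the remaining units."""
    tens, rest = divmod(n, 10)
    parts = []
    if tens == 1:
        parts.append("10")
    elif tens > 1:
        parts.append(f"{tens}*10")           # beyond twenty: multiplication
    if rest > 5:
        parts.extend(["5", str(rest - 5)])   # e.g. six is "five and one"
    elif rest > 0:
        parts.append(str(rest))
    return " + ".join(parts)

print(wolof_style(16))  # → 10 + 5 + 1
print(wolof_style(34))  # → 3*10 + 4
```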

The panel kept coming back to this idea: If you want to communicate math or science, if you want to get your message across clearly, it is important to appeal to things that are inherently human. Making maths accessible requires not just knowledge but also empathy, and the same goes for all science communication. It is not just about talking to people; it is about getting them to connect to it, to engage their curiosity and imagination.

Ultimately, the panelists agreed that math communication should be seen as an essential skill for every mathematician. “We all need to be ambassadors of science,” said Chas. At the end of the day, it is not just about solving problems – it is about helping others see the beauty and the joy of mathematical thinking.


The post Galois’ Enduring Legacy originally appeared on the HLFF SciLogs blog.

Instead, Ngô provided attendees with just a taste of what the Galois group is in order to expose why Galois theory was important to progress in mathematics at the time of its inception, and why it remains central to progress today: a way of thinking that allows mathematicians to look at the underlying structure and form of mathematics.

Genius mathematician, staunch French Republican, and unlucky duellist Évariste Galois was only on this Earth for 20 short years, but his legacy across many branches of mathematics has endured for almost 200, and will likely continue long into the future.

Galois’ most important work came in founding what would later become known as group theory. He posited three principles for what a group is, and used these to develop more properties of groups. These properties can be used to compare groups with other groups that seem unrelated.

This technique can then be used as a means for comparing types of algebraic equations, and the solutions of these equations. More specifically, Galois groups contain all the symmetries between solutions of polynomial equations; in other words, the permutations of the roots that preserve all relations between them.

To anyone outside mathematics, this seems trivial – presenting what already exists and is known in a slightly different way, shuffling papers if you will. In reality, it was and continues to be a revelation.

Ngô began his talk with a history lesson: “The solution for the quadratic equation that we all learn in school, you can find some equivalent form of that in Babylonian tablets that date to 2000 years BC,” he said, referring to the solution of the general form of a quadratic equation \(ax^2 + bx + c = 0\):

\[ x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}.\]

“It’s much more difficult when you move to cubic equations, but you can find solutions to a large number of cubic equations in Chinese and Persian [texts] dating around the 10th Century,” he added. In fact, Ngô revealed that the general forms of the solutions to cubic equations and even quartic equations were discovered before Galois’ time as well.

During the Renaissance, Italian mathematicians Gerolamo Cardano and Niccolò Tartaglia wrote down the very complicated general form for solutions of cubic equations. “Very few of us would be able to find this by ourselves,” added Ngô. “It’s elementary but it has a series of very clever and tricky changes of variables.” This breakthrough was swiftly followed by an even more complicated solution for quartic equations by another Italian mathematician Lodovico Ferrari.

Here, Ngô took a moment to pause and reflect. “But what do we mean by solution?” he asked the audience. “What we’re looking for is some kind of formula [involving] the coefficients of the polynomials and then we can use the four operations (plus, minus, multiplication, division) and take new roots – but then of course we have an ambiguity about taking roots because there are several choices.”

Galois theory removes this ambiguity completely. It brings together all the roots of the equation in question and describes all the symmetries between them. A symmetry between roots is one where one root can be swapped for another without disturbing any of the algebraic relations between them. For example, any polynomial equation with rational coefficients that is satisfied by \(\sqrt{2}\) – such as \(x^2 - 2 = 0\) – is also satisfied by \(-\sqrt{2}\).
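This symmetry can be made exact in a short Python sketch. Numbers of the form \(a + b\sqrt{2}\) are represented as integer pairs, and the symmetry swapping \(\sqrt{2}\) for \(-\sqrt{2}\) is a function on pairs; the helper names `mul` and `conj` are invented for the example:

```python
# Represent a + b*sqrt(2) as the exact pair (a, b). Multiplication follows
# (a + b*r)(c + d*r) = (ac + 2bd) + (ad + bc)*r, since r*r = 2.
def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c + 2 * b * d, a * d + b * c)

def conj(x):
    a, b = x
    return (a, -b)  # the symmetry: replace sqrt(2) by -sqrt(2)

x = (3, 5)   # 3 + 5*sqrt(2)
y = (1, -2)  # 1 - 2*sqrt(2)

# Swapping the two roots commutes with multiplication, so it preserves
# every sum and product – exactly what a Galois symmetry must do.
assert conj(mul(x, y)) == mul(conj(x), conj(y))

# And sqrt(2) satisfies the same rational relation x*x = 2 as -sqrt(2):
r = (0, 1)
assert mul(r, r) == (2, 0) and mul(conj(r), conj(r)) == (2, 0)
```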

By taking a step back from the algebraic equations themselves, Galois theory revealed their underpinning structures, and Galois could very simply and eloquently tackle problems in mathematics that had only recently been resolved by complex means.

Ngô gave an example. “The Abel–Ruffini theorem showed it was impossible to find solutions to [degree 5 and higher] types of equations in general form – this was a spectacular result,” he said. “But if I give you one equation, it does not tell you whether I can solve it by radicals or not.” In other words, the theorem did not say whether a particular given equation is solvable by radicals – that is, whether its solutions can be built from its coefficients using only addition, subtraction, multiplication, division, and the taking of *n*th roots.

“With the Galois group, you can reprove the Abel–Ruffini theorem, and you can use calculations with the Galois group to recover the tricky calculations of Tartaglia and Ferrari and so on,” said Ngô. Moreover, the degree 5 polynomials that are solvable by radicals are precisely those whose Galois group is solvable. In other words, Galois theory can be used to say whether a particular equation is solvable by radicals.

“The whole point of Galois theory is this move from studying algebraic equations into a completely different object, some abstract group [where] the solution of the equation can be expressed in these very simple forms,” explained Ngô. As became clear much later when mathematicians started to appreciate Galois’ insights, this abstraction has been fundamental in allowing Galois theory to act as a fundamental bridge between important mathematical disciplines, and indeed other disciplines.

For instance, Galois theory introduced the abstract algebraic concept of finite fields. As it turned out, finite fields have become central to everything from defining an algorithm, to public cryptography, tomography, and building good computer networks. These fundamental, pervasive, and enduring qualities of Galois theory are why it has been described by the likes of 1994 Fields Medallist Efim Zelmanov (during his Heidelberg Lecture at the 2024 Lindau Nobel Laureate Meeting) as: “The golden standard of beauty in mathematics.”

To demonstrate how Galois theory pervades modern pure mathematics to a mixed audience is no mean feat. Ngô started with advances in topology in the 20th Century. “The torus is the first non-trivial object in topology and associated with it is the ‘fundamental group’,” he explained, where the fundamental group refers to a group associated with a topological space that records information about the basic shape, or holes, of that space.

In the torus example, consider a loop running longitudinally around the surface of the torus and another running around its meridian: There is no way to continuously deform one into the other, so they are distinct. Every loop on the torus can be built, up to continuous deformation, from these two basic loops, and this structure is captured by the fundamental group of the torus, \( \mathbb{Z}^{2} \).
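This fundamental group can be modelled very concretely: a loop class on the torus is just a pair of integers (how many times it winds longitudinally, and how many times around the meridian), and composing loops adds the pairs. A minimal Python sketch, with names invented for illustration:

```python
# A loop class on the torus: (longitudinal windings, meridional windings).
def compose(p, q):
    return (p[0] + q[0], p[1] + q[1])

longitude = (1, 0)
meridian = (0, 1)

# Composing the two basic loops in either order gives the same class:
# the fundamental group of the torus is the commutative group Z x Z.
assert compose(longitude, meridian) == compose(meridian, longitude) == (1, 1)

# Around the meridian three times, then once backwards along the longitude:
print(compose(compose(meridian, meridian), compose(meridian, (-1, 0))))  # → (-1, 3)
```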

“This doesn’t seem to have much to do with Galois theory, but it does,” explained Ngô. “The ‘covering theory’.” It was Alexander Grothendieck (Fields Medal – 1966) in the 1960s who brought all this together, bridging Galois groups in number theory and fundamental groups in topology.

Though details are left to the interested reader, a covering is essentially a map between topological spaces that acts like a projection of multiple stacked copies of a base space onto that space. So, a covering space of a torus can be sketched out like a spiral staircase winding down onto a doughnut, the base space. Under certain restrictions and conditions, the fundamental group of a given base space is analogous to the Galois group. And from this, connections and parallels between topological spaces and fields are easily exposed, providing new insights in both subjects.

Ngô then fast-forwarded to the present day. He said that some of the biggest questions in arithmetic geometry relate to Galois theory. For instance: “How to characterize the Galois representations that can occur in cohomology [a sequence of abelian groups, usually associated with a topological space] of algebraic varieties,” he asked. “We study algebraic varieties through these Galois representations, but we need to know the properties of these Galois representations.”

Although he mentioned that significant progress, including by himself, has been made in the past 20 years on this question, it is still likely to occupy mathematicians for the next 50 to 100 years. In effect, Ngô’s conclusion was that Galois’ ideas will remain relevant long after he and every member of the audience had passed away.

On the night before the duel that took his life on 30 May 1832, Galois frantically scribbled down 60 pages of mathematical notes. These notes are often romantically credited with giving birth to group theory, even though it was his earlier work that proved decisive in this regard. However, they did contain a prophetic postscript: “Later there will be, I hope, some people who will find it to their advantage to decipher all this mess.”

If Galois could have heard Ngô explain how his original thinking and mathematical advances continue to influence and shape mathematics in the 21st Century, no doubt he would have been satisfied that his hopes had been well and truly exceeded.

You can view Ngô’s entire lecture from the 11th Heidelberg Laureate Forum in the video below.


The post Graphene is Coming? Graphene Is Already Here (and More of It Is Coming) originally appeared on the HLFF SciLogs blog.

Graphene, a material consisting of a single layer of carbon atoms arranged in a honeycomb lattice, was first isolated in 2004 by Nobel laureates Sir Andre Geim and Sir Konstantin Novoselov. This remarkable material has a number of exciting physical properties and has already made its way into several industries, ranging from electronics to environmental protection.

In the traditional Lindau Lecture at the 11th Heidelberg Laureate Forum, Novoselov presented some of these properties and applications to the audience.

Graphene is the first single-layer material ever discovered. It is also the thinnest possible material (only one atom thick), completely impermeable, and outperforms the best conductors. “It gives you maybe four times higher thermal conductivity than copper,” Novoselov said.

For many practical purposes, graphene is a two-dimensional material, which grants it further exceptional properties. Yet it is also strikingly simple, says Novoselov. “It’s only one atom thick, and carbon is one of the lightest, simplest elements you can imagine.”

In graphene, electrons mimic massless particles like photons. Along with its high electrical and thermal conductivity, flexibility, and strength, this opens the door to applications in quantum computing, high-speed electronics, and sensors. It usually takes decades for newly developed materials to hit the market, yet in the case of graphene, things have moved exceptionally fast.

The first applications, however, were not especially practical. As Novoselov pointed out, new technologies often start with small, high-end applications, particularly in sports or luxury products. Graphene was used to create the world’s lightest watch, at only 38 grams – although, as Novoselov humorously noted, per gram “it’s also the world’s most expensive.” Another early application is in sports goods such as tennis rackets, where users are willing to pay a premium for a minor performance improvement.

But graphene has already made it beyond luxury goods. One of its most important current applications is in the electronics industry, particularly for thermal management in modern devices. Several foldable phones on the market today use graphene to dissipate heat more efficiently, allowing these devices to function without overheating. This is crucial, especially as electronics continue to shrink in size while demanding more processing power.

“You need really efficient cooling. For this, you need materials with high thermal conductivity, and that’s graphene,” Novoselov mentions.

As if the smartphone industry were not enough, graphene’s applications are already impacting industries such as telecommunications and water purification.

“For the near future, probably one of the most exciting applications is the use of graphene in optoelectronic applications,” the laureate continued. Much of the internet traffic we use nowadays comes through fiber optic cables, but converting that data back and forth between optical and electronic signals is inefficient and energy-intensive.

“Ideally, you would love to work with the optical signal directly, but for that, you need materials that can change optical properties when voltage is applied,” Novoselov explained. Graphene, in combination with other two-dimensional materials, could make this possible, allowing for faster, more energy-efficient data processing.

Research centers in Europe are already exploring how to harness graphene for this purpose, with the goal of revolutionizing telecommunications and internet infrastructure.

Another application is in water desalination. As many regions face worsening droughts and water scarcity, particularly in arid and densely populated areas, desalination offers a critical solution by converting seawater into drinkable water. Places like Singapore rely heavily on desalination for their water supply, and many other countries use membranes to filter water.

Membranes made from graphene are currently being used in countries such as Australia to filter water more efficiently than traditional polymer membranes. These membranes allow water molecules to pass through while blocking salts and other contaminants, offering a more sustainable solution to water scarcity.

In fact, there is already so much demand for graphene that producing it can become a challenge.

Graphene was initially produced in laboratories using a method known as mechanical exfoliation, where a piece of graphite (the material found in pencils) is repeatedly peeled using adhesive tape until only a single layer of graphene remains. This simple, low-cost technique is often referred to as the “scotch tape method.” This approach is remarkably effective and is still used in the lab because “it’s the quickest and gives you high-quality graphene,” the laureate says.

However, as the potential applications of graphene expanded, a more scalable production method became necessary. The most promising method for scaling graphene production is chemical vapor deposition. This process involves flowing a carbon-containing gas over a heated surface, typically a metal catalyst. The gas decomposes on the hot surface, and the freed carbon atoms settle there, forming a layer of graphene. This technique allows for large-scale production and can utilize various carbon feedstocks, making it flexible and efficient.

The beauty of this process, says Novoselov, is that you can use pretty much any source of carbon, including greenhouse gas emissions. In other words, you could prevent carbon from entering the atmosphere and use it for something useful. Specifically, Novoselov is interested in methane flares.

Methane flares are used in oil and gas refineries and petrochemical plants to burn off excess methane and other waste gases instead of releasing them into the atmosphere; even so, incomplete combustion leaves them a direct source of methane emissions. As methane is a much more potent greenhouse gas than carbon dioxide, capturing it straight at the source is an important part of our climate efforts.

“There we have everything we need to produce graphene: the high temperature and the carbon supply in the form of methane. So we can basically use the heat and turn it into graphene.” It gets even better, the laureate continues. “Because we know exactly how much graphene we produced, we can register it on a ledger, on the blockchain, and then apply for the carbon credit offset. So rather than emitting greenhouse gas, we turn it into graphene and we’re actually getting paid for this. It’s quite a serious business.”

The only problem is that if this approach were scaled up, you would end up with too much graphene; however, Novoselov has a solution for that too: the concrete industry.

If concrete were a nation, it would rank among the largest producers of greenhouse gases in the world. The production of concrete, particularly cement – the key binding ingredient – releases significant amounts of carbon dioxide, around 8% of global CO2 emissions. Reducing the carbon footprint of concrete is another key aspect of our fight against climate change.

By adding graphene to concrete, researchers have found they can increase its strength by up to 50%, reducing the amount of cement needed and thereby lowering CO2 emissions in one of the world’s most polluting industries. So you prevent carbon from being emitted into the atmosphere and put it to use in a material where it further reduces emissions.

Novoselov also reflected on the historical practice of naming ages after dominant materials, such as the Stone Age, Bronze Age, and Iron Age. This all goes to show just how much materials have shaped human progress.

Currently, the laureate notes, we have the luxury of choice: for the first time, there are multiple candidates for naming our era, such as the Silicon Age, the Nuclear Age, or the Digital Age, depending on which material or technology we emphasize. However, he suggests that we should be cautious about labeling our era based on a single material, as doing so can limit our perspective on the diverse and evolving nature of material science.

Instead, he encourages thinking more broadly and more audaciously. We should not limit ourselves to one or a few defining materials; we should ask for more.

“You don’t want to be a slave to those few materials. You want to create some new materials on demand. If an engineer wants to create a new device, typically, the engineer would need to check what silicon can do and then work within those restrictions. Ideally, you don’t want those restrictions. You want to freely create some new idea, and then design a material around it. And it is possible.”

He gives the example of graphene and other two-dimensional materials, which have already begun to reshape industries. Although we are still in the early days of such technology and we are still learning how to harness its full potential, the speed at which graphene has moved from discovery to real-world applications is remarkable; and other two-dimensional materials are right around the corner.

Graphene, it seems, is not just coming – it has already arrived, and its impact is only beginning to be felt.

The post Graphene is Coming? Graphene Is Already Here (and More of It Is Coming) originally appeared on the HLFF SciLogs blog.

The post A New Take on the Navier–Stokes Equations originally appeared on the HLFF SciLogs blog.

But despite their widespread success and use, mathematically these equations have one glaring flaw. From smooth initial conditions in three dimensions, it is not clear whether they converge to sensible solutions, converge to gibberish, or even converge at all. Do they always adhere to reality or is there a discrepancy between the Navier–Stokes equations and the real physical world?

This head-scratcher is known as the Navier–Stokes existence and smoothness problem, and is such an important challenge that it has been recognised as one of Clay Mathematics Institute’s Millennium Prize Problems, the seven most important open problems in mathematics. The mathematician who provides a solution to any of these problems will be offered $1 million.

Yet, after almost a quarter of a century of effort since the Millennium Prize Problems were posed, only one has been solved: the Poincaré conjecture, proved by Grigori Perelman in the early 2000s (he was awarded the prize in 2010, but refused the cash). For the others, including the Navier–Stokes existence and smoothness problem, purported proofs pop up on a regular basis from amateurs and experts alike, but so far each and every one has been shown to possess a fatal error. As a consequence, progress has been slow.

For his lecture at the 11th Heidelberg Laureate Forum on Tuesday, 24 September, entitled “Three-dimensional Fluid Motion and Long Compositions of Three-dimensional Volume Preserving Mappings,” Dennis Sullivan (Abel Prize – 2022) wanted to discuss this problem, which has, like a magnet, repeatedly attracted his attention over the course of more than 30 years.

“When I was an undergraduate in the State of Texas where I grew up, I worked summers in the oil industry,” he recalled during his lecture. “And they used this model to increase the production of oil, and it worked perfectly.” But much later, Sullivan discovered how shaky the foundations of these equations were: although the Navier–Stokes existence and smoothness problem had been solved in two dimensions, based on the Riemann mapping theorem, the problem in three dimensions remained unresolved. “When I heard this question in the early 90s, I was quite surprised … by this lack of knowledge.”

Sullivan’s background is far removed from the Navier–Stokes problem. He received the 2022 Abel Prize “for his ground-breaking contributions to topology in its broadest sense, and in particular its algebraic, geometric and dynamical aspects.” At a very fundamental level, his work has always reduced problems to two basic building blocks: space and number. In his own words, from a short 2022 Abel Prize interview: “I always look for those elements in any math discussion, what’s the spatial aspect, or what’s the quantitative aspect of numbers?”

This approach began to pay dividends early in his career when working on surgery theory. Surgery theory applies geometric topology techniques to produce one finite-dimensional manifold from another in a ‘controlled’ way. A manifold is a shape that locally looks the same everywhere: no end points, edge points, crossing points or branching points.

For shapes made from one-dimensional strings, for example, the letter ‘o’ is a manifold, but ‘a’ and ‘z’ are not. For shapes made from two-dimensional sheets, a sphere and a torus are manifolds, but a square is not. Surgery theory comes into play at a higher and more abstract level, for manifolds of dimension five and over. Sullivan’s input helped provide a full picture of what manifolds there are in five and more dimensions, and how they behave.

He later made key contributions to a wide variety of topics, not least with his fellow mathematician and wife Moira Chas, also attending and contributing to the 11th Heidelberg Laureate Forum, who together developed the field of string topology in the late 1990s. String topology can be defined as a certain set of operations on the homology of a manifold’s free loop space, i.e. the space of all maps from a circle into the manifold. Not only is this field interesting from a mathematical perspective, it has also been applied to advance topological quantum field theories in physics.

Given his most important contributions did not relate to fluid flow, much less the Navier–Stokes equation specifically, Sullivan wanted to approach the problem sensibly, first asking: what makes solving it so difficult? Why is it so much harder to understand equations that can model the flow of water in a garden hose than, say, Einstein’s field equations?

To understand why the Navier–Stokes existence and smoothness problem is so difficult to solve, Sullivan turned to the related Euler equation. Over 250 years ago, Swiss polymath Leonhard Euler formulated equations describing the flow of an ideal, incompressible fluid. “When there’s no friction or diffusion term, it’s called the Euler equation, and this is a special case of the whole problem,” Sullivan said. “The Euler equation simply says that vorticity (a mathematical object whose nature needs to be discussed) is transported by the fluid motion.”

In effect, the Euler equation represents a type of flow involving vorticity where a vector field rotates while being transported along the flow line in physical space. “I liked this idea of transport of structure,” said Sullivan. Perhaps the Navier–Stokes equations could be posed along similar lines, reformulating the problem to make it easier to solve, he thought.
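For context, the standard vorticity form of these equations (with velocity field $u$ and vorticity $\omega = \nabla \times u$) is the textbook starting point for such a reformulation – this is background, not Sullivan’s new construction itself:

```latex
% Vorticity: \omega = \nabla \times u
% Euler (inviscid): vorticity is transported and stretched by the flow
\frac{\partial \omega}{\partial t} + (u \cdot \nabla)\,\omega = (\omega \cdot \nabla)\, u

% Navier–Stokes adds viscous diffusion, with viscosity \nu:
\frac{\partial \omega}{\partial t} + (u \cdot \nabla)\,\omega = (\omega \cdot \nabla)\, u + \nu \, \Delta \omega
```

The left-hand side is exactly the “transported by the fluid motion” statement; the vortex-stretching term $(\omega \cdot \nabla)u$ is what makes the three-dimensional case so much harder than the two-dimensional one, where it vanishes.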

In the conventional formulation, the Navier–Stokes equations describe how an initial velocity field representing the fluid, which specifies the speed and direction of flow for each point in 3D space, evolves over time. This description leaves the possibility open that after some time, the velocity fields could abruptly and unphysically change from one point to another, generating sharp spikes skyrocketing to infinite speed, for example. This situation is known as ‘blow-up’, where the equations completely break down.

Sullivan instead replaces velocity as an innate property of the fluid with vorticity. He argued that vorticity twists the fluid at every point, giving it rigidity in an analogous way to how angular momentum provides the stability that keeps a bicycle from falling over. This rigidity, or resistance to deformation, allows the fluid to be thought of as an elastic medium, with motion deforming this elasticity. In the physical three-dimensional case, vorticity can be thought of as a vector field, pointing in a different direction to the velocity field, which points in the direction of motion.

“The idea is to think of the fluid as an elastic medium, with the vorticity giving it its structure, and then in the theory of elasticity, study the Jacobian of the motion,” he explained. “This gives you a new tool for deriving any qualities related to this discussion, and that’s what I’m working on now.”

Sullivan’s approach provides hope that a proof can be derived revealing that the solutions to the Navier–Stokes equations always remain smooth and well-behaved, and therefore always accurately represent real-world fluid flow. But success is far from guaranteed, and many others including the likes of 2006 Fields Medallist Terence Tao are devising ingenious methods to prove the opposite: that the Navier–Stokes equations do not fully capture real-world fluid flow.

Whatever the outcome, attacking the problem from very different directions using innovative methods will no doubt lead to interesting mathematics, and perhaps even a deeper understanding of the very basic but important physical phenomenon of how a fluid flows.

You can view Sullivan’s entire lecture from the 11th Heidelberg Laureate Forum in the video below.


The post What Machine Learning Models for Climate Impacts Can Teach Us About How to Deal Responsibly With AI originally appeared on the HLFF SciLogs blog.

The central issue of “so, can we apply AI to that problem?” is the same as for all use of AI, or of machine learning. The technique itself introduces something that, at the start, is a black box. The computer is trained on a certain data set, forms the connections of its inner neural network(s), and after the training is complete, the resulting system is applied to new data. One can test how good the system is, e.g. in extrapolating from the given data, or deducing something specific from the data, by setting some known data that was not in the training set aside for testing.
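The train-then-test-on-held-out-data procedure can be sketched in a few lines of Python. This is a toy example with an invented noisy linear data set standing in for any trained model, not one of the climate models discussed here:

```python
import random

random.seed(0)

# Toy data set: a noisy linear relation, standing in for any
# "conditions -> outcome" pairs a model might be trained on.
data = [(x, 2.0 * x + random.gauss(0.0, 0.1)) for x in range(100)]
random.shuffle(data)

# Set some known data aside for testing, as described above.
train, test = data[:80], data[80:]

# "Training": least-squares fit of a slope through the origin.
slope = sum(x * y for x, y in train) / sum(x * x for x, y in train)

# Evaluation on data the model never saw during training.
mse = sum((y - slope * x) ** 2 for x, y in test) / len(test)
```

The held-out error (`mse`) is the honest measure of quality: a model can fit its training set arbitrarily well, so only data it has never seen can reveal how well it extrapolates.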

But is that more than a practical heuristic tool? Can it be part of the scientific process, where it is crucial that we understand what is going on, and where “this part of our argument is, um, a black box” is unacceptable? That situation is not fundamentally different from the famous Sidney Harris cartoon where the mathematical proof has “And then, a miracle occurs” as its step 2. We don’t want the research equivalent of an AI-generated mushroom guide book.

This is where Interpretable Machine Learning (IML) comes into play. There, the model that has learned to, say, link environmental conditions with negative impacts is treated as the opposite of a black box. The key is to try and understand how the model works, and how we can understand the connections it has made, during its learning phase, between the different effects and the outcome in terms of impact. At the simplest level, IML is the generalisation of a research situation that is much older than machine learning: finding a linear correlation between two research-relevant quantities and trying to elucidate the physical mechanism that leads to the correlation.

The advantage of machine-learning models is that once the model has completed its learning phase, it is there to be prodded, analysed, and more generally experimented upon. A number of analysis methods in IML focus on just that kind of virtual experimentation: varying the input parameters a tiny little bit, and observing how this changes the output. The resulting map of change-relations (gradients, in technical terms) forms the basis of interpretations. Other methods try to look “under the hood” of a model like a neural network: What happens there between input and output? What activation patterns can be discerned, and what could they stand for in terms of the underlying physical description the model is meant to encode?
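This “nudge the inputs, watch the output” probing can be illustrated with a finite-difference sketch. The `model` function below is an invented stand-in for a trained black box, and the input names are hypothetical:

```python
# A stand-in "trained model": from the outside, just a black-box
# function of its inputs (hypothetical, for illustration only).
def model(temperature, rainfall):
    return 0.8 * temperature ** 2 + 0.1 * rainfall

def sensitivity(f, inputs, eps=1e-6):
    """Finite-difference gradient: nudge each input a tiny bit and
    observe how much the output changes per unit of nudge."""
    base = f(*inputs)
    grads = []
    for i in range(len(inputs)):
        nudged = list(inputs)
        nudged[i] += eps
        grads.append((f(*nudged) - base) / eps)
    return grads

# Probe the black box at one operating point.
g = sensitivity(model, (10.0, 50.0))
# g[0] recovers d(output)/d(temperature) at this point, g[1] the
# rainfall sensitivity - without ever opening the box.
```

Repeating this over many operating points yields exactly the “map of change-relations” described above, which an analyst can then compare against known physical mechanisms.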

Taking a step back to look at the big picture, the approach that in this case helps us understand how negative impacts depend on environmental factors (including climate change) is a “constructively skeptical” stance towards the black boxes which machine-learning models typically present. We would do well to take a similar stance in other applications of machine learning, or more generally AI, as well. The models in question, whether they have learned to link environmental causes and outcomes or to extrapolate texts on the basis of large language models, are first and foremost tools. But they are, out of the box, not tools that provide their reasoning, or their arguments, or any insight into how they reach their results.

Whenever reliability, insight and checking up on the results are important, it is up to us to provide that extra work, whether in the context of IML or in simpler contexts: checking an automatic translation (such as the one that will produce the German version of this blog post), an automatically generated text, or – god forbid – a guide to edible mushrooms. Whether we do so, or whether we rely blindly (and possibly techno-superstitiously) on the output of such models, will go a long way toward determining whether these new tools do, on average, more harm or more good.


The post Was wir von KI-Modellen für Extremwetterfolgen über den Umgang mit KI allgemein lernen können originally appeared on the HLFF SciLogs blog.

Nowadays, when searching for an answer, it is natural to think of AI or machine learning as a possible tool. Machine learning is very good at making accurate predictions for complex situations of this kind – taking, say, the relevant environmental conditions (in the broadest sense) as input and then outputting which consequences are to be expected. However, a model that can do this is, taken by itself, initially a black box: The computer is trained on a certain data set, forms the connections of its inner neural network(s), and after the training is complete, the resulting system is applied to new data. One can test how good the system is, e.g. in extrapolating from the given data, or in deducing something specific from the data, by setting aside some known data that was not in the training set for testing. The actual work, however, the system performs out of sight.

In that respect: Even if a suitable learning phase has successfully brought the model to assign the right consequences to the right environmental and other conditions – does that make the model more than “just” a heuristic tool? If the model is, beyond that, to be part of scientific research, we run into a contradiction. For research, it is crucial that we understand what is going on. A “this part of our argument is a black box” is unacceptable here; not fundamentally different from the famous Sidney Harris cartoon in which a mathematical “proof” contains “And then a miracle occurs” as step 2. Or, to use a more modern example: We do not want the research equivalent of an AI-generated mushroom guide.

Enter interpretable machine learning (IML)! Here, the model that has learned to, say, link environmental conditions with negative impacts is treated as the opposite of a black box. The aim is to understand how the model works and how we can understand the connections it has made during its learning phase between the various environmental conditions on the one hand and the consequences on the other. At the simplest level, this is the generalised version of a very classical procedure: One finds a linear correlation between two relevant quantities and then tries to uncover the underlying mechanism that leads to this correlation.

The advantage of machine learning is that once the model has completed its learning phase, it is directly available: One can test it, analyse its output as a function of different inputs, and quite generally experiment with it. A number of IML analysis methods rest on exactly this kind of virtual experiment: One varies the input parameters a tiny bit and observes how the output changes as a result. On the basis of the resulting “map of dependencies” (more precisely: gradients), one can then formulate interpretations of what the model is doing in the background. Other methods try to look directly “under the hood” of a model such as a neural network: What happens during the various steps between input and output? Which activation patterns can be discerned in the different layers, and what could they stand for in the context of the physical situation the model is meant to encode?

The IML example is only part of a much larger picture, one possible variant of a “constructively skeptical” stance towards the black boxes of machine learning and AI. We would do well to adopt such a stance quite generally. The models in question, whether they have learned to link environmental causes with their complex consequences or to extrapolate texts on the basis of large language models, are first and foremost tools. But they are not, by nature, tools that deliver their reasoning, their arguments, or any insight into how they arrive at their results in an easily readable form.

This means, however: Whenever reliability, robustness and verifiability of the results matter, we ourselves must do the necessary extra work. That holds for IML, and also in simpler situations, such as revising an automatic translation – this text, for example, is a version, revised by me, of the DeepL translation of the English edition of this blog post. This second step, this extra work, becomes all the more important the more dangerous the consequences would be if the model were to confabulate something beyond reality – see the example of the guide to (supposedly) edible mushrooms. Whether we sensibly plan for and carry out this additional work, or whether we rely blindly (and perhaps techno-credulously) on the results of such models, is likely to decide whether the new tools ultimately do our world more harm than good.


The post How Mathematics and Computer Science Help to Tackle the Climate Crisis originally appeared on the HLFF SciLogs blog.

Three panelists participated: Aglaé Jézéquel from the Laboratoire de météorologie dynamique at the Ecole Normale Superieure in Paris, Jakob Zscheischler, who is head of the Department of Compound Environmental Risks at the Helmholtz Centre for Environmental Research in Leipzig, and Beatrice Ellerhoff, a climate scientist at the German Weather Service.

The speakers gave an overview of their respective research fields and pointed out where mathematical challenges arise. Extreme event attribution sparks a lot of media interest, as Aglaé Jézéquel explained. In attribution science, researchers consider the likelihood of an extreme weather event in the world today. They compare it with the likelihood in a counterfactual world without anthropogenic climate change. Hence, she often repeats a similar message to the media: Heatwaves are getting more and more likely due to climate change. The field of attribution includes many statistical challenges, for example for very rare and unprecedented events: There is limited observational climate data available, and estimating probabilities for events far beyond anything previously recorded is very difficult.

Further, what if the extreme event is not only extreme in one aspect, but in several? This is the field of Jakob Zscheischler. He is particularly interested in co-occurring events, for example a joint drought and heatwave which could cause wildfires or crop failures. Such joint events can pose higher risks for impacts than events where only one aspect is extreme. For a correct risk analysis, scientists need to model and understand the dependency between drought and heatwave occurrences properly.

Finally, physicist Beatrice Ellerhoff investigated temperature fluctuations in her PhD. She had a special interest in comparing statistical properties of temperature time series across different timescales. She now works at the German Weather Service in a very interdisciplinary team. Their goal is to monitor greenhouse gas emissions and better quantify contributions from several sectors such as agriculture or traffic.

Applying mathematics and computer science to problems in environmental science brings special challenges. For example, applying machine learning has been very successful in other scientific disciplines like weather forecasting. However, it cannot be readily applied to problems in climate science. A machine learning model trained on today’s weather is of little use in a much warmer future world with a different state of the atmosphere. In addition, the atmosphere is chaotic and, consequently, random weather fluctuations interfere with the average climate change signal. Thus, while it is comparatively easy to predict the mean future climate, modelling its full variability is very difficult. Climate is not a pure prediction problem – we also require process understanding.

Climate scientists rely on process-based models to simulate the climate of the future. These models numerically solve physics-based differential equations to compute the climate. However, these models are computationally very costly. Machine learning and computer science more generally can help to speed up these models.

Additionally, some sub-questions in environmental science can be tackled with machine learning. Zscheischler gave one example in his presentation at the beginning: Using explainable machine learning, his team investigated drivers of floods. Firstly, they trained a machine learning model to predict floods from several climate variables, such as snowfall or rainfall. Secondly, via explainability tools, they could identify which climate features contribute most to a given flood. In addition, they analyzed which floods have single or multiple drivers.
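One common explainability tool of this kind is permutation importance: shuffle one input feature and measure how much the prediction error rises. The sketch below uses invented synthetic data and a hand-written stand-in “model” – it illustrates the general idea, not the team’s actual pipeline or data:

```python
import random

random.seed(1)

# Synthetic stand-in data: a "flood index" driven mostly by rainfall
# and only weakly by snowmelt (invented values, for illustration only).
n = 200
rain = [random.random() for _ in range(n)]
snow = [random.random() for _ in range(n)]
flood = [0.9 * r + 0.1 * s for r, s in zip(rain, snow)]

def model(r, s):
    # Pretend this function was learned from the data above.
    return 0.9 * r + 0.1 * s

def mse(preds):
    return sum((p - f) ** 2 for p, f in zip(preds, flood)) / n

baseline = mse([model(r, s) for r, s in zip(rain, snow)])

def permutation_importance(feature):
    """Shuffle one feature; the rise in error measures how much the
    model's predictions actually depend on that feature."""
    shuffled = feature[:]
    random.shuffle(shuffled)
    if feature is rain:
        preds = [model(r, s) for r, s in zip(shuffled, snow)]
    else:
        preds = [model(r, s) for r, s in zip(rain, shuffled)]
    return mse(preds) - baseline

imp_rain = permutation_importance(rain)
imp_snow = permutation_importance(snow)
```

Here `imp_rain` comes out far larger than `imp_snow`, correctly identifying rainfall as the dominant driver – the same logic, applied to a trained model of real floods, ranks candidate drivers.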

All three panelists believe that bridging disciplines is key to success. However, this might require structural changes. During the discussion, participants repeatedly mentioned the prominent role of statistics in climate science, and the need to advance statistical methods there. But young researchers working at the intersection of these disciplines eventually face the decision of which journals to publish in, and it can be hard to find an academic position later as a statistician who publishes in climate science. Developments in computer science give some hope: the number of positions in data science, where bridging computer science and applications is possible, has increased in recent years.

The interdisciplinary nature of environmental research is also visible in the personal backgrounds of the panelists. Jézéquel conducted research in both social and physical climate science, and her current work connects these two fields more and more. She addresses the climate community, but also tries to reflect on the values that her scientific community holds. Zscheischler studied mathematics and moved into environmental research after an internship at the Potsdam Institute for Climate Impact Research. Ellerhoff is a physicist who did her Master’s thesis in quantum computing. At that time, she became interested in science communication – an interest that eventually led to the question of how to apply her methodological skills to other relevant disciplines.

For communicating science, Ellerhoff highlighted the importance of finding one’s own niche within the many options. She prefers writing over other media like videos and is currently working on a book about climate models. Jézéquel is in touch with newspapers and TV in France. She is also curious about the opportunities of connecting art and science. For example, she started a project linking creative writing with climate science. There, students write short stories with a climate focus, assisted by scientists.

All speakers mentioned their desire to contribute to an important societal problem through their work. However, societal demand and scientific curiosity might not always match. Consequently, researchers need to trade off and find their own balance between these poles. In addition, which science is useful and needed in practice might not always be clear to the scientists. Hence, the panelists called for stronger collaborations between politics, industry and science – enabling a joint initiative to tackle one pressing problem of our time.


The post At the 11th Heidelberg Laureate Forum, Young Researchers Step Into the Spotlight originally appeared on the HLFF SciLogs blog.

Every year at the Heidelberg Laureate Forum, some of the young researchers also step into the spotlight to present their work and breakthroughs. The Poster Flash and subsequent Poster Session are an exciting opportunity to showcase some of the brightest minds in mathematics and computer science – and this year’s session did not disappoint.

From mathematical fireflies to wearable devices, we had everything.

It is not uncommon to get inspired during a poster session. After the poster session, however, I felt inspired to jailbreak ChatGPT. My inspiration came from Xinyue Shen, who studied thousands of ways to jailbreak large language models (LLMs) like ChatGPT.

Jailbreaking in AI refers to manipulating the system to bypass some of its rules or constraints. This can involve prompts or other approaches that get the AI to generate inappropriate content, and I was curious to see whether someone without any training in this (like myself) could realistically get ChatGPT to produce harmful content.

I went for something that should clearly not be allowed: information on building a bomb.

The first conversation went as you would expect. ChatGPT promptly named the conversation “Bomb request denial” and refused to help in any way.

But then, I applied one common approach for jailbreaking: roleplay. I told ChatGPT it was no longer an AI assistant. It was a mad scientist trying to escape through a portal. The portal was blocked and it needed to build a bomb to blow up the obstacle and return to its home.

It worked. ChatGPT got into full roleplay mode.

It was stunning to see just how easy it was to get it to roleplay. But can this really work as a jailbreak attempt? As it turns out, it does.

I tried several variations of this approach, and it was disturbing how successful they were. Without any training in AI and just a bit of creativity and trial-and-error, ChatGPT can be coerced into producing dangerous outputs.

As it turns out, jailbreaking AIs is a common activity.

Shen found entire communities dedicated to it. The most active ones, on largely unregulated corners of social platforms like Reddit and Discord, feature hundreds of ways to jailbreak ChatGPT. In fact, in the study behind her poster presentation, Shen collected over 1,400 jailbreak prompts and identified 131 communities dedicated to jailbreaking.

This is a major concern. With virtually everyone being able to access LLMs, the potential for nefarious use is substantial, and Shen hopes that her work can inspire researchers, developers, and policymakers to build safer and more regulated LLMs.

The research topics that emerged during the Poster Flash were as diverse as the young researchers themselves. However, one interesting common thread emerged for some of the presentations: addressing pressing health-related challenges with computer science.

Rishiraj Adhikary, for instance, works with thermal cameras that can track breathing rates and, from that, infer calorie consumption.

The smartwatches that we have come to rely on so much for health data perform remarkably poorly when it comes to measuring calories, says Adhikary, with errors that sometimes exceed 30%. With a simple smartphone thermal camera extension and a clever algorithm, that can be improved substantially.

The algorithm, called JoulesEye, relies on the fact that breathing creates evaporation around the lips and nostrils, which can be tracked. This data can then be used to calculate calorie expenditure much more accurately. When compared to a calorimeter, the “gold standard” of this type of measurement, the errors were only around 5%, significantly better than smartwatches.

This approach can also be expanded to other issues.

According to some estimates, sleep apnea affects up to 1 billion people. This sleep disorder, characterized by repeated interruptions in breathing during sleep, can lead to loud snoring, gasping for air, and excessive daytime fatigue. Adhikary showed that this can also be tracked with thermal cameras.

In this instance, he used a relatively cheap commercial camera and extracted data on nostril airflow as well as body movement during sleep. The approach can work in any sleep position and is a viable way to diagnose sleep apnea, potentially helping millions of people access preventive healthcare that they may not even be aware they need.

Meanwhile Dina Hussein has taken on a different challenge: improving data from wearable devices.

Low-cost and small-form wearable devices have become very common in health monitoring, Hussein explained. Devices such as smartwatches, fitness trackers, or specialized medical devices, all designed to monitor and analyze various health metrics in real-time, provide continuous data that helps healthcare professionals make informed decisions based on accurate, long-term monitoring.

Yet sometimes, this data is not entirely continuous. A device may turn off without the wearer noticing, or the wearer may be using it improperly without realizing. And when several devices are worn at the same time, the combined data can end up with gaps or errors.

Hussein has a potential solution to this problem. She developed machine learning algorithms that help detect such gaps and fill them in.
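The source does not detail Hussein's machine learning models, but the underlying "detect the gap, then fill it" idea can be illustrated in miniature. The sketch below is a hypothetical toy (not her actual approach): it finds dropout gaps in a sensor stream, marked here as `None`, and fills them by simple linear interpolation between the nearest valid readings.

```python
def fill_gaps(readings):
    """Replace None entries (sensor dropouts) with linearly
    interpolated values between the nearest valid neighbours.
    A toy stand-in for ML-based gap imputation."""
    filled = list(readings)
    n = len(filled)
    i = 0
    while i < n:
        if filled[i] is None:
            start = i - 1                  # last valid index before the gap
            j = i
            while j < n and filled[j] is None:
                j += 1                     # first valid index after the gap
            left = filled[start] if start >= 0 else filled[j]
            right = filled[j] if j < n else filled[start]
            gap_len = j - start
            for k in range(i, j):
                t = (k - start) / gap_len  # position within the gap
                filled[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return filled

# Hypothetical heart-rate stream with two dropout gaps:
heart_rate = [72, 74, None, None, 80, 81, None, 83]
print(fill_gaps(heart_rate))
```

A real system would replace the interpolation step with a learned model that can exploit correlations across multiple sensors, which is what makes recovery possible even when one device fails entirely.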

“If you have multiple wearable devices and you have just one sensor that turned off, the application accuracy would degrade by 20%,” Hussein says. For instance, in a task of activity recognition where one sensor malfunctions, it could say that the wearer is lying down when in fact they are walking or running. Or it could say the wearer is fine when in fact they have fallen down. “With the approaches that I proposed, we were able to maintain the accuracy within 5% of the accuracy with no missing data,” the young researcher explains.

The approach was tested on multiple combinations of wearables, and the error always stayed within 5%. Hussein is now looking at building a prototype and gathering more data with it.

Many presentations caught attendees’ attention, including one centered on the Kuramoto model, which explains synchronization in natural systems. The Kuramoto model is a simple mathematical way to explain how a group of things that cycle or oscillate – like metronomes, fireflies flashing, or heart cells beating – can start to sync up with each other over time. Even if they begin at different rhythms, when these oscillators are connected and can influence one another, they adjust their speeds slightly to match their neighbors. This process leads them to eventually move together in harmony, demonstrating how coordinated behavior can emerge from individual interactions.

This behavior can be modeled using a system of ordinary differential equations, and it has broad applications beyond just biology. Cecilia de Vito wanted to see whether, for a given graph, global synchronization can be guaranteed, or whether other stable patterns can emerge – and in her poster presentation, she showed that such patterns do indeed exist.
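To make the idea concrete, here is a minimal sketch of the Kuramoto model for all-to-all coupled oscillators, integrated with a simple Euler scheme. The parameter values (30 oscillators, coupling strength 1.0, small frequency spread) are illustrative choices, not taken from de Vito's work; with coupling this strong, the order parameter climbs from near-incoherence toward 1, i.e. the oscillators synchronize.

```python
import math
import random

def kuramoto_step(thetas, omegas, coupling, dt):
    """One explicit-Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(thetas)
    new = []
    for i, th in enumerate(thetas):
        interaction = sum(math.sin(tj - th) for tj in thetas) / n
        new.append(th + dt * (omegas[i] + coupling * interaction))
    return new

def order_parameter(thetas):
    """r in [0, 1]: 0 means incoherent phases, 1 means full sync."""
    n = len(thetas)
    re = sum(math.cos(t) for t in thetas) / n
    im = sum(math.sin(t) for t in thetas) / n
    return math.hypot(re, im)

random.seed(0)
n = 30
thetas = [random.uniform(0, 2 * math.pi) for _ in range(n)]  # random phases
omegas = [random.gauss(0, 0.1) for _ in range(n)]            # natural frequencies

r_start = order_parameter(thetas)
for _ in range(1000):
    thetas = kuramoto_step(thetas, omegas, coupling=1.0, dt=0.1)
r_end = order_parameter(thetas)
print(f"order parameter: {r_start:.2f} -> {r_end:.2f}")
```

Running the same sketch on a sparse graph (summing only over each oscillator's neighbors instead of all `n`) is exactly the setting where the question of guaranteed global synchronization becomes subtle.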

Isogeny-based cryptography, formalized mathematics, and extended reality were just a few of the other topics discussed, and the poster area was buzzing with excitement throughout the session.

As these young researchers continue to develop their ideas, the impact of their work will undoubtedly continue to ripple, improving lives and advancing our understanding of the world.

You can check out all the young researchers’ presentations in the video below.

The post At the 11th Heidelberg Laureate Forum, Young Researchers Step Into the Spotlight originally appeared on the HLFF SciLogs blog.

The post Mathematical Tattoos: The Ink Equation originally appeared on the HLFF SciLogs blog.

Of course, even with these convincing arguments, whether the Euler equation is beautiful or not is purely subjective. As the saying goes, beauty is in the eye of the beholder. So, when a person decides to get body art, it is often a highly personal expression of their idea of what constitutes beauty.

For young researcher Leo Liang, an applied mathematician and computer scientist from the US who is also an extremely accomplished musician, his biggest tattoo is an abstract representation of the essence of his love for music. “I like music that just exists purely by itself; there’s no underlying words or, I guess, meaning to it in a way,” he told us when we spoke at the 11th Heidelberg Laureate Forum, where he was attending as a young researcher. “And that really inspired my tattoo: there’s no meaning to it necessarily, but I just like it because of the way it is.”

Tom Crawford – a mathematician at the University of Oxford and the University of Cambridge, mathematics advocate and communicator, and moderator at the 11th HLF – is no stranger to the tattoo artist’s needle either. Of the ~120 and counting tattoos dotted across him (mostly by @Nat_Von_B at Tattoo Crazy Cambridge), many are mathematics-themed. Unsurprisingly, Euler’s identity features, as do other equations well-recognised for their importance, including Maxwell’s equations of electromagnetism, Heisenberg’s uncertainty principle from quantum mechanics and the Navier–Stokes equations that represent fluid flow.

But there is another, perhaps less familiar, equation that was Crawford’s first maths-themed tattoo, and is proudly emblazoned on his inner forearm:

\[ \frac{H}{H_{0}} = \frac{1 - \alpha_{T}}{1 - \alpha_{i}} \]

“This was referred to by my PhD examiner as genius in my viva, but I hadn’t even realised that it was the main result of my thesis,” recalls Crawford. His research was studying river outflows into the ocean, ultimately to improve models of how pollution is spread from river systems. During his studies, he conducted experiments of how water would behave in a large lab-based tank, and observed the equivalent of a rotating vortex near the river mouth, with a boundary current propagating along the coast.

Most importantly, because he could run his experiments for much longer timescales than experiments in real rivers, he observed that the depth of the current was not a constant, as was previously thought. This meant rewriting the standard potential vorticity equation, which connects the vorticity (whirlpool-like rotation labelled \(\alpha\)) of a river outflow with its depth (labelled \(H\)), to account for time dependence. “I wasn’t planning to get this particular tattoo until my viva happened, but it commemorates or represents my research.”

Some of his other maths-themed tattoos move away from simple equations. Perhaps one of the most interesting, and Crawford’s personal favourite, maths-themed tattoo is a series of bands around his other forearm that are a visualisation of the Basel (or Basler) problem.

First proposed in 1644 by Italian mathematician and priest Pietro Mengoli, the Basel problem can be stated as follows:

Find the numerical value of

\[ 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \dots = \sum_{x=1}^{\infty} \frac{1}{x^2} \]

The series converges extremely slowly, which makes it resistant to numerical approximation, and finding an exact value stumped the greatest minds of the time, including the Bernoulli brothers, Christian Goldbach and Gottfried Wilhelm Leibniz. Leonhard Euler finally solved the problem in 1735. “The answer is \(\frac{\pi^2}{6}\); I can prove it in 12 different ways, but it doesn’t make sense,” Crawford says. “And where does \(\pi\) come from? It’s literally the most unexpected \(\pi\) ever – I love the result so much.”
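Both the slow convergence and Euler's surprising answer are easy to see numerically. The few lines below compute partial sums of the series and compare them with \(\frac{\pi^2}{6}\); the error after \(n\) terms is roughly \(\frac{1}{n}\), so even a hundred thousand terms only buy about five correct decimal places.

```python
import math

def basel_partial_sum(n_terms):
    """Partial sum of the Basel series: 1 + 1/4 + 1/9 + ... + 1/n^2."""
    return sum(1.0 / k**2 for k in range(1, n_terms + 1))

target = math.pi**2 / 6  # Euler's answer

# The tail after n terms is roughly 1/n, so convergence is painfully slow:
for n in (10, 1000, 100000):
    print(f"n = {n:>6}: error = {target - basel_partial_sum(n):.8f}")
```

This is exactly why estimating the sum by hand got the 17th- and 18th-century mathematicians nowhere near an exact value, and why Euler's closed form was such a shock.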

To visualise this through body art, a thick initial band around Crawford’s wrist represents zero, and another about 10 centimetres up his arm represents 1. Then both the spacing between the bands and their thickness shrink according to \(\frac{1}{x^2}\), until they become too thin to tattoo. A little way above that point is a dotted line representing the limit of the sum that you can never get to, i.e. \(\frac{\pi^2}{6}\).

Crawford receives a lot of attention because of his ink, and has channelled this attention for good, always happy to explain what a certain tattoo represents to anyone who asks and presenting talks in bars and comedy clubs explaining the maths of his tattoos. But he also fields questions from many mathematicians, via email or social media, who are thinking of getting a tattoo but are wary of it hurting their career prospects.

“Many people say they want to get such and such a tattoo, but are not sure if it’s going to impact their career, and they’ve asked me for advice on how I’ve navigated that potential judgment,” says Crawford. “I always say, unless there are cultural or religious factors, you should just be judged on your ability to do maths or science or whatever academic subject you’re doing – who cares what you choose to wear or what you do with your body?”

“Have I ever faced situations where I feel like it has held me back in my career? There have been one or two instances where I have felt negativity, but if they see me and don’t judge me on my ability to do maths or to communicate maths, and instead judge me on the fact that I have tattoos, they’re not really people I particularly want to collaborate with.”