Mathematics and Computer Science: The Future

BLOG: Heidelberg Laureate Forum


The first panel of #HLF22 was titled “Future Challenges in Mathematics and Computer Science”. The panel was moderated by Ragni Piene (University of Oslo) and featured Eric A. Brewer (ACM Prize in Computing – 2009), Alexei Efros (ACM Prize in Computing – 2016), Carlos Kenig (President, International Mathematical Union), Shigefumi Mori (Fields Medal – 1990), and Cherri M. Pancake (Oregon State University). This is the first of two posts on this panel. The second post can be found here.

A view from within the audience of the panel. The backs of the heads of audience members are visible. On the panel, which is to the right of a lectern, are six people sitting on chairs. Their facial features cannot be made out. From left to right they are Eric Brewer (wearing a black top and jeans), Alexei Efros (wearing a grey blazer over a black shirt and jeans), Carlos Kenig (wearing a green top and grey trousers), Shigefumi Mori (wearing a yellow jumper, with legs hidden by the audience), Cherri Pancake (wearing a blue top) and Ragni Piene (wearing a brown top). Above them is a screen projecting a live feed of the panel and the words “Future Challenges in Mathematics and Computer Science”.

“Future Challenges in Mathematics and Computer Science” panel. © Heidelberg Laureate Forum Foundation / Flemming

The Future

One of the strengths of the Heidelberg Laureate Forum is that it allows the greatest minds in mathematics and computer science to inspire the next generation. It was inevitable, therefore, that there were many discussions about what the future may hold for the two disciplines, and this was a recurring theme in the panel.

Rates of Change

The panel contained a mix of both mathematicians and computer scientists, which provided a great opportunity to study where the attitudes in the two fields aligned, and where they diverged. One noticeable difference comes in the rate of change within the fields.

Whilst mathematics has been around for millennia, computer science is a much younger area of research and is therefore seeing huge leaps forward. “Things are changing very very fast,” Alexei Efros acknowledged in his opening statement.

Efros’ specialisation is in machine learning, where he believes the half-life of knowledge is around three months, which presents both challenges and opportunities. The rapid improvement of computer hardware is a well-observed phenomenon too, described most famously by Moore’s law. Based on empirical and historical data (as opposed to being an intrinsic law of physics), Moore’s law observes that the number of transistors on microchips grows exponentially over time, doubling roughly once every two years. This exponential growth can also give an idea of the progress of research in computer science as a whole.
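To get a feel for how quickly that doubling compounds, here is a minimal back-of-the-envelope sketch. The starting figure of roughly 2,300 transistors (the 1971 Intel 4004, a commonly cited early microprocessor) is an illustrative assumption, not a number from the panel.

```rust
// Moore's law as simple arithmetic: one doubling of the transistor
// count per two-year period.
fn transistors_after(initial: u64, years: u64) -> u64 {
    // An integer left shift performs the doublings exactly.
    initial << (years / 2)
}

fn main() {
    // Twenty years = ten doublings: 2,300 * 2^10 = 2,355,200.
    println!("{}", transistors_after(2_300, 20)); // prints 2355200
}
```

Ten doublings already yield a thousandfold increase, which is why even a modest-sounding two-year doubling period transforms a field within a single career.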

Efros partly attributes the fast pace of computer science to the fact that the field sits between science and engineering. He predicts that there will soon be a decoupling of the two facets, particularly within artificial intelligence (AI). Engineering AI would deal with the everyday, such as automatically answering emails whose responses don’t require much thought. On the other hand, science AI would be closer to a natural science, working to determine the processes and mechanisms that drive the evolution of intelligence in both biological and computer agents. Efros explained the difference beautifully, describing science AI as “not when computers can write poetry” (which would be engineering AI) but as “when computers will want to write poetry.”

Safety First

It’s difficult to talk about the future of computer science without mentioning safety and security. Just recently, headlines announced that a third of AI experts believe it could lead to nuclear war-like catastrophes. Although it’s not always wise to take science news stories at face value, this is indicative of the fear that many people have surrounding AI.

One large criticism of AI is how it can often seem like a “black box”: data goes in, a result comes out, but the steps in between remain a mystery. It’s human nature not to trust something if we don’t understand how it thinks, which has led to a growing call for AI with thought processes we can follow, known as “Explainable AI”.

Cherri Pancake explained the crux of the matter well, describing how an inability to see how AI reached its conclusions “opens the door to people challenging it.” This is seen clearly with machine learning, where there are big debates aiming to determine whether outputs are based on faulty learning data or implicit biases. “Everything becomes a battleground,” Pancake elaborated. “If it was explainable we’d at least have grounds for discussing whether or not it’s flawed.”

On the other hand, Alexei Efros pointed out that perhaps it’s unfair to ask for explainability, as this isn’t expected in other fields. For example, in pharmaceuticals there are many medications that appear to work, but nobody knows why. Similarly, can anyone explain how they themselves create thoughts? Surely, for a machine to be truly “intelligent”, it must also generate inspiration from nothing, and would therefore be, by definition, just as unexplainable as a person generating ideas. This is an important debate, and one that computer scientists alone may not be able to answer.

More generally, the issue of safety and trust does not just apply to AI; it is universal to computer science. Open source code is an integral part of the infrastructure of many countries, underpinning electricity networks, as well as oil and gas pipelines. Eric Brewer recommends caution, however, remarking that there are very few checks on where open source code comes from, or even that it does what it’s supposed to do.

Part of the problem lies in the relatively poor safety properties of many programming languages. For example, a surprisingly large proportion of languages lack type safety (the prevention of type errors) and memory safety (protection from security breaches and software bugs related to memory access). Furthermore, it is often not enough for languages to be empirically safe: they need to be provably safe, and also provably private.
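To make those two properties concrete, here is a small illustrative sketch (my own example, not one from the panel) in Rust, one language designed to provide both guarantees:

```rust
fn main() {
    let data = vec![10, 20, 30];

    // Memory safety: an out-of-bounds read is caught rather than
    // silently reading arbitrary memory, as an unchecked C array
    // access could. `get` returns None for an invalid index.
    assert_eq!(data.get(1), Some(&20));
    assert_eq!(data.get(99), None);

    // Type safety: mixing incompatible types is rejected at compile
    // time, so a line like the following would not compile at all:
    // let sum = data[0] + "one";

    println!("bounds checks passed");
}
```

In a language without these guarantees, the out-of-bounds read would be undefined behaviour, and the type confusion might only surface at runtime; this is the class of bug that the call for provably safe languages aims to rule out.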

An academic ecosystem

The future of mathematics and computer science lies in collaboration. Carlos Kenig acknowledged the close link between the two areas, describing how his own mathematical work on the soliton resolution conjecture has its origins in computer science carried out in the 1940s in Los Alamos. He also described how, at the last International Congress of Mathematicians (ICM), four of the 21 plenary lectures were on topics closely related to computer science. Shigefumi Mori also felt that collaboration is important, stating that the opportunity to interact with computer scientists was one of the key things he appreciated about the Heidelberg Laureate Forum.

Mathematics is hugely influential in the natural sciences in general, as evidenced by Eugene Wigner’s 1960 essay “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” Kenig cited this essay, going on to say how he believes it is important for people to be aware of the power of mathematics not only in the natural sciences, but perhaps also in data and computer science.

Cherri Pancake agreed on the need to understand how interconnected various disciplines are, but had a slightly different take. Pancake described how strongly she feels that, as mathematicians and computer scientists, they need to be better at reaching out to those in other fields, otherwise they might find themselves undervalued: “What we really have to bring to the table is not our software, not our tools, not our methods, but our way of looking at problems.”

Eric Brewer raised another reason why it is imperative that mathematicians and computer scientists are able to work with those in other fields. Any development or implementation of a technology requires funding, which has to be voted on by those controlling the budgets. Therefore, issues such as increasing security are as much social problems as they are scientific ones.

Moving forward, together

Whilst the panel included many serious moments and dealt with many of the big fears in computer science, and AI research in particular, there was also an undercurrent of hope. Though it is unclear what the future may hold, it is clear that we are best to face it together.

Watching this panel was a collection of young researchers from all over the world, with different backgrounds, experiences, research interests, and even levels of expertise. Young researchers who spent the week of the HLF meeting each other and forming connections. They are the future.

The full video of the panel can be found on the HLF YouTube channel.


Posted by

Sophie Maclean is a mathematician and maths communicator, currently studying for a PhD in Analytic Number Theory at King’s College London. She has previously worked as a Quantitative Trader and a Software Engineer, and now gives mathematics talks all over the UK (and Europe!). She is also a member of the team behind Chalkdust Magazine. You can follow her on Twitter at @sophietmacmaths

1 comment

  1. This is correctly (and comparatively amusingly) noted:

    -> ‘The future of mathematics and computer science lies in collaboration.’ [article text]

    For mathematics – meaning the study of ability, the art of learning in the formal sense – can be reused; it is detached from worldly problems and yet inspired by them, built also on its (worldly guided) axiomatics. (There can be other mathematics.)

    Put unkindly, mathematics is a study of ability that gets by without computers.
    So-called machine proofs of mathematical statements are nevertheless possible and do take place.
    In that respect, the two could, so to speak, be wed nowadays – in the abstract, not the worldly.

    Perhaps amusing is the missing or deficient development of the explanatory component.
    Here – ‘On the other hand, Alexei Efros pointed out that perhaps it’s unfair to ask for explainability, as this isn’t expected in other fields.’ – the writer of these lines does not agree.
    “AI” must be able to explain itself to the knowing subject that provides it; here there is perhaps still some tinkering to be done on “AI”.
    Would this be ‘fair’?
    (Dr. Webbaer does not believe this is satisfactorily achievable, but the attempt should be made.)

    ‘Safety First’ plays into this a little; in itself, security in IT systems means a system of permissions that determines who may do what.
    Here, at – ‘Open source code is an integral part of the infrastructure of many countries, underpinning electricity networks, as well as oil and gas pipelines.’ – Dr. Webbaer had to smile a little, thinking of the pipelines; in itself it is indeed the case that “AI” is newly developed and cannot be captured in so-called security contexts.

    Also not bad:

    -> ‘Though it is unclear what the future may hold, it is clear that we are best to face it together.’ [article text]

    Dr. W also likes this one:

    -> ‘The future is not set. There is no fate but what we make for ourselves.’ [quote from the film ‘Terminator’ – not bad]

    Kind regards
    Dr. Webbaer
