What does the future of math and computing hold?


On the final day of the 2021 Heidelberg Laureate Forum, a panel of laureates convened to discuss “Advances in Computer Science, Mathematics and Computing.” The panel included Vint Cerf (2004 Turing Award), Yoshua Bengio (2018 Turing Award), Alessio Figalli (2018 Fields Medal), Yann LeCun (2018 Turing Award), and Avi Wigderson (1994 Nevanlinna Prize and 2021 Abel Prize).

The panel covered a range of topics, including the future of AI and advice for students pursuing their PhDs. Among the highlights, Vint Cerf asked the panel whether they are worried about AI and its potential dangers: can we really rely on AI for critical tasks?

In response, Yoshua Bengio said that AI does not pose a huge danger at the moment – though there are concerning use cases like lethal autonomous drones – but that future uses, like AI for bioengineering, could potentially have civilization-ending consequences. However, he argued that these problems largely aren’t technical but rather the result of the sociopolitical structures that exist in human society. If anyone can get access to dangerous AI-based technologies, then the risks become much greater.

Yann LeCun echoed Bengio’s thoughts; for instance, many people are concerned that AI will invade their privacy. LeCun argued that this isn’t really an AI problem but a governance problem. Authoritarian regimes will use AI to invade individuals’ privacy, but he believes that the right governance structures can prevent such abuses in democratic countries.

Alessio Figalli mentioned a less existential threat from AI: suppose that future computer systems are able to produce mathematical proofs for you. As computers take over the more standard proofs, students and researchers stop doing them. This could be beneficial, since work gets completed faster, but it has a downside: the way to get better at research and problem solving is by training yourself on reasonable problems. If you completely stop solving reasonable problems, your long-term skills will suffer. Researchers will need to motivate themselves to work on these approachable problems, even if a computer can solve them easily.
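As a purely illustrative aside (not something from the panel): here is a minimal Lean 4 sketch of the kind of “standard” proof that automated tactics can already discharge without human insight. The specific lemmas are hypothetical examples chosen for this post, not anything Figalli cited.

```lean
-- Routine lemmas of the kind a proof assistant can already close
-- automatically. `omega`, Lean 4's built-in decision procedure for
-- linear arithmetic over Nat and Int, finds both proofs unaided.

-- Commutativity of addition on the natural numbers.
example (a b : Nat) : a + b = b + a := by omega

-- A small ordering fact: a < b implies a + 1 ≤ b over Nat.
example (a b : Nat) (h : a < b) : a + 1 ≤ b := by omega
```

Anything a tactic like `omega` can close unaided is exactly the kind of “reasonable problem” Figalli worries researchers might stop working through by hand.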

During the Q&A portion of the session, the panel was asked what fields they would advise students to pursue and what advice they would give to researchers just starting their PhDs.

LeCun suggested that students work on self-supervised learning and learning from observation, as these will be key to making advancements in AI and machine learning. Bengio encouraged young people to be a part of the “AI for social good” movement, since using AI to solve key problems in the world will only become more important as AI systems improve and are more widely deployed.

Wigderson and Figalli both suggested that students just starting their PhDs should follow their passions and discover what they both like and are good at. From there, you can work with your advisor and peers to figure out how to make your passions actionable. LeCun encouraged new PhD students to attempt to solve a very important problem, recognizing that such a problem will probably be extremely hard and that you will likely have to renormalize its scope until it is hard but solvable for you.

Useful advice from some very accomplished researchers! You can watch the full session on the HLF YouTube channel next week.

Posted by

Khari Douglas is the Senior Program Associate for Engagement for the Computing Community Consortium (CCC), a standing committee within the Computing Research Association (CRA). In this role, Khari interacts with members of the computing research community and policy makers to organize visioning workshops and coordinate outreach activities. He is also the host and producer of the Catalyzing Computing podcast.

3 comments

  1. The real AI threat: Loss of trust in people
    Quote from above:

    “As computers take over the more standard proofs, students and researchers stop doing them.”

    A colleague of mine just said to me in a discussion about self-driving cars: “Autonomous cars have to have zero accidents to be allowed on public roads.”
    My reaction: when self-driving cars are much safer than humans, people will lose their legitimacy to drive a car.
    The same goes for many other areas. An impartial robotic lawyer who knows every detail of the law will replace all human lawyers.

    The last woman on earth will have a robot husband and robot children. Why? Because robots are better lovers, better husbands, better children. Yes, they will be better humans, and therefore they will replace humans!

  2. Here – “However, he argued that these problems largely aren’t technical but rather the result of the sociopolitical structures that exist in human society. If anyone can get access to dangerous AI-based technologies, then the risks become much greater.”
    +
    “Yann LeCun echoed Bengio’s thoughts; for instance, many people are concerned that AI will invade their privacy. LeCun argued that this isn’t really an AI problem but a governance problem.”
    [Emphasis in each case: Dr. Webbaer]

    … made Dr. Webbaer grunt a little.

    The fact is that AI-based technologies cannot, and Dr. Webbaer gladly repeats this here, cannot be politically (globally) governed in their application, because a considerable part of this planet is not, so to speak, well organized to over-organized, as is usual in liberal democracies.
    Nor is it “rogue-free.”

    It would therefore be correct and appropriate, even now, to anticipate the foreseeable misuse (or simply (“bad”) use) of such technology elsewhere and to develop a suitable defensive posture in advance against such danger, taking absolute care to maintain a technological lead.
    Over others.

    Whatever can go wrong will go wrong; the point is to harden oneself preemptively as best as one can.

    It is no small matter if it soon becomes possible to carry out attacks with weapons of mass destruction, machine-controlled, with an intelligence of their own, whose authorship is never proven and, if these weapons are deployed well, can never be proven, unless witnesses on the opposing side volunteer themselves directly, which should not be counted on.

    Kind regards
    Dr. Webbaer (who in any case expects the emergence of a surveillance state in liberal democracies, but here explicitly asks that others, too, be comprehensively monitored, thank you; Dr. Webbaer being somewhat familiar with web development and its further development, also with regard to so-called AI)
