The semantic network of the Heidelberg Laureate Forum

Felipe Leno da Silva at the HLF 2018. Own image.


The Heidelberg Laureate Forum is all about networking. Mostly in the usual sense of the word, of course: a chance for young researchers to meet laureates in mathematics and computer science, to interact with some of the most prominent minds in the field, and also with each other. But hovering invisibly over the HLF is another network, a semantic network: the concepts of computer science, and of mathematics, that make an appearance in many different contexts, connecting the lectures, the conversations, and the projects of the young researchers and laureates.

To be sure, it’s not as if everything is connected to everything else. For instance, interdisciplinarity notwithstanding, there seems to be a fairly wide divide between some of the more abstract mathematical talks and the computer science side. But some of the themes seem to be woven through all the different HLF warps and wefts.

One of those is machine learning. This year’s HLF kicked off with an introduction from John Hopcroft, starting with the basics of simple neural networks and delving with us into deep learning. Machine learning raised its head in other lectures, too, for instance in David Patterson’s computer architecture talk on Thursday morning. After all, the requirements of machine learning meant that a company like NVIDIA suddenly found people using its cheap, high-performance GPUs for computations that did not involve graphical displays at all! And of course, topics like this have ramifications for the research of many of the young researchers present. As an example, consider Felipe Leno da Silva, who is pursuing a PhD in computer science at the University of São Paulo in Brazil.

During a coffee break, he told me about his work on getting machines to learn complex tasks faster – such as moving towards a door and holding it open so a human can pass through. The key to successful training is a proper feedback (reward) function. Do you remember the game “hot and cold”, where one child, the hunter, looks for a small hidden object, and all the others help out by calling “hotter, hotter” whenever the hunter moves towards the hidden object, and “colder” whenever they move away? That is somewhat similar to the way Felipe’s machines are trained on their complex tasks, where the key components for learning faster are reusing knowledge from previously solved tasks and communicating with other machines and/or humans.
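The “hot and cold” idea can be made concrete as a toy reward function. This is only an illustrative sketch, not Felipe’s actual method: all names and the two-dimensional setup are my own assumptions, chosen to show the principle that the agent is rewarded for getting closer to the goal and penalized for moving away.

```python
import math

def hot_cold_reward(agent_pos, goal_pos, prev_pos):
    """Toy 'hotter/colder' reward: positive when the agent has moved
    closer to the goal, negative when it has moved away.
    Positions are (x, y) tuples; everything here is illustrative."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(prev_pos, goal_pos) - dist(agent_pos, goal_pos)

# Stepping from (2, 0) to (1, 0) with the goal at (0, 0) is "hotter":
print(hot_cold_reward((1, 0), (0, 0), (2, 0)))  # → 1.0
# Stepping from (2, 0) to (3, 0) is "colder":
print(hot_cold_reward((3, 0), (0, 0), (2, 0)))  # → -1.0
```

In real reinforcement learning such dense, shaped rewards are what let an agent learn much faster than a sparse “you reached the door” signal alone would.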

I’d love to see a semantic network of the whole HLF, linking prevalent topics, the laureates, the lectures, the workshops and the young researcher’s specialties. Could we please train a machine to produce that?

Markus Pössel realized already during his physics studies at the University of Hamburg that the challenge of preparing and presenting physics topics so that they become understandable to non-physicists was at least as interesting to him as the actual research work. After his doctorate at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute) in Potsdam, he stayed on at the institute as an outreach scientist, was involved in various exhibition projects during the Einstein Year 2005, and created the web portal Einstein Online. At the end of 2007, he moved to the World Science Festival in New York for a year. Since the beginning of 2009, he has been a staff scientist at the Max Planck Institute for Astronomy in Heidelberg, where he heads the Haus der Astronomie, a center for astronomy outreach and education. Pössel blogs, is the author or coauthor of several books, and writes regularly for the magazine Sterne und Weltraum.
