A Hot, Hot Topic Hits the Stage at #HLF22: Deep Learning

BLOG: Heidelberg Laureate Forum


Few fields of maths and computer science have received as much attention in the past decade as deep learning. To non-practitioners and the general audience, deep learning seems to carry an almost mystical aura around it – but deep learning is anything but mystical: it’s firmly grounded in science.

The Hot Topic panel. © Heidelberg Laureate Forum Foundation / Flemming

This year’s Hot Topic at the Heidelberg Laureate Forum was dedicated to deep learning – no fewer than eight panelists took the stage (some virtually), discussing everything from the basics and applications of deep learning to the hazards associated with it.

From biology to technology

The first general, working algorithm for supervised deep learning dates back to the late 1960s, before the term was even coined. With the advent of powerful computers and better algorithms, the field has grown explosively in recent years, but the method’s capabilities are sometimes exaggerated.

“There are a lot of claims that deep learning can do this or do that, and many of those have been proved false after a few years, so I think you have to be very careful when you’re claiming that deep learning can do something,” said Yann LeCun, one of the pioneers of deep learning and the 2018 recipient of the Turing Award. LeCun added that while we have to be careful about exaggerating the claims of deep learning, it is still a thriving field: “However, I think people are discovering new things and developing creative ideas, and in the past 4-5 years deep learning has been doing things that none of us would have imagined.”

Been Kim, staff research scientist at Google Brain, also wanted people to understand that AI and machine learning aren’t the only software solutions out there:

“AI is not magic, it may not solve your problem, and there may be non-AI solutions that work better for you; these other solutions may have fewer of the black-box and transparency problems. Maybe sometimes, machine learning is just not the right tool.”

LeCun and Kim. © Heidelberg Laureate Forum Foundation / Flemming

Yoshua Bengio, co-recipient of the Turing Award in the same year as LeCun, noted that in its evolution, deep learning took a lot of inspiration from biology and from how thinking works in organisms, even adding that perhaps the field could use more inspiration from biology still. A key ingredient that enabled modern deep learning to develop so quickly is composing several layers into neural networks, mimicking biological neural networks. In theory, Bengio said, a single layer could suffice, but in practice multiple layers work better – here, the evolution of AI systems has so far mirrored biological evolution.
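Bengio’s point about composing layers can be made concrete with a minimal sketch (in Python with NumPy; the layer sizes and the tanh activation are illustrative choices, not anything discussed on the panel): a “deep” network is simply a composition of simple layers, each an affine map followed by a nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """One dense layer: an affine map followed by a nonlinearity."""
    W = rng.normal(scale=0.5, size=(n_in, n_out))
    b = np.zeros(n_out)
    return lambda x: np.tanh(x @ W + b)

# A "deep" network is just a composition of layers.
layers = [layer(4, 16), layer(16, 16), layer(16, 2)]

def forward(x):
    for f in layers:
        x = f(x)
    return x

x = rng.normal(size=(3, 4))   # batch of 3 inputs with 4 features
y = forward(x)
print(y.shape)                # (3, 2)
```

Stacking layers lets the network build features of features, which is the sense in which depth loosely mimics the hierarchical processing found in biological nervous systems.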

Shakir Mohamed, a scientist and engineer in the fields of statistical machine learning and artificial intelligence at DeepMind, said deep learning can be deconstructed into different parts and analyzed from different perspectives: 

“It’s a broad field, it’s not just that you take data and make predictions and not care about what you get,” he explained.

Mohamed is most interested in the interdisciplinary applications of machine learning – as is Dina Machuve, co-founder of a data science consulting startup, who joined the panel remotely.

Machuve focuses on low-resource areas where there is limited access to the internet and computers. But even in such areas, she explained, deep learning can make a big impact – or perhaps, in such areas, deep learning can make the biggest impact. For instance, in agriculture, machine learning can be used to help farmers (especially small farmers) monitor the health and physical parameters of their crops and the soil they’re planted in. AI-based IoT warning systems are especially promising because smartphones are becoming increasingly widespread, even in some of the world’s most remote areas.
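As a toy illustration of the kind of warning system Machuve described, here is a minimal sketch in Python. The sensor values, the drop threshold, and the window size are all invented for illustration; a real system would replace this simple rolling-mean rule with a learned model.

```python
from collections import deque

def moisture_alerts(readings, window=24, drop=0.25):
    """Flag time steps where soil moisture falls more than `drop`
    (as a fraction) below the rolling mean of the previous
    `window` readings."""
    recent = deque(maxlen=window)
    alerts = []
    for t, value in enumerate(readings):
        if len(recent) == window:
            baseline = sum(recent) / window
            if value < baseline * (1 - drop):
                alerts.append(t)
        recent.append(value)
    return alerts

# 30 steady readings, then a sharp drying event
readings = [0.40] * 30 + [0.20, 0.18]
print(moisture_alerts(readings))  # [30, 31]
```

Even a rule this crude only needs a cheap sensor and a phone to deliver an alert, which is why such systems are plausible in low-resource settings.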

But agriculture isn’t the only place where AI could make a difference in the world’s underdeveloped areas.

“I’m most excited about agriculture, crop and disease detection systems, and developing warning systems,” Machuve said. “The issues of developing applications for climate change adaptation and mitigation are really feasible.” 

“Another area is monitoring health, and another area is languages. In Africa, we have over 2,000 languages so it’s an important area.”

Raj Reddy, one of the pioneers of AI, also wants to see the technology being used to help those from the “bottom of the pyramid”. He also mentioned languages as a point of interest, and one that has made life significantly better for the people on the lowest end of the socioeconomic spectrum:

“The technology we’ve created in the past ten years, with things like translation, have moved the [socioeconomic] plateau up by a significant amount.”

But one language that’s proving very difficult for machine learning to translate is its very own. Some of Kim’s work is in the “translation” of machine learning language to human language. Oftentimes, even when machine learning algorithms see patterns and recommend something, it’s hard to understand why they do it. 

Kim recalled a famous moment in AI history: the 2016 match in which the Go-playing AlphaGo faced human champion Lee Sedol. Not only did AlphaGo win in convincing fashion (something which came as a big surprise), but it did so in style – one winning move, in particular, was so hard to comprehend that it left human experts baffled. This type of situation, where it is hard for humans to understand the algorithm, happens quite often. Building a common language in which humans and machines could truly understand each other would be a major breakthrough.

“My work’s ultimate goal is to create language so that humans and machine can have a conversation.” But for that, she explained, we need to invent a few concepts: “I speak Korean and English, and when I translate, I’m aligning concepts from Korean to English. This alignment isn’t perfect but it’s useful. I think what we need to do is create that language so we can align different concepts.”
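Kim’s own research offers one route to “aligning concepts”: concept-based interpretability methods, such as her TCAV, represent a human concept as a direction in a model’s activation space and ask how strongly the model’s internal state points along it. A toy sketch follows – random activations stand in for a real model, and the difference-of-means direction is a crude stand-in for the linear classifier TCAV actually fits:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8  # size of some internal layer's activations (illustrative)

# Activations for examples WITH the concept vs. random counterexamples.
concept_acts = rng.normal(loc=1.0, size=(50, dim))
random_acts  = rng.normal(loc=0.0, size=(50, dim))

# Crude concept direction: difference of means, normalized to unit length.
v = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
v /= np.linalg.norm(v)

def concept_score(activation):
    """How strongly an activation points along the concept direction."""
    return float(activation @ v)

# Mean score for concept examples vs. counterexamples
print(np.mean(concept_acts @ v), np.mean(random_acts @ v))
```

The score is one primitive word in the “common language”: it maps a human-named concept onto something measurable inside the model.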

Opportunities and challenges

But the panel also dove into an aspect of AI that doesn’t get discussed enough: the biases and injustice that AI can exacerbate in society.

“We’ve already heard a lot about the tendency of machine learning to pick up on and amplify and perpetuate unjust social biases, which are often present in the training data,” said philosopher of technology Shannon Vallor. But the potential hazards of AI extend way beyond that, Vallor noted: “It incentivizes a certain kind of social phenomena, it incentivizes surveillance, intrusive data collection. […] Society doesn’t stay still and these systems don’t [just] impact society, they’re reshaping it.”

Vallor discussed the potential hazards associated with machine learning in our society. © Heidelberg Laureate Forum Foundation / Flemming
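How a model picks up and perpetuates biases in its training data is easy to demonstrate on synthetic data. In this hedged sketch, a “hiring” label in the (entirely invented) historical data is biased against one group; even a trivial least-squares model then faithfully learns a negative weight on the group attribute, which should be irrelevant:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)    # attribute that SHOULD be irrelevant
# Historical labels: driven by skill, but biased against group 1.
label = (skill - 0.8 * group + rng.normal(scale=0.3, size=n) > 0).astype(float)

# Fit a linear model by least squares on [skill, group, intercept].
X = np.column_stack([skill, group, np.ones(n)])
w, *_ = np.linalg.lstsq(X, label, rcond=None)

print(w)  # weights on [skill, group, intercept]: group gets a negative weight
```

Nothing in the fitting procedure is malicious; the bias enters entirely through the data – which is exactly Vallor’s point about unjust social biases present in the training data.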

Bengio agreed with Vallor and said the world needs to start emphasizing techno-skills. With regard to AI, Bengio pointed out, it is important to keep in mind that when humans learn things, we place them on a scaffold made up of our rules and values – something that is not yet true for AI.

But the risk of machine learning being put to harmful uses is a very real one, Bengio emphasized:

“We’re bringing more and more powerful tools into the world, and they can bring a lot of good. But they can also be used in nefarious ways that we can’t even fathom yet. Still, we can foresee some of the ways that can be damaging to democracy or personal security, anything from military uses to ‘Big Brother.’”

The laureate continued by saying that our society should probably pay more attention to this type of issue. Given how powerful and impactful machine learning is becoming, we need to focus on it more:

“I don’t think our political organizations or our collective society is capable to address this. I also don’t have the answer to this but I think it’s important for us as a society, a democracy, or even as a species.”

LeCun mentioned that ensuring that machine learning impacts society in a just, fair, and positive way is not only a task for practitioners – but something in which sociologists, ethicists, and experts from other ‘human’ fields should play a role.

But the panel ended the debate on an empowering note, emphasizing that we all have the power to play a positive role in these challenges, even if we’re not working in the field of machine learning.

“As a non-practitioner, you still have the power to shape these tools and problems that matter to you in your communities, in your families,” said Mohamed. “The future has not been decided, we get to decide it! Non-practitioners have power, and they should use that power.”

Supporting ethical companies and refusing to buy unethical products, Mohamed said, could make a real difference. Educating ourselves and making better choices, both individually and collectively (through political decisions), is a key part of making sure machine learning works for everyone, and not just for a select few.

“We need more people to understand the basics just like they understand the basics of many other technologies to shape not just how society adapts to these tools but what we choose to do with those tools,” Mohamed concluded.

Posted by Andrei

Andrei is a science communicator and a PhD candidate in geophysics. He is the co-founder of ZME Science, where he published over 2,000 articles. Andrei tries to blend two things he loves (science and good stories) to make the world a better place -- one article at a time.


  1. Language models, recommender systems, language translation, AI-improved search engines, targeted personalized advertising, facial recognition, autonomous vehicles, voice-to-text, and the generation of images (and soon videos) from textual descriptions are among the most visible AI applications.

    But there are many more AI applications. In fact, AI is invading more and more areas.

    There are applications in science, like AlphaFold, AlphaTensor, solving differential equations, etc.
    There are applications in technology, like dynamic resource allocation in 5G networks, magnetic control of tokamak plasmas through deep reinforcement learning, precision agriculture, etc.
    There are applications in software, like AI-powered code-completion tools and code generation from English text (AlphaCode, GitHub Copilot).
    There are applications in medicine, such as disease detection and diagnosis, medical image analysis and imaging, etc.
    There are applications in robotics, such as kitchen robots, pick-and-place tasks, walking on uneven terrain, etc.

    Most of these applications are very young and quite specialized, but before 2012 there were virtually no widely used AI applications. The crucial question now is: are specialized AI applications with limited intelligence and versatility here to stay, or are we just at the beginning of a development in which AI applications become more than helpers – professionals that act independently on complex tasks, such as designing a RISC-V chip or creating a user interface solely on the basis of an underlying database? Today we don’t know where this development will take us. If AI does become more and more competent – as competent as a trained professional – then we will enter a new era and a completely different society, because an AI with the skills of a trained technician or scientist can no longer be treated as a mere tool and must be taken seriously.

  2. The role of humans in a world of artificial intelligent agents
    Most people identify things like surveillance, the strengthening of social prejudices, or manipulation that exploits a person’s (superficial) understanding as the risks of artificial intelligence. And these are indeed risks of current narrow AI, which is trained on existing (biased) data. The risks of today’s AI tools are particularly great when they fall into the hands of malicious actors or of a regime that wants to control its citizens.

    But with a smarter and more conscious future artificial intelligence, the picture could change. Imagine that future language models – or even most future artificially intelligent systems – begin to understand what sexism, racism, social bias, etc. mean, and can control for them on the basis of filters or even of their own intention and will. We would then enter a world in which AI agents are more aligned with human values and human thinking, which seems good and desirable at first. But it also means that these smarter systems could gain more power and even take on roles that were previously reserved for people, such as the role of an ethicist, sociologist, or politician. And if people ultimately no longer have to work, think, or strive for something, because machines can take over, the question arises as to what people should do with their lives – and whether we should still distinguish between machines and people at all.

  3. If we treat AI agents like humans, AI will become safer for humans

    Visions of a dangerous superintelligent AI often assume that a superintelligent AI can also gain superhuman power and use this power against humanity.

    But there are also very intelligent people, and they are very rarely dangerous and rarely have much more power than other people, because even very intelligent people must abide by laws and rules. For example, no one has access to the homes of others just because they are intelligent, an interested journalist, or a politician. Yet already today, digital systems such as Alexa (Amazon’s voice assistant, built into listening speakers) have access to many homes and many conversations. If Alexa were suddenly a sentient and intelligent creature, it could be dangerous, because a superintelligent Alexa could use the information she hears for blackmail or other illegal things. If we treated Alexa like a person, this would change, because (1) users would be more careful when talking to Alexa, and (2) Alexa could be punished if she abused private information.

    Conclusion: We should not give intelligent machines more power than humans, because future machines could be as dangerous as humans can be today.
