A Hot, Hot Topic Hits the Stage at #HLF22: Deep Learning
Few fields of mathematics and computer science have received as much attention in the past decade as deep learning. To non-practitioners and the general audience, deep learning seems to carry an almost mystical aura – but it is anything but mystical: it is firmly grounded in science.
The Hot Topic panel. © Heidelberg Laureate Forum Foundation / Flemming
This year’s Hot Topic at the Heidelberg Laureate Forum was dedicated to deep learning – no fewer than eight panelists took the stage (some virtually), discussing everything from the basics and applications of deep learning to the hazards associated with it.
From biology to technology
The first general, working algorithm for supervised deep learning dates back to the late 1960s, before the term was even coined. With the advent of powerful computers and better algorithms, the field has grown explosively in recent years, but the capability of the method is sometimes exaggerated.
“There are a lot of claims that deep learning can do this or do that, and many of those have been proved false after a few years, so I think you have to be very careful when you’re claiming that deep learning can do something,” said Yann LeCun, one of the pioneers of deep learning and the 2018 recipient of the Turing Award. LeCun added that while we have to be careful about exaggerating the claims of deep learning, it is still a thriving field: “However, I think people are discovering new things and developing creative ideas, and in the past 4-5 years deep learning has been doing things that none of us would have imagined.”
Been Kim, staff research scientist at Google Brain, also wanted people to understand that AI and machine learning aren’t the only software solutions out there:
“AI is not magic, it may not solve your problem, and there may be non-AI solutions that work better for you; these other solutions may have fewer of these black-box and transparency problems. Maybe sometimes, machine learning is just not the right tool.”
LeCun and Kim. © Heidelberg Laureate Forum Foundation / Flemming
Yoshua Bengio, co-recipient of the Turing Award in the same year as LeCun, noted that in its evolution, deep learning took a lot of inspiration from biology and how thinking works in organisms, even adding that perhaps the field could use even more inspiration from biology. A key ingredient that enabled modern deep learning to develop so quickly is composing several layers into neural networks, mimicking biological neural networks. In theory, Bengio said, there could be a single layer, but in practice, having multiple layers seems to work better – the evolution of AI systems so far mirroring biological evolution.
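What “composing layers” means can be seen in a minimal sketch (my own illustration, not code shown at the panel; all sizes and weights are arbitrary): each layer is just an affine map followed by a nonlinearity, and a deep network is the composition of several such functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One neural-network layer: affine transform followed by a ReLU nonlinearity."""
    return np.maximum(0.0, x @ w + b)

# A 3-layer network is simply a composition of three such functions.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 2)), np.zeros(2)

x = rng.normal(size=(1, 4))  # one input example with 4 features
h = layer(layer(layer(x, w1, b1), w2, b2), w3, b3)
print(h.shape)  # (1, 2)
```

Without the nonlinearity between them, the stacked affine maps would collapse into a single linear map – the interleaved nonlinearities are what make depth add expressive power.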
Shakir Mohamed, a scientist and engineer in the fields of statistical machine learning and artificial intelligence at DeepMind, said deep learning can be deconstructed into different parts and analyzed from different perspectives:
“It’s a broad field, it’s not just that you take data and make predictions and not care about what you get,” he explained.
Mohamed is most interested in the interdisciplinary applications of machine learning – as is Dina Machuve, co-founder of a data science consulting startup, who joined the panel remotely.
Machuve focuses on low-resource areas where there is limited access to the internet and computers. But even in such areas, she explained, deep learning can make a big impact – or perhaps, in such areas, deep learning can make the biggest impact. For instance, in agriculture, machine learning can be used to help farmers (especially small farmers) monitor the health and physical parameters of their crops and the soil they’re planted in. AI-based IoT warning systems are especially promising because smartphones are becoming increasingly widespread, even in some of the world’s most remote areas.
But agriculture isn’t the only place where AI could make a difference in the world’s underdeveloped areas.
“I’m most excited about agriculture, crop and disease detection systems, and developing warning systems,” Machuve said. “The issues of developing applications for climate change adaptation and mitigation are really feasible.”
“Another area is monitoring health, and another area is languages. In Africa, we have over 2,000 languages so it’s an important area.”
Raj Reddy, one of the pioneers of AI, also wants to see the technology being used to help those from the “bottom of the pyramid”. He also mentioned languages as a point of interest, and one that has made life significantly better for the people on the lowest end of the socioeconomic spectrum:
“The technology we’ve created in the past ten years, with things like translation, have moved the [socioeconomic] plateau up by a significant amount.”
But one language that’s proving very difficult for machine learning to translate is its very own. Some of Kim’s work is in the “translation” of machine learning language to human language. Oftentimes, even when machine learning algorithms see patterns and recommend something, it’s hard to understand why they do so.
Kim recalled a famous moment in AI history: when the Go-playing AlphaGo faced human champion Lee Sedol. Not only did AlphaGo win in convincing fashion (something which came as a big surprise), but it did so in style – one winning move, in particular, was so hard to comprehend that it left human experts baffled. This type of situation, where it’s hard for humans to understand the algorithm, happens quite often. Building a common language in which humans and machines could truly understand each other would be a major breakthrough.
“My work’s ultimate goal is to create language so that humans and machines can have a conversation.” But for that, she explained, we need to invent a few concepts: “I speak Korean and English, and when I translate, I’m aligning concepts from Korean to English. This alignment isn’t perfect but it’s useful. I think what we need to do is create that language so we can align different concepts.”
Opportunities and challenges
But the panel also dove into an aspect of AI that doesn’t get discussed enough: the biases and injustice that AI can exacerbate in society.
“We’ve already heard a lot about the tendency of machine learning to pick up on, amplify, and perpetuate unjust social biases, which are often present in the training data,” said philosopher of technology Shannon Vallor. But the potential hazards of AI extend way beyond that, Vallor noted: “It incentivizes certain kinds of social phenomena, it incentivizes surveillance, intrusive data collection. […] Society doesn’t stay still, and these systems don’t just impact society, they’re reshaping it.”
Vallor discussed the potential hazards associated with machine learning in our society. © Heidelberg Laureate Forum Foundation / Flemming
Bengio agreed with Vallor and said the world needs to start emphasizing technoskills. In regard to AI, Bengio pointed out, it is important to keep in mind that as humans learn things, we place them on a scaffold made up of our rules and values – whereas this is not true for AI, at least not yet.
But the risk of machine learning being put to harmful uses is a very real one, Bengio emphasized:
“We’re bringing more and more powerful tools into the world, and they can bring a lot of good. But they can also be used in nefarious ways that we can’t even fathom yet. Still, we can foresee some of the ways that can be damaging to democracy or personal security, anything from military uses to ‘Big Brother.’”
The laureate continued by saying that our society should probably pay more attention to this type of issue. Given how powerful and impactful machine learning is becoming, he argued, we need to focus on it more:
“I don’t think our political organizations or our collective society is capable of addressing this. I also don’t have the answer to this, but I think it’s important for us as a society, a democracy, or even as a species.”
LeCun mentioned that ensuring that machine learning impacts society in a just, fair, and positive way is not a task for practitioners alone – sociologists, ethicists, and experts from other ‘human’ fields should also play a role.
But the panel ended the debate on an empowering note, emphasizing that we all have the power to play a positive role in these challenges, even if we’re not working in the field of machine learning.
“As a non-practitioner, you still have the power to shape these tools and problems that matter to you in your communities, in your families,” said Mohamed. “The future has not been decided, we get to decide it! Non-practitioners have power, and they should use that power.”
Supporting ethical companies and refusing to buy unethical products, Mohamed said, could make a real difference. Educating ourselves and making better choices, both individually and collectively (through political decisions), is a key aspect of making sure machine learning works for everyone, and not just for a select few.
“We need more people to understand the basics, just like they understand the basics of many other technologies, to shape not just how society adapts to these tools but what we choose to do with those tools,” Mohamed concluded.