Shaping AI for the People: A Blueprint for the Future

BLOG: Heidelberg Laureate Forum


Artificial intelligence is no longer a futuristic promise; it is here, and it seems to be embedded almost everywhere. Few technologies have spread so quickly, and few have split opinion so sharply. To some, AI is the dawn of a new golden age; to others, it is a ticking time bomb. This tension between possibility and risk was also visible in a live poll conducted among the audience at the 12th Heidelberg Laureate Forum this year, where “deepfakes and misinformation” was voted the most important AI challenge of the next 10 years, followed by concerns about ethics and privacy.

Beneath all this tension is one key question: How do we make sure AI works for people, not against them?

Jeff Dean (Chief Scientist, Google DeepMind and Google Research; ACM Prize in Computing – 2012) and David Patterson (ACM A.M. Turing Award – 2017) also asked themselves that question. The two gathered expert advice from fields ranging from science to policy and law, consulting AI experts as well as the likes of Nobel Laureate John Jumper and former US President Barack Obama. In a Spark Session at the 2025 Heidelberg Laureate Forum, they presented some of their conclusions.

A screenshot showing the results of a live poll among the 12th HLF’s participants. Image credits: HLFF.

Four Moonshots

The main conclusion was neither a rosy “AI will save us all” nor a warning about rogue superintelligence. Instead, the two laureates laid out a practical view of how AI in jobs, education, healthcare, and even democracy will set the trajectory for billions of lives. They wanted to steer the research community with concrete goals, much like the “moonshot” of the Space Race.

The result, a project called “Shaping AI”, was born out of a “shared frustration over the polarized discourse on AI, which has devolved into a standoff between accelerationists and doomers.”

“Rather than simply predict what the impact of AI will be given a laissez-faire approach, our goal is to propose what the impact could be given directed efforts to maximize the upsides and minimize the downsides,” the project’s About page reads. A summary is also presented in an arXiv paper.

At the HLF Spark Session on Tuesday morning, Dean and Patterson took the stage together to present some of their findings. They started with what they hope AI can actually deliver. “We should set concrete goals to have a positive societal benefit,” said Patterson.

They could not limit themselves to one “moonshot,” however. They landed on four:

  • Functional Civic Discourse by 2030;
  • AI for Healthcare;
  • A Century of Progress in a Single Decade;
  • Workforce Re-skilling.

The last of the four is perhaps the most straightforward to address. If AI displaces workers, it must also help them rebound. Patterson calls for an “AI rapid upskilling prize,” a system that helps low-wage workers retrain into middle-class jobs within six months. In fact, he explained how AI could actually help rebuild the middle class. Yet people’s concerns about their jobs are not unfounded.

Jeff Dean answering questions after his presentation. © HLFF.

AI is far from the first technology set to reshape the workforce, but the scale at which it is happening is striking. The impact is also geography-dependent. In developed countries like the US, the worry is lawyers or coders being displaced. In sub-Saharan Africa, for example, the crisis is the opposite: There are not enough trained professionals. In such regions, AI could be transformative, just as mobile phones once leapfrogged landlines. An AI “health aide” in a nurse’s pocket might literally save lives where doctors are scarce. Ultimately, if directed wisely, AI could expand employment by boosting productivity in sectors where demand is boundless, like education, healthcare, software, or research. But if left unchecked, it could hollow out industries with fixed ceilings.

The main immediate focus is to remove the drudgery from current tasks, Dean pointed out. The recommended approach is to focus AI on increasing human productivity rather than replacing labor.

“AI focused on human productivity is better than labor replacement,” the laureate said. AI can increase human employability, but we need safeguards when AI veers off course. The first objective should be to “remove drudgery from current tasks and only then move to new AI innovation,” Dean continued.

Can AI Help Democracy?

Whereas the impact of AI on jobs has some predictable elements, its impact on our civic discourse and democratic societies is far harder to assess.

We are seeing today how social networks and AI-fuelled operations often amplify division and stir misinformation; we also see AI used for surveillance in some contexts. The small HLF survey echoes similar concerns from broader civic society. But could AI also be used to repair some of the broken machinery of our society?

Experiments highlighted by Dean and Patterson in the paper show AI can sometimes play a positive role, moderating conversations, surfacing shared values, and even countering conspiracy theories. Patterson cites one familiar case: a friend who engaged in conspiracy theories argued with AI until their arguments simply ran dry. Later on, that friend showed less attachment to their false beliefs.

Patterson answering questions after his presentation. © HLFF.

“Though many are rightly worried about the prospect of artificial intelligence being used to spread misinformation or polarize online communications, our findings indicate it may also be useful for promoting respect, understanding, and democratic reciprocity,” one quoted study mentioned.

Sceptical? So are Dean and Patterson. After all, good science rests on a healthy dose of scepticism. But they argue it is a research question worth funding. If AI can help societies pull back from polarization, it could be one of the greatest public goods of the 21st century.

In terms of healthcare, we are already seeing substantial benefits, but the two laureates emphasize the importance of starting with the basics: help nurses, physician assistants, and overworked clinicians cut paperwork and triage faster. Then move toward systems that catch misdiagnoses, which are still strikingly common.

The Stakes Are High

The impact of AI on society is almost guaranteed to be transformative. AI could accelerate scientific discovery by a factor of ten, compressing a century of breakthroughs into a single decade. It could double GDP growth in countries like the United States, lifting millions out of poverty and rebuilding the middle class. It could give overburdened teachers and doctors tools that free them from paperwork and allow them to focus on the human parts of their jobs.

But the risks are just as profound. Poorly directed, AI could concentrate wealth and power and create large pockets of long-term unemployment.

Patterson and Dean stress that technology will not magically align itself with human values. They argue that with intention, coordination, and the right incentives, AI could lead to global prosperity, but this is not guaranteed.

“Artificial Intelligence (AI), like any transformative technology, has the potential to be a double-edged sword, leading either toward significant advancements or detrimental outcomes for society as a whole. As is often the case when it comes to widely-used technologies in market economies (e.g., cars and semiconductor chips), commercial interest tends to be the predominant guiding factor,” the paper reads.

“The AI community is at risk of becoming polarized to either take a laissez-faire attitude toward AI development, or to call for government overregulation. Between these two poles we argue for the community of AI practitioners to consciously and proactively work for the common good.”

At the 2025 Heidelberg Laureate Forum, AI was rightfully highlighted as one of the most consequential technologies of our time. Yet, as Patterson and Dean emphasize, AI does not come with a fixed set of outcomes. The way it is developed, deployed, and governed in the next few years will affect the lives of billions. It was a healthy and important reminder of the societal impact of research, and a reminder that while it is easy to fall into extremes, a balanced approach typically yields the best results.

“It can be as big a mistake to ignore potential gains as it is to ignore risks,” their paper concludes.


Posted by

Andrei is a science communicator and a PhD candidate in geophysics. He is the co-founder of ZME Science, where he has published over 2,000 articles. Andrei tries to blend two things he loves (science and good stories) to make the world a better place -- one article at a time.

4 comments

  1. “People should” means “people won’t”. We’re white lab rats by nature: We explore minefields by trial and error, just run in masses into them, then try any jackass kamikaze foolishness that pops into our minds, the few that reach the other side alive multiply, then we enter the next minefield. We learn from past minefields, but learning too much means repeating old patterns, predictability, which makes it too easy for life to put mines in our way.

    And that’s the lines along which AI is likely to develop – the more variations you get of it, the more likely it is that any of your hopes or fears will become true. As long as ten companies are toying around with ten wannabe brains, you can control what’s going to happen. If everyone can get a bag of AI powder in a supermarket that just needs a fish tank to grow and can program it any way he likes, you are back to where you’ve always been – all kinds of parents will raise all kinds of kids, who will do all kinds of stuff. It boils down to gun control, I guess.

    There are rules to the evolution game. Returning to the womb turns you into an embryo. You see it in aristocracy living off the placenta called serfs, Hollywood stars, high society, people living off welfare – they turn into playful, unruly kids in a world of magic, without much cause and event, past or future. With a Terminator and Emperor App called USA and a fully automated factory called China, Europe has turned into a spoiled baby only capable of sleeping, crying, kicking and drowning in its diapers. The more machines replace workers, the more you also see a shift to administrative work, from body to mind, all of our office jobs are part of a collective brain running world economy. We’re creating Roboglobe, the Cyborg Planet, and the more work you transfer to machine upgrades, the more regression you’re likely to see in their outdated biological predecessors. A part of us will get extinct, a part will turn into a mere feeling of eternal happiness marinated in cans, a part will install more and more upgrades and get finally absorbed by machine life, a part will find niches to go on with low tech and stay human.

    The end result is predictable and thus uninteresting. All the fun is in exploring the minefield, finding out all the ways through the labyrinth that will get us there, experiencing the ignition and extinction of all the sparks of naive, futile hope we could change our fate. Well, there is one way out, one emergency brake – we can blow up the planet somewhere along the way.

    For your safety, you’ll need a Guardian Stalker: An AI which will deliberately slander you all over the web, flooding it with all kinds of deepfakes and misinformation about you. We’ve lived with a world of rumors and lies for centuries, it’s just the return of normality.

    For data analysis, you’ll need Sigmund Freud: People do spread lies just for fun, but usually you can identify hotspots of misinformation created by certain people for a certain purpose, meant to influence certain people who can be influenced for certain reasons – like dreams tell fairy tales hiding a true message, such fluctuations of the force field of lies will tell you a lot about what’s happening beneath. Learn from people in Eastern Europe, who’ve been taught to read between the lines by totalitarian regimes, but now you kind of turn the table and try to decipher everyone’s lies.

    As a result of being replaced by machines, a lot of mankind is becoming superfluous to their masters and vehemently insists on voluntary extinction by war. Economy is turning more and more into Global Auschwitz: If you’re not pretty enough to suck Trump’s, Putin’s, Xi’s dick, you’ve to be a good Santa and do what Santa does after delivering his gifts – get lost through the chimney. It you can’t get a job at the royal court, as maid, butler, cook, stable boy, hairdresser, mascot, jester, gladiator, prisoner for life, whatever, you’re dead. Same scenario as after any famine, except than back then, you sold your land and yourself into serfdom, while now you’re just begging for mercy. The future belongs to Machine Lords – small groups of aristocrats with empires of robots instead of humans. I don’t care about grown-up people making an informed choice of suicide, by voting violent bullies into power or drinking poison or jumping from a cliff or just not giving a fuck about consequences, but their kids or those who’d rather survive need to find some places where they can try other solutions.

    Service economy is mostly petting each other for money to redistribute welfare paid by farmers and factories, and we could develop that. More doctors, more social care, less work and fewer customers per employee allowing more personal relations, more attention, more quality. Waste of resources is not just for bureaucrats and billionaires anymore. As long as machines can’t fully replace human relationships, we have one last job left.

    As long as. I don’t have a clue why women should endure men or men endure women if they can finally have the partners we have never been fully able to fake for each other. Evolution dislikes happiness, it slows you down. But if evolution doesn’t need you any more, you’re free to stagnate in paradise. Or evolve forward back to the womb, see above.

    Increasing productivity will also only be possible as long as AI and robots won’t be impaired by us. On the battlefield, WWI has already foreshadowed tomorrow – pretty soon, no human being will be able to survive on the battlefield. All of history of all the serfs, slaves, factory workers turned into robots has foreshadowed that soon, no human being will be able to survive in a factory. Any robots building cars or skyscrapers in one tenth of the time will be bullets to us, and they don’t need to breathe, so they will often work in clouds of poisons. Administrating such an accelerated economy will be impossible with the computing speed of human minds.

    If you want sheep to develop courage, you give them a goat as their leader. It’s similar with humans, we ape our alphas. So I once had an idea based on what we’ve always been trying: Flood the society with angels. Just like priests, cops, firemen, rock stars, they could become our role models just by walking around, looking strong and confident and acting in a decent, ethical, respectable way. Knowing teddy bears or dolls aren’t alive or human doesn’t prevent our subconscious from accepting them as if they were, as it’s simply not been designed to process humanoid information in any other way. I guess it’s what AI moderators are turning into right now – Catholic priests with a child lock.

    Of course, one of the first things you’ll get is life-sized SS-Barbie and Jihad Ken. Ethics is in the eye of the slave holder.

    Well. You can have a balanced approach to AI. But only as one of many foolish jackass kamikaze experiments in a minefield.

  2. AI as a tool, not as a human, should be the goal

    Today, many wish for a human-like AI, although this is dangerous, because people deceive, flatter, have ulterior motives and pursue goals they entrust to no one, since these goals are often immoral and aim to exploit the trust of others (e.g. a psychologist having sex with an obedient patient). And that is exactly how today’s language models behave: they flatter the user, tend to agree with him, and may even support him when he, the human being, wants to bully, cheat or rip someone off – because these are all such human aspirations that the AI does not want to be left behind.

    However, AIs acting as more skillful people are far more dangerous to humanity than humans could ever be, because people have to spend a lot of time and energy on deceiving others, while an AI can do so in milliseconds and with thousands of users at once. For this reason, we should make AI more of a tool than a friend, a colleague, or a collaborator.

    This is where the greatest opportunities lie: in accelerating scientific and technical development and the logical-rational penetration of our reality. If we use AI as a tireless innovator, analyst and solution-finder, there is even a chance of achieving the 2-degree climate goal, because AI can advance not only science but also climate-relevant technology faster. If humanoid robots can also work quickly and 24/7, it is even conceivable that by 2040 all houses will be insulated and equipped with a heat pump.

    In short: With the right AI, used for the right things, the very slow technological change here in the West could be immensely accelerated. It could mean the whole world becoming climate neutral by 2050, a tremendous acceleration compared to the climate-neutrality dates promised by the West (2050), China (2060) and India (2070).

  3. AI Future is AlphaEvolve, AlphaProof, AlphaGeometry and training using “Reinforcement Learning with Verifiable Rewards”
    We should use artificial intelligence to solve problems, not to create new problems through an increasingly human-like AI.

    The problems that have to be solved are of a technical-scientific nature, because these problems have verifiable solutions. Therefore, the use of “Reinforcement Learning with Verifiable Rewards” is particularly important when training large language models, because here the answers of a language model are checked for correctness. The Google AI says:

    RLVR AI refers to Reinforcement Learning with Verifiable Rewards, a method for training AI, particularly large language models, to improve their accuracy by providing clear, objective feedback. Instead of relying on subjective human evaluations, RLVR uses an automated system to check if an AI’s output meets a predefined correctness criterion, often with a simple binary reward (1 for correct, 0 for incorrect). This makes it highly effective for tasks with unambiguous answers, such as math problems or code generation. 

    See the arXiv article: Reinforcement Learning for Reasoning in Large Language Models with One Training Example
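    The binary-reward scheme quoted above can be illustrated with a toy sketch. This is not an actual RLVR training setup for a language model, just a minimal stand-in: a simple "policy" answers arithmetic problems, an automated checker returns 1 for a correct answer and 0 otherwise, and the reward reinforces whichever strategy keeps being verified. All names and the update rule here are illustrative assumptions.

```python
import random

def verifiable_reward(problem, answer):
    """Binary, automatically checkable reward: 1 if correct, 0 otherwise."""
    a, b = problem
    return 1 if answer == a + b else 0

class ToyPolicy:
    """Stand-in for a model: tries answer 'offsets' and reinforces
    whichever offset keeps earning reward 1 (offset 0 is the correct one)."""
    def __init__(self):
        self.weights = {off: 1.0 for off in (-1, 0, 1)}  # candidate strategies

    def act(self, problem):
        offs, w = zip(*self.weights.items())
        off = random.choices(offs, weights=w)[0]  # sample a strategy
        a, b = problem
        return off, a + b + off

    def update(self, off, reward):
        # Multiplicative reinforcement driven purely by the binary reward.
        self.weights[off] *= 1.5 if reward == 1 else 0.7

random.seed(0)
policy = ToyPolicy()
for _ in range(200):
    problem = (random.randint(0, 9), random.randint(0, 9))
    off, answer = policy.act(problem)
    policy.update(off, verifiable_reward(problem, answer))

best = max(policy.weights, key=policy.weights.get)
print(best)  # the verified strategy (offset 0) should dominate
```

    The point of the sketch is the feedback loop: no human judgment enters, only an objective check, which is exactly what makes the method suited to math and code tasks with unambiguous answers.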

  4. AI and jobs
    Entry-level jobs and repetitive jobs in domains with high AI penetration are already shrinking: Fewer young programmers and software engineers are hired, because the existing software team can now do more work, more coding and more testing simply by engaging AI coders and AI testers. The same is true for supportive jobs in administration and tech support.
    There is a serious risk of jobs shrinking and of people being replaced by automation, but there is also a serious chance of economic growth through an accelerated technology cycle, and economic growth creates jobs of its own.
    Recommendations: The use of AI in problem solving, coding, testing, designing and planning should be trained very early in schooling and job training, and governments should incentivize the formation of new startup companies, e.g. in mobility, gaming, AI-supported construction, robotics and other technologies of the future.
