Hyperspectral Imaging and Responsible AI
BLOG: Heidelberg Laureate Forum
The Heidelberg Laureate Forum has a single purpose: To provide some of the brightest minds in mathematics and computer science with the space and time to make connections and find inspiration. Some of the connections made at the HLF will echo into collaborations and projects, with some of those efforts leading to concrete developments. The HLFF Spotlight series unpacks a few of those examples.
When Jimoh Abdulganiyu stepped off the plane at Frankfurt Airport in 2019, he was not sure what to expect. He had never left Nigeria before – never left Africa. He had applied to the 7th Heidelberg Laureate Forum after his supervisor had shared a link with his research group. Now here he was, a final-year undergraduate with a project on medical image compression, standing in the arrivals hall of one of Europe’s busiest airports.
He spotted a man holding a sign with his name on it. Next to the man: a Mercedes-Benz.
“In Nigeria, people who get that kind of treatment are politicians,” he laughs. “I was like – is this for me?”
It was. And in a sense, so was everything that followed.
A Problem Worth Solving
Jimoh grew up in Nigeria and studied Computer Science at Adekunle Ajasin University. His undergraduate research was not driven by abstract curiosity but by a very concrete problem: The university’s hospital scanned a chest X-ray of every incoming student each year and was drowning in storage. He and his advisor set out to build a lossless compression algorithm – one that could shrink image file sizes without sacrificing a single pixel of diagnostic information. In medical imaging, every detail counts, and lossy compression is not an option.
That early project planted a seed. Here was computing doing something immediately useful – not theoretical, not decorative, but solving a problem that had a real purpose and real stakes. It is a philosophy he has carried through every chapter of his career since.
The Forum That Changed Everything
It was during his final undergraduate year that Jimoh first heard about the Heidelberg Laureate Forum – an annual gathering in Germany that brings together a select group of young researchers in mathematics and computer science with the fields’ highest achievers: recipients of the Abel Prize, the Fields Medal, the ACM A.M. Turing Award, the ACM Prize in Computing, the Abacus Medal and the Nevanlinna Prize. His supervisor shared the link. Nobody else in the group applied. Jimoh did.
He was invited, and that week in Heidelberg in 2019 reshaped his sense of what was possible.
He was able to have engaging conversations with laureates, including Yoshua Bengio – the deep learning pioneer and Turing Award recipient – and Jeff Dean, Chief Scientist at Google DeepMind and Google Research and co-technical lead of Google Gemini. He connected with young researchers from universities he had only read about. He visited the Heidelberg Institute of Theoretical Studies (HITS) and research facilities around Heidelberg. And he participated in the HLF’s long-running Intercultural Science Art Project, in which young researchers communicate their work visually for a non-specialist audience. His piece – an artistic rendering of his X-ray compression research – was later exhibited in Serbia in 2021 and featured in the opening ceremony of the 9th HLF in 2022.

What the HLF gave him, he says, was not merely contacts or inspiration – it was audacity. “If I had told myself ‘I am from a small village in Nigeria, this kind of forum is for people from Harvard’ – I would not have been there. And I would have missed everything that came from it … After I attended the HLF, my career really changed. I had the boldness to apply for things … to talk, to meet people, to make connections.”

Images, All the Way Down
If there is a single technical thread running through Jimoh’s career, it is image analysis.
After completing his Bachelor’s degree, he was awarded a fully funded scholarship to pursue a Master’s in Collective Intelligence – with a specialization in computer science and data science – at Mohammed VI Polytechnic University (UM6P) in Morocco, one of Africa’s fastest-rising research institutions. There, he pivoted from compressing medical images to reading them: using machine learning to detect and classify cancer from digital pathology slides. Where his undergraduate work had been about storing images faithfully, this was about interpreting them precisely – training an algorithm to identify whether cells were healthy, pre-cancerous, or malignant.
He submitted a paper on cancer detection and classification to MICCAI – the Medical Image Computing and Computer Assisted Intervention conference, one of the most competitive venues in the field – and it was selected as one of the best papers at its Deep Breast workshop on artificial intelligence. It was serious international validation for a researcher still in his mid-twenties.
He also returned to Heidelberg. Attending the HLF for a second time in 2022, he found a renewed confidence to engage boldly with fellow young researchers.
Towards the end of his Master’s studies, Jimoh spent six months as a visiting research fellow at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. As part of the project “BioTIP – Bio-Signal Retrieval from Thermal Imaging Processing” under Dr. Pierre-Etienne Martin, he applied computer vision and machine learning segmentation algorithms to animal thermal imagery for neuroscience research – a different domain entirely, but the same underlying discipline: making sense of visual data that the human eye cannot efficiently process alone.
Environmental AI Applications
Jimoh is now a first-year PhD student at Utah State University, enrolled – in a move that surprises almost everyone he tells – in the Department of Civil and Environmental Engineering. He works at the Utah Water Research Laboratory in Logan, Utah, under Professor Sierra Young, whose U.S. National Science Foundation funded project concerns one of the more urgent environmental stories in the American West: the decline of the Great Salt Lake.
The lake has been shrinking for decades, its water level dropping as demand – agricultural, industrial, residential – outpaces replenishment. As the water recedes, it exposes vast stretches of ancient lake bed: salt flats and mineral sediments that dry out, break apart, and get swept into the air as fine toxic dust. That dust settles on crops, infiltrates lungs, and poisons ecosystems. The U.S. government has proposed investing over a billion dollars to stabilize the lake. One aspect is understanding exactly what is driving its decline and where things stand right now, season by season, across an enormous and hard-to-access landscape. Another is understanding the consequences of the current and future decline.

This is where hyperspectral imaging comes in.
A standard camera captures three channels of light: red, green, and blue, similar to how human eyesight interprets light. Hyperspectral cameras capture dozens or even hundreds of channels simultaneously, extending far beyond what the human eye can perceive – into the near-infrared, the short-wave infrared, and beyond. The result is not merely a picture but a “data cube”: a rich, layered dataset in which every pixel carries a detailed spectral fingerprint of whatever surface it represents.
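In code, such a data cube is commonly handled as a three-dimensional array. The sketch below uses synthetic values and a hypothetical band count (real sensors range from dozens to hundreds of bands) purely to illustrate the structure:

```python
import numpy as np

# Illustrative hyperspectral "data cube": height x width x spectral bands.
# The values and the band count (150) are made up for illustration.
height, width, n_bands = 100, 100, 150
cube = np.random.default_rng(0).random((height, width, n_bands))

# Every pixel carries a full spectral fingerprint: a 1-D vector of
# reflectance values, one per band, rather than just three RGB numbers.
fingerprint = cube[42, 17, :]
print(fingerprint.shape)  # (150,)
```

The key point is the third axis: where an RGB image stores three numbers per pixel, the cube stores one reflectance value per spectral band, which is what makes per-pixel material identification possible.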

Those fingerprints are extraordinarily informative. Healthy vegetation reflects light differently from stressed or dying plants. Clean water has a different spectral signature from water contaminated by algal blooms or sediment. Bare soil, dust, mineral deposits, ice – each has its own pattern. When an algal bloom forms on the surface of a lake, for instance, its signature in the hyperspectral data is distinctive: the toxins it releases degrade water quality in ways the naked eye cannot assess, but a trained model can detect from the sky.
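A classic, minimal example of exploiting such spectral contrasts is the Normalized Difference Vegetation Index (NDVI), which compares red and near-infrared reflectance: healthy vegetation absorbs red light but strongly reflects near-infrared. The reflectance values below are illustrative, not real measurements:

```python
import numpy as np

# Toy red and near-infrared reflectances for three surfaces
# (healthy plant, bare soil, water) - values are illustrative only.
red = np.array([0.08, 0.30, 0.05])
nir = np.array([0.50, 0.35, 0.03])

# NDVI = (NIR - Red) / (NIR + Red): high for dense healthy vegetation,
# near zero for bare soil, negative for water.
ndvi = (nir - red) / (nir + red)
print(np.round(ndvi, 2))  # [ 0.72  0.08 -0.25]
```

Hyperspectral sensors generalize this idea: with hundreds of bands instead of two, far subtler distinctions – stressed versus healthy plants, clean versus contaminated water – become separable.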
The technology is not limited to lake monitoring. Hyperspectral remote sensing is already being used in precision agriculture to detect crop disease before it becomes visible; in geology to identify mineral deposits; in forestry to track invasive species; in archaeology to reveal buried structures beneath soil. As the sensors get lighter and cheaper, and as the AI models trained to interpret their data get more powerful, the range of applications keeps expanding.
For the Great Salt Lake project, drones equipped with hyperspectral cameras fly over the lake and its shoreline, capturing data on vegetation cover, water quality, dust composition, and algae. Essentially the same process is used to track and analyze the effects of the resulting dust sediment on surrounding vegetation. Jimoh’s role is to process that data – extracting what researchers call “endmembers,” the distinct spectral signatures that correspond to different materials and conditions on the ground – and to build AI models that can interpret what the sensors are saying about the health of the lake and the surrounding vegetation.
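One common way endmembers are used is linear spectral unmixing: each pixel’s spectrum is modeled as a weighted mixture of the endmember spectra, and the weights (abundances) say how much of each material the pixel contains. The sketch below uses entirely synthetic endmembers and a simple least-squares solve; it is not the project’s actual pipeline, just the basic idea:

```python
import numpy as np

# Hypothetical endmember spectra over six bands (columns = materials).
# Labels and values are invented for illustration.
endmembers = np.array([
    [0.90, 0.80, 0.70, 0.60, 0.50, 0.40],      # e.g. salt crust
    [0.10, 0.20, 0.50, 0.40, 0.80, 0.70],      # e.g. vegetation
    [0.05, 0.05, 0.10, 0.10, 0.05, 0.05],      # e.g. water
]).T  # shape: (bands, materials)

# Simulate a pixel that is 60% salt, 30% vegetation, 10% water.
true_abundances = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_abundances

# Least-squares estimate of the abundances from the pixel spectrum alone.
abundances, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
print(np.round(abundances, 2))  # [0.6 0.3 0.1]
```

Real unmixing adds constraints (abundances non-negative, summing to one) and must first extract the endmembers from the data itself, but the linear mixing model above is the usual starting point.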
Jimoh emphasizes that “saving the lake will depend on better science and better data. Without accurate monitoring of water inflows, salinity, dust emissions, and ecosystem changes, it is difficult for policymakers to make effective decisions.”

Jimoh admits that the interdisciplinarity posed a challenge at first: “Water science is not something I learned as a computer scientist,” he says. He has enrolled in extra courses just to be able to “speak the language.” But he sees that as the point. Whether the issues are environmental or otherwise, AI can be employed to help solve countless problems, and doing so will always mean bridging gaps to other disciplines.
A Voice in the Conversation
Shortly before beginning his PhD, Jimoh spotted an open call from the White House Office of Science and Technology Policy for public contributions to the U.S. AI Action Plan – an initiative to shape the country’s approach to artificial intelligence development and governance. Anyone could submit their comments. He submitted eight pages.
His contribution drew on his own research experience, arguing for a more streamlined regulatory environment around AI deployment in environmental monitoring, water systems, and sustainable infrastructure – while also stressing the need for responsible development that protects communities and upholds human rights. A few months into his PhD, he received an acknowledgement that his submission was among those selected as substantive contributions. Utah State University featured the story in their campus publications and on LinkedIn.
He is clear that his interest in policy work sits alongside, not above, his core concern: making sure AI is built well. “[AI should be] ethical, safe, fairly designed – everybody’s included.” The systems’ design and governance “should ensure that they align with human values and society’s principles … It should not be biased toward one population.” He has been selected as one of forty participants for the “Summer School in Responsible AI and Human Rights” conducted by Mila, the Quebec Artificial Intelligence Institute, this May – and hopes he’ll have a “broader scope” afterwards to see where he can better contribute to developing AI tools that will be relevant for society.
Life in Logan
Logan, Utah might not seem like an obvious destination for a computer scientist. A mid-sized city in the mountains of the American West, it is known primarily for its university, its Mormon heritage and, Jimoh has found, for the warmth of its people. He arrived in the fall of 2024 with his wife and their newborn son. It was the first time he and his wife had actually lived in the same city; for most of their marriage, she had been studying in Nigeria while he was in Morocco or Germany.
The first months were hard. No car in a car-dependent city. A healthcare system that did not automatically extend to his dependents. The specific exhaustion of adjusting to a new country while adjusting to parenthood for the first time. “It was really challenging,” he says.
But things have settled. He praises the community generosity he has encountered – the university’s food pantry for graduate student families, a local foundation that delivers diapers to the door for parents without transport. “The people are good,” he says. When time permits, he goes to the gym, rides his bike, and plays with his son.
Advice for Those Just Starting Out
When asked what he would tell an early-career researcher sitting where he sat in 2019, Jimoh is emphatic on one point: apply anyway.
“If they have an avenue to actually showcase their project or their research, they should do it. They should go to conferences. They should apply to the HLF … This is a conference where you meet the real people, the people who are actually changing your field.”

And he urges them to trust that their work matters, even when it feels small. “Someone will read your paper. Someone will reference it,” so they should “try all their best to showcase their knowledge in whatever opportunity comes their way.”
He hopes to attend the HLF a third time – now as a PhD student – and is quietly excited about the prospect.
For now, he is in Logan, making his own contribution by applying AI towards solving real-world problems, teaching algorithms to read what the naked eye cannot see. He applied for a forum he thought might not want him. The car was waiting.
