AI May Enable Us to Explore the Deep Sea Like Never Before

BLOG: Heidelberg Laureate Forum


The deep sea is notoriously difficult to explore. With crushing water pressure, frigid temperatures, and no sunlight, there is no easy way to peer into the depths of the oceans. We built supersonic planes and went to the moon before we had a reliable way to explore the deep sea – and still, to this day, acquiring any deep ocean data is challenging. But thanks to newer exploration technologies, the data is finally coming in.

This data has raised a new problem: how do you analyze it?

An AI Deep Dive

The “Eye-in-the-Sea” camera, a specially designed instrument left on the sea floor for one to two days at a time to record deep-sea footage without the bright lights and loud noise of a submersible. Image credits: NOAA / Wiki Commons / CC BY 3.0.

In the past 35 years, the Monterey Bay Aquarium Research Institute (MBARI) alone has acquired 28,000 hours of deep-sea video and over 1 million images. Sometimes, there are new species and unexpected phenomena; other times, there is nothing of note. But you cannot know unless you search through the data somehow.

MBARI is, of course, not the only group collecting deep-sea data. Other universities and research institutes, as well as private groups like National Geographic, also have major deep-water archives. The National Oceanic and Atmospheric Administration (NOAA) Ocean Exploration started using a remotely operated vehicle system in 2010 and has since gathered over 271 terabytes of publicly accessible data.

So how does one get through all this data? Researchers’ work-hours are simply not enough for this task. Scientists often turn to crowdsourcing or citizen science – asking regular folks for help with annotating the data, after providing them with a basic tutorial. But increasingly, teams are looking at artificial intelligence (AI) and machine learning to automate (or semi-automate) this type of task.

“A big ocean needs big data. Researchers are collecting large quantities of visual data to observe life in the ocean. How can we possibly process all this information without automation? Machine learning provides an exciting pathway forward,” says MBARI Principal Engineer Kakani Katija.

MBARI’s video archive has over 8 million annotations. These include various descriptions and tags referring to habitats, objects, and animals. This was initially done with the goal of aiding science and creating a sort of archive of what has been seen; but this data can also be used to train algorithms – specifically, machine learning algorithms.
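To make the idea concrete, here is a minimal sketch (with invented toy data) of how a pile of annotated examples can train a classifier. Real pipelines use deep networks on raw imagery; a nearest-centroid model on hand-made two-dimensional "features" just keeps the train-then-predict loop visible. Everything below, including the species tags, is made up for illustration.

```python
def train_centroids(examples):
    """Average the feature vectors of each label into one centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Return the label whose centroid is closest (squared distance)."""
    def sqdist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: sqdist(centroids[label]))

# Invented annotations: (brightness, elongation) feature pairs -> tag.
annotations = [
    ([0.9, 0.1], "jellyfish"),
    ([0.8, 0.2], "jellyfish"),
    ([0.2, 0.9], "eelpout"),
    ([0.3, 0.8], "eelpout"),
]
model = train_centroids(annotations)
print(classify(model, [0.85, 0.15]))  # -> jellyfish
```

The labels do the heavy lifting: the more annotated examples each class has, the better its "average appearance" is pinned down, which is why archives with millions of annotations are so valuable for training.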

This is the core idea behind FathomNet.

The logo of the FathomNet initiative. Image credits: MBARI.

An Advancement That Is Hard to Fathom

FathomNet is a collaborative, open-source database aimed at enhancing the understanding of the underwater world. It serves as a comprehensive repository of underwater imagery and associated data, collected from various sources like remotely operated vehicles, divers, and underwater cameras. The main goal of FathomNet is to support advancements in marine science, particularly in the development of machine learning and artificial intelligence technologies for ocean exploration and monitoring. By providing a vast and diverse collection of annotated images, FathomNet aids researchers and technologists in training and testing algorithms for tasks such as species identification, habitat monitoring, and environmental assessment, contributing significantly to the broader scientific community’s efforts to explore and understand oceanic ecosystems.

Essentially, FathomNet offers an extensive database of annotated underwater imagery which is crucial for developing and refining machine learning models. These models can be used to recognize and analyze marine life and underwater features of note, even in real time.

This capability is particularly valuable for identifying species, monitoring habitats, and mapping unexplored ocean regions. Researchers no longer have to sift manually through hours and hours of footage – they can conduct more targeted and informed investigations, leading to quicker discoveries and a deeper understanding of the ocean’s vast and largely uncharted territories.

This type of approach also democratizes the field of ocean exploration, allowing everyone to download labeled data and train algorithms (as long as the goals are consistent with the United Nations’ Sustainable Development Goals). At the same time, it can serve as a hub where ocean explorers from all backgrounds can contribute with their knowledge and expertise to solve various challenges.

But this is not the only way through which algorithms can help deep sea exploration.

AI, Take the Wheel

Part of the problem is analyzing the data; another part is ensuring a steady flow of it. Even with modern technology and remotely operated vehicles (ROVs), navigation is still challenging. With this in mind, engineers at Caltech, ETH Zurich, and Harvard have been developing an AI system that enables ROVs to use ocean currents to aid their navigation. The idea is to make navigation more efficient, opening the way for longer exploration missions, even in difficult environments.

“When we want robots to explore the deep ocean, especially in swarms, it’s almost impossible to control them with a joystick from 20,000 feet away at the surface. We also can’t feed them data about the local ocean currents they need to navigate because we can’t detect them from the surface. Instead, at a certain point we need ocean-borne drones to be able to make decisions about how to move for themselves,” says John O. Dabiri, one of the researchers behind the project.

Dabiri and colleagues used reinforcement learning (RL) networks, which do not train on a static dataset but instead learn continuously from the data they collect. This approach can be deployed on smaller and less powerful computers; the researchers even ran the algorithm on a $30 microcontroller that uses only half a watt of power.
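As a rough illustration of the reinforcement-learning idea – not of the actual Caltech/ETH/Harvard system – here is a tiny tabular Q-learning sketch in which an agent learns, purely from interaction, when to drift with a simulated current and when to spend thrust. The one-dimensional world, the current model, and the rewards are all invented for the example.

```python
import random

GOAL, CURRENT = 5, 1           # goal cell; the current pushes +1 per step
ACTIONS = [-1, 0, 1]           # thrust against the current, drift, thrust with it

def step(pos, thrust):
    """Move one cell under current + thrust; penalize time and energy."""
    new = max(0, min(GOAL, pos + CURRENT + thrust))
    reward = -1 - abs(thrust)               # time cost + energy cost
    if new == GOAL:
        reward += 10                        # bonus for arriving
    return new, reward, new == GOAL

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning: estimates are updated online, one step at a time."""
    rng = random.Random(seed)
    Q = {(p, a): 0.0 for p in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        pos, done = 0, False
        while not done:
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=lambda x: Q[(pos, x)]))
            new, r, done = step(pos, a)
            best_next = max(Q[(new, x)] for x in ACTIONS)
            Q[(pos, a)] += alpha * (r + gamma * best_next - Q[(pos, a)])
            pos = new
    return Q

Q = train()

# Roll out the learned greedy policy from the start cell (bounded loop).
pos, moves = 0, []
for _ in range(20):
    if pos == GOAL:
        break
    a = max(ACTIONS, key=lambda x: Q[(pos, x)])
    moves.append(a)
    pos, _, _ = step(pos, a)
print("reached goal:", pos == GOAL, "actions:", moves)
```

Because each update touches only a small lookup table, this kind of learner needs neither a big dataset nor a big computer, which is what makes the half-watt-microcontroller deployment plausible.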

Another avenue that machine learning is opening up for deep-sea exploration combines the two: feature recognition and navigation.

Deep-sea creatures are not used to being disturbed, and when they come across an ROV, they often scurry away. Equipping ROVs with automated detection could enable them to spot creatures of interest in real time, then tag and follow them. One example is the CUREE robot, which has been trained to follow fish, jellyfish, and other creatures; it has already been demonstrated, though not yet in a deep-sea setting.
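The "follow" part of such a system can be as simple as a proportional controller on the detector's output: steer so the animal's bounding box stays centered in the camera frame and at a target size (a stand-in for distance). The sketch below is a hypothetical illustration, not CUREE's actual control code – the frame size, gains, and command convention are all assumptions.

```python
FRAME_W, FRAME_H = 640, 480     # assumed camera resolution
TARGET_AREA = 20_000            # desired box area ~ desired distance (made up)
K_YAW, K_FWD = 0.002, 1e-5      # proportional gains (made up)

def follow_command(box):
    """box = (x_min, y_min, x_max, y_max) in pixels.
    Returns (yaw_rate, forward_speed): turn toward the animal and
    close in until the box reaches the target size."""
    x0, y0, x1, y1 = box
    cx = (x0 + x1) / 2                    # horizontal center of the detection
    area = (x1 - x0) * (y1 - y0)          # apparent size of the animal
    yaw = K_YAW * (cx - FRAME_W / 2)      # positive means turn right
    fwd = K_FWD * (TARGET_AREA - area)    # positive means move closer
    return yaw, fwd

# A fish detected left of center and far away (small box):
yaw, fwd = follow_command((100, 200, 180, 260))
print(yaw, fwd)   # negative yaw (turn left), positive fwd (approach)
```

Running a detector plus a controller like this on every frame is what turns a passive camera platform into something that can keep an animal in view on its own.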

Advancements in machine learning and AI for deep-sea exploration can greatly help researchers gather more data and efficiently analyze the resulting deluge (including the impressive backlog that has already accumulated). But they will not replace human work.

Human intervention is still required at almost every step of the process. Humans are required to train and evaluate the algorithms, and to figure out ways to improve their performance. Of course, human expertise is also necessary when it comes to interpreting and analyzing the data. What these algorithms can do is reduce the monotonous workload and enable humans to focus on what actually matters – the creative and interpretive aspects of marine research.

The integration of machine learning and AI into deep sea exploration represents exactly how this technology can help researchers make real-life progress. It marks a significant leap forward in our ability to understand and protect the ocean’s mysteries. Ultimately, these machine learning and AI initiatives are more than just tools for data processing; they are gateways towards a new era of oceanography. Blending technology and human expertise may finally allow us to unlock the secrets of the deep.


Posted by

Andrei is a science communicator and a PhD candidate in geophysics. He is the co-founder of ZME Science, where he published over 2,000 articles. Andrei tries to blend two things he loves (science and good stories) to make the world a better place -- one article at a time.

7 comments

  1. “We know less about the deep sea than about the moon.” The reason: the lunar surface has been systematically surveyed by satellites with cameras and radar at resolutions below one meter (LRO), and with spectral analyses that even reveal the chemical composition of the surface.

    To explore the deep sea, on the other hand, you have to dive down and orient yourself there. You can usually capture only a very limited underwater area, and you lose a lot of time on the diving process alone. This only really changes with largely autonomous divers that make decisions independently – and only systems with artificial intelligence can do that.

    If deep-sea robots can partly explore the underwater world independently, this is significant for many future projects. At some point, Saturn’s moon Titan, Mars, and the Earth’s moon could also be explored by many small autonomous robots. Instead of a single Mars rover costing more than a billion dollars, dozens of more or less autonomous small rovers would be used, each costing just a few million dollars. This would make a systematic exploration of Titan or the surface of Mars possible in the first place.

  2. Nice idea; it wakes up the spirit. If I get it right, an autonomous, AI-driven robot should operate in regions we can’t reach. This means we assemble different AI techniques to build something we can neither reach nor control.

    AI is, simplified, just the result of copying what we have found out about neural activity. There is still no explanation of why it works so well, but it works. It is basically pattern matching – which is, basically, the same thing that we and a lot of other lifeforms do, extended into reaction patterns.

    From my point of view, this is a very good recipe for establishing the basics of consciousness or awareness, simply because it is necessary in order to continue operating. And we have added all the parts – which, in isolation, are nowhere close to awareness – to work and survive in an interesting environment.

    It is not that I worry about whether things like this may happen. I worry more about the lack of responsibility we show when we go in a direction like this.

      • Sure, but if we claim to be superior, Homo sapiens sapiens, we should act accordingly, shouldn’t we?

        If we insist on creating new consciousness, we should behave like parents, not like slaveholders, shouldn’t we?

        Nice phrase though

        Yes, life itself is irresponsible, and yes, let’s admit it, outrageously so.

        But what about the consequences of being conscious, or of claiming to be so?

        Usually, and I don’t like to think in categories of black and white, life has two main concepts: domination or symbiosis.

        I would choose the latter.

        And this is not only about the scientific approach; autonomous weapons are another category of the same kind.

        Shouldn’t we be a step further than Einstein and stupidities?
