The search for long-lost shipwrecks, downed aircraft and even rare species of coral and fish could become easier thanks to an image enhancement technology developed by James Cook University researchers.
The new technology, known as UDnet (Uncertainty Distribution Network), uses artificial intelligence to automatically enhance poor quality underwater images by adjusting contrast, saturation, and gamma correction—without the need for human input. The work is published in the journal Expert Systems with Applications.
As a result, UDnet can produce clearer, crisper images that reveal details that would otherwise be invisible or hard to see.
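As a rough illustration of the kinds of adjustments the article describes (and not the UDnet model itself), the Python sketch below applies contrast, saturation and gamma corrections to a single underwater photo. The file name and factor values are arbitrary examples; UDnet selects its corrections automatically.

```python
# A rough illustration (NOT the UDnet model) of the three adjustments the
# article names: contrast, saturation and gamma correction.
# "scene.jpg" is a hypothetical input file; the factors are arbitrary examples,
# whereas UDnet chooses its corrections automatically.
import numpy as np
from PIL import Image, ImageEnhance

def enhance(path, contrast=1.3, saturation=1.2, gamma=0.8):
    img = Image.open(path).convert("RGB")
    img = ImageEnhance.Contrast(img).enhance(contrast)   # stretch contrast
    img = ImageEnhance.Color(img).enhance(saturation)    # boost saturation
    arr = np.asarray(img).astype(np.float32) / 255.0
    arr = np.power(arr, gamma)                            # gamma < 1 brightens
    return Image.fromarray((arr * 255).astype(np.uint8))

enhance("scene.jpg").save("scene_enhanced.jpg")
```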
Developed by AIMS@JCU Postdoctoral Research Fellow Dr. Alzayat Saleh and Electronics and Computer Engineering Professor Mostafa Rahimi Azghadi in collaboration with Distinguished Professors Marcus Sheaves and Dean Jerry, UDnet has already outperformed 10 existing state-of-the-art underwater image enhancement models in tests involving thousands of images across several different datasets.
“You don’t get the same image quality underwater as you would above water,” Prof Azghadi said.
“Light scatters differently, and various wavelengths of light are absorbed at different rates. This makes it difficult to capture clear images, especially in deeper waters.”
The model tackles the challenges of underwater imaging by counteracting the effects of light absorption and scattering.
“In water, only colors with shorter wavelengths, like blue and green, penetrate deeply. This often distorts the true colors of underwater scenes, making it difficult to distinguish between objects like different types of coral,” Dr. Saleh said.
“UDnet is trained on large datasets of underwater images. It uses the three primary colors of light—red, green and blue—to analyze each pixel and correct color imbalances.
“For instance, if an image is 99% blue due to being captured underwater, the model knows that’s unrealistic and adjusts the colors to achieve a natural balance.”
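For readers curious what correcting a channel imbalance can look like, here is a minimal gray-world white-balance sketch in Python. It is a classical heuristic offered purely as an illustration of rebalancing red, green and blue statistics; it is not the learned, uncertainty-based approach UDnet uses.

```python
# A classical gray-world colour balance, shown only to illustrate the idea of
# correcting a channel imbalance (e.g. an image that is overwhelmingly blue).
# It is a simple heuristic, not UDnet's learned method.
import numpy as np

def gray_world_balance(rgb):
    """rgb: H x W x 3 float array in [0, 1] with a colour cast."""
    means = rgb.reshape(-1, 3).mean(axis=0)        # average R, G, B levels
    gain = means.mean() / (means + 1e-6)           # amplify the weak channels
    return np.clip(rgb * gain, 0.0, 1.0)

# Synthetic example: a frame dominated by blue, as in the quote above.
frame = np.dstack([
    np.full((4, 4), 0.05),   # red nearly absorbed
    np.full((4, 4), 0.20),   # some green remains
    np.full((4, 4), 0.90),   # strong blue cast
])
print(gray_world_balance(frame)[0, 0])  # channels pulled toward neutral
```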
The AI model processes each pixel millions of times, guided by statistical algorithms without human feedback, to ensure the enhanced images are as accurate as possible.
Dr. Saleh said one of UDnet’s standout features is its ability to enhance images and videos in real time, making it ideal for use with underwater cameras, such as those on remotely operated vehicles (ROVs).
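To show where a real-time enhancer might slot into an ROV-style video pipeline, the sketch below runs a placeholder per-frame correction inside a standard OpenCV capture loop. The enhance_frame function is a hypothetical stand-in, not UDnet's actual interface.

```python
# A generic frame-by-frame loop (OpenCV) showing where a real-time enhancer
# could sit between an underwater camera feed and a display. "enhance_frame"
# is a hypothetical placeholder, not UDnet's actual interface.
import cv2
import numpy as np

def enhance_frame(frame_bgr):
    # Placeholder correction (simple gamma brighten); a real deployment
    # would call the enhancement model here instead.
    norm = frame_bgr.astype(np.float32) / 255.0
    return (np.power(norm, 0.8) * 255).astype(np.uint8)

cap = cv2.VideoCapture(0)        # 0 = default camera; swap in an ROV stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("enhanced", enhance_frame(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```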
“Another advantage is that UDnet is open source and publicly available for download,” he said.
“This means researchers, marine scientists and explorers can start using the technology with just a few clicks.”
Dr. Saleh said researchers in marine science and aquaculture would benefit when it came to analyzing different species.
“For example, if you’re studying a fish, you need a clear picture to analyze its finer features, such as its color or signs of disease. UDnet helps achieve that clarity,” he said.
Marine conservation, archaeology, environmental monitoring and even search-and-rescue efforts to locate downed aircraft are other possible applications for the technology.
More information:
Alzayat Saleh et al., Adaptive deep learning framework for robust unsupervised underwater image enhancement, Expert Systems with Applications (2025). DOI: 10.1016/j.eswa.2024.126314