Underwater imaging is critical for applications such as marine biology, underwater archaeology, and pipeline inspection; however, backscatter from suspended particles significantly degrades image quality. This project addresses the challenge of mitigating underwater backscatter in real-time imaging by developing a backscatter cancellation system on a Raspberry Pi single-board computer. The proposed solution leverages a combination of image processing techniques, centred on the Canny edge detection algorithm, to accurately detect and segment backscatter particles. To evaluate the system, the project develops a bubble backscatter simulator and a lossless video recorder for controlled testing and real-world footage analysis. Performance tests revealed an average frame processing latency of 2.6 ms, outperforming systems in previous work that run on more powerful hardware. Attempts to enhance performance using multiprocessing and a real-time operating system (RTOS) patch, however, increased latency due to the overhead of Inter-Process Communication (IPC) and frequent kernel context switching. These findings suggest that simpler single-core implementations may offer superior performance for I/O-bound tasks. The results demonstrate significant progress in reducing underwater backscatter, with potential applications across a range of underwater imaging tasks. Future work will focus on hardware improvements and further software optimisation to refine system performance.
This was my final-year individual project as part of my Integrated Master’s in Electronic and Computer Engineering (MEng). I worked on this project over approximately 15 weeks and achieved the following high-level objectives:
All of the work I produced throughout this project has been uploaded to a set of Git repositories within the Sidharth-Shanmugam-MEng-Project-2023-24 GitHub organisation for version control. The project consisted of the following six main deliverables:
This dossier page transcribes the Introduction and Conclusion sections from the Final Report, which can be downloaded as a PDF for reading by visiting the final-report repository.
Underwater imaging capabilities are of great importance across a wide spectrum of disciplines: in marine research and environmental analysis, for instance, scientists dive to reef locations with waterproof camera systems and quadrats to audit the abundance of coral over time [1]. Unmanned Underwater Vehicles (UUVs), a class of submersible vehicles, are pivotal in advancing these capabilities. Equipped with vast sensor arrays and often remotely operated or fully autonomous, UUVs enable exploration missions of extended duration, analysing underwater environments with great accuracy even in hazardous conditions impossible for direct human access. These benefits have made UUVs a common, safer, and cheaper alternative to manned vehicular operations in almost all underwater imaging applications, such as intelligence, surveillance, and reconnaissance in defence; defect and foreign object inspection in maritime industries; and oceanography and hydrography in marine research [2].
When capturing images or recording video underwater, one instantly notices the lack of light at greater depths. Mounting a high-power light source next to the camera resolves this and ensures a well-lit scene. However, it produces an adverse side effect called backscatter, as Figure 1 illustrates: suspended particles in the water scatter light inhomogeneously, reflecting the light emitted by the light source back into the camera, creating exceptionally bright spots that often saturate the image and degrade its quality [3]. Although a few simple, universal techniques exist to mitigate backscatter, such as bringing the camera closer to the subject, fine-tuning the headlight position so that only the edge of the light cone illuminates the subject, or achieving perfect buoyancy to avoid disturbing sand, debris, and bubbles [4], none is viable for UUVs, owing to continuous and arbitrary propeller motion and the debris constantly suspended throughout water bodies.
Figure 1: (a) Backscatter appears as the particles of sand drift between the aquatic animal and the camera [4]. (b) A frame from GoPro footage showing backscatter from a UUV.
The project’s ultimate goal is to develop a novel backscatter-cancelling light source that aids the generation of high-quality underwater images in real time: a specialised camera sensor captures fast-moving backscatter particles, a single-board computer detects them with advanced machine vision techniques, and a specialised projector selectively illuminates the scene. This eliminates the need for a camera with greater dynamic range to compensate for the bright regions caused by backscatter and lens flares. The first objective is to research architectures for a reliable backscatter detection system; the second, interlinked objective is to research methodologies for real-time optimisation, ensuring predictability and stability so that backscatter-cancelling light patterns are projected with minimal and consistent latency.
This project set out to design, implement, and evaluate a real-time backscatter cancellation system for underwater imaging, leveraging an RPi SBC for portability and accessibility. The project began by developing the toolsets for the final system: a backscatter simulator to generate synthetic ground truth for testing and validating the final system, and a lossless recording script to capture underwater footage so the final system could be tested without underwater deployment. These toolsets provided crucial assistance in the final system’s development.
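The simulator’s code is not reproduced here, but its core idea, compositing bright, randomly placed circular “particles” over a frame to produce both a degraded image and a ground-truth particle list, can be sketched as follows. All names and parameter values are illustrative, not the project’s actual implementation:

```python
import numpy as np

def simulate_backscatter(frame, n_particles=20, max_radius=6, seed=None):
    """Overlay saturated circular 'bubbles' on a greyscale frame.

    Returns the composited frame and the ground-truth list of
    (centre_x, centre_y, radius) tuples for validating a detector.
    """
    rng = np.random.default_rng(seed)
    h, w = frame.shape
    out = frame.astype(np.float32).copy()
    yy, xx = np.mgrid[0:h, 0:w]          # pixel coordinate grids
    particles = []
    for _ in range(n_particles):
        cx = int(rng.integers(0, w))
        cy = int(rng.integers(0, h))
        r = int(rng.integers(2, max_radius + 1))
        # Saturate every pixel inside the circle, mimicking the
        # specular reflection of the headlight off a particle.
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
        out[mask] = 255
        particles.append((cx, cy, r))
    return out.astype(np.uint8), particles
```

Because the simulator returns the exact particle positions alongside the degraded frame, detector output can be scored against ground truth without any manual annotation.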
The project achieves the objective of precise backscatter particle segmentation with an image processing pipeline centred on the Canny edge detection algorithm and a simple minimum-enclosing-circle segmentation method. Attempts to meet the latency targets produced mixed results: both the real-time kernel patch for Linux (intended to reduce latency) and Python multiprocessing (intended to increase throughput) had adverse effects. Nevertheless, the single-core system achieves a commendable 2.6 ms average frame processing latency, a clear improvement over previous research. Underwater testing with the lossless recording program uncovered drastic parallax and distortion effects caused by the submersible housing construction. Given the short project duration, the project’s focus therefore shifted from developing a final working system to real-time software optimisation, and driving the DLP projector, including functionality to control and establish a fixed system frame rate, was no longer an actionable item.
In conclusion, this project successfully delivers a functional and efficient backscatter segmentation system that meets its real-time targets. The insights gained from the performance evaluation highlight the intricacies of optimising such systems and suggest that, for specific applications, simpler single-core implementations may outperform more complex multiprocessing or RTOS-enhanced solutions. Beyond completing the outstanding objectives of DLP-driven projection and fine-grained frame rate control, the following future work could drastically improve the system: (a) an FPGA implementation to accelerate image processing and drive the DLP projector, harnessing the inherently parallel architecture; (b) improved submersible housing that mitigates component offsets with high-precision alignment and uses a beamsplitter to eliminate parallax through co-location; and (c) a predictive system that tracks and estimates the future locations of backscatter particles to compensate for their movement against system latency, with options to incorporate machine-learning technologies trained on a more realistic backscatter simulation.
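The IPC-overhead finding is easy to reproduce in miniature. Every frame pushed through a `multiprocessing.Queue` is pickled and written to an OS pipe by a feeder thread, a per-frame cost that a worker process would also pay and that can eat into a 2.6 ms processing budget. The measurement below is a small illustrative sketch, not the project’s actual benchmark code:

```python
import time
from multiprocessing import Queue

def queue_round_trip(frame, q, n=100):
    """Average seconds to push one frame through a multiprocessing.Queue
    and read it back. Even within a single process this exercises the
    pickling and pipe transfer that dominate per-frame IPC cost."""
    t0 = time.perf_counter()
    for _ in range(n):
        q.put(frame)
        q.get()  # blocks until the feeder thread delivers the item
    return (time.perf_counter() - t0) / n

if __name__ == "__main__":
    frame = bytes(640 * 480)  # one greyscale VGA frame's worth of data
    q = Queue()
    per_frame = queue_round_trip(frame, q)
    print(f"queue round-trip: {per_frame * 1e3:.3f} ms/frame")
```

A single in-process function call, by contrast, passes the frame by reference at essentially zero cost, which is why splitting a short pipeline across processes can increase rather than decrease latency.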
[1] University of Hawai‘i, “Practices of Science: Underwater Photography and Videography.” [Online]. Available: https://manoa.hawaii.edu/exploringourfluidearth/physical/ocean-depths/light-ocean/practices-science-underwater-photography-and-videography [Accessed: 2024-02-13]
[2] Yannick Allard and Elisa Shahbazian, Unmanned Underwater Vehicle (UUV) Information Study. Defence Research & Development Canada, Atlantic Research Centre, Nov. 2014. [Online]. Available: https://apps.dtic.mil/sti/pdfs/AD1004191.pdf [Accessed: 2024-02-14]
[3] Sidharth Shanmugam, “Initial Report: Machine Vision-Based Anti-Backscatter Lighting System for Unmanned Underwater Vehicles,” University of York, York, UK, Tech. Rep., Mar. 2024.
[4] Brent Durand, “Easy Ways to Eliminate Backscatter in your Photos,” Oct. 2013. [Online]. Available: https://www.uwphotographyguide.com/eliminate-backscatter-underwater [Accessed: 2024-02-14]