About the project
The project will explore new techniques in computer vision to enable AI at the “edge”, i.e. within AUVs, as well as AI-enabled scheduling on board the robot.
This project seeks to capitalise on advances in multimodal sensing and dynamic inference to make on-board decisions, enabling efficient and adaptive sampling and control based on data collected in real time. This will entail both processing the visual data itself and producing outputs that are actionable for mission planning.
Studying the ocean and its inhabitants is an extremely challenging task: the animals and the physics that govern their habitat vary on an enormous range of spatiotemporal scales, and this renders monitoring from crewed ships infeasible. Scientists are increasingly relying on Autonomous Underwater Vehicles (AUVs) coupled with imaging systems to observe marine ecosystems.
However, these tools are typically deployed to run a preset transect, continually collecting data for later processing, which removes any opportunity to react in real time to observed data. The potential for adaptive sampling on AUVs driven by image data is enormous: it would enable creative sampling paradigms to better understand our changing oceans, allowing a single robot to stop and follow a newly encountered organism, or a team of robots to track and interrogate ephemeral biological features such as thin layers of plankton. Such functionality would allow scientists to pose and study new questions in biological oceanography.
While embedded AI is becoming increasingly feasible as a way to deliver this capability, the associated power consumption has a significant impact on vehicle battery life and hence on mission duration.
You will also be supervised by organisations other than the University of Southampton, including:
- Dr Eric Orenstein from the National Oceanography Centre.