Using Robotics Research To Improve Diver Safety
Known for her work in developing sophisticated algorithms and methods for managing and coordinating complex systems, Silvia Ferrari has applied her expertise to integrating scuba diving technology with robotic systems.
Her research focuses on underwater robotics, autonomous underwater vehicles (AUVs), and robots designed for aquatic environments. By incorporating dive technologies into these robotic systems, Ferrari is advancing the capabilities of AUVs and making notable contributions to robotics and underwater exploration.
Ferrari is the director of the Laboratory for Intelligent Systems and Controls (LISC), associate dean for Cross-Campus Engineering Research, and the John Brancaccio Professor of Mechanical and Aerospace Engineering at Cornell University.
Tell us about your position and what drew you to this role.
I’m a mechanical and aerospace engineering professor at Cornell University, and my focus is on intelligent systems, artificial intelligence (AI), robotics, and control theory. I received my PhD in aerospace engineering from Princeton University, where I studied aircraft guidance, control, and navigation. Over time my work has expanded to include sensor networks, autonomous vehicles, and active perception, influencing projects such as assistive devices for scuba divers and human–robot interactions.
During my undergraduate studies, my advisor introduced me to optimal control theory. Its wide range of applications fascinated me and led me to pursue a PhD in this area. Optimal control was initially popular in aerospace and chemical engineering but soon found applications elsewhere.
My advisor also suggested I explore artificial neural networks, which introduced me to AI back when early neural networks were just emerging. I worked on reconfigurable flight control. As a professor, I shifted my focus to sensing, perception, and areas such as robotics and autonomous systems, where many new applications have since emerged.
How do you integrate scuba diving into your research?
Several years ago my team started working on various games, including computer and tabletop games such as Clue. Our work evolved to include sports because we found they were a great way to study problems with clear rules and goals, which helped us develop adaptive and intelligent systems. I became known for my research in AI for sports.
Around the same time, some colleagues began discussing applications for divers with the Navy. They invited me to join their proposal, which was a perfect fit for my background in perception and adapting to challenging environments. My focus shifted to underwater sensing and autonomy, which led to research on robots that assist divers to make them safer and help them with complex underwater tasks.
Perception plays a critical role because underwater environments present significant challenges, such as low visibility, that complicate computer vision and perception in general. These issues made us realize that human–robot interactions underwater are particularly difficult due to challenging communications and harsh conditions. Together with my research group in the LISC — Jia Guo, PhD; student Sushrut Surve; master’s students Jovan Menez and Yiting (Jerry) Jin; and diver expert Daniele Fragiacomo — we developed a hydrodynamic model and an underwater simulation environment in the lab to test robots and complex scenarios without actual diving. We’re currently working on a diver avatar to enhance our research on underwater human–robot collaboration.
Tell us more about developing the avatar.
The team has been developing a special drysuit fitted with a set of sensors based on the Movella Xsens MVN Link™ suit. The sensors were initially designed for use in dry lab settings, where they track body movements to create accurate virtual avatars by capturing a person’s 3D shape, pose, and position. They aren’t suitable for underwater use as originally built because they relied on wireless communication, which performed poorly in the water due to high signal noise.
We ran into issues even when we switched to wired sensors because the algorithms were not designed for underwater conditions. To solve that problem, we created an underwater version of simultaneous localization and mapping (SLAM) integrated with these sensors. This approach allows us to accurately measure a diver’s position and movement while swimming. We then combined this with mathematical models from our collaborators to develop a physics-based model of human swimming.
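As a rough illustration of that kind of sensor fusion, the sketch below shows inertial dead reckoning corrected by occasional acoustic position fixes, loosely analogous to the localization side of an underwater SLAM pipeline. The filter structure, noise values, and update rates are illustrative assumptions, not the LISC implementation.

```python
import numpy as np

# Minimal sketch: fusing inertial dead reckoning with intermittent acoustic
# position fixes to track a diver's planar position. All names and noise
# values here are assumptions for illustration only.

class DiverPositionFilter:
    """Constant-velocity Kalman filter for a diver's 2D position."""

    def __init__(self, dt=0.01):
        self.dt = dt
        self.x = np.zeros(4)               # state: [px, py, vx, vy]
        self.P = np.eye(4)                 # state covariance
        self.F = np.eye(4)                 # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.Q = np.eye(4) * 1e-3          # process noise (assumed)
        self.H = np.eye(2, 4)              # acoustic fix observes position only
        self.R = np.eye(2) * 0.5           # acoustic fix noise (assumed, m^2)

    def predict(self, accel_xy):
        """Propagate with IMU acceleration (dead-reckoning step)."""
        self.x[2:] += accel_xy * self.dt   # integrate acceleration into velocity
        self.x = self.F @ self.x           # integrate velocity into position
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, acoustic_fix_xy):
        """Correct accumulated drift with an acoustic position fix."""
        y = acoustic_fix_xy - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x += K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P


if __name__ == "__main__":
    # Drift builds up from noisy IMU data and is pulled back by sparse fixes.
    rng = np.random.default_rng(0)
    f = DiverPositionFilter(dt=0.01)
    true_vel = np.array([0.5, 0.0])          # diver swims at 0.5 m/s along x
    true_pos = np.zeros(2)
    for step in range(1, 3001):               # 30 seconds at 100 Hz
        true_pos = true_pos + true_vel * f.dt
        noisy_accel = rng.normal(0.0, 0.2, 2)  # constant speed plus IMU noise
        f.predict(noisy_accel)
        if step % 500 == 0:                    # acoustic fix every 5 seconds
            fix = true_pos + rng.normal(0.0, 0.3, 2)
            f.update(fix)
    print("estimated position:", f.x[:2], "true position:", true_pos)
```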
What other challenges have you faced besides environmental factors when integrating underwater sensors?
The biggest challenge is getting robots to interpret a diver’s state, emotions, and physiological condition. Divers experience stress from various sources: physiological, psychological, environmental, and cognitive. We’re working closely with divers to understand these stressors and are collaborating with other groups to develop wearable sensors. Our goal is to integrate these sensors with the robot’s data, such as sonar, to better understand the diver’s condition, which is crucial for decision-making and mission effectiveness.
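To make the idea of fusing wearable and robot-side data concrete, here is a deliberately simple, hypothetical sketch: a rule-based score that combines assumed wearable readings with a sonar-derived motion cue and maps the result to a coarse assistive behavior. The field names, thresholds, and weights are placeholders, not the team’s actual sensors or decision logic.

```python
from dataclasses import dataclass

# Toy rule-based fusion of wearable readings with a robot-side sonar cue to
# flag a possibly stressed diver. All fields and thresholds are assumptions.

@dataclass
class DiverTelemetry:
    heart_rate_bpm: float        # from a wearable sensor (assumed)
    breathing_rate_bpm: float    # from a wearable sensor (assumed)
    motion_energy: float         # from sonar-tracked body motion, 0..1 (assumed)

def stress_score(t: DiverTelemetry) -> float:
    """Combine normalized indicators into a 0..1 stress score."""
    hr = min(max((t.heart_rate_bpm - 60) / 80, 0.0), 1.0)      # 60-140 bpm range
    br = min(max((t.breathing_rate_bpm - 12) / 28, 0.0), 1.0)  # 12-40 breaths/min
    return 0.4 * hr + 0.4 * br + 0.2 * t.motion_energy

def robot_action(score: float) -> str:
    """Map the fused score to a coarse assistive behavior."""
    if score > 0.75:
        return "close in and signal the diver"
    if score > 0.5:
        return "increase monitoring frequency"
    return "continue mission"

print(robot_action(stress_score(DiverTelemetry(128, 32, 0.8))))
# -> close in and signal the diver
```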
For practical testing, our team at Cornell uses a pool for experiments with divers. Since pool testing is costly and time-consuming, we’re also developing a virtual reality environment to simulate underwater conditions. This virtual setup will help us test technologies and complex scenarios, such as low visibility or ocean currents, without putting divers at risk. We’re working with others to create realistic underwater environments and integrate wearable sensors virtually, allowing us to test and refine our approaches safely and effectively.
You also work with underwater vehicles, primarily for exploration purposes. Tell us about that.
Over the years I’ve worked extensively with the Office of Naval Research and various naval bases on underwater autonomy. Our work has primarily focused on underwater vehicles used for anti-submarine warfare, which involves a lot of sensing and adapting to ocean currents. We’ve had to learn how to estimate these currents and use them for navigation, which is also relevant to scuba diving because currents affect divers as well.
I’ve also worked on small vehicles for detecting and classifying underwater explosives, which is important for national security. These vehicles, such as the REMUS 100, are equipped with crucial sensors, and processing the data from these sensors is essential. This expertise is also applicable to scuba diving, as some divers perform similar underwater tasks.
What other projects are you working on?
I’m really excited about using our tools to analyze accidents, similar to how the aerospace community reconstructs airplane crashes. If a scuba diver were equipped with advanced sensors and something went wrong, the collected data could help us reconstruct the incident. By combining this data with simulations and testing it with a diver, we can better understand what went wrong and improve safety.
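A toy example of what such a reconstruction might start from is sketched below: replaying a hypothetical logged depth profile and flagging rapid ascents against an assumed threshold. The log format, sample interval, and the 9 m/min limit are illustrative only.

```python
# Replay a hypothetical dive log and flag moments of interest after an incident.

SAMPLE_INTERVAL_S = 10
depth_log_m = [0, 5, 11, 18, 18, 19, 17, 12, 5, 1, 0]   # hypothetical logged depths

def ascent_rate_m_per_min(d_prev, d_curr, dt_s=SAMPLE_INTERVAL_S):
    """Positive values mean the diver is ascending."""
    return (d_prev - d_curr) / dt_s * 60

for i in range(1, len(depth_log_m)):
    rate = ascent_rate_m_per_min(depth_log_m[i - 1], depth_log_m[i])
    if rate > 9:   # common recreational guideline, used here as an assumed threshold
        t = i * SAMPLE_INTERVAL_S
        print(f"t={t:3d}s  depth={depth_log_m[i]:2d}m  rapid ascent: {rate:.0f} m/min")
```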
I am discussing with the Office of Naval Research how to apply these technologies to support Navy divers and Marines who face high stress levels. The aim is to use these devices for training and to get insight into how stress impacts their cognitive and psychological states.
We’re also exploring virtual reality tools to study divers and dive-related medical conditions in a lab setting. This research could eventually extend to other challenging environments, such as high altitudes or hyperbaric chambers. We’re collaborating with teams who specialize in undersea medicine.
Additionally, we’re developing underwater sensors tailored to these specific conditions. Although these sensors are still in the developmental phase, advances in them have the potential to benefit a range of fields, including those involving other extreme conditions that athletes and military personnel face.
Explore More
Watch Silvia Ferrari deliver a TEDx Talk in this video.
© Alert Diver – Q4 2024