Imagine being aboard an airplane where two pilots share control – one human, one computer. They each focus on different aspects, but when they align their attention, the human pilot takes the reins. If the human pilot becomes distracted, the computer quickly steps in.
Enter the Air-Guardian, a creation from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). In the ever-demanding world of modern aviation, the Air-Guardian functions as a proactive co-pilot, forming a symbiotic partnership between human and machine, all rooted in the understanding of attention.
But how does it gauge attention? For humans, it relies on eye-tracking; for the computer, it uses “saliency maps” to pinpoint where attention is directed. These maps act as visual cues, highlighting the regions of an image the network weighs most heavily, which also makes its decisions easier to interpret. Unlike traditional autopilot systems, which intervene only during emergencies, Air-Guardian uses these attention markers to detect early signs of potential risk.
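To make the saliency idea concrete, here is a minimal sketch of gradient-based saliency for a toy network: the “attention” each pixel receives is the magnitude of the output's gradient with respect to that pixel. This is a generic illustration, not the system's actual algorithm; the network, weights, and image here are all hypothetical placeholders.

```python
import random

random.seed(0)

# Hypothetical 4x4 "camera image" flattened to 16 pixels, and a tiny
# one-hidden-layer network with random placeholder weights.
image = [random.random() for _ in range(16)]
W1 = [[random.gauss(0, 1) for _ in range(16)] for _ in range(8)]
W2 = [random.gauss(0, 1) for _ in range(8)]

def forward(x):
    pre = [sum(w * xi for w, xi in zip(row, x)) for row in W1]
    hidden = [max(p, 0.0) for p in pre]          # ReLU activation
    output = sum(w * h for w, h in zip(W2, hidden))
    return output, hidden

def saliency(x):
    """|d(output)/d(pixel)| for each pixel, via the chain rule."""
    _, hidden = forward(x)
    grad = [0.0] * len(x)
    for j, h in enumerate(hidden):
        if h > 0:                                # ReLU gates the gradient
            for i in range(len(x)):
                grad[i] += W2[j] * W1[j][i]
    return [abs(g) for g in grad]

smap = saliency(image)
total = sum(smap)
smap = [s / total for s in smap]                 # a distribution over pixels
```

High values in `smap` mark the pixels whose perturbation would most change the network's output, which is the essential idea behind treating saliency as machine attention.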
The implications stretch beyond aviation. Similar cooperative control systems could find applications in cars, drones, and various robotic endeavors.
MIT CSAIL postdoc Lianhao Yin, a lead author on a paper about Air-Guardian, explains, “Our method stands out due to its differentiability. We can train the cooperative layer and the entire end-to-end process. The system’s adaptability is another unique feature – it can be tailored to suit different situations, ensuring a harmonious partnership between human and machine.”
In field tests, both pilot and system based their decisions on the same raw images while navigating to a target waypoint. Air-Guardian’s success was measured by the cumulative reward earned during flight and the time taken to reach waypoints; the system reduced flight risks and increased the rate of successful navigation to targets.
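Cumulative reward simply sums per-step scores over a flight. The sketch below illustrates the idea with a hypothetical reward scheme (the bonus and penalty values, and the notion of a "risky" step, are assumptions for illustration, not the paper's actual reward function).

```python
def cumulative_reward(steps, waypoint_bonus=10.0, risk_penalty=5.0):
    """Sum per-step rewards over a flight trajectory.

    Each step is a (reached_waypoint, risky) pair: reaching a waypoint
    earns a bonus, entering a risky state incurs a penalty. Values here
    are hypothetical placeholders.
    """
    total = 0.0
    for reached_waypoint, risky in steps:
        if reached_waypoint:
            total += waypoint_bonus
        if risky:
            total -= risk_penalty
    return total
```

A flight that reaches waypoints quickly while avoiding risky states accumulates a higher total, which is why cumulative reward works as a single summary metric.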
“This system embodies a novel approach to human-centric AI-powered aviation,” adds Ramin Hasani, MIT CSAIL research affiliate and creator of liquid neural networks. “Our use of liquid neural networks offers a dynamic, adaptive approach, ensuring that AI doesn’t replace human judgment but complements it, leading to improved safety and collaboration in the skies.”
The real power of Air-Guardian lies in its core technology. An optimization-based cooperative layer fuses visual attention from human and machine, while liquid closed-form continuous-time neural networks, known for deciphering cause-and-effect relationships, analyze incoming images for vital information. The VisualBackProp algorithm complements these by identifying a network’s focal points within an image, yielding clear attention maps.
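The cooperative idea can be sketched as follows: compare the human's attention map (from eye-tracking) with the machine's saliency map, and shift control authority toward the machine as the two diverge. This is a simplified heuristic stand-in for illustration only; the actual system uses a differentiable, optimization-based cooperative layer, and the cosine-similarity overlap, threshold, and linear blending below are all assumptions.

```python
import math

def overlap(a, b):
    """Cosine similarity between two flattened attention maps."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def blend_control(human_cmd, machine_cmd, human_map, machine_map,
                  threshold=0.7):
    """Defer to the human when attention maps align; otherwise shift
    control authority toward the machine as the maps diverge.
    (Hypothetical blending rule, not the paper's cooperative layer.)"""
    o = overlap(human_map, machine_map)
    alpha = 1.0 if o >= threshold else max(o, 0.0) / threshold
    return alpha * human_cmd + (1.0 - alpha) * machine_cmd
```

When the two maps point at the same region, the human command passes through unchanged; when the pilot's gaze drifts away from what the network deems critical, the machine's command dominates, mirroring the article's description of the computer stepping in when the pilot is distracted.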
Before wider adoption, the human-machine interface will need refinement. Feedback suggests that a more intuitive indicator, such as a visual bar, could signal when the guardian system assumes control.
Air-Guardian ushers in a new era of safer skies, offering a dependable safety net for those moments when human attention falters.
Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, director of CSAIL, and senior author on the paper, emphasizes, “The Air-Guardian system showcases the synergy between human expertise and machine learning, advancing the goal of using machine learning to assist pilots in challenging situations and reduce operational errors.”
Stephanie Gil, assistant professor of computer science at Harvard University, who was not involved in the work, comments, “One of the most intriguing aspects of this work is the potential for earlier interventions and better interpretability by human pilots using a visual attention metric. This is a great example of how AI can collaborate with humans, fostering trust through natural communication between human and AI systems.”