AI Assistant for Human-Robot Collaboration
On a research cruise around Hawaii in 2018, Yuening Zhang SM ’19, PhD ’24 saw how difficult it was to keep a tight ship. The careful coordination required to map underwater terrain could sometimes lead to a stressful environment for team members, who might have different understandings of which tasks must be completed in spontaneously changing conditions. During these trips, Zhang considered how a robotic companion could have helped her and her crewmates achieve their goals more efficiently.
Developing an AI Assistant
Six years later, as a research assistant in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Zhang developed what could be considered a missing piece: an AI assistant that communicates with team members to align roles and accomplish a common goal. In a paper presented at the International Conference on Robotics and Automation (ICRA) and published on IEEE Xplore on Aug. 8, she and her colleagues present a system that can oversee a team of both human and AI agents, intervening when needed to potentially increase teamwork effectiveness in domains like search-and-rescue missions, medical procedures, and strategy video games.
Theory of Mind Model
The CSAIL-led group has developed a theory of mind model for AI agents, which represents how humans think about and understand each other’s possible plans of action when they cooperate on a task. By observing the actions of its fellow agents, this new team coordinator can infer their plans and their understanding of one another from a prior set of possible beliefs. When their plans are incompatible, the AI helper intervenes by aligning their beliefs about each other, instructing their actions, and asking questions when needed.
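To make the idea concrete, here is a minimal sketch of inferring a teammate's likely plan from a prior set of hypotheses by watching its actions, which is the kind of inference described above. This is not the authors' implementation; the class, function, and plan names are illustrative assumptions.

```python
# Sketch: update a belief over hypothesized plans from observed actions,
# keeping probability mass on plans consistent with what was seen.
from dataclasses import dataclass


@dataclass
class PlanHypothesis:
    name: str
    actions: list[str]  # actions this plan would produce, in order


def update_belief(prior: dict[str, float],
                  hypotheses: dict[str, PlanHypothesis],
                  observed: list[str]) -> dict[str, float]:
    """Bayesian-style update over a fixed set of plan hypotheses."""
    likelihoods = {}
    for name, hyp in hypotheses.items():
        # A plan is consistent if the observed actions match its prefix.
        consistent = hyp.actions[:len(observed)] == observed
        likelihoods[name] = 1.0 if consistent else 1e-6  # small slack for noisy observations
    posterior = {n: prior[n] * likelihoods[n] for n in prior}
    total = sum(posterior.values())
    return {n: p / total for n, p in posterior.items()}


# Two hypothesized plans for a teammate searching a building.
hypotheses = {
    "search_east_wing": PlanHypothesis("search_east_wing", ["go_east", "open_room_2"]),
    "search_west_wing": PlanHypothesis("search_west_wing", ["go_west", "open_room_5"]),
}
prior = {"search_east_wing": 0.5, "search_west_wing": 0.5}

posterior = update_belief(prior, hypotheses, observed=["go_east"])
print(posterior)  # belief shifts strongly toward search_east_wing
```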
Applications
For example, when a team of rescue workers is out in the field to triage victims, they must make decisions based on their beliefs about each other’s roles and progress. This type of epistemic planning could be improved by CSAIL’s software, which can send messages about what each agent intends to do or has done to ensure task completion and avoid duplicate efforts. In this instance, the AI helper may intervene to communicate that an agent has already proceeded to a certain room, or that none of the agents are covering a certain area with potential victims.
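The intervention logic in this triage example might look like the following sketch: given the areas each agent is inferred to be heading toward, the helper flags duplicated effort and uncovered areas, then issues short coordination messages. The function name, inputs, and message format are assumptions for illustration, not the paper's interface.

```python
# Sketch: detect duplicate effort and coverage gaps, then draft messages.
def coordination_messages(inferred_targets: dict[str, str],
                          areas_with_victims: set[str]) -> list[str]:
    messages = []

    # Duplicate effort: two agents converging on the same room.
    seen: dict[str, str] = {}
    for agent, area in inferred_targets.items():
        if area in seen:
            messages.append(f"{agent}: {seen[area]} is already covering {area}.")
        else:
            seen[area] = agent

    # Coverage gap: an area with potential victims that no agent plans to visit.
    for area in areas_with_victims - set(inferred_targets.values()):
        messages.append(f"All: no one is covering {area}; please reassign.")

    return messages


print(coordination_messages(
    inferred_targets={"medic_1": "room_2", "medic_2": "room_2"},
    areas_with_victims={"room_2", "room_7"},
))
```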
Conclusion
The researchers’ method incorporates probabilistic reasoning with recursive mental modeling of the agents, allowing the AI assistant to make risk-bounded decisions. The AI assistant currently infers agents’ beliefs from a given prior over possible beliefs, but the MIT group envisions applying machine learning techniques to generate new hypotheses on the fly. To apply the assistant to real-life tasks, they also aim to consider richer plan representations and to further reduce computation costs.
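One way to read "risk-bounded" is as a chance constraint: the assistant steps in only when the probability that the team's plans conflict exceeds an acceptable risk level. The sketch below illustrates that rule under the posterior computed earlier; the threshold and scoring are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch: intervene only if P(plans conflict) exceeds the allowed risk bound.
def should_intervene(joint_plan_posterior: dict[tuple[str, str], float],
                     incompatible: set[tuple[str, str]],
                     risk_bound: float = 0.1) -> bool:
    """Return True when the probability of incompatible plans exceeds the bound."""
    p_conflict = sum(p for plans, p in joint_plan_posterior.items()
                     if plans in incompatible)
    return p_conflict > risk_bound


# Joint belief over the two agents' plans (both heading east would conflict).
posterior = {
    ("search_east_wing", "search_east_wing"): 0.35,
    ("search_east_wing", "search_west_wing"): 0.65,
}
print(should_intervene(posterior,
                       incompatible={("search_east_wing", "search_east_wing")}))
```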
FAQs
Q: What is the purpose of the AI assistant?
A: The AI assistant is designed to communicate with team members to align roles and accomplish a common goal, potentially increasing teamwork effectiveness in domains like search-and-rescue missions, medical procedures, and strategy video games.
Q: How does the AI assistant work?
A: The AI assistant uses a theory of mind model to infer the plans and understanding of its fellow agents, intervening when needed to align their beliefs about each other and instruct their actions.
Q: What are the potential applications of the AI assistant?
A: The AI assistant could be used in search-and-rescue missions, medical procedures, and strategy video games, among other domains where teamwork is essential.
Q: How does the AI assistant make decisions?
A: The AI assistant uses probabilistic reasoning with recursive mental modeling of the agents, allowing it to make risk-bounded decisions.