Effects of Integrated Intent Recognition and Communication on Human-Robot Collaboration
Human-Robot Interaction | Collaborative Systems | Intent Communication
Conference: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018
Authors: Mai Lee Chang, Reymundo A. Gutierrez, Priyanka Khante, Elaine Schaertl Short, Andrea Lockerd Thomaz
My Role: Lead Researcher
Research Overview
This research investigates how integrating intent recognition and communication affects human-robot collaboration in physical tasks. While previous studies have examined these elements in isolation, our work takes a novel approach by implementing and evaluating a bi-directional intent system where robots can both recognize human intentions and communicate their own intentions through movement.
We developed an integrated system that combines hand motion intent recognition (to predict which object a human will select) with legible motion planning (to clearly communicate the robot's intentions to humans). Through a controlled user study with 18 participants performing a collaborative cup-pouring task, we evaluated how different combinations of intent recognition and motion legibility affect objective team performance and subjective perceptions of collaboration.
Innovation
Prior work has studied intent recognition and intent communication as separate problems. This research breaks new ground by evaluating them together, asking how their integration, rather than either component alone, shapes collaborative team performance.
The innovation lies in treating human-robot collaboration as a bi-directional interaction system where both partners simultaneously:
Predict the other's intentions through observation
Communicate their own intentions through movement
This integrated approach reflects how humans naturally collaborate, constantly alternating between recognizing others' intentions and communicating their own, enabling seamless coordination during complex physical tasks.
Technical Implementation
The research implemented a comprehensive system with four integrated modules:
1. Intent Recognition System
Hand motion intent recognition: Predicts human intention before cup selection by tracking hand movement trajectories
Object intent recognition (baseline): Detects human intention after cup selection by monitoring object state changes
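The core idea of the hand-motion recognizer, predicting which cup the human will select before they reach it, can be sketched from the hand's recent trajectory. The alignment heuristic and function name below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def predict_target(hand_positions, object_positions):
    """Guess which object the hand is heading toward.

    hand_positions: (T, 2) array of recent hand samples (oldest first).
    object_positions: (N, 2) array of candidate object locations.
    Returns the index of the most likely target, or None if the hand
    is not clearly moving yet.
    """
    velocity = hand_positions[-1] - hand_positions[-2]   # latest displacement
    speed = np.linalg.norm(velocity)
    if speed < 1e-6:                                     # hand at rest: no prediction
        return None
    direction = velocity / speed
    to_objects = object_positions - hand_positions[-1]
    dists = np.linalg.norm(to_objects, axis=1)
    # Cosine alignment between the hand's heading and each object's bearing:
    # the object most in line with the current motion wins.
    alignment = (to_objects @ direction) / np.maximum(dists, 1e-6)
    return int(np.argmax(alignment))
```

Because the prediction uses motion direction rather than contact, it can fire well before the grasp, which is what enables early coordination.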
2. Motion Planning System
Predictable motion: Efficient, direct trajectories between start and goal positions
Legible motion: Trajectories optimized to clearly communicate the robot's intended goal early in the movement
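One common way to quantify how clearly a partial trajectory signals its goal, in the spirit of the legibility literature, is to score each candidate goal by how efficiently the path so far would reach it, then weight early timesteps more heavily. The exponential cost model and weighting below are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def goal_probability(traj, goals, start, beta=1.0):
    """P(goal | partial trajectory) under a simple exponential cost model:
    a goal is likely if the path traveled so far plus the remaining
    straight-line distance is close to the direct (optimal) distance."""
    path_len = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    scores = []
    for g in goals:
        remaining = np.linalg.norm(g - traj[-1])
        optimal = np.linalg.norm(g - start)
        scores.append(np.exp(-beta * (path_len + remaining - optimal)))
    scores = np.array(scores)
    return scores / scores.sum()

def legibility(traj, goals, true_goal, beta=1.0):
    """Time-weighted average probability assigned to the true goal:
    higher when observers can infer the goal early in the motion."""
    weights = np.linspace(1.0, 0.0, len(traj) - 1)  # early timesteps count more
    probs = [goal_probability(traj[: t + 2], goals, traj[0], beta)[true_goal]
             for t in range(len(traj) - 1)]
    return float(np.average(probs, weights=weights))
```

A predictable (direct) trajectory maximizes efficiency; a legible one may detour away from competing goals precisely so that this early-weighted probability rises faster.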
3. Segmentation and State Extraction
Real-time environmental monitoring
Detection of object presence/absence states
Tracking of human hand movements via colored glove
4. Motion Execution
Trajectory generation for both predictable and legible motion types
Coordination of cup selection and pouring actions based on intent recognition
The system was implemented on a robot platform for real-time collaborative cup-pouring tasks, with capabilities for both early intent detection and transparent movement communication.
Figure: The integrated intent recognition and generation system consists of four modules.
Experimental Validation
The research was validated through a 2×2 within-subjects user study with 18 participants (5 female, 13 male) testing four experimental conditions:
Baseline: IR Absent + Predictable Motion
IR Present + Predictable Motion
IR Absent + Legible Motion
IR Present + Legible Motion
Task Design:
Collaborative cup pouring where human and robot needed to coordinate cup selection and pouring targets
Human got first choice of cup; robot had to select a different cup
Robot chose the target bin; human had to infer and match this choice
Both partners had to place cups back in original positions
Objective metrics:
Human's initial intent recognition time
Human's final intent recognition time
Percentage of overall concurrent motion
Percentage of segment concurrent motion
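The concurrent-motion metrics above can be computed from annotated motion intervals as the total time human and robot move simultaneously, divided by the relevant duration (whole task or a single segment). A minimal sketch; the interval representation and function name are assumptions:

```python
def concurrent_motion_pct(human_intervals, robot_intervals, duration):
    """Percentage of `duration` during which human and robot move at the
    same time. Intervals are (start, end) tuples in seconds, taken from
    video coding of when each agent is in motion."""
    overlap = 0.0
    for h0, h1 in human_intervals:
        for r0, r1 in robot_intervals:
            # Length of the overlap between the two intervals, if any.
            overlap += max(0.0, min(h1, r1) - max(h0, r0))
    return 100.0 * overlap / duration
```

Passing the full task duration gives overall concurrent motion; passing one segment's intervals and length gives the per-segment version.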
Subjective metrics (7-point Likert scales):
Team fluency
Robot contribution
Robot capability
Legibility
Intent recognition
Key Findings
Interaction Effects: Our analysis revealed a significant interaction between motion type and intent recognition for concurrent motion. This key finding supports our hypothesis that intent recognition and communication should be studied as an integrated system rather than independently.
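In a 2×2 design, the interaction can be read off the cell means as a difference of simple effects: the benefit of intent recognition under legible motion minus its benefit under predictable motion. The numbers below are purely illustrative, not the study's data:

```python
# Hypothetical cell means of % concurrent motion for the 2x2 design
# (values are illustrative only, not results from the paper).
means = {
    ("IR_absent", "predictable"): 40.0,
    ("IR_present", "predictable"): 45.0,
    ("IR_absent", "legible"): 42.0,
    ("IR_present", "legible"): 58.0,
}

# Simple effect of intent recognition under each motion type.
effect_predictable = (means[("IR_present", "predictable")]
                      - means[("IR_absent", "predictable")])
effect_legible = (means[("IR_present", "legible")]
                  - means[("IR_absent", "legible")])

# Interaction contrast: does intent recognition help more when the
# robot's motion is legible? A nonzero value signals an interaction.
interaction = effect_legible - effect_predictable
print(interaction)  # 11.0 with these illustrative numbers
```

A significance test of this contrast (e.g., via repeated-measures ANOVA) is what distinguishes a genuine interaction from additive main effects.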
Legible Motion Impact: Legible motion significantly improved subjective ratings across multiple measures (team fluency, robot contribution, robot capability, and legibility).
Collaborative Behavior: Rather than optimizing for speed, participants attempted to synchronize their actions with the robot, suggesting that people naturally value coordination in collaborative tasks.
These findings demonstrate that integrating both intent recognition and communication significantly enhances human-robot collaboration.
Research Impact
This research makes several significant contributions to the field of human-robot interaction:
Theoretical Framework: Establishes a new paradigm for studying human-robot collaboration as a bi-directional interaction system where intent flows in both directions simultaneously.
Design Principles: Provides evidence-based guidance for designing more intuitive and collaborative robots that can both understand human intentions and clearly communicate their own.
Performance Metrics: Introduces concurrent motion as a meaningful metric for evaluating human-robot collaboration beyond traditional speed-based measurements.
User Experience: Demonstrates that robots capable of recognizing and communicating intent create more positive user experiences and are perceived as more capable teammates.
Practical Applications: Findings have direct applications in collaborative robotics for manufacturing, healthcare, household assistance, and other domains requiring natural human-robot teamwork.
Future Directions
This work opens several promising avenues for future research:
Complex Task Environments: Extending this approach to multi-step tasks with higher complexity, time pressure, and multiple concurrent goals.
Multimodal Intent Communication: Integrating motion-based intent communication with other modalities such as gaze, gestures, and verbal cues.
Timing Optimization: Further investigating the relationship between collaboration, timing, and synchronization in human-robot teams.
Long-term Interaction: Studying how intent recognition and communication evolve over extended periods of human-robot collaboration.
Skills Demonstrated
Technical Skills
Robotics: Motion planning, trajectory optimization, robotic manipulation
Computer Vision: Object detection, hand tracking, visual state monitoring
Machine Learning: Intent recognition, human action prediction
Software Engineering: System integration, real-time control, modular design
Programming: Implementation of control systems, data processing pipelines
Research Skills
Experimental Design: Within-subjects study design, controlled variables
Statistical Analysis: ANOVA, post-hoc testing, ICC reliability analysis
Data Collection: Video coding, timing measurements, subjective evaluations
Literature Review: Synthesis of prior work across multiple research domains
Scientific Communication: Presented complex technical concepts clearly in academic writing and conference presentation
Domain Knowledge
Human-Robot Interaction: Principles of collaborative robotics, social robotics
Motion Psychology: Understanding of how humans perceive and interpret motion
Intent Recognition: Models of human intention and action prediction
User Experience: Evaluation methodologies for human-robot collaboration
Interdisciplinary Applications: Bridging robotics, psychology, and human factors