Defining Fairness in Human-Robot Teams
Human-Robot Interaction | Fairness | Metrics Development
Conference: IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2020
Authors: Mai Lee Chang, Zachary Pope, Elaine Schaertl Short, Andrea Lockerd Thomaz
My Role: Lead Researcher
Research Overview
We sought to understand the human teammate's perception of fairness during a physical human-robot collaborative task in which certain subtasks leverage the robot's strengths and others leverage the human's. We conducted a user study (n=30) to investigate the effects of fluency (absent vs. present) and effort (absent vs. present) on participants' perception of fairness, evaluating four human-robot teaming algorithms that vary these two factors. We propose three notions of fairness for effective human-robot teamwork: equality of workload, equality of capability, and equality of task type.
Innovation
This research introduces three novel definitions of fairness specifically designed for human-robot teams:
Equality of Workload (Ew): Equalizing the number of subtasks among the team members, regardless of task type or capability.
Equality of Capability (Ec): Equalizing the number of subtasks that are strengths among the team members. Strengths could be quantified in various ways including task completion time, accuracy, throughput, and task difficulty levels.
Equality of Task Type (Et): Equalizing the number of subtasks from each task type category among the teammates. In other words, Et gives all team members equal access to opportunities and cost sharing.
These definitions enable quantification and implementation of fairness considerations in human-robot teaming algorithms, addressing a significant gap in the field of human-robot interaction.
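As a minimal sketch of how these definitions can be quantified (the helper names and scoring scheme here are illustrative, not the paper's formalization), each notion can be scored as the imbalance between the two teammates' counts under a different grouping of subtasks:

```python
from collections import Counter

def imbalance(counts):
    """Absolute difference between the two teammates' counts; 0 = perfectly equal."""
    return abs(counts.get("human", 0) - counts.get("robot", 0))

def fairness_scores(allocation, strengths, task_types):
    """allocation: {subtask: assigned agent}; strengths: {subtask: agent whose
    strength it is}; task_types: {subtask: type label}.
    Returns the imbalance under each fairness notion (lower = fairer)."""
    # Equality of Workload (Ew): raw subtask counts per agent.
    ew = imbalance(Counter(allocation.values()))
    # Equality of Capability (Ec): count only subtasks assigned to the agent
    # whose strength they are.
    ec = imbalance(Counter(agent for task, agent in allocation.items()
                           if strengths[task] == agent))
    # Equality of Task Type (Et): sum the per-category imbalances, so every
    # agent gets comparable access to each kind of work.
    et = sum(imbalance(Counter(agent for task, agent in allocation.items()
                               if task_types[task] == t))
             for t in set(task_types.values()))
    return {"Ew": ew, "Ec": ec, "Et": et}
```

For example, an allocation that gives the robot all math subtasks and the human all manipulation subtasks is perfectly fair under Ew and Ec but maximally unfair under Et, which is exactly the tension between the capability-based and task-type-based views of fairness described below.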
Technical Implementation
The research required developing:
Robot Control Architecture: Custom system integrating GUI interface, arm motion planner, and head movement controller
The robot system used in the user study.
Adaptive Teaming Algorithms: Four distinct algorithms manipulating fluency and effort variables:
Fluency-present and effort-absent: After a human action, the robot takes an action that prioritizes the math tasks.
Fluency-present and effort-present: After a human action, the robot takes an action that prioritizes the math tasks. At the same time, the robot performs manipulation as fast as possible.
Fluency-absent and effort-absent: The robot solves a math problem every 2 seconds and does not perform any manipulation.
Fluency-absent and effort-present: The robot solves a math problem every 2 seconds, and at the same time, the robot performs manipulation as fast as possible.
The four human-robot teaming algorithms that we evaluated in the user study.
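The four conditions can be summarized as a small policy sketch (the condition flags, action names, and 2-second timer check below are illustrative; the actual controller ran on the robot's motion-planning stack):

```python
def robot_policy(fluency, effort, human_just_acted, now, last_math_time):
    """Return the robot's next actions under one of the four study conditions.
    fluency: whether the robot responds to human actions;
    effort: whether the robot also performs manipulation as fast as possible."""
    actions = []
    if fluency:
        # Fluency-present: act after each human action, prioritizing math subtasks.
        if human_just_acted:
            actions.append("solve_math")
            if effort:
                actions.append("manipulate_fast")
    else:
        # Fluency-absent: solve a math problem on a fixed 2-second timer,
        # ignoring what the human is doing.
        if now - last_math_time >= 2.0:
            actions.append("solve_math")
        if effort:
            actions.append("manipulate_fast")
    return actions
```

The sketch makes the manipulation explicit: fluency governs *when* the robot acts (reactively vs. on a timer), while effort governs *whether* it takes on the manipulation work it is worse at.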
Experimental Validation
We conducted a 2×2 within-subjects design with 30 participants.
Task Design: Collaborative sorting task requiring complementary skills (math problem solving and physical manipulation)
In the user study, the participant and robot work together to sort items from a table into one of the bins.
Independent Variables:
Fluency (present vs. absent): Controls the robot's responsiveness to human actions
Effort (present vs. absent): Controls the robot's approach to tasks it performs less efficiently
Dependent Measures:
Objective: Task completion time, task distribution ratios, capability utilization metrics
Subjective: Perceived fairness ratings, unfairness attributions, post-study interviews
Analysis Methods: Repeated measures ANOVA with post-hoc Tukey HSD comparisons
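Crossing the two independent variables yields the four within-subjects conditions each participant experienced. As a sketch (the rotation scheme below is a simple illustrative counterbalancing, not necessarily the ordering used in the study):

```python
from itertools import product

# Cross the two independent variables into the 2x2 within-subjects design.
LEVELS = {"fluency": ["absent", "present"], "effort": ["absent", "present"]}
CONDITIONS = [dict(zip(LEVELS, combo)) for combo in product(*LEVELS.values())]

def condition_order(participant_id):
    """Rotate the condition order per participant to spread out order effects
    (simple rotation; illustrative only)."""
    k = participant_id % len(CONDITIONS)
    return CONDITIONS[k:] + CONDITIONS[:k]
```

Each participant then completes the collaborative sorting task once per condition, with the objective and subjective measures listed above collected after each run.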
Key Findings
Effort Significantly Impacts Perceived Fairness: Robot effort increased fairness ratings by 27%, demonstrating that perceived exertion is more important than actual task distribution
Fairness-Efficiency Balance: The fluency-absent/effort-present condition achieved the optimal balance between perceived fairness and team efficiency
Dual Perception of Fairness: Participant feedback revealed two distinct conceptualizations of fairness:
Capability-based fairness: Task allocation based on individual strengths
Task type-based fairness: Equal distribution of task types regardless of capability
Research Impact
This work has significant implications for:
Industrial Collaborative Robots: Providing design principles for robots that are both efficient and perceived as fair teammates
Algorithmic Fairness: Extending fairness concepts beyond human-human interactions to human-machine collaborations
Human Factors Engineering: Offering metrics to evaluate and improve human satisfaction in mixed human-robot teams
Robotics Software Development: Providing implementable metrics for robot behavior planning algorithms
The findings directly address emerging challenges in manufacturing, healthcare, and service sectors where maintaining worker satisfaction alongside robot deployment is crucial.
Future Directions
This research opens several promising avenues for future investigation:
Adaptive Fairness Models: Developing systems that learn individual fairness preferences over time
Multi-Agent Extensions: Expanding fairness definitions to teams with multiple humans and robots
Cross-Cultural Validation: Exploring how cultural contexts affect fairness perceptions in human-robot teams
Long-Term Interaction Effects: Investigating how fairness perceptions evolve during extended collaboration periods
Implementation in Real-World Settings: Validating these fairness principles in actual workplace deployments
Skills Demonstrated
Technical Skills
Algorithm Design: Created and implemented multiple robot control algorithms
Mathematical Modeling: Formalized abstract concepts of fairness into implementable metrics
Research Skills
Experimental Design: Created and executed a rigorous user study with appropriate controls
Statistical Analysis: Applied mixed-methods analysis to quantify both objective and subjective measures
User Experience Research: Conducted surveys to capture human perception
Interdisciplinary Integration: Successfully combined principles from robotics, psychology, and social science
Scientific Communication: Presented complex technical concepts clearly in academic writing and conference presentation
Domain Knowledge
Human-Robot Interaction: Deep understanding of how humans and robots can work together
Fairness: Knowledge of how fairness influences human-robot collaboration
Robotics: Knowledge of robot planning and action execution
Human Factors: Understanding of how humans perceive and interpret robot behavior
Collaborative Systems: Experience designing systems where humans and AI collaborate