

Invited Talk in Workshop: 2nd Workshop on Models of Human Feedback for AI Alignment (MoFA)

Explainable Decision Support and Justification for Model Alignment in Human-Robot Teams

Matthew Luebbers

Fri 18 Jul 9:05 a.m. PDT — 9:40 a.m. PDT

Abstract:

There is great potential for humans and autonomous robots, each possessing their own capabilities and strengths, to perform tasks collaboratively, achieving greater performance than either could alone. Productive teamwork, however, requires a great deal of coordination, with human and robot agents maintaining well-aligned models of the shared task and each agent's role within it. Achieving this in complex domains requires live and effective communication, especially as plans change due to shifts in environmental knowledge. In this talk, I will discuss a set of novel algorithms, systems, and human-factors studies focused on robots acting as decision-support systems for environmental navigation and search tasks. These works leverage augmented reality and natural language interfaces to recommend policies to human teammates, explain the rationale behind those policies, and justify them during times of mismatched expectations, facilitating plan synchronization in partially observable, collaborative human-robot settings.
