Legible Robot Motion from Conditional Generative Models

Published: 20 Jun 2023, Last Modified: 10 Jul 2023, ILHF Workshop ICML 2023
Keywords: Human Robot Interaction, Generative Modeling, Legible Motion, Learning from Demonstrations
TL;DR: We introduce Generative Legible Motion Models (GLMM), a framework that utilizes conditional generative models to learn legible trajectories from human demonstrations.
Abstract: In human-robot collaboration, legible motion, motion that clearly conveys the robot's intentions and goals, is essential: being able to forecast a robot's next move improves user experience, safety, and task efficiency. Current methods for generating legible motion rely on hand-designed cost functions and classical motion planners, but there is a need for data-driven policies trained end-to-end on demonstration data. In this paper we propose Generative Legible Motion Models (GLMM), a framework that uses conditional generative models to learn legible trajectories from human demonstrations. We find that GLMM produces motion that is 76% more legible than standard goal-conditioned generative models and 83% more legible than generative models without goal conditioning.
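The abstract contrasts goal-conditioned generative models with unconditioned ones but does not specify GLMM's architecture. As a rough illustration of what goal conditioning means in a generative trajectory model, the sketch below implements a minimal conditional VAE in PyTorch; the network design, dimensions, and all names (ConditionalTrajectoryVAE, horizon, etc.) are assumptions for illustration, not the paper's actual method.

```python
import torch
import torch.nn as nn

class ConditionalTrajectoryVAE(nn.Module):
    """Hypothetical goal-conditioned generative model over fixed-length
    trajectories, shown only to illustrate goal conditioning. Trajectories
    are flattened (horizon * state_dim) vectors; the goal is a target
    position the decoder is conditioned on."""
    def __init__(self, horizon=20, state_dim=2, goal_dim=2, latent_dim=8):
        super().__init__()
        traj_dim = horizon * state_dim
        self.encoder = nn.Sequential(
            nn.Linear(traj_dim + goal_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * latent_dim),  # outputs mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + goal_dim, 128), nn.ReLU(),
            nn.Linear(128, traj_dim),
        )
        self.latent_dim = latent_dim

    def forward(self, traj, goal):
        stats = self.encoder(torch.cat([traj, goal], dim=-1))
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.decoder(torch.cat([z, goal], dim=-1))
        return recon, mu, logvar

    @torch.no_grad()
    def sample(self, goal):
        # At test time, draw a latent and decode a trajectory toward the goal.
        z = torch.randn(goal.shape[0], self.latent_dim)
        return self.decoder(torch.cat([z, goal], dim=-1))

# One ELBO training step on a placeholder batch of demonstrations:
model = ConditionalTrajectoryVAE()
demo, goal = torch.randn(32, 40), torch.randn(32, 2)
recon, mu, logvar = model(demo, goal)
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
loss = nn.functional.mse_loss(recon, demo) + 1e-3 * kl
loss.backward()
```

An unconditioned baseline, as in the 83% comparison, would simply drop the goal input from both networks, so sampled trajectories cannot be steered toward, or convey, a particular goal.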
Submission Number: 46