One-shot learning of human–robot handovers with triadic interaction meshes

David Vogt, Simon Stepputtis, Bernhard Jung, Hani Ben Amor

Research output: Contribution to journal › Article › peer-review

17 Scopus citations


We propose an imitation learning methodology that allows robots to seamlessly retrieve objects from and pass objects to human users. Instead of hand-coding interaction parameters, we extract relevant information, such as joint correlations and spatial relationships, from a single task demonstration performed by two humans. At the center of our approach is an interaction model that enables a robot to generalize an observed demonstration spatially and temporally to new situations. To this end, we propose a data-driven method for generating interaction meshes that link both interaction partners to the manipulated object. The feasibility of the approach is evaluated in a within-subjects user study, which shows that human–human task demonstration can lead to more natural and intuitive interactions with the robot.
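To illustrate the idea of a triadic interaction mesh, the following is a minimal, hypothetical sketch (not the authors' implementation): points sampled from both partners and the object are linked into one graph, and each node is encoded by its Laplacian coordinate, i.e., its offset from the centroid of its neighbors. Preserving these coordinates during adaptation is one way such a mesh can encode the spatial relationships the abstract describes. All names and the single-marker-per-partner simplification are assumptions for illustration.

```python
# Toy triadic "interaction mesh" sketch (hypothetical simplification):
# nodes = markers on giver, receiver, and object; edges link the triad.
# Each node's Laplacian coordinate (point minus the mean of its neighbors)
# captures local spatial relationships that an adaptation step would preserve.
import numpy as np

def laplacian_coordinates(points, edges):
    """Return, for each node, point[i] - mean(points of i's neighbors)."""
    n = len(points)
    neighbors = {i: [] for i in range(n)}
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    lap = np.zeros_like(points)
    for i in range(n):
        lap[i] = points[i] - np.mean([points[j] for j in neighbors[i]], axis=0)
    return lap

# One marker each: giver's hand, receiver's hand, and the handed-over object.
points = np.array([[0.0, 0.0, 1.0],   # giver hand
                   [1.0, 0.0, 1.0],   # receiver hand
                   [0.5, 0.0, 1.0]])  # object
edges = [(0, 2), (1, 2), (0, 1)]      # triadic links through the object
print(laplacian_coordinates(points, edges))
```

In this toy example the object node sits exactly at the centroid of its two neighbors, so its Laplacian coordinate is zero; a deformation that keeps these coordinates (approximately) constant would keep the object "between" the two hands as the scene changes.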

Original language: English (US)
Pages (from-to): 1053-1065
Number of pages: 13
Journal: Autonomous Robots
Issue number: 5
State: Published - Jun 1 2018


Keywords

  • Handover
  • Human–human demonstration
  • Human–robot interaction
  • Interaction mesh

ASJC Scopus subject areas

  • Artificial Intelligence


