Mechanisms for Mapping Human Input to Robots

From Robot Learning to Shared Control/Autonomy

RSS 2024 Workshop - July 15, 2024

Location: Technical University of Delft, Frans van Hasselt Room (Aula, 2nd Floor)



About
With recent advances in robot learning, there has been an increase in algorithmic human-robot interaction methods, from those used to teach robots new skills to those providing shared assistance (e.g., shared control/autonomy) when humans and robots work together to complete tasks. A variety of input mechanisms have been investigated for humans to both teach and team with robots in these settings. By mechanisms, we refer to both the type of information provided by the human (e.g., preferences, demonstrations, shared control, corrections, supervisory control) and the interface through which it is provided (e.g., language, physical human-robot interaction, AR/VR, haptic teleoperation devices, and their combination). Despite the large number of input mechanisms, limited work has investigated the relationship between input mechanisms and task or learning model characteristics (i.e., what is the most appropriate mechanism(s) for a given context?).

Task characteristics – such as environment structure, pace (e.g., speed), and safety or performance requirements – and learning-model characteristics – such as required training data, uncertainty, and model assumptions – can greatly shape the requirements placed on human input. For example, a surgical shared control system will have very different requirements for user interaction than a robot assistant designed for household chores. Understanding the relationship between interaction mechanisms and context will be crucial to designing robot systems for future adoption. Thus, the primary goal of this workshop is to facilitate a conversation around which input mechanisms are best suited to given tasks and task representations, particularly for emergent robot learning methods (e.g., diffusion, transformers) and common robotics research domains (e.g., manufacturing, space, surgery, activities of daily living).


To accomplish this goal, the workshop brings together (1) leaders in the areas of robot learning (e.g., imitation learning, inverse reinforcement learning), shared control/autonomy, and supervisory control, and (2) experts in a range of applied areas (e.g., manufacturing, surgery) for a set of talks, discussions, and brainstorming sessions (e.g., diverging-converging thinking) around the relationship between task criteria, robot representations, context (e.g., teaching vs. human-robot teaming), and desired qualities of human interaction mechanisms.


Speakers and Panelists


Sylvain Calinon

Idiap Research Institute

Nicolai Anton Lynnerup

Universal Robots

Luka Peternel

Delft University of Technology

Werner Kraus

Fraunhofer IPA

Henny Admoni

Carnegie Mellon University

Erdem Biyik

University of Southern California

Ann Majewicz Fey

University of Texas at Austin

Jason Cochrane

Boeing Research & Technology

Lars Johannsmeier

Franka Robotics


Schedule

All times GMT+2
09:00 am - 09:15 am Organizers
Introductory Remarks
09:15 am - 10:00 am Invited Speakers: Part I
Luka Peternel - Shared Control Systems and Interfaces for Seamless Human-Robot Co-manipulation (9:15-9:30)
Nicolai Anton Lynnerup - Programming by Demonstration - A Skill-based Kinesthetic Teaching Platform (9:30-9:45)
Erdem Biyik - Making Robot Learning More Natural Through Human Saliency and Language (9:45-10:00)
10:00 am - 10:30 am Morning Coffee Break
10:30 am - 12:00 pm Invited Speakers: Part II
Jason Cochrane - Mastering the Realities and Challenges of Implementing Truly Collaborative Robotics in Production Environments (10:30-10:45)
Werner Kraus - Automation of Automation (10:45-11:00)
Sylvain Calinon - Manipulation skills acquisition by exploiting various forms of human guidance (11:00-11:15)
Lars Johannsmeier - The Dual Role of Robots as Input Device and Embodiment for Human Skills (11:15-11:30)
Ann Majewicz Fey - Towards More Human-Aware Robotic Systems for Surgical Training and Intervention (11:30-11:45)
Henny Admoni - Is Eye Gaze Actually Helpful for Shared Control? (11:45-12:00)
12:00 pm - 12:30 pm What mechanisms have been used and where?
Group Activity I
12:30 pm - 02:00 pm Lunch break
02:00 pm - 03:00 pm Technologies, Adoption, and Opportunities - Academic/Industry Panel
Sylvain Calinon, Nicolai Anton Lynnerup, Jason Cochrane, Luka Peternel
Moderated by Harold Soh
03:00 pm - 03:30 pm Consolidation and Brainstorming Opportunities
Group Activity II
03:30 pm - 04:00 pm Afternoon Coffee Break (and Posters)
04:00 pm - 04:30 pm Contributed Paper Spotlight Presentations
04:30 pm - 05:00 pm Concluding Remarks and Next Steps


Papers
  • Kinesthetic vs Imitation: Analysis of Usability and Workload of Programming by Demonstration Methods [link]
    Bruno Maric, Filip Zoric, Frano Petric and Matko Orsag
  • Interactive Keyframe Learning (IKL): Learning Keyframes from a Single Demonstration of a Task [link]
    Thavishi Illandara and Julie Shah
  • Advancing Human-Robot Collaboration: The Impact of Flexible Input Mechanisms [link] (Best Student Paper)
    Helen Beierling, Kira Loos, Robin Helmert and Anna-Lisa Vollmer
  • LHManip: A Dataset for Long-Horizon Language-Grounded Manipulation Tasks in Cluttered Tabletop Environments [link]
    Federico Ceola, Lorenzo Natale, Niko Suenderhauf and Krishan Rana
  • MAPLES: Model based Assistive Policy Learning for Shared-autonomy [link]
    Rolif Lima, Somdeb Saha and Kaushik Das

Organizers

Mike Hagenow

Massachusetts Institute of Technology

Andreea Bobu

Boston Dynamics AI Institute/MIT

Tesca Fitzgerald

Yale University

Mario Selvaggio

University of Naples Federico II

Harold Soh

National University of Singapore

Julie Shah

Massachusetts Institute of Technology



Contact
Reach out to mechanisms.hri@gmail.com with any questions.