Mechanisms for Mapping Human Input to Robots

From Robot Learning to Shared Control/Autonomy

RSS 2024 Workshop - July 15, 2024

Location: Delft University of Technology, Room TBD

With recent advances in robot learning, there has been an increase in algorithmic human-robot interaction methods, from those used to teach robots new skills to those providing shared assistance (e.g., shared control/autonomy) when humans and robots work together to complete tasks. A variety of input mechanisms have been investigated for humans to both teach and team with robots in these settings. By mechanisms, we refer to both the type of information provided by the human (e.g., preferences, demonstrations, shared control, corrections, supervisory control) and the interface through which it is provided (e.g., language, physical human-robot interaction, AR/VR, haptic teleoperation devices, and their combination). Despite the large number of input mechanisms, limited work has investigated the relationship between input mechanisms and task or learning model characteristics (i.e., which mechanism(s) are most appropriate for a given context?).

Task characteristics – such as environment structure, pace (e.g., speed), and safety or performance requirements – and learning model characteristics – such as required training data, uncertainty, and model assumptions – can greatly impact requirements for human input. For example, a surgical shared control system will have very different requirements for user interaction than a robot assistant designed for household chores. Understanding the relationship between interaction mechanisms and context will be crucial for designing robot systems that see future adoption. Thus, the primary goal of this workshop is to facilitate a conversation around which input mechanisms are best suited for given tasks and task representations, particularly for emergent robot learning methods (e.g., diffusion, transformers) and common robotics research domains (e.g., manufacturing, space, surgery, activities of daily living).

To accomplish this goal, the workshop aims to bring together (1) leaders in the areas of robot learning (e.g., imitation learning, inverse reinforcement learning), shared control/autonomy, and supervisory control and (2) experts in a range of applied areas (e.g., manufacturing, surgery) for a set of talks, discussions, and brainstorming sessions (e.g., diverging-converging thinking) around the relationship between task criteria, robot representations, context (e.g., teaching vs. human-robot teaming), and desired qualities for human interaction mechanisms.

Speakers and Panelists

Sylvain Calinon

Idiap Research Institute

Nicolai Anton Lynnerup

Universal Robots

Luka Peternel

Delft University of Technology

Werner Kraus

Fraunhofer IPA

Henny Admoni

Carnegie Mellon University

Erdem Biyik

University of Southern California

Ann Majewicz Fey

University of Texas at Austin

Jason Cochrane

Boeing Research & Technology

Lars Johannsmeier

Franka Robotics


Time (CEST, GMT+2)
09:00 am - 09:15 am Organizers
Introductory Remarks
09:15 am - 10:00 am Invited Speakers: Part I
Luka Peternel - Shared Control Systems and Interfaces for Seamless Human-Robot Co-manipulation (9:15-9:30)
Nicolai Anton Lynnerup - Programming by Demonstration - A Skill-based Kinesthetic Teaching Platform (9:30-9:45)
Erdem Biyik - Making Robot Learning More Natural Through Human Saliency and Language (9:45-10:00)
10:00 am - 10:30 am Morning Coffee Break
10:30 am - 12:00 pm Invited Speakers: Part II
Jason Cochrane - Mastering the Realities and Challenges of Implementing Truly Collaborative Robotics in Production Environments (10:30-10:45)
Werner Kraus - Automation of Automation (10:45-11:00)
Sylvain Calinon - Manipulation skills acquisition by exploiting various forms of human guidance (11:00-11:15)
Lars Johannsmeier - The Dual Role of Robots as Input Device and Embodiment for Human Skills (11:15-11:30)
Ann Majewicz Fey - Towards More Human-Aware Robotic Systems for Surgical Training and Intervention (11:30-11:45)
Henny Admoni - Is Eye Gaze Actually Helpful for Shared Control? (11:45-12:00)
12:00 pm - 12:30 pm What mechanisms have been used and where?
Group Activity I
12:30 pm - 02:00 pm Lunch break
02:00 pm - 03:00 pm Technologies, Adoption, and Opportunities
Academic/Industry Panel
03:00 pm - 03:30 pm Contributed Paper Spotlight Presentations
03:30 pm - 04:00 pm Afternoon Coffee Break (and Posters)
04:00 pm - 04:30 pm Consolidation and Brainstorming Opportunities
Group Activity II
04:30 pm - 05:00 pm Concluding Remarks and Next Steps

Call for papers
New: The EasyChair call for papers and submission portal is now available!

Areas of interest
We solicit submissions related to (but not limited to) the following themes on interaction-grounded machine learning with humans:

  • Multi-modal interfaces for teaching and robot interaction
  • Novel interfaces for teaching robots or shared control/shared assistance
  • Assessing the impact of the application domain on human input interfaces in robot learning or shared control/autonomy systems
  • Modeling human input requirements in human-robot teaming systems
All submissions will be managed through EasyChair. Authors are invited to submit short papers of 2-4 pages (including references) describing work related to any of the topics above. We encourage papers to follow the formatting of the official RSS LaTeX template. The review process will be double-blind, so please make sure your submission is anonymized.

Workshop submissions are non-archival, allowing later submission to conferences or journals. We welcome both preliminary results and results that summarize or build on previously presented work.

Accepted papers will be presented as posters during the workshop, and select works will be invited to give spotlight talks. Accepted papers will be made available online on the workshop website. Submissions will be evaluated based on novelty, rigor, and relevance to the theme of the workshop.

Important Dates (All times are Anywhere on Earth [AoE] unless specified otherwise)
  • Submission deadline: June 8th, 2024 (extended from May 24th).
  • Notification deadline: June 15th, 2024.
  • Camera-ready deadline: July 1st, 2024.
  • Workshop: July 15th, 2024.


Organizers

Mike Hagenow

Massachusetts Institute of Technology

Andreea Bobu

Boston Dynamics AI Institute/MIT

Tesca Fitzgerald

Yale University

Mario Selvaggio

University of Naples Federico II

Harold Soh

National University of Singapore

Julie Shah

Massachusetts Institute of Technology

Reach out to the organizers with any questions.