Conference paper, Year: 2018

Demonstrating and Learning Multimodal Socio-communicative Behaviors for HRI: Building Interactive Models from Immersive Teleoperation Data

Abstract

The main aim of artificial intelligence (AI) is to provide machines with intelligence, and machine learning is now widely used to extract such intelligence from data. Collecting and modeling multimodal interactive data is thus a major issue for fostering AI for HRI. We first discuss the chicken-and-egg problem of collecting ground-truth HRI data without having at our disposal robots with mature social skills. We also comment on particular issues raised by current multimodal end-to-end mapping frameworks. We then analyze the benefits and challenges of using immersive teleoperation to endow humanoid robots with such skills. We finally argue for establishing stronger gateways between the HRI and Augmented/Virtual Reality research domains.
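To make the notion of a "multimodal end-to-end mapping framework" concrete, the sketch below regresses multimodal perception features logged during immersive teleoperation directly onto the pilot's control commands (behavioral cloning). It is a minimal Python/PyTorch sketch, not the system described in the paper; the modality names, feature dimensions, and synthetic tensors standing in for real teleoperation logs are all illustrative assumptions.

# Minimal sketch of a multimodal end-to-end mapping: hypothetical
# speech, gaze and head-pose feature streams (made-up dimensions)
# are mapped directly to the teleoperator's commands.
import torch
import torch.nn as nn

SPEECH_DIM, GAZE_DIM, POSE_DIM = 13, 2, 7   # assumed feature sizes
CMD_DIM = 6                                  # e.g. head/arm joint targets

class MultimodalPolicy(nn.Module):
    """Concatenate the modality streams and map them to robot commands."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEECH_DIM + GAZE_DIM + POSE_DIM, 64),
            nn.ReLU(),
            nn.Linear(64, CMD_DIM),
        )

    def forward(self, speech, gaze, pose):
        return self.net(torch.cat([speech, gaze, pose], dim=-1))

# Synthetic stand-ins for one batch of logged frames and pilot commands;
# real supervision would come from the immersive teleoperation rig.
speech = torch.randn(256, SPEECH_DIM)
gaze = torch.randn(256, GAZE_DIM)
pose = torch.randn(256, POSE_DIM)
commands = torch.randn(256, CMD_DIM)

model = MultimodalPolicy()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):  # behavioral cloning: imitate the pilot
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(speech, gaze, pose), commands)
    loss.backward()
    opt.step()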
Main file: gb_AI-MHRI2018.pdf (304.17 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01835008, version 1 (11-07-2018)

Identifiers

Cite

Gérard Bailly, Frédéric Elisei. Demonstrating and Learning Multimodal Socio-communicative Behaviors for HRI: Building Interactive Models from Immersive Teleoperation Data. FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction, Jul 2018, Stockholm, Sweden. pp.39-43, ⟨10.21437/AI-MHRI.2018-10⟩. ⟨hal-01835008⟩