Preprint, Working Paper - Year: 2019

Developmental Learning of Audio-Visual Integration From Facial Gestures Of a Social Robot

Abstract

We present a robot head with facial gestures, audio, and vision capabilities aimed at the emergence of infant-like social features. To this end, we propose a neural architecture that integrates these three modalities during a developmental stage of social interaction with a caregiver. During dyadic interaction with the experimenter, the robot learns to categorize the audio-speech gestures of the vowels /a/, /i/, /o/, as an infant would, by linking someone else's facial expressions to its own movements. We show that multimodal integration in the neural network is more robust than unimodal learning, since it compensates for erroneous or noisy information coming from each modality. Facial mimicry with a partner can therefore be reproduced from redundant audio-visual signals or from noisy information in one modality alone. Statistical experiments with 24 naive participants show the robustness of our algorithm during human-robot interactions in a public environment where many people move and talk continuously. We then discuss our model in the light of human-robot communication and the development of social skills and language in infants.
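The neural architecture itself is not reproduced on this page. As a rough illustration of the abstract's claim that a joint audio-visual representation tolerates noise in a single modality, the following minimal Python sketch trains a nearest-prototype classifier on synthetic concatenated audio and visual features for /a/, /i/, /o/ and then degrades the audio channel at test time. The feature dimensions, noise levels, and the observe/classify helpers are illustrative assumptions, not the authors' model.

    # Minimal, illustrative sketch (not the authors' code): prototype-based
    # audio-visual fusion on synthetic features for the vowels /a/, /i/, /o/.
    import numpy as np

    rng = np.random.default_rng(0)
    vowels = ["a", "i", "o"]

    # Hypothetical underlying audio and visual feature prototypes per vowel.
    audio_proto = {v: rng.normal(size=8) for v in vowels}
    visual_proto = {v: rng.normal(size=8) for v in vowels}

    def observe(vowel, audio_noise=0.1, visual_noise=0.1):
        """One noisy audio-visual observation of a vowel (concatenated features)."""
        a = audio_proto[vowel] + rng.normal(scale=audio_noise, size=8)
        v = visual_proto[vowel] + rng.normal(scale=visual_noise, size=8)
        return np.concatenate([a, v])

    # "Developmental" phase: learn one multimodal prototype per vowel
    # by averaging a few low-noise observations.
    learned = {v: np.mean([observe(v) for _ in range(20)], axis=0) for v in vowels}

    def classify(x):
        """Assign the nearest learned multimodal prototype."""
        return min(vowels, key=lambda v: np.linalg.norm(x - learned[v]))

    # Test with a heavily degraded audio channel: the intact visual half of the
    # multimodal vector still pulls each sample toward the correct prototype.
    hits = sum(classify(observe(v, audio_noise=2.0)) == v
               for v in vowels for _ in range(50))
    print(f"accuracy with degraded audio: {hits / 150:.2f}")

Under these synthetic assumptions, the fused classifier stays well above chance even when one modality is dominated by noise, which is the qualitative behavior the abstract attributes to multimodal integration.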
Main file: humanoids_back.pdf (1.16 MB)

Dates and versions

hal-02185423, version 1 (26-07-2019)

Identifiers

  • HAL Id: hal-02185423, version 1

Cite

Oriane Dermy, Sofiane Boucenna, Alexandre Pitti, Arnaud Blanchard. Developmental Learning of Audio-Visual Integration From Facial Gestures Of a Social Robot. 2019. ⟨hal-02185423⟩
216 views
100 downloads
