Conference paper, 2021

Multimodal-Based Upper Facial Gestures Synthesis for Engaging Virtual Agents

Mireille Fares
Catherine Pelachaud
Nicolas Obin

Abstract

A myriad of applications involves the interaction of humans with machines, such as reception agents, home assistants, chatbots, and autonomous vehicles' agents. Humans can control virtual agents through various modalities, including sound, vision, and touch. In this paper, we discuss the design of engaging virtual agents with expressive gestures and prosody. We also propose an architecture that generates upper facial movements from two modalities: speech and text. This paper is part of a broader effort to review the mechanisms that govern multimodal interaction, such as the agent's expressiveness and the adaptation of its behavior, in order to help remove technological barriers and develop a conversational agent capable of adapting naturally and coherently to its interlocutor.
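The abstract names the two input modalities but does not detail the architecture. The following is a minimal, hypothetical sketch (not the authors' model) of how frame-aligned speech features and text embeddings could be fused to predict per-frame upper-face motion, e.g. action-unit intensities for brows and eyelids. The class name UpperFaceSynthesizer, the GRU-based fusion, and all feature dimensions (mel-spectrogram frames, BERT-style word embeddings) are illustrative assumptions.

import torch
import torch.nn as nn

class UpperFaceSynthesizer(nn.Module):
    """Hypothetical two-modality (speech + text) upper-face motion model."""
    def __init__(self, speech_dim=80, text_dim=768, hidden_dim=256, n_aus=10):
        super().__init__()
        # Bidirectional GRU over acoustic frames (assumed mel spectrogram).
        self.speech_enc = nn.GRU(speech_dim, hidden_dim,
                                 batch_first=True, bidirectional=True)
        # Project word embeddings (assumed upsampled to the audio frame rate).
        self.text_proj = nn.Linear(text_dim, 2 * hidden_dim)
        # Fuse the two streams by concatenation, then decode a motion sequence.
        self.decoder = nn.GRU(4 * hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_aus)  # per-frame AU intensities

    def forward(self, mel, text_emb):
        # mel: (B, T, speech_dim); text_emb: (B, T, text_dim), frame-aligned.
        s, _ = self.speech_enc(mel)                          # (B, T, 2H)
        t = self.text_proj(text_emb)                         # (B, T, 2H)
        fused, _ = self.decoder(torch.cat([s, t], dim=-1))   # (B, T, H)
        return self.head(fused)                              # (B, T, n_aus)

model = UpperFaceSynthesizer()
mel = torch.randn(2, 100, 80)     # 2 clips, 100 frames of mel features
txt = torch.randn(2, 100, 768)    # matching frame-aligned text embeddings
aus = model(mel, txt)             # (2, 100, 10) predicted AU trajectories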
Main file: WACAI2021_V2.pdf (491 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03553517, version 1 (02-02-2022)

Identifiers

  • HAL Id: hal-03553517, version 1

Cite

Mireille Fares, Catherine Pelachaud, Nicolas Obin. Multimodal-Based Upper Facial Gestures Synthesis for Engaging Virtual Agents. WACAI, Oct 2021, Saint Pierre d'Oléron, France. ⟨hal-03553517⟩