Preprint / Working Paper, Year: 2021

Multimodal generation of upper-facial and head gestures with a Transformer Network using speech and text

Abstract

We propose a semantically aware, speech-driven method to generate expressive and natural upper-facial and head motion for Embodied Conversational Agents (ECAs). In this work, we tackle two key challenges: producing natural and continuous head motion and upper-facial gestures. We propose a model that generates gestures from multimodal input features: the first modality is text, and the second is speech prosody. Our model uses Transformers and convolutions to map the multimodal features corresponding to an utterance to continuous eyebrow and head gestures. We conduct subjective and objective evaluations to validate our approach.
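The abstract does not give implementation details, so the following is only a minimal sketch of the kind of architecture it describes: text and prosody features fused, encoded with a Transformer, and decoded with 1D convolutions into continuous motion curves. The class name, all layer sizes, the choice of F0/energy as prosody features, and the five-dimensional output (e.g., eyebrow parameters plus head rotations) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GestureGenerator(nn.Module):
    """Hypothetical sketch: fuse per-frame text embeddings and prosody
    features, encode with a Transformer, and decode with 1D convolutions
    into continuous eyebrow and head-motion trajectories.
    All dimensions below are assumptions for illustration."""
    def __init__(self, text_dim=768, prosody_dim=2, d_model=256,
                 n_heads=4, n_layers=4, out_dim=5):
        super().__init__()
        # Project each modality into a shared model dimension before fusion.
        self.text_proj = nn.Linear(text_dim, d_model)
        self.prosody_proj = nn.Linear(prosody_dim, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Convolutions over time smooth the output into continuous motion.
        self.decoder = nn.Sequential(
            nn.Conv1d(d_model, d_model, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(d_model, out_dim, kernel_size=5, padding=2),
        )

    def forward(self, text_feats, prosody_feats):
        # text_feats:    (batch, T, text_dim), e.g. word embeddings per frame
        # prosody_feats: (batch, T, prosody_dim), e.g. F0 and energy per frame
        fused = self.text_proj(text_feats) + self.prosody_proj(prosody_feats)
        h = self.encoder(fused)                    # (batch, T, d_model)
        motion = self.decoder(h.transpose(1, 2))   # (batch, out_dim, T)
        return motion.transpose(1, 2)              # (batch, T, out_dim)

# Usage with dummy tensors (shapes are assumptions):
model = GestureGenerator()
text = torch.randn(1, 100, 768)    # 100 frames of text embeddings
prosody = torch.randn(1, 100, 2)   # matching per-frame prosody features
gestures = model(text, prosody)    # (1, 100, 5) motion parameters
```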

Dates and versions

hal-03570955, version 1 (13-02-2022)

Identifiers

Cite

Mireille Fares, Catherine Pelachaud, Nicolas Obin. Multimodal generation of upper-facial and head gestures with a Transformer Network using speech and text. 2021. ⟨hal-03570955⟩
94 Views
0 Downloads
