Conference paper, year: 2024

Towards interpretable co-speech gestures synthesis using STARGATE

Abstract

Co-speech gesture synthesis is a growing field of research. However, new systems often rely on complex or heavy architectures, making them unsuitable for incorporation into Embodied Conversational Agents (ECAs) or for interpretation in other research fields such as linguistics, where the link between speech and gestures is difficult to investigate manually. This paper presents STARGATE, a novel architecture for Spatio-Temporal Autoregressive Graph from Audio-Text Embeddings. The model takes advantage of autoregression to provide fast generation, and it employs graph convolutions coupled with attention to incorporate explicit structural prior knowledge and enable efficient spatial and temporal processing. The model was evaluated against a state-of-the-art model in both perceptual and quantitative studies. We show that it generates convincing gestures on par with the state of the art. Furthermore, we conducted an in-depth analysis showing how the model actually produces gestures from its input.
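The abstract only describes the architecture at a high level, so the snippet below is a minimal, illustrative sketch (not the authors' code) of the kind of building block it mentions: a graph convolution over a fixed skeleton adjacency matrix, serving as the explicit structural prior, combined with temporal self-attention over the pose sequence. All class names, tensor shapes, and hyperparameters are assumptions introduced for illustration, in a PyTorch-style formulation.

```python
# Hypothetical spatio-temporal graph block: graph convolution over the skeleton
# (structural prior) followed by temporal self-attention, per the abstract's description.
import torch
import torch.nn as nn


class SpatioTemporalGraphBlock(nn.Module):
    def __init__(self, in_dim, out_dim, adjacency, num_heads=4):
        super().__init__()
        # Fixed skeleton adjacency (J x J) with self-loops, row-normalized:
        # this encodes which joints exchange information during the graph convolution.
        adj = adjacency + torch.eye(adjacency.size(0))
        self.register_buffer("adj", adj / adj.sum(dim=-1, keepdim=True))
        self.spatial = nn.Linear(in_dim, out_dim)        # per-joint feature transform
        self.temporal = nn.MultiheadAttention(out_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(out_dim)

    def forward(self, x):
        # x: (batch, time, joints, in_dim)
        b, t, j, _ = x.shape
        # Graph convolution: mix features along skeleton edges, then transform.
        x = torch.einsum("jk,btkc->btjc", self.adj, x)
        x = torch.relu(self.spatial(x))
        # Temporal self-attention, applied independently for each joint.
        x = x.permute(0, 2, 1, 3).reshape(b * j, t, -1)   # (batch*joints, time, out_dim)
        attn_out, _ = self.temporal(x, x, x)
        x = self.norm(x + attn_out)
        return x.reshape(b, j, t, -1).permute(0, 2, 1, 3)  # back to (batch, time, joints, out_dim)
```

A full model in the spirit of the abstract would presumably stack such blocks, condition them on audio-text embeddings, and decode poses autoregressively frame by frame, which is what would give the fast generation the paper claims.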
Main file
Towards_interpretable_co_speech_gestures_synthesis_using_STARGATE_final.pdf (6.77 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04678537, version 1 (27-08-2024)

Identifiers

Cite

Louis Abel, Vincent Colotte, Slim Ouni. Towards interpretable co-speech gestures synthesis using STARGATE. International Conference on Multimodal Interaction (ICMI Companion ’24: GENEA Workshop), Nov 2024, San José, Costa Rica. ⟨10.1145/3686215.3688819⟩. ⟨hal-04678537⟩
51 views
14 downloads

