Towards interpretable co-speech gestures synthesis using STARGATE
Abstract
Co-speech gesture synthesis is a growing field of research. However, new systems often rely on complex or heavyweight architectures, making them unsuitable for incorporation into Embodied Conversational Agents (ECAs) and hard to interpret in other research fields such as linguistics, where the link between speech and gestures is difficult to investigate manually. This paper presents STARGATE, a novel architecture for Spatio-Temporal Autoregressive Graph from Audio-Text Embeddings. The model takes advantage of autoregression to provide fast generation. Additionally, it employs graph convolutions coupled with attention to incorporate explicit structural prior knowledge and to enable efficient spatial and temporal processing. The model was evaluated against a state-of-the-art model in both perceptual and quantitative studies. We demonstrate that our model generates convincing gestures on par with the state of the art. Furthermore, we conducted an in-depth analysis showing how the model actually produces gestures from its input.
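The abstract names the key mechanisms (a structural prior encoded as graph convolutions over the skeleton, attention, and autoregressive generation) without giving implementation details. As a purely illustrative sketch of what such a spatio-temporal graph block could look like, assuming a fixed skeleton adjacency matrix and hypothetical layer names and sizes that are not taken from the paper, one PyTorch formulation is:

```python
import torch
import torch.nn as nn

class SpatioTemporalGraphBlock(nn.Module):
    """Illustrative block: graph convolution over joints, then temporal attention.

    All shapes, layer choices, and the adjacency below are assumptions for the
    sketch; the paper's actual architecture is not specified in the abstract.
    """
    def __init__(self, num_joints: int, feat_dim: int, num_heads: int = 4):
        super().__init__()
        # Structural prior: a fixed, normalized skeleton adjacency matrix.
        # Identity is a placeholder; real skeleton edges would go here.
        self.register_buffer("adj", torch.eye(num_joints))
        self.spatial = nn.Linear(feat_dim, feat_dim)  # per-joint feature transform
        self.temporal_attn = nn.MultiheadAttention(feat_dim, num_heads,
                                                   batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, joints, feat_dim)
        b, t, j, d = x.shape
        # Spatial step: propagate joint features along skeleton edges
        # (a graph convolution with a fixed adjacency).
        x = torch.einsum("ij,btjd->btid", self.adj, self.spatial(x))
        # Temporal step: self-attention over the time axis, per joint.
        x = x.permute(0, 2, 1, 3).reshape(b * j, t, d)
        attn_out, _ = self.temporal_attn(x, x, x)
        x = self.norm(x + attn_out)
        return x.reshape(b, j, t, d).permute(0, 2, 1, 3)

# Example usage with hypothetical sizes: 10 frames, 20 joints, 64-dim features.
# In an autoregressive setup, generated frames would be fed back as input.
block = SpatioTemporalGraphBlock(num_joints=20, feat_dim=64)
pose_features = torch.randn(2, 10, 20, 64)
out = block(pose_features)  # same shape: (2, 10, 20, 64)
```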