Conference paper, 2024

Towards realtime co-speech gestures synthesis using STARGATE

Abstract

The field of co-speech gesture synthesis is attracting growing interest. However, many new systems rely on complex or resource-intensive architectures, making them impractical to integrate into Embodied Conversational Agents (ECAs) or to explore in fields like linguistics, where understanding the connection between speech and gestures is challenging. This paper introduces STARGATE, a novel architecture for Spatio-Temporal Autoregressive Graph from Audio-Text Embeddings. The model leverages autoregression for fast gesture generation, alongside graph convolutions and attention to integrate explicit structural knowledge and enable efficient spatial and temporal processing. Through both subjective and objective assessments against state-of-the-art models, we demonstrate that our model generates convincing gestures quickly. It also achieves slightly better scores for the credibility of the generated gestures and their coherence with speech.
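To make the high-level description above concrete, the following is a minimal, hypothetical sketch of an autoregressive decoder that combines a graph convolution over skeleton joints with attention over audio-text features. All module names, layer sizes, and the chain adjacency stand-in for a real skeleton graph are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: autoregressive spatio-temporal graph decoder
# conditioned on speech (audio + text) embeddings. Shapes and modules
# are illustrative assumptions only.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """Single graph convolution over skeleton joints: X' = relu(A_hat X W)."""
    def __init__(self, in_dim, out_dim, adjacency):
        super().__init__()
        self.register_buffer("a_hat", adjacency)   # normalized (J, J) adjacency
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):                          # x: (B, J, in_dim)
        return torch.relu(self.linear(self.a_hat @ x))


class AutoregressiveGestureDecoder(nn.Module):
    """Predicts the next pose from the previous pose and a speech feature window."""
    def __init__(self, num_joints, joint_dim, speech_dim, hidden=128):
        super().__init__()
        # Chain adjacency as a stand-in for an actual skeleton graph.
        adj = torch.eye(num_joints)
        idx = torch.arange(num_joints - 1)
        adj[idx, idx + 1] = 1.0
        adj[idx + 1, idx] = 1.0
        adj = adj / adj.sum(-1, keepdim=True)
        self.gcn = GraphConv(joint_dim, hidden, adj)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.speech_proj = nn.Linear(speech_dim, hidden)
        self.out = nn.Linear(hidden, joint_dim)

    def step(self, prev_pose, speech_window):
        # prev_pose: (B, J, joint_dim); speech_window: (B, T, speech_dim)
        h = self.gcn(prev_pose)                    # spatial processing over joints
        ctx = self.speech_proj(speech_window)      # project speech features
        h, _ = self.attn(h, ctx, ctx)              # joints attend to speech context
        return prev_pose + self.out(h)             # residual next-pose update

    @torch.no_grad()
    def generate(self, init_pose, speech_windows):
        # Autoregressive rollout: each predicted frame feeds the next step.
        poses, pose = [], init_pose
        for t in range(speech_windows.size(1)):
            pose = self.step(pose, speech_windows[:, t])
            poses.append(pose)
        return torch.stack(poses, dim=1)           # (B, T, J, joint_dim)


# Usage with toy shapes: 25 joints, 3-D positions, 256-d speech embeddings.
model = AutoregressiveGestureDecoder(num_joints=25, joint_dim=3, speech_dim=256)
frames = model.generate(torch.zeros(2, 25, 3), torch.randn(2, 40, 10, 256))
print(frames.shape)  # torch.Size([2, 40, 25, 3])
```

The frame-by-frame rollout is what makes this kind of decoder suited to realtime use: each new pose depends only on the previous pose and a short window of speech features, rather than on the whole utterance.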
Main file

Towards_realtime_co_speech_gestures_synthesis_using_STARGATE_publi.pdf (2.47 MB)
Origin: Files produced by the author(s)
Dates and versions

hal-04667107 , version 1 (02-08-2024)

Identifiers

  • HAL Id: hal-04667107, version 1

Cite

Louis Abel, Vincent Colotte, Slim Ouni. Towards realtime co-speech gestures synthesis using STARGATE. 25th Interspeech Conference (INTERSPEECH 2024), Sep 2024, Kos Island, Greece. ⟨hal-04667107⟩
228 views
101 downloads
