Preprint / Working Paper, Year: 2024

Transformer with Controlled Attention for Synchronous Motion Captioning

Karim Radouane
Sylvie Ranwez
Julien Lagarde
Andon Tchechmedjiev

Abstract

In this paper, we address a challenging task, synchronous motion captioning, which aims to generate a language description synchronized with human motion sequences. This task pertains to numerous applications, such as aligned sign language transcription, unsupervised action segmentation, and temporal grounding. Our method introduces mechanisms to control the self- and cross-attention distributions of the Transformer, allowing interpretability and time-aligned text generation. We achieve this through masking strategies and structuring losses that push the model to maximize attention only on the most important frames contributing to the generation of a motion word. These constraints aim to prevent undesired mixing of information in the attention maps and to provide a monotonic attention distribution across tokens. The cross-attentions of tokens are thus used for progressive text generation in synchronization with human motion sequences. We demonstrate the superior performance of our approach through evaluation on the two available benchmark datasets, KIT-ML and HumanML3D. As visual evaluation is essential for this task, we provide a comprehensive set of animated visual illustrations in the code repository: https://github.com/rd20karim/Synch-Transformer.
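The abstract describes structuring losses that keep each generated word's cross-attention concentrated on a few motion frames and monotonically advancing across tokens. As a rough illustration only, the sketch below shows generic auxiliary losses of this kind in PyTorch; the function names, the entropy-based concentration term, and the expected-frame monotonicity penalty are assumptions for exposition, not the authors' implementation (the actual code is in the linked repository).

```python
import torch
import torch.nn.functional as F

def concentration_loss(attn: torch.Tensor) -> torch.Tensor:
    """Low-entropy penalty: pushes each token's attention over frames to be peaked.
    attn: (batch, n_tokens, n_frames); each row is a softmax distribution."""
    entropy = -(attn * (attn + 1e-9).log()).sum(dim=-1)      # (batch, n_tokens)
    return entropy.mean()

def monotonicity_loss(attn: torch.Tensor) -> torch.Tensor:
    """Penalizes the expected attended frame moving backward between consecutive tokens."""
    frames = torch.arange(attn.size(-1), dtype=attn.dtype, device=attn.device)
    centers = (attn * frames).sum(dim=-1)                    # expected frame index per token
    backward = F.relu(centers[:, :-1] - centers[:, 1:])      # > 0 only when the center regresses
    return backward.mean()

if __name__ == "__main__":
    # Dummy cross-attention maps: batch of 2, 5 generated tokens, 30 motion frames.
    attn = torch.softmax(torch.randn(2, 5, 30), dim=-1)
    aux = concentration_loss(attn) + monotonicity_loss(attn)
    print(aux.item())
```

In the framing suggested by the abstract, such auxiliary terms would be added to the captioning loss during training so that each word attends to a small, forward-moving set of frames, which is what makes time-aligned text generation possible.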
Main file

Synch_Transformer_Karim_Radouane.pdf (7.36 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04697946, version 1 (14-09-2024)

Identifiers

  • HAL Id: hal-04697946, version 1

Cite

Karim Radouane, Sylvie Ranwez, Julien Lagarde, Andon Tchechmedjiev. Transformer with Controlled Attention for Synchronous Motion Captioning. 2024. ⟨hal-04697946⟩
42 views
23 downloads
