Conference paper, 2025

Encoding the Spatial Features of Co-Verbal Manual Gestures: a Framework for Automated Annotation

Abstract

Despite the tools and technological advancements of recent years, the functioning of coverbal gestures remains largely unexplained. Our ANR project thus aims to investigate the speech-gesture relationship [8, 9, 3, 4, 10] by combining multidisciplinary approaches (linguistics, computer science, movement sciences) in order to develop more efficient generative models of artificial gestures [6]. Within this context, we conducted a study to determine annotation categories intended to enhance database-driven learning systems and their applications. This proposal focuses on one facet of this work, namely the spatial features of coverbal manual gestures, and builds upon prior studies on methods for describing [8, 9, 3, 4, 1] and annotating gestures [5, 11, 12]. The research explores the development of an encoding system relevant both for gesture characterization (establishing spatiality categories) and for corpus enhancement (automatic spatial feature annotation). The study was based on 13 minutes of audio and MOCAP data from the BEAT corpus [7], including approximately 500 gestures annotated by our team, and combined our expertise in manual annotation with computational tools not commonly used for gesture annotation. The research identified two complementary spatial representation modes, positioning and orientation, each determined within its own three-dimensional reference system. Positioning characterizes the location of the gesture within the gestural space situated in front of the speaker and defined by the maximum reach of their hands. This space is modeled as a hemisphere divided into 96 zones, determined by analyzing the density of articulator positions (spine, shoulders, elbows, hands, and fingers), incorporating body measurements, and defining a coverage ratio. A 64-zone spatial orientation sphere is dynamically embedded within this hemisphere, in which calculated vectors provide a precise characterization of movement orientation. Designed to clarify formal gesture description and simplify annotation tasks, these spatial attributes could, in the analysis of gesture-speech articulation, be treated as potential distinctive features in meaning construction.
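The abstract gives the zone counts (96 position zones, 64 orientation bins) but not the exact partitioning. As a minimal sketch, assuming a speaker-centred frame (x to the speaker's right, y up, z forward), a hypothetical factorisation of the 96 zones into 8 azimuth x 4 elevation x 3 radial bands, and of the 64 orientation bins into 8 x 8, a MOCAP frame could be encoded roughly as follows; the function names and the decomposition are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

def hemisphere_zone(hand_pos, origin, reach,
                    n_azimuth=8, n_elevation=4, n_radial=3):
    """Assign a hand position to one of n_azimuth*n_elevation*n_radial
    zones (96 by default) of the gestural hemisphere in front of the speaker.

    hand_pos, origin : (3,) coordinates in a speaker-centred frame
                       (x right, y up, z forward); origin ~ sternum.
    reach            : maximum hand reach, used to normalise the radius.
    """
    v = np.asarray(hand_pos, dtype=float) - np.asarray(origin, dtype=float)
    r = np.linalg.norm(v)
    if r < 1e-9:
        return 0
    # Azimuth in the horizontal plane, elevation above/below the origin;
    # positions outside the front hemisphere are clamped to the edge bins.
    azimuth = np.arctan2(v[0], v[2])                       # 0 = straight ahead
    elevation = np.arcsin(np.clip(v[1] / r, -1.0, 1.0))
    a = int(np.clip((azimuth + np.pi / 2) / np.pi * n_azimuth, 0, n_azimuth - 1))
    e = int(np.clip((elevation + np.pi / 2) / np.pi * n_elevation, 0, n_elevation - 1))
    d = int(np.clip(r / reach * n_radial, 0, n_radial - 1))
    return (d * n_elevation + e) * n_azimuth + a

def orientation_bin(start_pos, end_pos, n_azimuth=8, n_elevation=8):
    """Quantise the movement direction between two hand positions into
    one of n_azimuth*n_elevation orientation bins (64 by default)."""
    v = np.asarray(end_pos, dtype=float) - np.asarray(start_pos, dtype=float)
    r = np.linalg.norm(v)
    if r < 1e-9:
        return None                                        # no displacement
    azimuth = np.arctan2(v[0], v[2])
    elevation = np.arcsin(np.clip(v[1] / r, -1.0, 1.0))
    a = int((azimuth + np.pi) / (2 * np.pi) * n_azimuth) % n_azimuth
    e = int(np.clip((elevation + np.pi / 2) / np.pi * n_elevation, 0, n_elevation - 1))
    return e * n_azimuth + a

# Example: right hand moving up and to the right in front of the speaker.
zone = hemisphere_zone([0.25, 0.10, 0.40], origin=[0, 0, 0], reach=0.75)
bin_ = orientation_bin([0.20, 0.00, 0.40], [0.25, 0.10, 0.40])
print(zone, bin_)
```

In the framework described above, the zone boundaries would be derived from the density of articulator positions and the speaker's body measurements rather than from the uniform angular bins used in this sketch.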
References

[1] Geneviève Calbris. 1980. The Semiotics of French Gestures. Indiana University Press, Bloomington.
[2] Gaëlle Ferré. 2019. Analyse de discours multimodale. Gestualité et prosodie en discours. UGA Éditions, Grenoble.
[3] Adam Kendon. 1980. Gesture and Speech. Cambridge University Press, Cambridge.
[4] Adam Kendon. 2004. Gesture: Visible Action as Utterance. Cambridge University Press, Cambridge; New York.
[5] Michael Kipp, Michael Neff, and Irene Albrecht. 2007. An annotation scheme for conversational gestures: How to economically capture timing and form. Language Resources and Evaluation 41 (2007), 325–339. https://doi.org/10.1007/s10579-007-9053-5
[6] Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov, Mihail Tsakov, and Gustav Eje Henter. 2023. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. In Proceedings of the 25th International Conference on Multimodal Interaction (Paris, France) (ICMI '23). Association for Computing Machinery, New York, NY, USA, 792–801. https://doi.org/10.1145/3577190.3616120
[7] Haiyang Liu, Zihao Zhu, Naoya Iwamoto, Yichen Peng, Zhengqing Li, You Zhou, Elif Bozkurt, and Bo Zheng. 2022. BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for Conversational Gestures Synthesis. In Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part VII. Springer, 612–630.
[8] David McNeill. 1992. Hand and Mind: What Gestures Reveal about Thought. The University of Chicago Press, Chicago and London.
[9] David McNeill. 2005. Gesture and Thought. The University of Chicago Press, Chicago and London.
[10] Emmanuel A. Schegloff. 1984. On some gestures' relation to talk. In J. M. Atkinson and J. Heritage (Eds.), Structures of Social Action: Studies in Conversation Analysis. Cambridge University Press, Cambridge, 266–296.
[11] Marion Tellier, Mathilde Guardiola, and Brigitte Bigi. 2011. Types de gestes et utilisation de l'espace gestuel dans une description spatiale : méthodologie de l'annotation. In Atelier DEGELS, 18e conférence annuelle Traitement Automatique des Langues Naturelles (TALN), Montpellier, France, 45–56.

Files

pres_icom_hal.pdf (5.35 MB)
abstract_ICOM.pdf (646.73 KB)

Dates and versions

hal-05343752, version 1 (03-11-2025)

License

Identifiers

  • HAL Id: hal-05343752, version 1

Cite

Mickaëlla Grondin-Verdon, Domitille Caillat, Slim Ouni. Encoding the Spatial Features of Co-Verbal Manual Gestures: a Framework for Automated Annotation. 12th International Conference on Multimodality (ICOM), University of Groningen, Oct 2025, Groningen, Netherlands. ⟨hal-05343752⟩
462 views
193 downloads
