Conference paper, Year: 2022

Emotion recognition in video streams using intramodal and intermodal attention mechanisms

Abstract

Automatic emotion recognition from video streams is an essential challenge for various applications including human behavior understanding, mental disease diagnosis, surveillance, and human-machine interaction. In this paper, we introduce a novel, fully automatic, multimodal emotion recognition framework based on the fusion of audio and visual information, designed to leverage the mutually complementary nature of the features while preserving modality-distinctive information. Specifically, we integrate spatial, channel, and temporal attention into the visual processing pipeline, and temporal self-attention into the audio branch. We then introduce a multimodal cross-attention fusion strategy that effectively exploits the relationship between the audio and video features. The experimental evaluation performed on RAVDESS, a publicly available database, validates the proposed approach with average accuracy scores exceeding 87.85%. When compared with state-of-the-art methods, the proposed framework yields accuracy gains of more than 1.85%.
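To make the intermodal fusion idea concrete, below is a minimal sketch of a bidirectional audio-visual cross-attention fusion module in PyTorch. The module names, feature dimensions, pooling choice, number of heads, and the use of torch.nn.MultiheadAttention are illustrative assumptions only; they do not reproduce the authors' implementation.

```python
# Illustrative sketch of cross-modal (audio-visual) attention fusion.
# All dimensions and design choices are assumptions, not the paper's code.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Fuses audio and visual feature sequences with bidirectional cross-attention."""

    def __init__(self, dim: int = 256, num_heads: int = 4, num_classes: int = 8):
        super().__init__()
        # Video features attend to audio features, and vice versa.
        self.video_to_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.audio_to_video = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # RAVDESS defines 8 emotion classes (assumed here as the output size).
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, video_feats: torch.Tensor, audio_feats: torch.Tensor) -> torch.Tensor:
        # video_feats: (batch, T_video, dim); audio_feats: (batch, T_audio, dim)
        v_attended, _ = self.video_to_audio(video_feats, audio_feats, audio_feats)
        a_attended, _ = self.audio_to_video(audio_feats, video_feats, video_feats)
        # Temporal average pooling, then concatenate the two modality summaries.
        fused = torch.cat([v_attended.mean(dim=1), a_attended.mean(dim=1)], dim=-1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = CrossModalFusion()
    video = torch.randn(2, 16, 256)   # e.g., 16 video frame embeddings per clip
    audio = torch.randn(2, 32, 256)   # e.g., 32 audio frame embeddings per clip
    print(model(video, audio).shape)  # torch.Size([2, 8])
```

In this sketch, each modality queries the other, so the fused representation captures audio-visual correspondences while the per-modality streams remain separate until the final concatenation, in the spirit of the complementary-yet-distinctive design described in the abstract.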
File not deposited

Dates and versions

hal-03937083, version 1 (13-01-2023)

Identifiers

Cite

Bogdan Mocanu, Ruxandra Tapu. Emotion recognition in video streams using intramodal and intermodal attention mechanisms. Advances in Visual Computing: 17th International Symposium on Visual Computing (ISVC), University of Nevada, Reno, Oct 2022, San Diego, CA, United States. pp. 295-306, ⟨10.1007/978-3-031-20716-7_23⟩. ⟨hal-03937083⟩