Conference paper, Year: 2022

Active speaker recognition using cross attention audio-video fusion

Abstract

Audio-video multimodal active speaker recognition from video streams has attracted the attention of the scientific community due to its wide range of applications, such as human-centered computing and semantic video understanding. Most existing techniques use early or late audio-video (A-V) fusion strategies without fully considering the inter-modal and intra-modal interactions. In this context, this work proposes a novel cross-modal attention mechanism over the visual and audio modalities, designed to capture the complex spatiotemporal relationships between descriptors and to fuse complementary information from multiple modalities. First, we learn audio and video representations using deep convolutional neural networks (CNNs). Second, we feed the features of both modalities to a cross attention block that fuses the A-V features at the model level. Finally, we obtain the identity of the active speaker and associate the corresponding subtitle segment with each character. The experimental evaluation on 30 videos validates the approach, with average F1-scores above 88%. Compared against state-of-the-art methods, the proposed architecture achieves accuracy gains of more than 3%.
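Since no implementation file is deposited, the sketch below illustrates what a bidirectional cross-attention A-V fusion block of the kind the abstract describes could look like in PyTorch. All names, feature dimensions, head counts, and the final binary speaking/not-speaking classifier are illustrative assumptions, not the authors' architecture; the CNN backbones that produce the input features are likewise assumed to exist upstream.

```python
# Hypothetical sketch of a cross-attention audio-video fusion block.
# Dimensions, head counts, and layer names are assumptions, not the
# architecture from the paper.
import torch
import torch.nn as nn


class CrossModalAttentionFusion(nn.Module):
    """Fuses audio and video descriptors with bidirectional cross attention."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Video features attend to audio features, and vice versa.
        self.video_to_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.audio_to_video = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_a = nn.LayerNorm(dim)
        self.classifier = nn.Linear(2 * dim, 2)  # speaking / not speaking

    def forward(self, video_feats: torch.Tensor, audio_feats: torch.Tensor) -> torch.Tensor:
        # video_feats: (batch, T_v, dim), e.g. from a visual CNN backbone
        # audio_feats: (batch, T_a, dim), e.g. from an audio CNN backbone
        v_att, _ = self.video_to_audio(query=video_feats, key=audio_feats, value=audio_feats)
        a_att, _ = self.audio_to_video(query=audio_feats, key=video_feats, value=video_feats)
        # Residual connections preserve the intra-modal information.
        v = self.norm_v(video_feats + v_att)
        a = self.norm_a(audio_feats + a_att)
        # Temporal pooling, then model-level fusion by concatenation.
        fused = torch.cat([v.mean(dim=1), a.mean(dim=1)], dim=-1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = CrossModalAttentionFusion()
    video = torch.randn(2, 16, 256)  # e.g. 16 face-crop frames per clip
    audio = torch.randn(2, 32, 256)  # e.g. 32 audio frames per clip
    print(model(video, audio).shape)  # torch.Size([2, 2])
```

The key property the sketch captures is that each modality's queries attend to the other modality's keys and values, so the fused representation reflects both inter-modal and intra-modal interactions rather than a simple early or late concatenation.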
No file deposited

Dates and versions

hal-03937091, version 1 (13-01-2023)

Identifiers

hal-03937091
DOI: 10.1109/EUVIP53989.2022.9922810

Cite

Bogdan Mocanu, Ruxandra Tapu. Active speaker recognition using cross attention audio-video fusion. 2022 10th European Workshop on Visual Information Processing (EUVIP), Sep 2022, Lisbon, Portugal. pp.1-6, ⟨10.1109/EUVIP53989.2022.9922810⟩. ⟨hal-03937091⟩