Active speaker recognition using cross-attention audio-video fusion
Abstract
Audio-video based multimodal active speaker recognition from video streams has attracted the attention of the scientific community due to its wide range of applications, such as human-centered computing and semantic video understanding. Most existing techniques use early or late audio-video (A-V) fusion strategies without fully considering the inter-modal and intra-modal interactions. In this context, this work proposes a novel cross-modal attention mechanism over the visual and audio modalities, designed to capture the complex spatiotemporal relationships between descriptors and to fuse complementary information from multiple modalities. First, we perform representation learning of the audio and video streams using deep convolutional neural networks (CNNs). Second, we feed the features of both modalities to a cross-attention block, fusing the A-V features at the model level. Finally, we obtain the identity of the active speaker and associate the corresponding subtitle segment with each character. The experimental evaluation performed on 30 videos validates the approach, with average F1-scores above 88%. The effectiveness of the proposed system architecture is compared against state-of-the-art methods, demonstrating accuracy gains of more than 3%.
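To make the fusion step concrete, the sketch below shows one possible cross-modal attention block in PyTorch, in which audio features attend over video features and vice versa before the two cross-attended representations are concatenated and classified. This is a minimal illustration under assumed dimensions and layer choices (`CrossModalAttentionFusion`, `dim=256`, mean pooling, a binary classifier head); it is not the authors' exact architecture, which follows CNN-based representation learning for each modality as described above.

```python
# Illustrative sketch only: a hypothetical cross-modal attention fusion block.
# Audio and video features are assumed to come from separate CNN backbones;
# all names, dimensions, and the classifier head are assumptions, not taken
# from the paper.
import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        # Audio queries attend over video keys/values, and vice versa.
        self.audio_to_video = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.video_to_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 2)  # active speaker vs. not

    def forward(self, audio_feats, video_feats):
        # audio_feats: (batch, T_a, dim); video_feats: (batch, T_v, dim)
        a_attended, _ = self.audio_to_video(audio_feats, video_feats, video_feats)
        v_attended, _ = self.video_to_audio(video_feats, audio_feats, audio_feats)
        # Pool each cross-attended sequence over time, then fuse by concatenation.
        fused = torch.cat([a_attended.mean(dim=1), v_attended.mean(dim=1)], dim=-1)
        return self.classifier(fused)

# Example usage with random tensors standing in for CNN features.
model = CrossModalAttentionFusion()
audio = torch.randn(8, 20, 256)   # 8 clips, 20 audio frames
video = torch.randn(8, 15, 256)   # 8 clips, 15 video frames
logits = model(audio, video)      # (8, 2) active-speaker scores
```

In this sketch, letting each modality query the other is what distinguishes cross-attention from early fusion (simple concatenation of raw features) and late fusion (combining per-modality decisions), since the attention weights model the inter-modal interactions directly.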