Journal article in IEEE Access, 2023

Vision-Text cross-modal fusion for accurate video captioning

Abstract

In this paper, we introduce a novel end-to-end multimodal video captioning framework based on cross-modal fusion of visual and textual data. The proposed approach integrates a modality-attention module, which captures visual-textual inter-modal relationships using cross-correlation. Further, we integrate temporal attention into the features obtained from a 3D CNN to learn the contextual information in the video using task-oriented training. In addition, we incorporate an auxiliary task that employs a contrastive loss function to enhance the model's generalization capability and foster a deeper understanding of the inter-modal relationships and underlying semantics. The task involves comparing the multimodal representation of the video-transcript with the caption representation, facilitating improved performance and knowledge transfer within the model. Finally, a transformer architecture is used to effectively capture and encode the interdependencies between the text and video information using attention mechanisms. During the decoding phase, the transformer allows the model to attend to relevant elements in the encoded features, effectively capturing long-range dependencies and ultimately generating semantically meaningful captions. The experimental evaluation, carried out on the MSRVTT benchmark, validates the proposed methodology, which achieves BLEU-4, ROUGE, and METEOR scores of 0.4408, 0.6291, and 0.3082, respectively. When compared to state-of-the-art methods, the proposed approach shows superior performance, with gains ranging from 1.21% to 1.52% across the three metrics considered.
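To make the abstract's main components concrete, the sketch below gives a minimal PyTorch-style illustration of a cross-correlation (scaled dot-product) fusion module between video and transcript features, a temporal attention pooling over 3D-CNN clip features, and an InfoNCE-style contrastive auxiliary loss comparing the fused video-transcript representation with the caption representation. All module names, dimensions (d_model=512), and the temperature (0.07) are illustrative assumptions and do not come from the authors' implementation; the transformer encoder-decoder used for caption generation is omitted.

```python
# Minimal sketch of the cross-modal fusion and contrastive auxiliary loss
# described in the abstract. Names, dimensions, and hyperparameters are
# illustrative assumptions, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityAttention(nn.Module):
    """Cross-correlation (scaled dot-product) attention from video to text features."""

    def __init__(self, d_model: int = 512):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, video_feats, text_feats):
        # video_feats: (B, T_v, d), text_feats: (B, T_t, d)
        attn = torch.softmax(
            self.q(video_feats) @ self.k(text_feats).transpose(1, 2) * self.scale,
            dim=-1,
        )
        # Each video time step is enriched with correlated textual context.
        return video_feats + attn @ self.v(text_feats)


class TemporalAttentionPool(nn.Module):
    """Learned temporal attention over 3D-CNN clip features, giving one vector per video."""

    def __init__(self, d_model: int = 512):
        super().__init__()
        self.score = nn.Linear(d_model, 1)

    def forward(self, feats):
        # feats: (B, T, d) -> attention weights: (B, T, 1)
        weights = torch.softmax(self.score(feats), dim=1)
        return (weights * feats).sum(dim=1)  # (B, d)


def contrastive_loss(fused_repr, caption_repr, temperature: float = 0.07):
    """InfoNCE-style loss pulling each video-transcript representation toward its
    own caption representation and away from other captions in the batch."""
    fused = F.normalize(fused_repr, dim=-1)
    caps = F.normalize(caption_repr, dim=-1)
    logits = fused @ caps.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(fused.size(0), device=fused.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    B, T_v, T_t, d = 4, 16, 20, 512
    video = torch.randn(B, T_v, d)      # stand-in for 3D-CNN clip features
    text = torch.randn(B, T_t, d)       # stand-in for transcript embeddings
    captions = torch.randn(B, d)        # stand-in for caption representations

    fused = ModalityAttention(d)(video, text)
    pooled = TemporalAttentionPool(d)(fused)
    print(contrastive_loss(pooled, captions))
```

The InfoNCE-style formulation shown here is one common way to realize the contrastive comparison between the multimodal video-transcript representation and the caption representation; the exact loss used in the paper may differ.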
Main file: Vision-Text_Cross-Modal_Fusion_for_Accurate_Video_Captioning.pdf (2.82 MB)
Origin: Publication funded by an institution

Dates and versions

hal-04305431, version 1 (14-02-2024)

Identifiers

Cite

Kaouther Ouenniche, Ruxandra Tapu, Titus Zaharia. Vision-Text cross-modal fusion for accurate video captioning. IEEE Access, 2023, 11, pp.115477-115492. ⟨10.1109/ACCESS.2023.3324052⟩. ⟨hal-04305431⟩