Conference paper, Year: 2021

From Multimodal to Unimodal Attention in Transformers using Knowledge Distillation

Abstract

Multimodal Deep Learning has garnered much interest, and transformers have triggered novel approaches thanks to the cross-attention mechanism. Here we propose an approach to deal with two key existing challenges: the high computational resources demanded and the issue of missing modalities. We introduce for the first time the concept of knowledge distillation in transformers to use only one modality at inference time. We report a full study analyzing multiple student-teacher configurations, the levels at which distillation is applied, and different methodologies. With the best configuration, we improved the state-of-the-art accuracy by 3%, reduced the number of parameters by 2.5 times, and cut the inference time by 22%. Such a performance-computation tradeoff can be exploited in many applications, and we aim to open a new research area where the deployment of complex models with limited resources is required.
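For the exact distillation setup, see the paper itself; as a rough illustration of the idea, the following is a minimal PyTorch sketch of logit-level knowledge distillation from a two-modality, cross-attention teacher to a single-modality student. All module names, feature dimensions, and the temperature and loss-weighting values are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch: distill a multimodal (cross-attention) teacher into a
# unimodal student so that only one modality is needed at inference time.
# Architectures, dimensions, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalTeacher(nn.Module):
    """Toy teacher: fuses two modalities with cross-attention, then classifies."""
    def __init__(self, dim=256, num_classes=10):
        super().__init__()
        self.proj_a = nn.Linear(128, dim)   # e.g. video features (hypothetical size)
        self.proj_b = nn.Linear(64, dim)    # e.g. audio features (hypothetical size)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, mod_a, mod_b):
        a, b = self.proj_a(mod_a), self.proj_b(mod_b)
        fused, _ = self.cross_attn(query=a, key=b, value=b)  # cross-modal attention
        return self.classifier(fused.mean(dim=1))            # pool over time, classify

class UnimodalStudent(nn.Module):
    """Toy student: self-attention over a single modality only."""
    def __init__(self, dim=256, num_classes=10):
        super().__init__()
        self.proj = nn.Linear(128, dim)
        self.encoder = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, mod_a):
        h = self.encoder(self.proj(mod_a))
        return self.classifier(h.mean(dim=1))

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hinton-style soft-label KL term blended with the usual cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Usage: the teacher sees both modalities during training; only the
# single-modality student is kept for inference.
teacher, student = MultimodalTeacher().eval(), UnimodalStudent()
mod_a, mod_b = torch.randn(8, 20, 128), torch.randn(8, 20, 64)
labels = torch.randint(0, 10, (8,))
with torch.no_grad():
    t_logits = teacher(mod_a, mod_b)
loss = distillation_loss(student(mod_a), t_logits, labels)
loss.backward()
```

The paper additionally studies distillation applied at different levels (not only on the logits) and across several student-teacher configurations; the sketch above shows only the simplest logit-matching variant.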
Main file: From Multimodal to Unimodal Attention in Transformers using Knowledge Distillation.pdf (1.2 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03389126, version 1 (20-10-2021)

Identifiers

Cite

Dhruv Agarwal, Tanay Agrawal, Laura Ferrari, Francois F Bremond. From Multimodal to Unimodal Attention in Transformers using Knowledge Distillation. AVSS 2021 - 17th IEEE International Conference on Advanced Video and Signal-based Surveillance, Nov 2021, Virtual, United States. ⟨10.1109/AVSS52988.2021.9663793⟩. ⟨hal-03389126⟩

