Conference paper. Year: 2018

CentralNet: a Multilayer Approach for Multimodal Fusion

Abstract

This paper proposes a novel multimodal fusion approach, aiming to produce the best possible decisions by integrating information coming from multiple media. While most past multimodal approaches either project the features of the different modalities into a common space or coordinate the representation of each modality through constraints, our approach borrows from both views. More specifically, assuming each modality can be processed by a separate deep convolutional network, allowing decisions to be made independently from each modality, we introduce a central network linking the modality-specific networks. This central network not only provides a common feature embedding but also regularizes the modality-specific networks through multi-task learning. The proposed approach is validated on four different computer vision tasks, on which it consistently improves the accuracy of existing multimodal fusion approaches.
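To make the idea concrete, below is a minimal sketch of the architecture described in the abstract, assuming two modality-specific branches of fully connected layers. The framework (PyTorch), the class name CentralNetSketch, the layer sizes, and all hyper-parameters are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of the CentralNet idea: two modality branches plus a
    # central branch that, at each depth, receives a trainable weighted sum
    # of its own features and the two modality features. All sizes assumed.
    import torch
    import torch.nn as nn

    class CentralNetSketch(nn.Module):
        def __init__(self, d_a=40, d_v=2048, d_h=128, n_layers=3, n_classes=10):
            super().__init__()
            dims_a = [d_a] + [d_h] * n_layers
            dims_v = [d_v] + [d_h] * n_layers
            self.a = nn.ModuleList(nn.Linear(dims_a[i], dims_a[i + 1])
                                   for i in range(n_layers))
            self.v = nn.ModuleList(nn.Linear(dims_v[i], dims_v[i + 1])
                                   for i in range(n_layers))
            self.c = nn.ModuleList(nn.Linear(d_h, d_h) for _ in range(n_layers))
            # One trainable fusion weight per branch and per depth.
            self.alpha = nn.Parameter(torch.ones(n_layers, 3))
            # One classification head per branch: the modality heads act as
            # auxiliary (multi-task) losses regularizing their branches.
            self.head_a = nn.Linear(d_h, n_classes)
            self.head_v = nn.Linear(d_h, n_classes)
            self.head_c = nn.Linear(d_h, n_classes)

        def forward(self, x_a, x_v):
            h_c = None
            for i, (la, lv, lc) in enumerate(zip(self.a, self.v, self.c)):
                x_a = torch.relu(la(x_a))
                x_v = torch.relu(lv(x_v))
                w_c, w_a, w_v = self.alpha[i]
                fused = w_a * x_a + w_v * x_v
                if h_c is not None:
                    fused = fused + w_c * h_c
                h_c = torch.relu(lc(fused))
            return self.head_a(x_a), self.head_v(x_v), self.head_c(h_c)

    # Toy usage: the total loss sums the three heads' losses, so the central
    # branch fuses the modalities while the per-modality heads regularize
    # their branches in a multi-task fashion.
    model = CentralNetSketch()
    x_a, x_v = torch.randn(8, 40), torch.randn(8, 2048)
    y = torch.randint(0, 10, (8,))
    logits = model(x_a, x_v)
    loss = sum(nn.functional.cross_entropy(l, y) for l in logits)

At test time, the central head would supply the fused decision; the per-depth trainable weights let the network learn how much each modality and the central path should contribute at every layer.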
Main file
eccv2018submission.pdf (749.41 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01858560, version 1 (21-08-2018)

Identifiers

Cite

Valentin Vielzeuf, Alexis Lechervy, Stéphane Pateux, Frédéric Jurie. CentralNet: a Multilayer Approach for Multimodal Fusion. European Conference on Computer Vision Workshops: Multimodal Learning and Applications, Sep 2018, Munich, Germany. ⟨hal-01858560⟩
