Conference paper, Year: 2024

Multimodal Coordinated Representation Learning Based on Evidence Theory

Abstract

In multimodal learning, coordinated multimodal representation is an important yet challenging issue: it establishes interactions between different modalities so that multimodal data can be described more effectively. Existing coordinated representation methods operate in the deep feature space (or encoding space) of each modality. In this paper, within the framework of evidence theory, we propose a novel coordinated representation method in which multimodal data are described as basic belief assignments (BBAs) and coordinated learning is carried out in the evidential space (i.e., the BBA-based space). That is, the information interaction between modalities takes place at the level of evidence modeling (or uncertainty modeling). To exploit the intra-class and inter-class difference information of multimodal data, we design an evidential coordinated constraint. Furthermore, to represent each modality clearly, we introduce an ambiguity constraint. Experimental results on multimodal classification show that the proposed method is rational and effective.
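The record does not include code. As a rough, purely illustrative sketch of the idea (assuming a PyTorch setup; the `BBAHead`, `coordinated_constraint`, and `ambiguity_constraint` below are hypothetical stand-ins, not the authors' actual formulations), coordinating two modalities in the BBA-based space rather than in the feature space might look like this:

```python
# Illustrative sketch only, not the paper's code. Assumes: a softmax head that
# turns each modality's deep features into a valid BBA (non-negative masses
# summing to 1 over a fixed set of focal elements), a contrastive-style stand-in
# for the evidential coordinated constraint, and a simple penalty as a stand-in
# for the ambiguity constraint.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BBAHead(nn.Module):
    """Maps one modality's deep features to a BBA over a fixed list of focal elements."""

    def __init__(self, feat_dim: int, num_focal_elements: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_focal_elements)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Softmax yields non-negative masses that sum to 1, i.e. a valid BBA.
        return F.softmax(self.fc(feats), dim=-1)


def coordinated_constraint(bba_a: torch.Tensor, bba_b: torch.Tensor,
                           labels: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Hypothetical intra-/inter-class constraint in the evidential (BBA) space:
    BBAs from the two modalities are pulled together for same-class pairs and
    pushed at least `margin` apart for different-class pairs."""
    dist = torch.cdist(bba_a, bba_b, p=2)                       # pairwise distances, (B, B)
    same = (labels.unsqueeze(1) == labels.unsqueeze(0)).float()
    pull = same * dist.pow(2)                                   # intra-class: shrink distance
    push = (1.0 - same) * F.relu(margin - dist).pow(2)          # inter-class: enforce margin
    return (pull + push).mean()


def ambiguity_constraint(bba: torch.Tensor, ambiguous_index: int = -1) -> torch.Tensor:
    """Hypothetical ambiguity penalty: discourage mass on the most ambiguous focal
    element (assumed here to be the last one, e.g. the whole frame Theta)."""
    return bba[..., ambiguous_index].mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    img_feats, txt_feats = torch.randn(8, 128), torch.randn(8, 64)  # two modalities
    labels = torch.randint(0, 3, (8,))
    # 4 focal elements, e.g. {c1}, {c2}, {c3} and Theta = {c1, c2, c3}.
    head_img, head_txt = BBAHead(128, 4), BBAHead(64, 4)
    bba_img, bba_txt = head_img(img_feats), head_txt(txt_feats)
    loss = coordinated_constraint(bba_img, bba_txt, labels) \
        + 0.1 * (ambiguity_constraint(bba_img) + ambiguity_constraint(bba_txt))
    print(float(loss))
```

The choice of a contrastive-style margin loss and of the last focal element as "the ambiguous one" is only for illustration; the paper defines its own evidential coordinated and ambiguity constraints.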
Main file
DTIS2024-153-Fusion 2024 - Multimodal coordinated representation learning - postprint-Publiée.pdf (220.88 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04723877, version 1 (07-10-2024)

Identifiers

  • HAL Id: hal-04723877, version 1

Cite

Wei Li, Deqiang Han, Jean Dezert, Yi Yang. Multimodal Coordinated Representation Learning Based on Evidence Theory. Fusion 2024, Jul 2024, Venice, Italy. ⟨hal-04723877⟩
9 views
20 downloads
