Conference paper, 2024

HyperMM : Robust Multimodal Learning with Varying-sized Inputs

Hava Chaptoukaev
Vincenzo Marcianó
Francesco Galati
Maria A Zuluaga

Abstract

Combining multiple modalities carrying complementary information through multimodal learning (MML) has shown considerable benefits for diagnosing multiple pathologies. However, the robustness of multimodal models to missing modalities is often overlooked. Most works assume modality completeness in the input data, while in clinical practice, it is common to have incomplete modalities. Existing solutions that address this issue rely on modality imputation strategies before using supervised learning models. These strategies, however, are complex, computationally costly and can strongly impact subsequent prediction models. Hence, they should be used with parsimony in sensitive applications such as healthcare. We propose HyperMM, an end-to-end framework designed for learning with varying-sized inputs. Specifically, we focus on the task of supervised MML with missing imaging modalities without using imputation before training. We introduce a novel strategy for training a universal feature extractor using a conditional hypernetwork, and propose a permutation-invariant neural network that can handle inputs of varying dimensions to process the extracted features, in a two-phase task-agnostic framework. We experimentally demonstrate the advantages of our method in two tasks: Alzheimer’s disease detection and breast cancer classification. We demonstrate that our strategy is robust to high rates of missing data and that its flexibility allows it to handle varying-sized datasets beyond the scenario of missing modalities. We make all our code and experiments available at github.com/robustml-eurecom/hyperMM.
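To make the two components described in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: a small conditional hypernetwork produces the weights of a shared feature extractor from a modality identifier, and a mean-pooling (Deep Sets style) head aggregates however many modality embeddings are present for a sample. All class names, layer sizes, and the choice of mean pooling are illustrative assumptions; the actual architecture is in the code released at github.com/robustml-eurecom/hyperMM.

# Minimal sketch (assumed layer sizes, not the paper's architecture):
# a hypernetwork conditioned on the modality index generates the weights
# of a shared linear feature extractor, and a permutation-invariant head
# pools a variable number of modality features before classification.
import torch
import torch.nn as nn


class ConditionalHyperExtractor(nn.Module):
    """Feature extractor whose weights come from a modality-conditioned hypernetwork."""

    def __init__(self, num_modalities: int, in_dim: int = 32, feat_dim: int = 16):
        super().__init__()
        self.in_dim, self.feat_dim = in_dim, feat_dim
        # Hypernetwork: modality embedding -> flattened weight matrix and bias.
        self.modality_emb = nn.Embedding(num_modalities, 8)
        self.hyper = nn.Linear(8, in_dim * feat_dim + feat_dim)

    def forward(self, x: torch.Tensor, modality_id: int) -> torch.Tensor:
        params = self.hyper(self.modality_emb(torch.tensor(modality_id)))
        w = params[: self.in_dim * self.feat_dim].view(self.feat_dim, self.in_dim)
        b = params[self.in_dim * self.feat_dim:]
        return torch.relu(x @ w.t() + b)


class PermutationInvariantHead(nn.Module):
    """Mean-pools however many modality features are available, then classifies."""

    def __init__(self, feat_dim: int = 16, num_classes: int = 2):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        pooled = torch.stack(feats, dim=0).mean(dim=0)  # order- and count-independent
        return self.classifier(pooled)


if __name__ == "__main__":
    extractor = ConditionalHyperExtractor(num_modalities=3)
    head = PermutationInvariantHead()
    # A sample with only two of three modalities available (hypothetical inputs).
    available = {0: torch.randn(32), 2: torch.randn(32)}
    feats = [extractor(x, m) for m, x in available.items()]
    print(head(feats))  # logits computed without imputing the missing modality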
Main file: HyperMM_MMMI2024.pdf (1.2 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04658931, version 1 (22-07-2024)

Identifiers

  • HAL Id: hal-04658931, version 1

Cite

Hava Chaptoukaev, Vincenzo Marcianó, Francesco Galati, Maria A Zuluaga. HyperMM : Robust Multimodal Learning with Varying-sized Inputs. MMMI 2024, 5th International workshop on Multiscale and Multimodal Medical Imaging, Springer, Oct 2024, Marrakech, Morocco. ⟨hal-04658931⟩

Collections

EURECOM
