Multimodal Learning for Detecting Stress under Missing Modalities
Abstract
Handling missing modalities is critical for many real-life applications. In this work, we propose a scalable framework for detecting stress induced by specific triggers in multimodal data with missing modalities. Our method has two key components: (i) aligning all modalities to the space of the strongest modality (video), yielding a joint embedding space, and (ii) a Masked Multimodal Transformer that leverages inter- and intra-modality correlations while handling missing modalities. We validate our method through experiments on the StressID dataset, where we set a new state of the art while demonstrating its robustness across various missing-modality scenarios and its high potential for real-life applications.
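To make the masked-attention idea behind component (ii) concrete, here is a minimal PyTorch sketch, not the authors' implementation: all class names, shapes, and hyperparameters are assumptions. The key mechanism is a per-sample key-padding mask so that attention simply ignores tokens belonging to modalities that are absent for that sample.

```python
import torch
import torch.nn as nn

class MaskedMultimodalTransformer(nn.Module):
    """Hypothetical sketch of a masked multimodal transformer.

    Encodes the concatenated per-modality token sequences with a
    standard transformer encoder, masking out tokens whose modality
    is missing for a given sample.
    """

    def __init__(self, dim=256, num_heads=4, num_layers=2, num_modalities=3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Learned embedding identifying which modality each token comes from.
        self.modality_embed = nn.Embedding(num_modalities, dim)

    def forward(self, tokens, modality_ids, present):
        # tokens:       (B, T, dim) tokens of all modalities, concatenated;
        #               missing-modality slots can hold zeros/placeholders
        # modality_ids: (T,)  index of the modality each token belongs to
        # present:      (B, M) bool, True if modality m is available
        x = tokens + self.modality_embed(modality_ids)
        # Key-padding mask: True marks tokens the encoder must ignore,
        # i.e. tokens whose modality is missing in that sample.
        pad_mask = ~present[:, modality_ids]  # (B, T)
        return self.encoder(x, src_key_padding_mask=pad_mask)

# Usage with three hypothetical modalities (video, audio, physiological),
# where the second sample is missing its audio stream:
B, T, dim = 2, 9, 256
tokens = torch.randn(B, T, dim)
modality_ids = torch.tensor([0] * 3 + [1] * 3 + [2] * 3)
present = torch.tensor([[True, True, True],
                        [True, False, True]])
out = MaskedMultimodalTransformer(dim=dim)(tokens, modality_ids, present)
print(out.shape)  # torch.Size([2, 9, 256])
```

In this sketch, intra- and inter-modality correlations both emerge from full self-attention over the surviving tokens; the mask is the only mechanism needed to cope with an arbitrary subset of missing modalities at inference time.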