Conference paper, Year: 2023

BS-GAENets: Brain-Spatial Feature Learning Via a Graph Deep Autoencoder for Multi-modal Neuroimaging Analysis

Abstract

Understanding how the brain and behavior are related is a central challenge in cognitive neuroscience, and functional magnetic resonance imaging (fMRI) has significantly improved our understanding of brain function and dysfunction. In this paper, we propose a novel multi-modal spatial cerebral graph based on an attention mechanism, called MSCGATE, which combines both fMRI modalities, task-fMRI and rest-fMRI, using spatial and cerebral features to preserve the rich, complex structure between brain voxels. It projects the structural-functional brain connections into a new multi-modal latent representation space, which is subsequently fed into our trace regression predictive model to output each subject's behavioral score. Experiments on the InterTVA dataset show that the proposed approach outperforms other graph representation learning-based models in terms of effectiveness and performance.
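To make the pipeline described above more concrete, here is a minimal, illustrative sketch (not the authors' code) of the two stages the abstract mentions: a graph attention autoencoder that embeds brain-graph node features into a latent space, followed by a trace-regression head that maps each subject's latent matrix to a scalar behavioral score. The class names, layer sizes, single-head attention, and random stand-in data are assumptions made for illustration only.

```python
# Illustrative sketch only: a GAT-style autoencoder + trace regression head.
# All dimensions, names, and the toy data are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head graph attention over a dense adjacency mask."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (n_nodes, in_dim), adj: (n_nodes, n_nodes) binary adjacency
        h = self.W(x)
        n = h.size(0)
        # pairwise attention logits e_ij = LeakyReLU(a([h_i || h_j]))
        hi = h.unsqueeze(1).expand(n, n, -1)
        hj = h.unsqueeze(0).expand(n, n, -1)
        e = F.leaky_relu(self.a(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))   # attend only along edges
        alpha = torch.softmax(e, dim=-1)             # attention weights
        return F.elu(alpha @ h)                      # aggregated node features

class GraphAttentionAutoencoder(nn.Module):
    """Encode node (voxel/region) features to a latent space and reconstruct them."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.encoder = GraphAttentionLayer(in_dim, latent_dim)
        self.decoder = GraphAttentionLayer(latent_dim, in_dim)

    def forward(self, x, adj):
        z = self.encoder(x, adj)       # latent node embeddings
        x_hat = self.decoder(z, adj)   # feature reconstruction
        return z, x_hat

class TraceRegression(nn.Module):
    """Predict a scalar score as trace(B^T Z) + b for a latent matrix Z."""
    def __init__(self, n_nodes, latent_dim):
        super().__init__()
        self.B = nn.Parameter(torch.zeros(n_nodes, latent_dim))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        # trace(B^T Z) equals the elementwise sum of B * Z
        return (self.B * z).sum() + self.bias

# Toy usage with random data standing in for fused task-/rest-fMRI features.
n_nodes, in_dim, latent_dim = 50, 16, 8
x = torch.randn(n_nodes, in_dim)
adj = (torch.rand(n_nodes, n_nodes) > 0.8).float()
adj.fill_diagonal_(1.0)                              # keep self-loops

ae = GraphAttentionAutoencoder(in_dim, latent_dim)
head = TraceRegression(n_nodes, latent_dim)
z, x_hat = ae(x, adj)
score = head(z)
loss = F.mse_loss(x_hat, x) + F.mse_loss(score, torch.tensor([1.0]))
loss.backward()
```

The reconstruction loss keeps the latent node embeddings faithful to the fused input features, while the trace-regression loss ties each subject's latent matrix to the behavioral target; in this sketch the two terms are simply summed, which is an assumption rather than the weighting used in the paper.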
No file deposited

Dates and versions

hal-04055621 , version 1 (03-04-2023)

Identifiers

HAL Id: hal-04055621
DOI: 10.1007/978-3-031-25477-2_14

Cite

Refka Hanachi, Akrem Sellami, Imed Riadh Farah. BS-GAENets: Brain-Spatial Feature Learning Via a Graph Deep Autoencoder for Multi-modal Neuroimaging Analysis. 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021), Feb 2021, Vienna, Austria. pp.303-327, ⟨10.1007/978-3-031-25477-2_14⟩. ⟨hal-04055621⟩