Conference paper, 2018

Text to brain: predicting the spatial distribution of neuroimaging observations from text reports

Abstract

Despite the digital nature of magnetic resonance imaging, the resulting observations are most frequently reported and stored in text documents. There is a trove of untapped information in health records, case reports, and medical publications. In this paper, we propose to mine brain-imaging publications to learn the spatial distribution associated with anatomical terms. The problem is formulated as the minimization of a risk on distributions, which leads to a least-deviation cost function. An efficient algorithm in the dual then learns the mapping from documents to brain structures. Empirical results using coordinates extracted from the brain-imaging literature show that (i) models must adapt to semantic variation in the terms used to describe a given anatomical structure, (ii) voxel-wise parameterization leads to a higher likelihood of locations reported in unseen documents, and (iii) the least-deviation cost outperforms least squares. As a proof of concept for our method, we use our model of spatial distributions to predict the distribution of specific neurological conditions from text-only reports.
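The abstract sketches the approach at a high level: document text is mapped to a voxel-wise spatial density, and the fit uses a least-deviation (L1) cost rather than least squares. The snippet below is a minimal illustrative sketch of that idea on synthetic data, not the authors' implementation; all variable names, array shapes, and the toy data are assumptions, and the optimizer is plain (sub)gradient descent rather than the dual algorithm described in the paper.

```python
# Illustrative sketch only: learn a linear map from normalized term counts to a
# voxel-wise spatial density, comparing the least-deviation (L1) cost mentioned
# in the abstract with least squares. Shapes, names, and data are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_docs, n_terms, n_voxels = 200, 50, 100                      # assumed toy dimensions
X = rng.poisson(1.0, size=(n_docs, n_terms)).astype(float)    # term counts per document
X /= np.maximum(X.sum(axis=1, keepdims=True), 1.0)            # normalize each document

# Synthetic "ground truth": each term contributes mass to a few voxels.
W_true = rng.exponential(1.0, size=(n_terms, n_voxels)) * (rng.random((n_terms, n_voxels)) < 0.1)
Y = np.clip(X @ W_true + 0.1 * rng.standard_normal((n_docs, n_voxels)), 0, None)


def fit_linear(X, Y, loss="l1", lr=0.05, n_iter=500):
    """(Sub)gradient descent on ||X W - Y|| with an L1 or L2 cost."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        residual = X @ W - Y
        if loss == "l1":   # least-deviation: subgradient is the sign of the residual
            grad = X.T @ np.sign(residual) / len(X)
        else:              # least squares: gradient is the residual itself
            grad = X.T @ residual / len(X)
        W -= lr * grad
    return W


def predict_distribution(W, x):
    """Map one document's term vector to a normalized spatial distribution."""
    density = np.clip(x @ W, 0, None)
    total = density.sum()
    return density / total if total > 0 else np.full_like(density, 1.0 / density.size)


W_l1 = fit_linear(X, Y, loss="l1")
W_l2 = fit_linear(X, Y, loss="l2")
print("First document, L1 fit: distribution sums to", predict_distribution(W_l1, X[0]).sum())
for name, W in [("L1", W_l1), ("L2", W_l2)]:
    print(name, "mean absolute error:", np.mean(np.abs(X @ W - Y)))
```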

Dates and versions

hal-01807295 , version 1 (04-06-2018)
hal-01807295 , version 2 (04-06-2018)
hal-01807295 , version 3 (28-06-2018)

Identifiers

hal-01807295

Cite

Jérôme Dockès, Demian Wassermann, Russell Poldrack, Fabian M. Suchanek, Bertrand Thirion, et al. Text to brain: predicting the spatial distribution of neuroimaging observations from text reports. MICCAI 2018, Sep 2018, Granada, Spain. ⟨hal-01807295v1⟩