Conference paper, 2022

Multi-view and Cross-view Brain Decoding

Abstract

Can we build multi-view decoders that can decode concepts from brain recordings corresponding to any view (picture, sentence, word cloud) of a stimulus? Can we build a system that uses brain recordings to automatically describe what a subject is watching using keywords or sentences? How about a system that automatically extracts important keywords from sentences that a subject is reading? Previous brain decoding efforts have focused only on single-view analysis and hence cannot help us build such systems. As a first step toward building such systems, and inspired by the Natural Language Processing literature on multi-lingual and cross-lingual modeling, we propose two novel brain decoding setups: (1) multi-view decoding (MVD) and (2) cross-view decoding (CVD). In MVD, the goal is to build an MV decoder that can take brain recordings for any view as input and predict the concept. In CVD, the goal is to train a model that takes brain recordings for one view as input and decodes a semantic vector representation of another view. Specifically, we study practically useful CVD tasks such as image captioning, image tagging, keyword extraction, and sentence formation. Our extensive experiments lead to MVD models with ~0.68 average pairwise accuracy across view pairs, and to CVD models with ~0.8 average pairwise accuracy across tasks. Analysis of the contributions of different brain networks reveals exciting cognitive insights: (1) Models trained on the picture or sentence view of stimuli are better MV decoders than a model trained on the word cloud view. (2) Our extensive analysis across 9 broad regions, 11 language sub-regions, and 16 visual sub-regions of the brain helps us localize, for the first time, the parts of the brain involved in cross-view tasks such as image captioning, image tagging, sentence formation, and keyword extraction. We make the code publicly available.
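The abstract reports pairwise accuracy but does not detail the decoder architecture here; below is a minimal sketch of a cross-view decoding setup, assuming a linear ridge-regression decoder and the common 2v2 formulation of pairwise accuracy. The synthetic arrays, data shapes, and the `alpha` value are hypothetical placeholders, not the authors' actual configuration.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from scipy.spatial.distance import cosine

# Hypothetical data: voxel activations for one view (e.g., sentence reading)
# and semantic vectors for another view (e.g., embeddings of image captions).
rng = np.random.default_rng(0)
X = rng.standard_normal((180, 5000))   # brain recordings: stimuli x voxels
Y = rng.standard_normal((180, 300))    # target-view semantic vectors

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Cross-view decoder: linear map from source-view voxels to target-view vectors.
decoder = Ridge(alpha=1.0).fit(X_tr, Y_tr)
Y_hat = decoder.predict(X_te)

def pairwise_accuracy(Y_true, Y_pred):
    """2v2 test: for each pair (i, j), check that the matched predictions are
    closer (by cosine distance) to their own targets than to the swapped ones."""
    n, correct, total = len(Y_true), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            match = cosine(Y_pred[i], Y_true[i]) + cosine(Y_pred[j], Y_true[j])
            mismatch = cosine(Y_pred[i], Y_true[j]) + cosine(Y_pred[j], Y_true[i])
            correct += match < mismatch
            total += 1
    return correct / total

print(f"pairwise accuracy: {pairwise_accuracy(Y_te, Y_hat):.3f}")
```

In a real experiment, X would hold fMRI voxel responses for the source view and Y the semantic vectors of the target view (e.g., text or image embeddings), with cross-validation over stimuli rather than a single split.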
Main file: 2022.coling-1.10.pdf (3.56 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03946696, version 1 (19-01-2023)

Identifiers

Cite

Subba Reddy Oota, Jashn Arora, Manish Gupta, Raju Surampudi Bapi. Multi-view and Cross-view Brain Decoding. COLING 2022, the 29th International Conference on Computational Linguistics, Oct 2022, Gyeongju, South Korea. pp. 105-115. ⟨hal-03946696⟩

Collections

CNRS INRIA INRIA2