Conference paper, Year: 2023

What aspects of NLP models and brain datasets affect brain-NLP alignment?

Abstract

Recent brain encoding studies highlight the potential for natural language processing models to improve our understanding of language processing in the brain. Simultaneously, naturalistic fMRI datasets are becoming increasingly available and open further avenues for understanding the alignment between brains and models. However, with the multitude of available models and datasets, it can be difficult to know which aspects of the models and datasets are important to consider. In this work, we present a systematic study of brain alignment across five naturalistic fMRI datasets, two stimulus modalities (reading vs. listening), and different Transformer text and speech models. We find that all text-based language models are significantly better at predicting brain responses than all speech models for both modalities. Further, bidirectional language models better predict fMRI responses and generalize across datasets and modalities.
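The comparisons above rest on a brain-encoding analysis: stimulus representations are extracted from a pretrained Transformer, a regularized linear model is fit to predict the fMRI response in each voxel, and prediction quality is scored by cross-validated correlation. The sketch below illustrates this general setup in Python; the model name (bert-base-uncased), layer choice, mean pooling, ridge regression, and Pearson-correlation metric are illustrative assumptions, not necessarily the exact pipeline used in the paper.

```python
# Minimal sketch of a brain-encoding analysis (assumptions noted above):
# extract Transformer features for the stimulus, fit a ridge model to
# predict voxelwise fMRI responses, and score with cross-validated correlation.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold
from scipy.stats import pearsonr

def extract_features(sentences, model_name="bert-base-uncased", layer=8):
    """Mean-pooled hidden states from one Transformer layer, one vector per stimulus unit."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name, output_hidden_states=True).eval()
    feats = []
    with torch.no_grad():
        for s in sentences:
            inputs = tokenizer(s, return_tensors="pt", truncation=True)
            hidden = model(**inputs).hidden_states[layer]        # (1, tokens, dim)
            feats.append(hidden.mean(dim=1).squeeze(0).numpy())  # pool over tokens
    return np.stack(feats)                                       # (n_samples, dim)

def encoding_score(X, Y, n_splits=5):
    """Cross-validated voxelwise Pearson correlation for a ridge encoding model.

    X: stimulus features (n_samples, dim); Y: fMRI responses (n_samples, n_voxels),
    assumed already aligned to the same time points."""
    scores = np.zeros((n_splits, Y.shape[1]))
    for i, (tr, te) in enumerate(KFold(n_splits=n_splits).split(X)):
        ridge = RidgeCV(alphas=np.logspace(-1, 4, 10)).fit(X[tr], Y[tr])
        pred = ridge.predict(X[te])
        scores[i] = [pearsonr(pred[:, v], Y[te][:, v])[0] for v in range(Y.shape[1])]
    return scores.mean(axis=0)  # mean correlation per voxel across folds
```

Comparing these per-voxel scores across text vs. speech models, or across reading vs. listening datasets, is the kind of analysis the abstract summarizes; hemodynamic-delay handling and statistical testing are omitted here for brevity.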
Main file: 0000821.pdf (1.22 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04416456, version 1 (25-01-2024)

Identifiers

HAL Id: hal-04416456
DOI: 10.32470/CCN.2023.1273-0

Cite

Subba Reddy Oota, Mariya Toneva. What aspects of NLP models and brain datasets affect brain-NLP alignment?. 2023 Conference on Cognitive Computational Neuroscience (CCN), Aug 2023, Oxford, United Kingdom. ⟨10.32470/CCN.2023.1273-0⟩. ⟨hal-04416456⟩

Collections

CNRS
