Conference paper, 2021

Disentangling Syntax and Semantics in the Brain with Deep Networks

Abstract

The activations of language transformers like GPT-2 have been shown to map linearly onto brain activity during speech comprehension. However, the nature of these activations remains largely unknown, and they presumably conflate distinct linguistic classes. Here, we propose a taxonomy that factorizes the high-dimensional activations of language models into four combinatorial classes: lexical, compositional, syntactic, and semantic representations. We then introduce a statistical method to decompose, through the lens of GPT-2's activations, the brain activity of 345 subjects recorded with functional magnetic resonance imaging (fMRI) while they listened to 4.6 hours of narrated text. The results highlight two findings. First, compositional representations recruit a more widespread cortical network than lexical ones, encompassing the bilateral temporal, parietal, and prefrontal cortices. Second, contrary to previous claims, syntax and semantics are not associated with separate modules but instead appear to share a common and distributed neural substrate. Overall, this study introduces a versatile framework to isolate, in brain activity, the distributed representations of linguistic constructs.
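The abstract refers to linearly mapping GPT-2 activations onto brain activity. As an illustration only, the sketch below shows a generic voxel-wise ridge-regression encoding model fit on random placeholder data; the array shapes, regularization grid, and correlation scoring are assumptions for illustration, not the authors' actual pipeline or their factorization into lexical, compositional, syntactic, and semantic classes.

    # Minimal sketch of a voxel-wise linear encoding model: ridge regression
    # from (placeholder) GPT-2 activations to (placeholder) fMRI responses.
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)
    n_trs, n_dims, n_voxels = 1000, 768, 200      # fMRI samples, GPT-2 hidden size, voxels
    X = rng.standard_normal((n_trs, n_dims))      # stand-in for GPT-2 activations per TR
    Y = rng.standard_normal((n_trs, n_voxels))    # stand-in for BOLD signals per TR

    cv = KFold(n_splits=5)
    scores = np.zeros(n_voxels)
    for train, test in cv.split(X):
        model = RidgeCV(alphas=np.logspace(-1, 4, 6)).fit(X[train], Y[train])
        pred = model.predict(X[test])
        # Pearson correlation between predicted and observed signal, per voxel
        for v in range(n_voxels):
            scores[v] += np.corrcoef(pred[:, v], Y[test][:, v])[0, 1] / cv.get_n_splits()

    print("mean cross-validated encoding score:", scores.mean())

In practice, such encoding scores are computed per voxel and per activation subspace, so that the variance explained by each linguistic class can be compared across cortical regions.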
Main file

Narratives_Syntax_semantics.pdf (3.87 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03361421, version 1 (01-10-2021)

Identifiers

  • HAL Id: hal-03361421, version 1

Cite

Charlotte Caucheteux, Alexandre Gramfort, Jean-Remi King. Disentangling Syntax and Semantics in the Brain with Deep Networks. ICML 2021 - 38th International Conference on Machine Learning, Jul 2021, Online conference, France. ⟨hal-03361421⟩
