Decoding speech from non-invasive brain recordings
Abstract
Decoding speech from brain activity is a long-awaited goal in both healthcare and neuroscience. Invasive devices have recently led to major milestones in this regard: deep learning algorithms trained on intracranial recordings are now starting to decode elementary linguistic features (e.g. letters, words, spectrograms). However, extending this approach to natural speech and non-invasive brain recordings remains a major challenge. To address this challenge, we introduce a contrastive-learning model trained to decode self-supervised representations of natural speech from the non-invasive recordings of a large cohort of individuals. To evaluate this approach, we curate and integrate four public datasets, encompassing 169 volunteers recorded with magneto- or electro-encephalography (M/EEG) while they listened to natural speech. The results show that our model can identify, from 3 seconds of MEG signals, the corresponding speech segment with up to 44% accuracy out of 1,594 distinct possibilities, a performance that allows the decoding of phrases absent from the training set. Model comparison and ablation analyses show that these results directly benefit from the use of (i) a contrastive objective, (ii) pretrained representations of speech, and (iii) a common convolutional architecture trained simultaneously across multiple participants. Overall, these results delineate a promising path toward assisting patients with communication disorders without putting them at risk from brain surgery.
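To make the contrastive objective concrete, the sketch below shows a CLIP-style InfoNCE loss over aligned brain and speech embeddings, which is one minimal reading consistent with the abstract; it is not the paper's exact implementation, and all names (`contrastive_loss`, `brain_emb`, `speech_emb`) and the temperature value are hypothetical.

```python
# Minimal sketch (assumed, not the paper's code): a CLIP-style InfoNCE loss
# pairing each brain-signal embedding with the embedding of the speech
# segment the participant heard in the same time window.
import torch
import torch.nn.functional as F

def contrastive_loss(brain_emb: torch.Tensor,
                     speech_emb: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """brain_emb, speech_emb: (batch, dim) embeddings of aligned windows,
    e.g. 3 s of M/EEG and the matching pretrained speech representation."""
    brain_emb = F.normalize(brain_emb, dim=-1)
    speech_emb = F.normalize(speech_emb, dim=-1)
    # (batch, batch) cosine-similarity matrix between every brain/speech pair.
    logits = brain_emb @ speech_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Each brain window must identify its own speech segment among the batch;
    # a symmetric variant would also add the speech-to-brain direction.
    return F.cross_entropy(logits, targets)
```

At test time, the same similarity matrix can rank all candidate segments, which is presumably how a segment-identification accuracy such as the 44% over 1,594 possibilities quoted above would be computed.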