Conference poster, year: 2015

Speech in the mirror? Neurobiological correlates of self-speech perception

Abstract

Self-awareness and self-recognition during action observation may partly result from a functional matching between action and perception systems. This perception-action interaction enhances the integration between sensory inputs and our own sensory-motor knowledge. We present combined EEG and fMRI studies examining the impact of self-knowledge on multisensory integration mechanisms, specifically during auditory, visual and audio-visual speech perception. Our hypothesis was that hearing and/or viewing oneself talk would facilitate the bimodal integration process and activate sensory-motor maps to a greater extent than observing others.

In both studies, half of the stimuli presented the participants’ own productions (“self” condition) and the other half presented an unknown speaker (“other” condition). For the “self” condition, we recorded videos of each participant producing /pa/, /ta/ and /ka/ syllables. For the “other” condition, we recorded videos of a speaker the participants had never met producing the same syllables. These recordings were then presented in different modalities: auditory only (A), visual only (V), audio-visual (AV) and incongruent audio-visual (AVi, where incongruency refers to different speakers for the audio and video components). In the EEG experiment, 18 participants had to categorize the syllables. In the fMRI experiment, 12 participants passively listened to and/or viewed the syllables.

In the EEG session, audiovisual interactions were estimated by comparing auditory N1/P2 ERPs in the bimodal condition (AV) with the sum of the responses in the unimodal conditions (A+V). The amplitude of P2 ERPs was lower for AV than for A+V. Importantly, N1 latencies were shorter for the “visual-self” condition than for the “visual-other” condition, regardless of signal type. In the fMRI session, the presentation modality had an impact on brain activation: activation was stronger for audio or audiovisual stimuli in the superior temporal auditory regions (A = AV = AVi > V), and for video or audiovisual stimuli in MT/V5 and in the premotor cortices (V = AV = AVi > A). In addition, brain activity was stronger in the “self” than in the “other” condition in both the left posterior inferior frontal gyrus and the cerebellum (lobules I-IV).

In line with previous studies on multimodal speech perception, our results point to the existence of integration mechanisms for auditory and visual speech signals. Critically, they further demonstrate a processing advantage when the perceptual situation involves our own speech production. In addition, hearing and/or viewing oneself talk increased activation in the left posterior IFG and the cerebellum, regions generally held responsible for predicting the sensory outcomes of action generation. Altogether, these results suggest that viewing our own utterances leads to a temporal facilitation of auditory and visual speech integration, and that processing afferent and efferent signals in sensory-motor areas contributes to self-awareness during speech perception.

Part of this research was supported by a grant from the European Research Council (FP7/2007-2013 Grant Agreement no. 339152, "Speech Unit(e)s").
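For readers unfamiliar with the additive-model test mentioned above (comparing the bimodal AV response with the sum of the unimodal A and V responses), the following is a minimal sketch in Python/NumPy of that logic. It uses simulated epochs, an assumed channel layout, sampling rate and P2 time window; the variable names and dimensions are illustrative assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np

# Hypothetical illustration of the additive-model comparison: the audiovisual
# (AV) evoked response is contrasted with the sum of the unimodal auditory (A)
# and visual (V) evoked responses. All shapes and parameters are assumptions.

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 60, 32, 500   # assumed epoch dimensions
sfreq = 500.0                                  # assumed sampling rate (Hz)

# Simulated, baseline-corrected epochs: trials x channels x time
epochs_A = rng.normal(size=(n_trials, n_channels, n_times))
epochs_V = rng.normal(size=(n_trials, n_channels, n_times))
epochs_AV = rng.normal(size=(n_trials, n_channels, n_times))

# Evoked responses: average across trials
evoked_A = epochs_A.mean(axis=0)
evoked_V = epochs_V.mean(axis=0)
evoked_AV = epochs_AV.mean(axis=0)

# Additive model: deviation of AV from (A + V) indexes audiovisual interaction
interaction = evoked_AV - (evoked_A + evoked_V)

# Mean amplitude over an assumed P2 window (150-250 ms) at one hypothetical
# fronto-central electrode, for AV versus the A+V sum
times = np.arange(n_times) / sfreq
p2_mask = (times >= 0.150) & (times <= 0.250)
channel = 10  # hypothetical electrode index
p2_av = evoked_AV[channel, p2_mask].mean()
p2_sum = (evoked_A + evoked_V)[channel, p2_mask].mean()
print(f"P2 amplitude, AV: {p2_av:.3f}  vs  A+V: {p2_sum:.3f}")
```

In the study described above, a reduced P2 amplitude for AV relative to A+V would be the kind of pattern taken as evidence of audiovisual integration.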
Main file
NLC_self_EEG&IRMf_poster_FINAL.pdf (1.05 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02074936, version 1 (21-03-2019)

Identifiers

  • HAL Id: hal-02074936, version 1

Cite

Avril Treille, Coriandre Emmanuel Vilain, Sonia Kandel, Jean-Luc Schwartz, Marc Sato. Speech in the mirror? Neurobiological correlates of self-speech perception. Seventh Annual Society for the Neurobiology of Language Conference, Oct 2015, Chicago, United States. ⟨hal-02074936⟩
