Conference poster, 2015

Through the looking-glass: Neural basis of self representation during speech perception

Abstract

Introduction: Recognizing one's own face and voice is key to self-awareness and to our ability to communicate effectively with others. Interestingly, several theories and studies suggest that self-recognition during action observation may partly result from a functional coupling between action and perception systems, and from a better integration of sensory inputs with our own sensory-motor knowledge (Apps & Tsakiris, 2014). The present fMRI study aimed at further investigating the neural basis of self representation during auditory, visual and audio-visual speech perception. Our working hypothesis was that hearing and/or viewing oneself talk might activate sensory-motor plans to a greater degree than observing others.

Methods:
• Participants were 12 healthy adults (25 ± 6 years, 9 females).
• A total of 1176 stimuli were created. During the scanning session, participants were asked to passively listen to and/or view auditory (A), visual (V), audio-visual (AV) and incongruent audio-visual (AVi) syllables. Half of the stimuli were related to the participants themselves and the other half to an unknown speaker, and all were presented either with or without acoustic noise. In addition, a resting face of the participant or of the unknown speaker, presented with and without acoustic noise, served as baseline.
• Functional MRI images were acquired with a sparse-sampling acquisition to minimize scanner noise (53 axial slices, 3 mm³ voxels; TR = 8 s, delay in TR = 5 s).
• BOLD responses were analyzed using a general linear model including 16 regressors of interest (4 modalities × 2 speakers × 2 noise levels) and the 4 corresponding baselines (2 speakers × 2 noise levels). A second-level random-effects group analysis was then carried out, with modality, speaker and noise level as within-subject factors and subjects treated as a random factor; a minimal sketch of this factorial design is given after the Conclusions below. All effects and interactions were assessed at a significance level of p < .001, uncorrected.

Results:
• In line with previous brain-imaging studies on multimodal speech perception, the main effect of modality revealed stronger activity in superior temporal auditory regions during A, AV and AVi compared to V, in the middle temporal visual motion area MT/V5 during V, AV and AVi compared to A, and in the premotor cortices during V, AV and AVi compared to A. The main effect of noise and the modality-by-noise interaction also showed stronger activity in the primary and secondary auditory cortices for stimuli presented without noise during A, AV and AVi compared to V.
• Crucially, the main effect of speaker showed stronger activity in the left posterior inferior frontal gyrus and the left cerebellum during the observation of self-related stimuli compared to stimuli related to an unknown speaker. In addition, the speaker-by-noise interaction revealed stronger activity in the ventral superior parietal lobules and the dorsal extrastriate cortices during the observation of other-related compared to self-related stimuli presented without noise, while the opposite pattern was observed for noisy stimuli. Finally, the speaker-by-modality interaction showed stronger activity for self-related than for other-related stimuli during A in the right auditory cortex, and stronger activity for other-related than for self-related stimuli during V, AV and AVi in the left posterior superior temporal sulcus.
Conclusions: Listening to and/or viewing oneself talk activated to a greater extent the left posterior inferior frontal gyrus and the left cerebellum, two regions thought to predict the sensory outcomes of one's own actions and thereby to constrain perceptual recognition. In addition, activity in associative auditory and visual brain areas was modulated by speaker identity, depending on the modality of presentation and the acoustic noise level. Altogether, these results suggest that self-awareness during speech perception is partly driven by afferent and efferent signals in sensory-motor areas.
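As a reading aid only (this is not the authors' analysis pipeline, which would typically have been run in a standard fMRI package such as SPM), the following minimal Python sketch enumerates the 16 regressors of interest and the 4 baselines described in the Methods, and builds a zero-sum contrast vector for the main effect of speaker (self > other). All variable names are hypothetical.

from itertools import product

import numpy as np

modalities   = ["A", "V", "AV", "AVi"]   # auditory, visual, audio-visual, incongruent AV
speakers     = ["self", "other"]
noise_levels = ["clear", "noisy"]

# 16 regressors of interest: 4 modalities x 2 speakers x 2 noise levels
conditions = ["_".join(c) for c in product(modalities, speakers, noise_levels)]
# 4 baselines: resting face, 2 speakers x 2 noise levels
baselines = ["rest_" + "_".join(c) for c in product(speakers, noise_levels)]

# Contrast for the main effect of speaker (self > other), averaged over
# modality and noise level; weights sum to zero across the 16 regressors.
weights = np.array([1.0 if "_self_" in c else -1.0 for c in conditions])
weights /= np.count_nonzero(weights > 0)  # +1/8 for self, -1/8 for other

assert len(conditions) == 16 and len(baselines) == 4
assert np.isclose(weights.sum(), 0.0)
print(dict(zip(conditions, weights)))

The same enumeration extends directly to the interaction contrasts reported in the Results (e.g., speaker by noise) by crossing the sign patterns of the two factors involved.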
Main file

hbm-2015.pdf (2.86 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02074963, version 1 (21-03-2019)

Identifiers

  • HAL Id: hal-02074963, version 1

Cite

Marc Sato, Avril Treille, Coriandre Emmanuel Vilain, Jean-Luc Schwartz. Through the looking-glass: Neural basis of self representation during speech perception. 21st Annual Meeting of the Organization for Human Brain Mapping (OHBM), Jun 2015, Honolulu, United States. ⟨hal-02074963⟩
69 Views
30 Downloads
