Journal article in Scientific Reports, 2020

Audio-visual combination of syllables involves time-sensitive dynamics following from fusion failure

Sophie Bouton
Jaime Delgado-Saa
Itsaso Olasagasti
Anne-Lise Giraud

Abstract

In face-to-face communication, audiovisual (AV) stimuli can be fused, combined, or perceived as mismatching. While the left superior temporal sulcus (STS) is presumably the locus of AV integration, the process leading to combination is unknown. Based on previous modelling work, we hypothesize that combination results from a complex dynamic originating in a failure to integrate AV inputs, followed by a reconstruction of the most plausible AV sequence. In two different behavioural tasks and one MEG experiment, we observed that combination is more time-demanding than fusion. Using time- and source-resolved human MEG analyses with linear and dynamic causal models, we show that both fusion and combination involve early detection of AV incongruence in the STS, whereas combination is further associated with enhanced activity of AV asynchrony-sensitive regions (the auditory and inferior frontal cortices). Based on neural signal decoding, we finally show that only combination can be decoded from inferior frontal gyrus (IFG) activity, and that combination is decoded later than fusion in the STS. These results indicate that the outcome of AV speech integration primarily depends on whether the STS converges or not onto an existing multimodal syllable representation, and that combination results from subsequent temporal processing, presumably the off-line reordering of the incongruent AV stimuli.

Screen-based communication poses specific challenges to the brain's integration of audiovisual (AV) disparities, due either to asynchronies between the audio and visual signals (e.g., video-call software) or to mismatching physical features (e.g., dubbed movies). To make sense of discrepant AV speech stimuli, humans mostly focus on the auditory input [1], which is taken as ground truth, and try to discard the conflicting visual one. In some specific cases, however, the AV discrepancy goes unnoticed and the auditory and visual inputs are implicitly fused into a percept that corresponds to neither of them [2]. Perhaps more interestingly, discrepant AV stimuli can also be combined into a composite percept in which simultaneous sensory inputs are perceived sequentially [2,3]. These two distinct outcomes can be obtained experimentally with the "McGurk effect" [2], in which an auditory /aba/ dubbed onto a facial display articulating /aga/ elicits the perception of a fused syllable /ada/, while an auditory /aga/ dubbed onto a visual /aba/ typically leads to a mix of the combined syllables /abga/ or /agba/. What determines whether AV stimuli are fused [4-6] or combined [7], and the neural dynamics underlying this perceptual divergence, remain unknown.

Audiovisual speech integration draws on a number of processing steps distributed over several cortical regions, including the auditory and visual cortices, the left posterior temporal cortex, and higher-level language regions of the left prefrontal [8-12] and anterior temporal cortices [13,14]. In this distributed network, the left STS plays a central role in integrating visual and auditory inputs from the visual motion area (mediotemporal cortex, MT) and the auditory cortex (AC) [15-21]. The STS is characterized by relatively smooth temporal integration properties, making it resilient to the natural asynchrony between auditory and visual speech inputs, i.e. the fact that orofacial speech movements often start before the sounds they produce [6,22,23].
Although the STS responds more strongly when auditory and visual speech are perfectly synchronous [24], its activity remains largely insensitive to temporal discrepancies [25], reflecting a broad temporal window of integration in the order of …
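The abstract mentions fitting dynamic causal models to the MEG data. As illustrative background only, the sketch below simulates the generic bilinear neural state equation used in dynamic causal modelling, dz/dt = (A + u·B)z + Cu; the two-region network (STS, IFG), the connectivity matrices, and the input timing are invented for illustration and are not the authors' fitted model.

```python
# Generic bilinear DCM neural state equation: dz/dt = (A + u(t)*B) z + C u(t).
# The regions (STS, IFG) and all coupling values are hypothetical.
import numpy as np

A = np.array([[-0.5,  0.0],   # intrinsic coupling; self-decay on the diagonal
              [ 0.3, -0.5]])  # forward connection STS -> IFG
B = np.array([[0.0, 0.0],
              [0.4, 0.0]])    # modulatory input (e.g. AV incongruence) boosts STS -> IFG
C = np.array([1.0, 0.0])      # driving input (the AV stimulus) enters at STS

dt, n_steps = 0.01, 500
u = np.zeros(n_steps)
u[50:150] = 1.0               # boxcar stimulus between 0.5 s and 1.5 s

z = np.zeros(2)               # neural states [STS, IFG]
trace = np.empty((n_steps, 2))
for t in range(n_steps):      # forward Euler integration of the state equation
    dz = (A + u[t] * B) @ z + C * u[t]
    z = z + dt * dz
    trace[t] = z

print("peak STS activity:", trace[:, 0].max())
print("peak IFG activity:", trace[:, 1].max())
```

Under the modulatory B term, the IFG only receives a strong STS drive while the input u is on, which is the kind of condition-specific coupling change a DCM comparison would test.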
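The abstract also reports time-resolved neural signal decoding: combination is decoded later than fusion in the STS, and only combination can be decoded from IFG activity. Below is a minimal sketch of that general technique, assuming MNE-Python's sliding-estimator API and synthetic stand-in data; the trial counts, classifier, and threshold are illustrative assumptions, not the paper's actual pipeline.

```python
# Time-resolved decoding sketch: train one classifier per time sample and ask
# *when* the percept category (fusion vs. combination) becomes readable.
# Shapes, labels, and the 0.6 AUC threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from mne.decoding import SlidingEstimator, cross_val_multiscore

rng = np.random.default_rng(0)

# Synthetic stand-in for source-reconstructed activity in one region:
# 100 trials x 20 virtual channels x 120 time samples.
X = rng.standard_normal((100, 20, 120))
y = rng.integers(0, 2, 100)  # 0 = fusion trial, 1 = combination trial

clf = make_pipeline(StandardScaler(), LogisticRegression(solver="liblinear"))
time_decoder = SlidingEstimator(clf, scoring="roc_auc")
scores = cross_val_multiscore(time_decoder, X, y, cv=5).mean(axis=0)

# Crude decoding-latency estimate: first time sample with AUC above threshold.
above = scores > 0.6
onset = int(np.argmax(above)) if above.any() else None
print(f"peak AUC = {scores.max():.2f}; first supra-threshold sample: {onset}")
```

Comparing the first supra-threshold time point across regions (e.g. STS source time courses vs. IFG ones) operationalizes the latency contrast described in the abstract.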

Domains

Psychology
Main file
s41598-020-75201-7.pdf (2.67 MB)
Origin: Publisher files authorized on an open archive

Dates and versions

hal-03039448, version 1 (03-12-2020)

Identifiers

Cite

Sophie Bouton, Jaime Delgado-Saa, Itsaso Olasagasti, Anne-Lise Giraud. Audio-visual combination of syllables involves time-sensitive dynamics following from fusion failure. Scientific Reports, 2020, 10, ⟨10.1038/s41598-020-75201-7⟩. ⟨hal-03039448⟩
25 Views
54 Downloads
