Neural Signatures Of Musical And Linguistic Interactions During Natural Song Listening
Abstract
How are songs processed in the human brain? In song, tunes and lyrics are tightly bound in a music-language synergy that conveys meaning and emotion beyond the mere linguistic content, raising questions about how the two components are represented and integrated into a cohesive perceptual whole. Previous research identified areas of the human cortex sensitive to music, speech, and song, finding both shared and specialized sites. Yet the interactions between tune and lyric processing during song listening remain poorly understood. To tackle this question, we probed neural predictive mechanisms specific to music and speech with electroencephalography. The encoding of melodic predictions was compared when listeners were presented with songs or with the corresponding hummed (speech-free) melodies. Similarly, the encoding of phonemic predictions was studied in songs and in the corresponding spoken (melody-free) lyrics. We found that the concurrence of music and speech in songs alters how their predictive signals are generated and processed, changing their neural encoding. Furthermore, we found a trade-off in the neural encoding of melodic and phonemic expectations, whose balance depended both on who was listening (an internal driver reflecting the listener's disposition, e.g., musical training) and on how the song was composed and performed (an external driver reflecting the salience of lyrics and tunes). Altogether, our results indicate that song listening involves parallel prediction processes that interact competitively for shared processing resources.