Laboratoire d'Etude de l'Apprentissage et du Développement (LEAD)
Journal article in Trends in Hearing, Year: 2023

Musical Emotion Categorization with Vocoders of Varying Temporal and Spectral Content

Abstract

While previous research investigating music emotion perception in cochlear implant (CI) users observed that temporal cues informing tempo largely convey emotional arousal (relaxing/stimulating), it remains unclear how other properties of the temporal content may contribute to the transmission of arousal features. Moreover, while detailed spectral information related to pitch and harmony in music, which is often not well perceived by CI users, reportedly conveys emotional valence (positive, negative), it remains unclear how the quality of spectral content contributes to valence perception. Therefore, the current study used vocoders to vary the temporal and spectral content of music and tested music emotion categorization (joy, fear, serenity, sadness) in 23 normal-hearing participants. Vocoders were varied with two carriers (sinewave or noise, primarily modulating temporal information) and two filter orders (low or high, primarily modulating spectral information). Results indicated that emotion categorization was above chance for vocoded excerpts but poorer than in a non-vocoded control condition. Among vocoded conditions, better temporal content (sinewave carriers) improved emotion categorization with a large effect, while better spectral content (high filter order) improved it with a small effect. Arousal features were comparably transmitted in non-vocoded and vocoded conditions, indicating that reduced temporal content still successfully conveyed emotional arousal. Valence feature transmission declined steeply in vocoded conditions, revealing that valence perception was difficult for both lower and higher spectral content. The reliance on arousal information for emotion categorization of vocoded music suggests that efforts to refine temporal cues in the signal delivered to CI users may immediately benefit their music emotion perception.
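The vocoder manipulation described above can be illustrated with a minimal channel-vocoder sketch in Python (NumPy/SciPy). This is an assumption-laden illustration, not the authors' processing chain: the vocode function, the number of bands, the band edges, and the envelope cutoff are hypothetical choices introduced here. The sketch only shows where the two factors from the abstract enter such a pipeline: carrier type (sinewave vs. noise, mainly governing temporal detail) and bandpass filter order (low vs. high, mainly governing spectral resolution).

import numpy as np
from scipy.signal import butter, sosfiltfilt

def vocode(signal, fs, n_bands=8, carrier="sine", filter_order=4,
           f_lo=100.0, f_hi=8000.0, env_cutoff=50.0):
    """Channel-vocode a mono float signal (illustrative sketch; assumes fs > 16 kHz)."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    t = np.arange(len(signal)) / fs
    out = np.zeros(len(signal))
    # Low-pass filter used to smooth each band's temporal envelope.
    env_sos = butter(2, env_cutoff, btype="lowpass", fs=fs, output="sos")
    rng = np.random.default_rng(0)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Analysis band; filter_order controls spectral selectivity.
        band_sos = butter(filter_order, [lo, hi], btype="bandpass",
                          fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)
        # Temporal envelope: rectify (absolute value), then low-pass filter.
        env = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0.0, None)
        if carrier == "sine":
            # Sinewave carrier at the band's geometric centre frequency.
            carrier_sig = np.sin(2.0 * np.pi * np.sqrt(lo * hi) * t)
        else:
            # Noise carrier: white noise band-limited by the same filter.
            carrier_sig = sosfiltfilt(band_sos, rng.standard_normal(len(signal)))
            carrier_sig /= np.max(np.abs(carrier_sig)) + 1e-12
        out += env * carrier_sig
    # Match the input RMS so that conditions are comparable in level.
    out *= np.sqrt(np.mean(signal**2) / (np.mean(out**2) + 1e-12))
    return out

# Example use (x is a mono float array sampled at 44100 Hz):
#   degraded = vocode(x, 44100, carrier="noise", filter_order=2)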
Main file
harding-et-al-2023-musical-emotion-categorization-with-vocoders-of-varying-temporal-and-spectral-content.pdf (3.15 MB)
Origin: Publisher files allowed on an open archive
Licence: CC BY NC - Attribution - NonCommercial

Dates and versions

hal-04210082, version 1 (18-09-2023)

Licence

Attribution - NonCommercial

Identifiers

Cite

Eleanor E Harding, Etienne Gaudrain, Imke J Hrycyk, Robert L Harris, Barbara Tillmann, et al.. Musical Emotion Categorization with Vocoders of Varying Temporal and Spectral Content. Trends in Hearing, 2023, 27, pp.233121652211411. ⟨10.1177/23312165221141142⟩. ⟨hal-04210082⟩