Preliminary study for intonation classification of imagined speech for brain-computer interface applications
Abstract
In the current study, we focused on decoding speech prosody from EEG. Prosody (i.e., the melody and rhythm of speech) is important for communication because it conveys emotion and meaning, yet it has received little attention in the field of brain-computer interfaces (BCIs). To address this gap, we contrasted two syllables, "ba" and "da", each produced mentally with one of two intonations: as an affirmation (e.g., "ba.") or as a question (e.g., "ba?"). We focused on spectral features. After classification in the time-frequency domain, we found above-chance accuracies in specific frequency ranges of the alpha band (7-12 Hz) early in the production phase, as well as in a range of the low-beta band (16-20 Hz) during a late time window. Based on visual inspection of the topographies and on the literature, we suggest that the results in the early time window, but not those in the late time window, reflect a genuine difference between imagined affirmation and question production. Future studies should provide more information about the neural markers and underlying neuro-cognitive processes to improve our understanding of imagined intonation production. This would pave the way for the development of speech-based BCIs capable of differentiating intonation and, more generally, prosody.
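The abstract does not detail the processing pipeline. The following is a minimal sketch of the kind of analysis described (band-power features in the reported alpha and low-beta ranges, followed by cross-validated classification against a 50% chance level). The data shapes, sampling rate, the `band_power` helper, and the choice of a linear discriminant classifier are illustrative assumptions, not the authors' actual method.

```python
# Minimal sketch (not the authors' pipeline): band-power features from EEG
# epochs classified with a cross-validated linear model, compared to chance.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
sfreq = 256                                          # sampling rate in Hz (assumed)
n_trials, n_channels, n_times = 120, 32, 2 * sfreq   # 2-s epochs (assumed)

# Synthetic stand-in for segmented EEG epochs and intonation labels
# (0 = affirmation, 1 = question); replace with real data.
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)

def band_power(epochs, fmin, fmax, fs):
    """Average Welch log-power per channel within [fmin, fmax] Hz."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return np.log(psd[..., mask].mean(axis=-1))  # shape: (trials, channels)

# Features: alpha (7-12 Hz) and low-beta (16-20 Hz) log band power per channel
features = np.concatenate(
    [band_power(X, 7, 12, sfreq), band_power(X, 16, 20, sfreq)], axis=1
)

clf = LinearDiscriminantAnalysis()
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, features, y, cv=cv)
print(f"Mean accuracy: {scores.mean():.2f} (chance level = 0.50)")
```

In practice, the same feature extraction would be repeated over sliding time windows to localize the early and late effects reported in the abstract.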
Main file
EUSIPCO_COMREV_Paper_reviewer_corrections_Final.pdf (591.15 KB)
Origin: Files produced by the author(s)