Visual cues can bias EEG Deep Learning models
Abstract
The use of Deep Learning (DL) for classifying motor imagery-based brain-computer interfaces (MI-BCIs) has grown considerably in recent years, promising to improve EEG classification accuracy. However, the black-box nature of DL can yield models that are accurate yet biased and/or driven by irrelevant features. Here, we study how including the visual-cue EEG in the DL input window (a common practice) influences both the features learned and the classification performance of a state-of-the-art DL model, DeepConvNet. The classifier was evaluated on a large MI-BCI dataset using two time windows relative to the visual cue: 0-4 s (including the cue-evoked EEG) and 0.5-4.5 s (excluding it). Performance-wise, the first condition significantly outperformed the second (86.82% vs. 76.11%, p < 0.001). However, saliency map analyses showed that including the visual-cue EEG leads the model to extract cue-related evoked potentials, which are distinct from the MI features used by the model trained without the visual-cue EEG.
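The analysis described in the abstract can be illustrated with a minimal, hypothetical sketch: cropping the two cue-locked windows from epoched EEG and computing a vanilla-gradient saliency map for a trained PyTorch classifier. The names used here (`crop_window`, `saliency_map`, `sfreq`, `cue_idx`, `epochs`, `trained_model`) and the generic model interface are assumptions for illustration, not the authors' code; in practice the model would be a DeepConvNet implementation.

```python
# Hypothetical sketch (not the authors' code): crop the two analysis windows
# from cue-locked epochs and compute a vanilla-gradient saliency map.
# Assumes epochs of shape (n_trials, n_channels, n_samples), sampled at
# sfreq Hz, with the visual cue at sample index cue_idx (t = 0).

import numpy as np
import torch


def crop_window(epochs: np.ndarray, sfreq: float, cue_idx: int,
                t_start: float, t_stop: float) -> np.ndarray:
    """Return the EEG segment [t_start, t_stop) in seconds relative to the cue."""
    start = cue_idx + int(round(t_start * sfreq))
    stop = cue_idx + int(round(t_stop * sfreq))
    return epochs[:, :, start:stop]


def saliency_map(model: torch.nn.Module, x: torch.Tensor,
                 target_class: int) -> torch.Tensor:
    """Gradient of the target-class score w.r.t. the input (vanilla saliency)."""
    model.eval()
    x = x.clone().requires_grad_(True)       # shape (1, n_channels, n_samples)
    score = model(x)[0, target_class]        # assumes output shape (1, n_classes)
    score.backward()
    return x.grad.detach().abs().squeeze(0)  # shape (n_channels, n_samples)


# Example usage with the two windows from the study (sfreq, cue_idx, epochs,
# and trained_model are placeholders):
# with_cue = crop_window(epochs, sfreq=250, cue_idx=500, t_start=0.0, t_stop=4.0)
# no_cue   = crop_window(epochs, sfreq=250, cue_idx=500, t_start=0.5, t_stop=4.5)
# sal = saliency_map(trained_model,
#                    torch.as_tensor(with_cue[:1], dtype=torch.float32),
#                    target_class=0)
```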