Conference poster, Year: 2020

P300 event-related potentials classification from EEG data through interval feature extraction and recurrent neural networks

Abstract

Event-related potentials (ERPs) are reproducible electrophysiological responses to an external stimulus, e.g. visual or auditory. They appear as weak signals with low amplitude and signal-to-noise ratio, typically masked by noise and spontaneous EEG activity. The classical experimental protocol used to measure ERPs therefore consists of multiple presentations of the same kind of stimulus: epochs, i.e. the EEG time series following stimulus onset, are time-locked and averaged so as to cancel out noise and reveal the desired waveform. An example of an ERP component is the P300, a positive deflection appearing around 300 ms after stimulus presentation, which is typically elicited by the oddball paradigm: sequences of repetitive stimuli (non-target) are infrequently interrupted by a deviant stimulus (target). The P300 component reflects the evaluation and categorization of a stimulus, making it possible to discriminate the subject's brain states. The different neural responses can thus be used to investigate dysfunctions in sensory and cognitive processing, and are also a suitable tool for brain-computer interfaces. Hence, novel signal processing and machine learning algorithms are gaining ground to achieve robust automatic ERP classification, and consequently a more extensive and practical use of EEG.

In the current study, we examined the performance of an LSTM network in classifying the P300 component between target and non-target stimuli in an auditory oddball paradigm. EEG recordings of 19 channels from two subjects were acquired during an auditory oddball paradigm with 80% of the tones at a low-frequency pitch and the remaining 20% at a high-frequency pitch. The signals were re-referenced with respect to the mastoids and band-pass filtered between 0.1 and 30 Hz. After extracting the epochs and removing the baseline, visual inspection and rejection were performed, leaving a total of 324 trials (200 from S1, 124 from S2). A separate classification problem was considered for each electrode. Interval features derived from signal segments of various lengths were extracted: for each time point i of the epoch, segments whose lengths grow as powers of two are formed, always keeping i as the starting point and extending into the rest of the time series until the interval length exceeds the epoch. For each interval, the average amplitude and the standard deviation were then computed and used as features to feed the network. A single-layer Long Short-Term Memory (LSTM) network was defined, followed by fully connected layers and a softmax activation function. Two approaches were tested: a subject-dependent one, using 75% of the S1 samples as training set and the remaining 25% as test set; and a subject-independent one, using 200 samples (100 from S1 + 100 from S2) as training set and the remaining ones as test set.

The mean classification accuracy over all electrodes was 81.16% for the first approach and 75.55% for the second. These results are in line with previous studies in the literature, in which recurrent neural networks outperformed other common algorithms such as SVM. In particular, F3, F4, Fp1 and Fp2 yield the best accuracies overall; indeed, the P300 is typically stronger over the frontal lobes. Moreover, compared to other feature extraction methods, interval feature extraction is intuitive and easy to implement, and does not rely on any additional parameters. These results encourage the use of this classifier, but further analyses are needed. For example, it would be interesting to collect data with a more balanced number of events, since targets always occur less frequently than non-targets. Moreover, the interval feature extraction could be combined with other features, e.g. in the frequency domain, and fed to a bidirectional LSTM layer.
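As an illustration of the preprocessing pipeline described in the abstract (mastoid re-referencing, 0.1-30 Hz band-pass filtering, epoching and baseline removal), a minimal sketch using MNE-Python is given below. The file name, mastoid channel labels, event codes and epoch window are assumptions for illustration, not details reported by the authors.

import mne

# Illustrative preprocessing sketch: file name, channel labels, event codes
# and epoch window are assumed, not taken from the authors' materials.
raw = mne.io.read_raw_edf("subject1_oddball.edf", preload=True)

# Re-reference to the mastoids (assumed channel labels "M1"/"M2").
raw.set_eeg_reference(ref_channels=["M1", "M2"])

# Band-pass filter between 0.1 and 30 Hz, as described in the abstract.
raw.filter(l_freq=0.1, h_freq=30.0)

# Extract epochs around the tone onsets and remove the pre-stimulus baseline.
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events,
                    event_id={"non-target": 1, "target": 2},  # assumed codes
                    tmin=-0.2, tmax=0.8,                      # assumed window
                    baseline=(None, 0), preload=True)

X = epochs.get_data()  # shape: (n_trials, n_channels, n_samples)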
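The interval feature extraction can be sketched as follows, under one possible reading of the description above: for each starting sample i, intervals of length 2, 4, 8, ... are taken until the interval would run past the end of the epoch, and the mean amplitude and standard deviation of each interval are kept. The exact interval lengths and starting points are an interpretation, not a specification from the authors.

import numpy as np

def interval_features(epoch_1d):
    # Mean amplitude and standard deviation over intervals whose length
    # grows as powers of two, for every possible starting sample.
    # (Starting the lengths at 2 is an assumption.)
    n = len(epoch_1d)
    feats = []
    for i in range(n):
        length = 2
        while i + length <= n:
            segment = epoch_1d[i:i + length]
            feats.append([segment.mean(), segment.std()])
            length *= 2
    return np.asarray(feats)  # shape: (n_intervals, 2)

# Each electrode is treated as a separate classification problem, so the
# features are computed channel by channel, e.g.:
# feats = interval_features(X[trial_index, channel_index, :])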
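Finally, a sketch of the classifier: a single LSTM layer followed by fully connected layers and a softmax output, written here with Keras. Layer sizes, the optimizer and the training settings are assumptions; the abstract only specifies the overall architecture and the 75%/25% (subject-dependent) and 200-sample (subject-independent) splits.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(n_intervals, n_features=2, n_hidden=64, n_classes=2):
    # Single-layer LSTM over the sequence of interval features,
    # followed by fully connected layers and a softmax output.
    # Hidden sizes are illustrative assumptions.
    model = models.Sequential([
        layers.Input(shape=(n_intervals, n_features)),
        layers.LSTM(n_hidden),
        layers.Dense(32, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Subject-dependent example: train on 75% of the 200 S1 trials, test on the rest.
# model = build_model(n_intervals=features_s1.shape[1])
# model.fit(features_s1[:150], labels_s1[:150], epochs=30, batch_size=16)
# model.evaluate(features_s1[150:], labels_s1[150:])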

Dates and versions

hal-03446272 , version 1 (24-11-2021)

Identifiers

HAL Id: hal-03446272 · DOI: 10.13140/RG.2.2.34911.12960
Cite

Giulia Rocco, Jerome Lebrun, Marie-Noële Magnié-Mauro, Olivier Meste. P300 event-related potentials classification from EEG data through interval feature extraction and recurrent neural networks. SophIA Summit 2020, Nov 2020, Sophia Antipolis, France. ⟨10.13140/RG.2.2.34911.12960⟩. ⟨hal-03446272⟩