A Multi-Stream Approach for Seizure Classification with Knowledge Distillation
Abstract
In this work, we propose a multi-stream approach with knowledge distillation to classify epileptic seizures and psychogenic non-epileptic seizures. The proposed framework exploits multi-stream information from keypoints and appearance of both the body and the face. We treat the detected keypoints over time as a spatio-temporal graph and train an adaptive graph convolutional network on it to model the spatio-temporal dynamics throughout the seizure event. In addition, we regularize the keypoint features with complementary information from the appearance stream through a knowledge distillation mechanism. We demonstrate the effectiveness of our approach by conducting experiments on real-world seizure videos. The experiments are conducted with both seizure-wise cross-validation and leave-one-subject-out validation; with the proposed model, the F1-score/accuracy are 0.89/0.87 for seizure-wise cross-validation and 0.75/0.72 for leave-one-subject-out validation.
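The following is a minimal sketch, not the authors' code, of the kind of knowledge-distillation regularization the abstract describes: the keypoint (graph) stream is trained on the seizure labels while its softened predictions are pulled toward those of the appearance stream. The names `keypoint_logits`, `appearance_logits`, `temperature`, and `alpha` are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(keypoint_logits: torch.Tensor,
                      appearance_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Cross-entropy on the keypoint stream plus a KL term that softens its
    predictions toward the appearance stream (hypothetical formulation)."""
    # Supervised loss on the keypoint (graph) stream.
    ce = F.cross_entropy(keypoint_logits, labels)
    # Soft-target KL divergence between the two streams; the appearance
    # stream is detached so it acts as a fixed teacher for this term.
    kd = F.kl_div(
        F.log_softmax(keypoint_logits / temperature, dim=1),
        F.softmax(appearance_logits.detach() / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Weighted combination of the supervised and distillation terms.
    return (1.0 - alpha) * ce + alpha * kd
```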