Reward-based online learning in nonstationary environments: adapting a P300-speller with a ``Backspace'' key
Abstract
We adapt a policy gradient approach to the problem of reward-based online learning of a non-invasive EEG-based ``P300''-speller. We first clarify the nature of the P300-speller classification problem and present a general regularized gradient ascent formula. We then show that, when the reward is immediate and binary (namely ``bad response'' or ``good response''), each update is expected to improve the classifier accuracy, whether the actual response is correct or not. We also estimate the robustness of the method to occasional mistaken rewards, showing that the learning efficacy decreases at most linearly with the rate of invalid rewards. The effectiveness of our approach is tested in a series of simulations reproducing the conditions of real experiments. In a first experiment, we show that a systematic improvement of the spelling rate is obtained for all subjects in the absence of initial calibration. In a second experiment, we consider the case of online recovery expected to follow an unforeseen impairment. Combined with a specific failure-detection algorithm, the spelling-error information (typically contained in a ``backspace'' hit) is shown to be useful for the policy gradient to adapt the P300 classifier to the new situation, provided the feedback is reliable enough (namely, a reliability greater than 70%).