Sequential Decision-Making under Non-stationary Environments via Sequential Change-point Detection
Abstract
Reinforcement Learning (RL) has mainly focused on computing an optimal policy for an agent acting in a stationary environment. However, in many real-world decision problems the stationarity assumption does not hold. A non-stationary environment can be viewed as a set of contexts (also called modes or modules), where a context corresponds to one possible stationary dynamics of the environment. While most approaches assume that the number of modes is known, an RL method, Reinforcement Learning with Context Detection (RLCD), has recently been proposed to learn an a priori unknown set of contexts and to detect context changes. In this paper, we propose a new approach by adapting tools developed in statistics, more precisely in sequential analysis, for detecting an environmental change. Our approach is thus better theoretically grounded and requires fewer parameters than RLCD. We also show that our parameters are easier to interpret and therefore easier to tune. Finally, we show experimentally that our approach outperforms current methods on several application problems.
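The abstract does not spell out which sequential change-point statistic is used; a classical tool from sequential analysis for this purpose is a CUSUM-style test. The Python sketch below is purely illustrative and not the paper's algorithm: it shows how such a test could flag a shift in the distribution of some monitored signal (for instance, the prediction errors of the model learned for the current context, which is an assumption here). Function names, parameters, and the synthetic data are hypothetical.

```python
import numpy as np

def cusum_change_detector(signal, drift=0.0, threshold=5.0):
    """Two-sided CUSUM test (illustrative sketch, not the paper's method).

    Flags the first index where the running mean of `signal` appears to
    shift; `signal` could be a stream of one-step prediction errors of
    the current context model (an assumption, not specified in the paper).
    """
    g_pos, g_neg = 0.0, 0.0
    mean_est = 0.0
    for t, x in enumerate(signal):
        # Running estimate of the pre-change mean.
        mean_est += (x - mean_est) / (t + 1)
        # Cumulative sums of positive / negative deviations from the mean.
        g_pos = max(0.0, g_pos + x - mean_est - drift)
        g_neg = max(0.0, g_neg - x + mean_est - drift)
        if g_pos > threshold or g_neg > threshold:
            return t  # change point detected at step t
    return None  # no change detected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "prediction error" stream whose mean shifts at step 200,
    # standing in for a context change in a non-stationary environment.
    stream = np.concatenate([rng.normal(0.0, 1.0, 200),
                             rng.normal(2.0, 1.0, 200)])
    print("change detected at step:", cusum_change_detector(stream))
```

Only two parameters (`drift` and `threshold`) govern the detector's sensitivity, which illustrates the kind of interpretable, low-parameter tuning the abstract contrasts with RLCD.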