Continuous Learning of Action and State Spaces (CLASS)
Abstract
We present a novel approach to state space discretization for constructivist and reinforcement learning. Constructivist learning and reinforcement learning often operate on a predefined set of states and transitions (a state space). AI researchers design algorithms to reach particular goal states in this state space (visualized, for example, as goal cells that a robot should reach in a grid). When the size and dimensionality of the state space increase, however, finding goal states becomes intractable. It is nonetheless assumed that these algorithms can have useful applications in the physical world, provided that there is a way to construct a discrete state space of reasonable size and dimensionality. Yet the manner in which the state space is discretized is the source of many problems for both constructivist and reinforcement learning approaches. These problems fall roughly into two categories: (1) non-generality, which arises from wiring too much domain information into the solution, and (2) non-scalability to useful domains with high-dimensional state spaces, which require massive storage to represent the state space (for example, as Q-tables). A further limitation is that high-dimensional state spaces require a massive number of learning trials. We present a new approach that builds upon ideas from place cells and cognitive maps.
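As a rough illustration of the scalability problem (the bin, dimension, and action counts below are illustrative assumptions, not figures from this work): uniformly discretizing each of d continuous dimensions into k bins yields k^d states, so a dense Q-table must hold k^d × |A| values and grows exponentially with d.

```python
# Hypothetical sketch: how a dense Q-table grows with state-space dimensionality
# when each of n_dims dimensions is uniformly discretized into k_bins bins.
# The parameter values are illustrative assumptions only.

def q_table_entries(k_bins: int, n_dims: int, n_actions: int) -> int:
    """Number of entries in a dense Q-table over a k^d discretized state space."""
    return (k_bins ** n_dims) * n_actions

if __name__ == "__main__":
    for d in (2, 4, 6, 8):
        entries = q_table_entries(k_bins=10, n_dims=d, n_actions=4)
        print(f"d={d}: {entries:,} Q-values")  # grows as 4 * 10**d
```

With 10 bins per dimension and 4 actions, the table already exceeds 10^8 entries at 8 dimensions, which is why tabular methods become impractical without a more compact, learned discretization of the state space.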