Preprints, Working Papers, ... Year: 2022

A Supervised Formulation of Reinforcement Learning: with super linear convergence properties

Abstract

Deep reinforcement learning uses simulators as abstract oracles to interact with the environment. In continuous domains of multi-body robotic systems, differentiable simulators have recently been proposed but remain underutilized, even though we have the knowledge to make them produce richer information. This problem, when juxtaposed with the typically high computational cost of exploration-exploitation in high-dimensional state spaces, can quickly render reinforcement learning algorithms impractical. In this paper, we propose to combine learning and simulators such that the quality of both increases, while the need to exhaustively search the state space decreases. We propose to learn the value function and state-control trajectories from the locally optimal runs of a model-based trajectory optimizer. The learned value function, along with an estimate of the optimal state and control policies, is subsequently used in the trajectory optimizer: the value function estimate serves as a proxy for shortening the preview horizon, while the state and control approximations serve as a guide in policy search for the trajectory optimizer. The proposed approach demonstrates a better symbiotic relation, with superlinear convergence, between learning and simulators, which is needed for end-to-end learning of complex polyarticulated systems.
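To make the core idea concrete, here is a minimal sketch, not taken from the paper, of how a learned value function can truncate the preview horizon of a trajectory optimizer while a learned control trajectory warm-starts the search. The dynamics, costs, and the "learned" models (dynamics, running_cost, learned_value, rollout_cost, optimize) are toy stand-ins chosen for illustration only.

```python
# Sketch: short-horizon trajectory optimization with a learned terminal value.
# All models below are placeholders, not the authors' implementation.
import numpy as np

def dynamics(x, u, dt=0.1):
    """Toy single-integrator dynamics: x_{t+1} = x_t + dt * u_t."""
    return x + dt * u

def running_cost(x, u):
    """Quadratic state-tracking plus control-effort cost."""
    return 0.5 * (x @ x) + 0.05 * (u @ u)

def learned_value(x):
    """Stand-in for the learned value function V_theta(x) (assumed quadratic)."""
    return 2.0 * (x @ x)

def rollout_cost(x0, U):
    """Cost of a short-horizon rollout, with the learned value as terminal cost."""
    x, J = x0, 0.0
    for u in U:
        J += running_cost(x, u)
        x = dynamics(x, u)
    # The value estimate replaces the long tail of the horizon.
    return J + learned_value(x)

def optimize(x0, U_init, iters=200, lr=0.2, eps=1e-4):
    """Naive shooting with finite-difference gradients, warm-started by U_init
    (e.g. the control trajectory predicted by a learned policy)."""
    U = U_init.copy()
    for _ in range(iters):
        base = rollout_cost(x0, U)
        grad = np.zeros_like(U)
        for i in np.ndindex(U.shape):
            U_p = U.copy()
            U_p[i] += eps
            grad[i] = (rollout_cost(x0, U_p) - base) / eps
        U -= lr * grad
    return U

x0 = np.array([1.0, -0.5])
warm_start = np.zeros((5, 2))  # preview horizon of only 5 steps
U_star = optimize(x0, warm_start)
print("optimized short-horizon controls:\n", U_star)
```

In the loop the abstract describes, the locally optimal trajectories produced by such a solver would in turn serve as supervised targets for refitting the value function and the state/control approximations, so that learning and trajectory optimization improve each other.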
Main file: icra_2023.pdf (1.81 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03674092, version 1 (20-05-2022)
hal-03674092, version 2 (19-09-2022)

Identifiers

  • HAL Id: hal-03674092, version 2

Cite

Amit Parag, Nicolas Mansard. A Supervised Formulation of Reinforcement Learning: with super linear convergence properties. 2022. ⟨hal-03674092v2⟩
