Conference Papers, Year: 2018

Modulated Policy Hierarchies


Solving tasks with sparse rewards is a major challenge in reinforcement learning. While hierarchical controllers are an intuitive approach to this problem, current methods often require manual reward shaping, alternating training phases, or manually defined subtasks. We introduce modulated policy hierarchies (MPH), which can learn end-to-end to solve tasks from sparse rewards. To achieve this, we study different modulation signals and exploration strategies for hierarchical controllers. Specifically, we find that communicating via bit-vectors is more efficient than selecting one out of multiple skills, as it enables mixing between them. To facilitate exploration, MPH uses its different time scales for temporally extended intrinsic motivation at each level of the hierarchy. We evaluate MPH on the robotics tasks of pushing and sparse block stacking, where it outperforms recent baselines.
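To illustrate the central idea of the abstract, the sketch below contrasts bit-vector modulation with one-hot skill selection: a master policy emits a bit-vector on a slow time scale, and a worker policy conditions on the observation together with that bit-vector at every step. This is a minimal illustrative sketch, not the authors' implementation; the linear "policies", dimensions, update period, and placeholder environment are all assumptions made for brevity.

```python
# Minimal sketch (illustrative assumptions, not the authors' code) of a
# modulated policy hierarchy: the master emits a bit-vector that modulates
# the worker, instead of selecting a single skill one-hot.
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, ACT_DIM, NUM_BITS, MASTER_PERIOD = 8, 2, 4, 10  # assumed toy sizes

# Toy linear "policies"; the actual MPH uses trained neural networks.
W_master = rng.normal(size=(OBS_DIM, NUM_BITS))
W_worker = rng.normal(size=(OBS_DIM + NUM_BITS, ACT_DIM))

def master_policy(obs):
    """Sample a bit-vector modulation signal (independent Bernoulli bits).

    Unlike a one-hot skill selector, several bits can be active at once,
    which lets the worker mix between behaviors."""
    probs = 1.0 / (1.0 + np.exp(-(obs @ W_master)))
    return (rng.random(NUM_BITS) < probs).astype(np.float32)

def worker_policy(obs, bits):
    """Low-level action conditioned on the observation and the modulation bits."""
    return np.tanh(np.concatenate([obs, bits]) @ W_worker)

obs = rng.normal(size=OBS_DIM)
bits = np.zeros(NUM_BITS, dtype=np.float32)
for t in range(30):
    if t % MASTER_PERIOD == 0:          # master acts on a slower time scale
        bits = master_policy(obs)
    action = worker_policy(obs, bits)
    obs = rng.normal(size=OBS_DIM)      # placeholder environment transition
```

Because the bits are sampled independently, the master can activate several behaviors at once, which is the mixing property the abstract contrasts with selecting a single skill.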

Dates and versions

hal-01963580, version 1 (21-12-2018)



Alexander Pashevich, Danijar Hafner, James Davidson, Rahul R Sukthankar, Cordelia Schmid. Modulated Policy Hierarchies. Deep Reinforcement Learning Workshop at NeurIPS 2018, Dec 2018, Montreal, Canada. ⟨hal-01963580⟩