Learning Compositional Neural Programs with Recursive Tree Search and Planning - HAL Open Archive
Conference paper Year: 2019

Learning Compositional Neural Programs with Recursive Tree Search and Planning

Thomas Pierrot
Guillaume Ligner
Scott Reed
Olivier Sigaud
Nicolas Perrin
Alexandre Laterre
David Kas
Karim Beguir

Abstract

We propose a novel reinforcement learning algorithm, AlphaNPI, that incorporates the strengths of Neural Programmer-Interpreters (NPI) and AlphaZero. NPI contributes structural biases in the form of modularity, hierarchy and recursion, which are helpful to reduce sample complexity, improve generalization and increase interpretability. AlphaZero contributes powerful neural network guided search algorithms, which we augment with recursion. AlphaNPI only assumes a hierarchical program specification with sparse rewards: 1 when the program execution satisfies the specification, and 0 otherwise. Using this specification, AlphaNPI is able to train NPI models effectively with RL for the first time, completely eliminating the need for strong supervision in the form of execution traces. The experiments show that AlphaNPI can sort as well as previous strongly supervised NPI variants. The AlphaNPI agent is also trained on a Tower of Hanoi puzzle with two disks and is shown to generalize to puzzles with an arbitrary number of disks.
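
To make the sparse-reward specification concrete, here is a minimal sketch (not the authors' code) of the idea described in the abstract: each program in the hierarchy is defined only by a post-condition, and the environment returns reward 1 when that post-condition holds at the end of execution, 0 otherwise. The environment name `ListSortEnv` and its `postcondition` method are illustrative assumptions, not part of the paper's released API.

```python
class ListSortEnv:
    """Toy sorting environment with a sparse, specification-based reward."""

    def __init__(self, values):
        self.initial = list(values)
        self.values = list(values)

    # A low-level atomic action that the agent (or a learned sub-program) can call.
    def swap(self, i, j):
        self.values[i], self.values[j] = self.values[j], self.values[i]

    # Post-condition for a top-level SORT program: the list must now be the
    # sorted permutation of the initial list.
    def postcondition(self):
        return self.values == sorted(self.initial)

    def reward(self):
        # Sparse reward: 1 only if the execution satisfies the specification, else 0.
        return 1.0 if self.postcondition() else 0.0


if __name__ == "__main__":
    env = ListSortEnv([3, 1, 2])
    env.swap(0, 1)        # [1, 3, 2]
    env.swap(1, 2)        # [1, 2, 3]
    print(env.reward())   # 1.0: the specification is satisfied
```

Under this kind of specification, the search never sees intermediate shaping signals; it is the tree search guided by the policy/value network that must discover action sequences (and sub-program calls) whose final state satisfies the post-condition.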

Dates and versions

hal-03080949, version 1 (17-12-2020)

Identifiers

Cite

Thomas Pierrot, Guillaume Ligner, Scott Reed, Olivier Sigaud, Nicolas Perrin, et al. Learning Compositional Neural Programs with Recursive Tree Search and Planning. Advances in Neural Information Processing Systems 32 (NeurIPS 2019), Dec 2019, Vancouver, Canada. ⟨hal-03080949⟩

Collections

ANR