Conference paper. Year: 2022

LOOK WHERE YOU LOOK! SALIENCY-GUIDED Q-NETWORKS FOR VISUAL RL TASKS

Abstract

Deep reinforcement learning policies, despite their outstanding efficiency in simulated visual control tasks, have shown disappointing ability to generalize across disturbances in the input training images. Changes in image statistics or distracting background elements are pitfalls that prevent generalization and real-world applicability of such control policies. We elaborate on the intuition that a good visual policy should be able to identify which pixels are important for its decision, and preserve this identification of important sources of information across images. This implies that training of a policy with a small generalization gap should focus on such important pixels and ignore the others. This leads to the introduction of saliency-guided Q-networks (SGQN), a generic method for visual reinforcement learning that is compatible with any value function learning method. SGQN vastly improves the generalization capability of Soft Actor-Critic agents and outperforms existing state-of-the-art methods on the DeepMind Control Generalization benchmark, setting a new reference in terms of training efficiency, generalization gap, and policy interpretability.
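To make the core intuition concrete, the sketch below shows one common way to obtain a pixel-level saliency map for a Q-network: take the gradient of the selected action's Q-value with respect to the input image, and use that map as an auxiliary training signal. This is only an illustrative example of the general idea described in the abstract, not the paper's actual architecture or loss; the names (SmallQNet, saliency_map, saliency_sparsity_loss) and the sparsity-style auxiliary term are hypothetical stand-ins.

```python
# Illustrative sketch (not the SGQN implementation): gradient-based saliency
# for an image Q-network, plus a hypothetical auxiliary term that rewards
# concentrating saliency on a small fraction of pixels.
import torch
import torch.nn as nn


class SmallQNet(nn.Module):
    """Tiny convolutional Q-network over 84x84 RGB observations (toy example)."""

    def __init__(self, n_actions: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32 * 21 * 21, n_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.conv(obs).flatten(1))


def saliency_map(qnet: nn.Module, obs: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
    """Gradient of the chosen action's Q-value w.r.t. the input pixels."""
    obs = obs.clone().requires_grad_(True)
    q = qnet(obs).gather(1, actions.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(q, obs, create_graph=True)
    return grads.abs().sum(dim=1, keepdim=True)  # shape (B, 1, H, W)


def saliency_sparsity_loss(sal: torch.Tensor, keep_ratio: float = 0.1) -> torch.Tensor:
    """Hypothetical auxiliary loss: high when saliency mass is spread over many pixels."""
    flat = sal.flatten(1)
    k = max(1, int(keep_ratio * flat.shape[1]))
    topk_mass = flat.topk(k, dim=1).values.sum(dim=1)
    return (1.0 - topk_mass / (flat.sum(dim=1) + 1e-8)).mean()


if __name__ == "__main__":
    qnet = SmallQNet()
    obs = torch.rand(8, 3, 84, 84)        # batch of image observations
    actions = torch.randint(0, 4, (8,))   # actions whose Q-values we explain
    sal = saliency_map(qnet, obs, actions)
    aux = saliency_sparsity_loss(sal)
    print(sal.shape, aux.item())
```

In this spirit, such a saliency map can be thresholded and reused as a self-supervision target or regularizer during value-function training; the precise formulation used by SGQN is given in the paper itself.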
Main file: SGQN_Hal_Arxiv.pdf (6.68 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03777742, version 1 (15-09-2022)

Identifiers

Cite

David Bertoin, Adil Zouitine, Mehdi Zouitine, Emmanuel Rachelson. LOOK WHERE YOU LOOK! SALIENCY-GUIDED Q-NETWORKS FOR VISUAL RL TASKS. Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS 2022), Nov 2022, New Orleans, United States. ⟨hal-03777742⟩
129 views
100 downloads
