Conference paper, Year: 2024

Neuro-Symbolic Learning of Lifted Action Models from Visual Traces

Kai Xi
  • Role: Author
  • PersonId: 1378927
Stephen Gould
  • Role: Author
  • PersonId: 1378928
Sylvie Thiébaux
  • Role: Author

Abstract

Model-based planners rely on action models to describe available actions in terms of their preconditions and effects. Yet, manually encoding such models is challenging, especially in complex domains. Numerous methods have been proposed to learn action models from examples of plan execution traces. However, high-level information, such as state labels within traces, is often unavailable and needs to be inferred indirectly from raw observations. In this paper, we aim to learn lifted action models from visual traces — sequences of image-action pairs depicting discrete successive trace steps. We present ROSAME, a differentiable neuRO-Symbolic Action Model lEarner that infers action models from traces consisting of probabilistic state predictions and actions. By combining ROSAME with a deep learning computer vision model, we create an end-to-end framework that jointly learns state predictions from images and infers symbolic action models. Experimental results demonstrate that our method succeeds in both tasks, using different visual state representations, with the learned action models often matching or even surpassing those created by humans.
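The abstract describes an end-to-end pipeline in which a vision model predicts a probabilistic symbolic state for each image and a differentiable action-model layer is trained jointly on the resulting state-action transitions. Below is a minimal, hypothetical PyTorch sketch of that general idea, not the authors' ROSAME implementation: it uses a propositional (grounded) state rather than a lifted one, and all names (StatePredictor, DifferentiableActionModel, N_PREDICATES, and so on) are illustrative assumptions.

```python
# Hypothetical sketch of joint vision + action-model learning (not the paper's code).
import torch
import torch.nn as nn

N_PREDICATES = 16   # assumed size of the propositional state
N_ACTIONS = 4       # assumed number of action labels

class StatePredictor(nn.Module):
    """Maps an image to independent probabilities, one per predicate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, N_PREDICATES),
        )

    def forward(self, img):
        return torch.sigmoid(self.net(img))  # probabilistic state prediction

class DifferentiableActionModel(nn.Module):
    """Learnable precondition / add-effect / delete-effect masks per action."""
    def __init__(self):
        super().__init__()
        self.pre = nn.Parameter(torch.zeros(N_ACTIONS, N_PREDICATES))
        self.add = nn.Parameter(torch.zeros(N_ACTIONS, N_PREDICATES))
        self.dele = nn.Parameter(torch.zeros(N_ACTIONS, N_PREDICATES))

    def forward(self, state, action_idx):
        pre = torch.sigmoid(self.pre[action_idx])
        add = torch.sigmoid(self.add[action_idx])
        dele = torch.sigmoid(self.dele[action_idx])
        # Soft penalty when a learned precondition does not hold in the state.
        precond_loss = (pre * (1.0 - state)).sum(dim=-1).mean()
        # Soft effect application: add effects set predicates, delete effects clear them.
        next_state = state * (1.0 - dele) + (1.0 - state) * add
        return next_state, precond_loss

# Joint training on placeholder (image_t, action, image_t+1) transitions.
vision = StatePredictor()
model = DifferentiableActionModel()
opt = torch.optim.Adam(list(vision.parameters()) + list(model.parameters()), lr=1e-3)

img_t = torch.rand(8, 3, 64, 64)     # placeholder "before" images
img_t1 = torch.rand(8, 3, 64, 64)    # placeholder "after" images
actions = torch.randint(0, N_ACTIONS, (8,))

for _ in range(10):
    s_t = vision(img_t)
    s_t1 = vision(img_t1)
    pred_t1, precond_loss = model(s_t, actions)
    effect_loss = nn.functional.mse_loss(pred_t1, s_t1)
    loss = effect_loss + precond_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The soft effect update and precondition penalty here merely stand in for whatever differentiable semantics ROSAME actually uses; the point is only that gradients from the transition loss flow back into both the action-model parameters and the image encoder, so state prediction and action-model learning are trained jointly.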
Main file

icaps24-rosame.pdf (3.3 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04561851, version 1 (14-07-2024)

License

Public domain

Identifiers

  • HAL Id: hal-04561851, version 1

Cite

Kai Xi, Stephen Gould, Sylvie Thiébaux. Neuro-Symbolic Learning of Lifted Action Models from Visual Traces. International Conference on Automated Planning and Scheduling (ICAPS-24), Jun 2024, Banff, Canada. pp.653-662. ⟨hal-04561851⟩
160 Views
26 Downloads
