Conference Paper Year: 2013

Markov Decision Processes with Functional Rewards

Olivier Spanjaard
Paul Weng

Abstract

Markov decision processes (MDPs) have become one of the standard models for decision-theoretic planning under uncertainty. In the standard model, rewards are assumed to be additive numerical scalars. In this paper, we propose a generalization of this model that allows rewards to be functions. The value of a history is computed recursively by composing its reward functions. We show that several variants of MDPs presented in the literature can be instantiated in this setting. We then identify sufficient conditions on these reward functions for dynamic programming to be valid. To demonstrate the potential of our framework, we conclude the paper with several illustrative examples.
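The central idea of the abstract, computing the value of a history by composing reward functions, can be pictured with a minimal Python sketch. This is an illustrative reading rather than the paper's formalism: the composition order, the terminal value, and the particular reward functions below are assumptions chosen for the example.

```python
from typing import Callable, Dict, List, Tuple

State = str
Action = str
# A functional reward maps the value of the remaining history to a value.
RewardFn = Callable[[float], float]

def history_value(history: List[Tuple[State, Action]],
                  reward: Dict[Tuple[State, Action], RewardFn],
                  terminal_value: float = 0.0) -> float:
    """Value of a history, obtained by composing its reward functions
    from the last step back to the first (an assumed composition order)."""
    value = terminal_value
    for state, action in reversed(history):
        value = reward[(state, action)](value)
    return value

# Standard additive rewards are recovered when each reward function
# is a translation v -> v + c ...
additive = {
    ("s0", "a"): lambda v: v + 1.0,
    ("s1", "b"): lambda v: v + 2.0,
}
# ... and a discounted variant when each is an affine map v -> c + gamma * v
# (illustrative instantiations, not taken from the paper).
discounted = {
    ("s0", "a"): lambda v: 1.0 + 0.9 * v,
    ("s1", "b"): lambda v: 2.0 + 0.9 * v,
}

h = [("s0", "a"), ("s1", "b")]
print(history_value(h, additive))    # 3.0
print(history_value(h, discounted))  # 1.0 + 0.9 * 2.0 = 2.8
```

Under such a reading, the sufficient conditions the paper identifies for dynamic programming would bear on properties of these reward functions (for instance, monotonicity); see the paper itself for the precise statement.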
Main file
miwai2013-1.pdf (321.2 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01216435, version 1 (30-06-2017)

Identifiers

Cite

Olivier Spanjaard, Paul Weng. Markov Decision Processes with Functional Rewards. 7th Multi-Disciplinary International Workshop on Artificial Intelligence, MIWAI 2013, Dec 2013, Krabi, Thailand. pp.269-280, ⟨10.1007/978-3-642-44949-9_25⟩. ⟨hal-01216435⟩
99 Views
156 Downloads
