Ordinal Decision Models for Markov Decision Processes - Archive ouverte HAL
Conference paper, 2012

Ordinal Decision Models for Markov Decision Processes

Paul Weng
  • Role: Author
  • PersonId: 952563

Abstract

Setting the values of rewards in Markov decision processes (MDPs) may be a difficult task. In this paper, we consider two ordinal decision models for MDPs where only an order over rewards is known. The first one, recently proposed for MDPs [23], defines preferences with respect to a reference point. The second model, which can be viewed as the dual approach of the first one, is based on quantiles. Building on the first decision model, we give a new interpretation of rewards in standard MDPs, which sheds some interesting light on the preference system used in standard MDPs. The second model, based on quantile optimization, is a new approach in MDPs with ordinal rewards. Although quantile-based optimality is state-dependent, we prove that an optimal stationary deterministic policy exists for a given initial state. Finally, we propose solution methods based on linear programming for optimizing quantiles.
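As a loose illustration of the quantile-based criterion mentioned in the abstract, the sketch below evaluates a fixed stationary deterministic policy in a toy MDP with ordinal rewards by estimating the τ-quantile of the reward levels it collects from a given initial state. All names, numbers, and the exact criterion are illustrative assumptions; the paper's actual formulation relies on linear programming, which is not reproduced here.

```python
import numpy as np

# Hypothetical toy MDP: 3 states, 2 actions, ordinal rewards drawn from a
# totally ordered set of levels {0 < 1 < 2}.  All numbers are made up for
# illustration only.
n_states, n_actions = 3, 2
rng = np.random.default_rng(0)

# P[s, a, s'] : transition probabilities; R[s, a] : ordinal reward level index.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.integers(0, 3, size=(n_states, n_actions))

def quantile_value(policy, s0, tau=0.5, horizon=50, n_runs=2000):
    """Empirical tau-quantile of the ordinal reward levels collected along
    trajectories of a fixed stationary deterministic policy started in s0.
    This is one plausible reading of quantile-based evaluation, not
    necessarily the exact criterion optimized in the paper."""
    levels = []
    for _ in range(n_runs):
        s = s0
        for _ in range(horizon):
            a = policy[s]
            levels.append(R[s, a])
            s = rng.choice(n_states, p=P[s, a])
    # On an ordinal scale, the tau-quantile is the smallest level whose
    # cumulative frequency reaches tau (no interpolation between levels).
    levels = np.sort(np.asarray(levels))
    return levels[int(np.ceil(tau * len(levels))) - 1]

policy = np.array([0, 1, 0])          # one arbitrary deterministic policy
print(quantile_value(policy, s0=0))   # median reward level from state 0
```

Note that, consistently with the state-dependence discussed in the abstract, the estimate is computed for a fixed initial state s0.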
No file deposited

Dates and versions

hal-01273056, version 1 (11-02-2016)

Identifiers

Cite

Paul Weng. Ordinal Decision Models for Markov Decision Processes. European Conference on Artificial Intelligence, Aug 2012, Montpellier, France. pp.828-833, ⟨10.3233/978-1-61499-098-7-828⟩. ⟨hal-01273056⟩