Conference paper, Year: 2011

On Minimizing Ordered Weighted Regrets in Multiobjective Markov Decision Processes

Wlodzimierz Ogryczak
Patrice Perny
Paul Weng

Abstract

In this paper, we propose an exact solution method to generate fair policies in Multiobjective Markov Decision Processes (MMDPs). MMDPs consider n immediate reward functions, representing either individual payoffs in a multiagent problem or rewards with respect to different objectives. In this context, we focus on determining a policy that fairly shares regrets among agents or objectives, the regret on each dimension being defined as the opportunity loss with respect to the optimal expected reward. To this end, we propose to minimize the ordered weighted average of regrets (OWR). The OWR criterion extends minimax regret, relaxing strict egalitarianism in favor of a milder notion of fairness. After showing that OWR-optimality is state-dependent and that the Bellman principle does not hold for OWR-optimal policies, we propose a linear programming reformulation of the problem. We also provide experimental results showing the efficiency of our approach.
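To give a concrete, simplified picture of the criterion (not of the paper's linear programming method), the short Python sketch below evaluates the ordered weighted regret of a fixed policy from its vector of expected rewards per objective. The function and variable names are illustrative and not taken from the paper; the decreasing weights place the largest weight on the largest regret, which is what encodes the fairness requirement, and with weights (1, 0, ..., 0) the criterion reduces to minimax regret.

import numpy as np

def owr(value, ideal, weights):
    """Ordered weighted regret of a policy's expected-reward vector.

    value   -- expected reward of the evaluated policy on each objective
    ideal   -- best expected reward achievable on each objective separately
    weights -- non-negative, non-increasing weights (largest weight applied
               to the largest regret)
    """
    regrets = np.asarray(ideal) - np.asarray(value)   # opportunity losses
    ordered = np.sort(regrets)[::-1]                  # largest regret first
    return float(np.dot(weights, ordered))

# Toy example with two objectives: policy A is balanced, policy B is lopsided.
ideal = [10.0, 10.0]
print(owr([8.0, 8.0], ideal, weights=[0.8, 0.2]))    # 2.0
print(owr([10.0, 5.0], ideal, weights=[0.8, 0.2]))   # 4.0 -> balanced policy preferred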

Dates and versions

hal-01285802 , version 1 (09-03-2016)

Identifiers

HAL Id: hal-01285802
DOI: 10.1007/978-3-642-24873-3_15

Cite

Wlodzimierz Ogryczak, Patrice Perny, Paul Weng. On Minimizing Ordered Weighted Regrets in Multiobjective Markov Decision Processes. 2nd International Conference on Algorithmic Decision Theory (ADT'11), Oct 2011, Piscataway, NJ, United States. pp.190-204, ⟨10.1007/978-3-642-24873-3_15⟩. ⟨hal-01285802⟩