On finding compromise solutions in multiobjective Markov decision processes
Abstract
A Markov Decision Process (MDP) is a general model for solving planning problems under uncertainty. It has been extended to multiobjective MDPs to address multicriteria or multiagent problems in which the value of a decision must be evaluated according to several, possibly conflicting, viewpoints. While most studies concentrate on determining the set of Pareto-optimal policies, we focus here on a more specialized problem: the direct determination of policies achieving well-balanced tradeoffs. We first explain why this problem cannot simply be solved by optimizing a linear combination of criteria. This leads us to use an alternative optimality concept that formalizes the notion of a best compromise solution, i.e., a policy whose expected-utility vector is as close as possible (w.r.t. the Tchebycheff norm) to a reference point. We show that this notion of optimality depends on the initial state. Moreover, the best compromise policy cannot be found by a direct adaptation of value iteration. Furthermore, we observe that in some (if not most) situations the optimal solution can only be obtained with a randomized policy. To overcome these problems, we propose a solution method based on linear programming and report experimental results.
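
To make the best compromise criterion concrete, it can be stated as a min-max problem; the notation below is ours, given only as an illustration. Writing z* for the reference point, w_i for positive weights, and V_i^pi(s_0) for the expected (discounted) value of policy pi on the i-th criterion from initial state s_0, a best compromise policy is

```latex
\pi^{*} \;\in\; \operatorname*{arg\,min}_{\pi}\;
  \max_{1 \le i \le k}\; w_i \,\bigl| z^{*}_{i} - V^{\pi}_{i}(s_{0}) \bigr|
```

The explicit dependence on s_0 is precisely why this notion of optimality is state-dependent, in contrast with the Bellman-style optimality of single-objective MDPs.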
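Since the paper's exact linear program is not reproduced here, the following is only a minimal sketch of one plausible formulation, assuming a discounted multiobjective MDP and the standard occupation-measure LP: an auxiliary variable t bounds the weighted Tchebycheff distance between the achieved value vector and the reference point, and a (possibly randomized) policy is read off the optimal occupation measure. All names (best_compromise_policy, z_ref, weights) are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def best_compromise_policy(P, R, beta, z_ref, weights, gamma=0.95):
    """Sketch (not the authors' exact method): occupation-measure LP
    minimizing the weighted Tchebycheff distance to a reference point.

    P: (S, A, S) transition probabilities; R: (k, S, A) reward vectors;
    beta: (S,) initial-state distribution; z_ref: (k,) reference point;
    weights: (k,) positive Tchebycheff weights.
    """
    S, A, _ = P.shape
    k = R.shape[0]
    n = S * A  # occupation-measure variables x(s, a)

    # Decision vector: [x(0,0), ..., x(S-1,A-1), t]; objective: minimize t.
    c = np.zeros(n + 1)
    c[-1] = 1.0

    # Flow conservation for the discounted occupation measure:
    #   sum_a x(s',a) - gamma * sum_{s,a} P(s'|s,a) x(s,a) = beta(s')
    A_eq = np.zeros((S, n + 1))
    for sp in range(S):
        for s in range(S):
            for a in range(A):
                A_eq[sp, s * A + a] = (1.0 if s == sp else 0.0) - gamma * P[s, a, sp]
    b_eq = beta

    # Tchebycheff constraints: t >= w_i * |z_i - V_i|, where
    # V_i = sum_{s,a} R[i,s,a] x(s,a); two inequalities per objective.
    A_ub = np.zeros((2 * k, n + 1))
    b_ub = np.zeros(2 * k)
    for i in range(k):
        r = R[i].reshape(-1)
        A_ub[2 * i, :n] = -weights[i] * r      # w_i (z_i - V_i) <= t
        A_ub[2 * i, -1] = -1.0
        b_ub[2 * i] = -weights[i] * z_ref[i]
        A_ub[2 * i + 1, :n] = weights[i] * r   # w_i (V_i - z_i) <= t
        A_ub[2 * i + 1, -1] = -1.0
        b_ub[2 * i + 1] = weights[i] * z_ref[i]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1), method="highs")
    x = res.x[:n].reshape(S, A)

    # Recover the policy: pi(a|s) = x(s,a) / sum_a' x(s,a');
    # uniform on states the occupation measure never reaches.
    totals = x.sum(axis=1, keepdims=True)
    pi = np.where(totals > 1e-12, x / np.maximum(totals, 1e-12), 1.0 / A)
    return pi, res.x[-1]
```

The need for randomization mentioned in the abstract shows up directly in this construction: nothing forces the optimal occupation measure to concentrate each state's mass on a single action, so the recovered policy pi(a|s) may genuinely mix actions.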