An Iterative Algorithm for Solving Constrained Decentralized Markov Decision Processes - Archive ouverte HAL
Conference paper, 2006

An Iterative Algorithm for Solving Constrained Decentralized Markov Decision Processes

Abstract

Despite significant progress in extending Markov Decision Processes (MDPs) to cooperative multi-agent systems, developing approaches that can handle realistic problems remains a serious challenge. Existing approaches that solve Decentralized Markov Decision Processes (DEC-MDPs) can only handle relatively small problems without complex constraints on task execution. OC-DEC-MDP was introduced to deal with large DEC-MDPs under resource and temporal constraints. However, the algorithm proposed to solve this class of DEC-MDPs has some limitations: it overestimates the opportunity cost and restricts policy improvement to a single sweep (or iteration). In this paper, we propose to overcome these limits by first introducing the notion of Expected Opportunity Cost to better assess the influence of an agent's local decision on the other agents. We then describe an iterative version of the algorithm that incrementally improves the agents' policies, leading to higher-quality solutions in some settings. Experimental results are presented to support our claims.
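
The iterative, per-agent policy improvement idea described in the abstract can be illustrated with a small toy loop. The sketch below is a hypothetical illustration, not the paper's OC-DEC-MDP algorithm: each agent in turn re-optimizes a single local choice while the others' choices stay fixed, and a choice is scored by its local reward minus an expected opportunity cost, i.e. the expected loss it imposes on teammates. All agents, actions, rewards, and conflict costs below are made-up data.

```python
# Illustrative sketch only (assumed data and structure, not the authors' method).

LOCAL_REWARD = {  # reward each agent gets from each of its local actions
    "a1": {"early": 5.0, "late": 3.0},
    "a2": {"early": 6.0, "late": 4.0},
}

# CONFLICT[(i, x, j, y)] = expected loss inflicted on agent j (choosing y)
# when agent i chooses x; a crude stand-in for an expected opportunity cost.
CONFLICT = {
    ("a1", "early", "a2", "early"): 3.0,
    ("a2", "early", "a1", "early"): 3.0,
}

def expected_opportunity_cost(agent, action, joint):
    """Expected cost this local choice imposes on all other agents."""
    return sum(
        CONFLICT.get((agent, action, other, their_action), 0.0)
        for other, their_action in joint.items()
        if other != agent
    )

def best_response(agent, joint):
    """Pick the action maximizing local reward minus expected opportunity cost."""
    return max(
        LOCAL_REWARD[agent],
        key=lambda act: LOCAL_REWARD[agent][act]
        - expected_opportunity_cost(agent, act, joint),
    )

def iterative_improvement(joint, max_sweeps=20):
    """Sweep over the agents repeatedly until no agent changes its choice."""
    for _ in range(max_sweeps):
        changed = False
        for agent in list(joint):
            new_action = best_response(agent, joint)
            if new_action != joint[agent]:
                joint[agent] = new_action
                changed = True
        if not changed:
            break
    return joint

print(iterative_improvement({"a1": "early", "a2": "early"}))
# -> {'a1': 'late', 'a2': 'early'}: a1 yields the contested slot because the
#    expected opportunity cost of clashing with a2 (3.0) leaves it worse off
#    (5.0 - 3.0 = 2.0) than simply choosing 'late' (3.0).
```

The point of the sketch is only to show how an opportunity-cost term couples otherwise independent local decisions, and why repeated sweeps (rather than a single one) can keep improving the joint policy.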
Main file
AAAI06.pdf (178.37 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01344436, version 1 (11-07-2016)

Identifiers

  • HAL Id: hal-01344436, version 1

Cite

Aurélie Beynier, Abdel-Illah Mouaddib. An Iterative Algorithm for Solving Constrained Decentralized Markov Decision Processes. The Twenty-First National Conference on Artificial Intelligence, 2006, Boston, United States. ⟨hal-01344436⟩