Conference paper. Year: 2020

How to Play in Infinite MDPs

Abstract

Markov decision processes (MDPs) are a standard model for dynamic systems that exhibit both stochastic and nondeterministic behavior. For MDPs with finite state space it is known that for a wide range of objectives there exist optimal strategies that are memoryless and deterministic. In contrast, if the state space is infinite, optimal strategies may not exist, and optimal or ε-optimal strategies may require (possibly infinite) memory. In this paper we consider qualitative objectives: reachability, safety, (co-)Büchi, and other parity objectives. We aim at giving an introduction to a collection of techniques that allow for the construction of strategies with little or no memory in countably infinite MDPs.
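
To make the abstract's notions concrete, the sketch below (Python, not taken from the paper) builds a small countably infinite MDP whose states are the non-negative integers, fixes a memoryless deterministic strategy, and estimates the value of the reachability objective "eventually reach state 0" by simulation. The transition probabilities, the strategy, and the finite simulation horizon are all illustrative assumptions, not constructions from the paper.

import random

# Hypothetical example (not from the paper): a countably infinite MDP whose
# states are the non-negative integers 0, 1, 2, ... with the reachability
# objective "eventually visit state 0". State 0 is absorbing (the target).
# In every state n > 0 two actions are available:
#   'down-biased': go to n-1 with probability 0.7, to n+1 with probability 0.3
#   'fair'       : go to n-1 or n+1 with probability 0.5 each
def step(state, action):
    if state == 0:                      # target reached, stay there
        return 0
    p_down = 0.7 if action == 'down-biased' else 0.5
    return state - 1 if random.random() < p_down else state + 1

# A memoryless deterministic (MD) strategy: the chosen action depends only on
# the current state, never on the history of the play.
def md_strategy(state):
    return 'down-biased'

def estimate_reach_probability(strategy, start, runs=10_000, horizon=1_000):
    # Monte-Carlo estimate of the probability of reaching state 0 under the
    # given strategy. The finite horizon under-approximates the true
    # infinite-horizon reachability value; it is only a simulation convenience.
    hits = 0
    for _ in range(runs):
        state = start
        for _ in range(horizon):
            if state == 0:
                hits += 1
                break
            state = step(state, strategy(state))
    return hits / runs

if __name__ == "__main__":
    print(estimate_reach_probability(md_strategy, start=5))

For this particular chain the down-biased walk reaches 0 with probability 1 from every start state, so the estimate should be close to 1; switching the strategy to always play 'fair' illustrates how the achievable value depends on the (memoryless) choice of actions.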
Main file
LIPIcs-ICALP-2020-3.pdf (559.71 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-03064669, version 1 (29-12-2020)

Identifiers

Cite

Stefan Kiefer, Richard Mayr, Mahsa Shirmohammadi, Patrick Totzke, Dominik Wojtczak. How to Play in Infinite MDPs. 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020), Jul 2020, Saarbrücken, Germany. pp.3:1--3:18, ⟨10.4230/LIPIcs.ICALP.2020.3⟩. ⟨hal-03064669⟩
