Journal article in Mathematics of Operations Research, 2021

Finite-Memory Strategies in POMDPs with Long-Run Average Objectives

Abstract

We study the problem of approximating optimal values in partially observable Markov decision processes (POMDPs) with long-run average objectives. POMDPs are a standard model for dynamic systems with probabilistic and nondeterministic behavior in uncertain environments. Under long-run average objectives, a reward is associated with every transition of the POMDP, and the payoff is the long-run average of the rewards along its executions. We establish both strategy-complexity and computational-complexity results. Our main result shows that finite-memory strategies suffice to approximate optimal values, and that the related decision problem is complete for the class of recursively enumerable problems.
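For concreteness, a minimal sketch of these notions in standard notation (the paper's exact conventions may differ): under a strategy $\sigma$, the long-run average payoff is commonly defined as

$v(\sigma) = \liminf_{n \to \infty} \mathbb{E}^{\sigma}\left[\frac{1}{n}\sum_{t=1}^{n} r(s_t, a_t)\right]$,

where $r(s_t, a_t)$ is the reward of the transition taken at step $t$ (a $\limsup$ variant also appears in the literature). A finite-memory strategy can then be formalized as a tuple $(M, m_0, \sigma_a, \sigma_u)$: a finite set of memory states $M$ with initial memory $m_0 \in M$, an action-selection function $\sigma_a \colon M \times \mathcal{O} \to \Delta(A)$ mapping the current memory and observation to a distribution over actions, and a memory-update function $\sigma_u \colon M \times \mathcal{O} \to M$ (memory updates may also be randomized), where $\mathcal{O}$ and $A$ denote the observation and action sets.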

Dates and versions

hal-02268862, version 1 (21-08-2019)


Cite

Krishnendu Chatterjee, Raimundo Saona, Bruno Ziliotto. Finite-Memory Strategies in POMDPs with Long-Run Average Objectives. Mathematics of Operations Research, in press. ⟨10.1287/moor.2020.1116⟩. ⟨hal-02268862⟩.