Report, Year: 2013

Reinforcement learning based design of sampling policies under cost constraints in Markov Random Fields

Abstract

Markov random fields (MRF) offer a powerful representation for reasoning about large sets of interacting random variables. A classical but difficult inference task is the evaluation of the most probable assignment of a variable given the values of some others (Maximum Posterior Marginal probability computation, MPM). A related and even more difficult problem is optimising the choice of the variables to observe (a sample) in order to maximise the MPM probabilities. In the field of spatial statistics, the design of sampling policies has been widely studied in the case of continuous variables, using tools from geostatistics. In the MRF case with discrete-valued variables, some heuristics have been proposed for the design problem, but there exists no universally accepted solution, in particular when considering adaptive policies as opposed to static ones. In this paper we formalise the problem of optimal adaptive sampling in an MRF as a finite-horizon Markov Decision Process (MDP) with a factored state space. A policy of this MDP is a non-stationary decision rule that associates a set of sampling locations to the set of past observations. Solving this MDP amounts to computing the optimal adaptive sampling policy according to a given quality criterion. Translating the initial optimisation problem into the MDP framework makes it possible to exploit the Reinforcement Learning (RL) paradigm, and we propose an original algorithm for its approximate resolution. This generic procedure, named Least Square Dynamic Programming (LSDP), combines a parameterised representation of the value of a policy, the construction of a batch of simulated trajectories of the MDP, and a backwards induction algorithm. It is not restricted to the optimal adaptive sampling problem: it can be used to solve any factored MDP under a finite horizon. LSDP is then specialised to solve the above-mentioned sampling problem. From an empirical comparison of LSDP with existing one-step-look-ahead sampling heuristics and with solutions provided by classical RL algorithms, the following conclusions can be drawn: (i) a naïve heuristic, consisting in sampling the sites where the marginals are most uncertain, is already an efficient sampling approach; (ii) LSDP outperforms all the classical RL approaches we have tested; (iii) LSDP outperforms the heuristic approach when sampling costs are not uniform over the set of variables or when sampling actions are constrained.
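As a rough illustration of the three ingredients the abstract attributes to LSDP (a parameterised representation of the value, a batch of simulated trajectories, and backwards induction), the following Python sketch fits one linear weight vector per decision stage by least squares over simulated transitions of a generic finite-horizon MDP. The names `step`, `phi` and `sample_state`, and the choices of linear features and uniformly random exploratory rollouts, are illustrative assumptions, not details taken from the report.

```python
import numpy as np

def lsdp(step, actions, phi, sample_state, horizon, n_traj, rng):
    """Minimal LSDP-style sketch for a finite-horizon MDP.

    Assumed (hypothetical) interfaces:
      step(s, a, rng) -> (s_next, reward)   generative model
      phi(s, a) -> 1-D feature vector       parameterised value representation
      sample_state(rng) -> initial state
    """
    d = phi(sample_state(rng), actions[0]).size
    weights = [np.zeros(d) for _ in range(horizon)]

    # 1) Build a batch of simulated trajectories, one transition list per stage.
    batch = [[] for _ in range(horizon)]
    for _ in range(n_traj):
        s = sample_state(rng)
        for t in range(horizon):
            a = actions[rng.integers(len(actions))]  # exploratory rollout
            s_next, r = step(s, a, rng)
            batch[t].append((s, a, r, s_next))
            s = s_next

    # 2) Backwards induction: stage t regresses, in the least-squares sense,
    #    the immediate reward plus the greedy value at stage t+1
    #    (zero beyond the horizon) onto the stage-t features.
    for t in reversed(range(horizon)):
        X, y = [], []
        for (s, a, r, s_next) in batch[t]:
            target = r
            if t + 1 < horizon:
                target += max(phi(s_next, b) @ weights[t + 1] for b in actions)
            X.append(phi(s, a))
            y.append(target)
        weights[t], *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)

    return weights
```

The greedy policy recovered from the fitted weights selects, at stage t, the action maximising `phi(s, a) @ weights[t]`. In the sampling application described in the abstract, the factored state would encode the set of past observations; that specialisation is beyond this sketch.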
Main file: BPS_CSDA_Rapport de Recherche_1 (691.05 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01005064, version 1 (06-06-2020)

Identifiers

  • HAL Id: hal-01005064, version 1
  • PRODINRA: 192798

Cite

Régis Sabbadin, Nathalie Dubois Peyrard, Mathieu Bonneau. Reinforcement learning based design of sampling policies under cost constraints in Markov Random Fields. 2013. ⟨hal-01005064⟩