Numerical Approximations for Discounted Continuous Time Markov Decision Processes
Abstract
This paper deals with a continuous-time Markov decision process M, with Borel state and action spaces, under the total expected discounted cost optimality criterion. By suitably approximating an underlying probability measure by a measure with finite support, and by discretizing the action sets of the control model, we construct a finite state and action space Markov decision process that approximates M and can be solved explicitly. We derive bounds on the approximation error of the optimal discounted cost function; these bounds are expressed in terms of Wasserstein and Hausdorff distances. We illustrate the results with a numerical application to a queueing problem.
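As a rough illustration of the last step, the finite state and action approximation can be solved by standard value iteration for discounted finite MDPs. The sketch below is not the paper's construction: all names (cost, P, alpha) and the toy data are assumptions, and the discretization producing the finite model is taken as given.

```python
import numpy as np

def value_iteration(cost, P, alpha, tol=1e-8, max_iter=10_000):
    """Solve a finite discounted MDP (hypothetical illustration).

    cost  : array (n_states, n_actions), one-stage costs c(x, a)
    P     : array (n_actions, n_states, n_states), transition kernels P_a(x, y)
    alpha : discount factor in (0, 1)
    Returns the optimal value function and a greedy policy.
    """
    n_states, n_actions = cost.shape
    V = np.zeros(n_states)
    for _ in range(max_iter):
        # Q(x, a) = c(x, a) + alpha * sum_y P_a(x, y) V(y)
        Q = cost + alpha * np.einsum('axy,y->xa', P, V)
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = Q.argmin(axis=1)
    return V, policy

# Toy example: 3 states, 2 actions, random costs and transition kernels.
rng = np.random.default_rng(0)
cost = rng.uniform(0.0, 1.0, size=(3, 2))
P = rng.uniform(size=(2, 3, 3))
P /= P.sum(axis=2, keepdims=True)  # normalize rows into probability vectors
V_opt, pi_opt = value_iteration(cost, P, alpha=0.9)
print("optimal values:", V_opt)
print("greedy policy :", pi_opt)
```

This only shows how the finite approximating model can be solved explicitly; the paper's contribution lies in the construction of that model and in the Wasserstein and Hausdorff bounds on its approximation error.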