Impulse control of piecewise deterministic processes
Abstract
Piecewise deterministic Markov processes (PDMPs) were introduced by M.H.A. Davis as a general class of stochastic hybrid models.
The path of a PDMP consists of deterministic trajectories punctuated by random jumps. These jumps occur either spontaneously, in a Poisson-like fashion, or deterministically when the process hits the boundary of the state space. We consider the infinite-horizon expected discounted impulse control problem, in which the controller instantaneously moves the process to a new point of the state space at specified times. There exists an extensive literature on the optimality equation associated with such control problems, but few works are devoted to the characterization of (quasi-)optimal strategies. Our objective is to propose an approach to explicitly construct such strategies, consisting of a sequence of intervention times and of locations of the process after each intervention. An attempt in this direction was proposed by O.L.V. Costa and M.H.A. Davis. Roughly speaking, one step of their approach consists in solving an optimal stopping problem, which makes this technique quite difficult to implement. Our method has the advantage of being constructive and is, loosely speaking, based on the iteration of a single-jump-or-intervention operator associated with an auxiliary PDMP. Moreover, it is important to emphasize that, unlike other works in the literature, we do not require knowledge of the optimal value function.
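For concreteness, a standard formulation of the discounted impulse control criterion can be sketched as follows; the notation ($\alpha$, $f$, $c$, the strategy $\mathcal{S}=(\tau_n,y_n)_{n\ge 1}$) is assumed for illustration and is not taken from this abstract:
\[
J(\mathcal{S},x) \;=\; \mathbb{E}_x\!\left[\int_0^{\infty} e^{-\alpha s} f(X_s)\,ds \;+\; \sum_{n\ge 1} e^{-\alpha \tau_n}\, c\bigl(X_{\tau_n^-},\,y_n\bigr)\right],
\qquad
\mathcal{V}(x) \;=\; \inf_{\mathcal{S}} J(\mathcal{S},x),
\]
where a strategy $\mathcal{S}$ specifies the intervention times $\tau_n$ and the points $y_n$ to which the process is moved, $f$ is a running cost, $c$ an intervention cost, and $\alpha>0$ the discount factor.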