Bellman equation and viscosity solutions for mean-field stochastic control problem
Abstract
We consider the stochastic optimal control problem of McKean-Vlasov stochastic differential equations. Using feedback controls, we reformulate the problem as a deterministic control problem with the marginal distribution as the sole controlled state variable, and prove that the dynamic programming principle holds in its general form. Then, relying on the notion of differentiability with respect to probability measures recently introduced by P.L. Lions in [30], and a special Itô formula for flows of probability measures, we derive the (dynamic programming) Bellman equation for the mean-field stochastic control problem. This Bellman equation in the Wasserstein space of probability measures reduces to the classical finite-dimensional partial differential equation in the case of no mean-field interaction. We prove a verification theorem in our McKean-Vlasov framework, and give explicit solutions to the Bellman equation for the linear-quadratic mean-field control problem, with applications to mean-variance portfolio selection and a systemic risk model. Finally, we consider a notion of lifted viscosity solutions for the Bellman equation, and show the viscosity property and uniqueness of the value function of the McKean-Vlasov control problem.
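For orientation, here is a minimal sketch of the formulation behind the abstract, under standard assumptions; the coefficients $b, \sigma$, the running and terminal costs $f, g$, the control set $A$, and the notation $\mathcal{P}_2(\mathbb{R}^d)$ for the Wasserstein space are not fixed in the abstract and are assumed here for illustration. The controlled McKean-Vlasov dynamics take the form
\[
\mathrm{d}X_s \;=\; b\big(X_s, \mathbb{P}_{X_s}, \alpha_s\big)\,\mathrm{d}s \;+\; \sigma\big(X_s, \mathbb{P}_{X_s}, \alpha_s\big)\,\mathrm{d}W_s ,
\]
where $\mathbb{P}_{X_s}$ denotes the marginal law of $X_s$ and $W$ a Brownian motion. With feedback controls $\alpha \colon \mathbb{R}^d \to A$, the value function $v(t,\mu)$ on $[0,T] \times \mathcal{P}_2(\mathbb{R}^d)$ is then expected to satisfy a Bellman equation of the type
\[
\partial_t v(t,\mu) \;+\; \sup_{\alpha} \int_{\mathbb{R}^d} \Big[ f\big(x,\mu,\alpha(x)\big) \;+\; \partial_\mu v(t,\mu)(x) \cdot b\big(x,\mu,\alpha(x)\big) \;+\; \tfrac{1}{2}\operatorname{tr}\Big( \sigma\sigma^{\top}\big(x,\mu,\alpha(x)\big)\, \partial_x \partial_\mu v(t,\mu)(x) \Big) \Big]\, \mu(\mathrm{d}x) \;=\; 0 ,
\]
\[
v(T,\mu) \;=\; \int_{\mathbb{R}^d} g(x,\mu)\,\mu(\mathrm{d}x) ,
\]
where $\partial_\mu v$ denotes the Lions derivative with respect to the measure argument. When $b$, $\sigma$, $f$, $g$ do not depend on $\mu$, the integral against $\mu$ factorizes pointwise and the equation collapses to the classical finite-dimensional Hamilton-Jacobi-Bellman PDE, consistent with the reduction claimed in the abstract.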