Deterministic state constrained optimal control problems without controllability assumptions
Abstract
In the present paper, we consider nonlinear optimal control problems with constraints on the state of the system. We are interested in characterizing the value function without any controllability assumption. In the unconstrained case, the value function can be characterized by means of a Hamilton-Jacobi-Bellman (HJB) equation, which expresses the behavior of the value function along the trajectories arriving at or starting from any position $x$. In the constrained case, when no controllability assumption is made, the HJB equation may have several solutions. Our first result identifies the precise information that should be added to the HJB equation in order to obtain a characterization of the value function. This result is very general and holds even when the dynamics are not continuous and the set of state constraints is not smooth. We also establish stability results for relaxed or penalized control problems.
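For orientation, the following is a minimal sketch of the unconstrained characterization alluded to above, written for a finite-horizon problem with generic dynamics $f$, running cost $\ell$, final cost $\varphi$, and control set $A$; these symbols are placeholders and need not match the notation used in the paper. In this setting the value function is the unique (viscosity) solution of
\[
  -\partial_t v(t,x) + H\bigl(x, \nabla_x v(t,x)\bigr) = 0
  \quad \text{on } (0,T)\times\mathbb{R}^d,
  \qquad v(T,x) = \varphi(x),
\]
\[
  \text{with Hamiltonian} \quad
  H(x,p) := \sup_{a \in A}\bigl\{ -f(x,a)\cdot p - \ell(x,a) \bigr\}.
\]
In the state-constrained case without controllability, uniqueness for this equation alone may fail, which is precisely the gap the additional information identified in the paper is meant to close.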