Viscosity solutions of the Bellman equation for perturbed optimal control problems with exit times
In a series of papers, we presented new theorems characterizing the value function in optimal control as the unique bounded-from-below viscosity solution of the corresponding Bellman equation that satisfies appropriate side conditions. Instead of the usual assumption that the instantaneous costs are uniformly positive, our results assumed that every trajectory satisfying a certain integral condition must asymptotically approach the target. In this note, we study perturbed exit time problems with the property that every trajectory satisfying the integral condition must remain in a bounded set. This asymptotic property is weaker, since it allows bounded oscillating trajectories and attractors other than the target. We show that, under this weaker asymptotic condition, the value function is still the unique bounded-from-below solution of the corresponding Bellman equation that vanishes on the target. Our theorem applies to problems that are not tractable by previously known results. The significance of this work is twofold: (i) applied control abounds with problems whose dynamics are known only up to a margin of error, which can be represented by perturbations, and (ii) our theorem implies the convergence of numerical methods that approximate value functions for problems satisfying our relaxed hypotheses.
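For orientation, the standard exit-time formulation underlying such results can be sketched as follows; the notation here (dynamics f, running cost l, target set, control set A) is our own generic choice and is not taken from the paper:

```latex
% Exit-time optimal control: dynamics \dot{y} = f(y,\alpha), running cost l,
% target set \mathcal{T}. The value function is the minimal accumulated cost
% up to the first time the trajectory reaches the target:
v(x) \;=\; \inf_{\alpha}\, \int_0^{t_x(\alpha)} l\bigl(y_x(s,\alpha),\,\alpha(s)\bigr)\, ds,
\qquad
t_x(\alpha) := \inf\{\, t \ge 0 : y_x(t,\alpha) \in \mathcal{T} \,\}.
% The associated Bellman (Hamilton-Jacobi) equation, with the side condition
% v \equiv 0 on \mathcal{T}, is
\sup_{a \in A} \bigl\{ -\nabla v(x) \cdot f(x,a) - l(x,a) \bigr\} \;=\; 0
\quad \text{outside } \mathcal{T}.
```

Uniqueness results of the kind described in the abstract single out v among all bounded-from-below viscosity solutions of this equation that vanish on the target.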
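The convergence claim in (ii) refers to numerical schemes for approximating value functions. As a minimal sketch of one such scheme, here is a semi-Lagrangian value iteration on a toy one-dimensional exit-time problem; the dynamics, cost, grid, and target below are our own illustrative choices, not an example from the paper:

```python
import numpy as np

# Toy exit-time problem (illustrative assumption, not from the paper):
# 1D dynamics y' = a with controls a in {-1, +1}, running cost l = 1,
# target {0}. The exact value function is v(x) = |x| (minimum exit time).

h = 0.05                                  # grid spacing; time step chosen equal to h
xs = np.arange(-1.0, 1.0 + h / 2, h)      # uniform grid on [-1, 1]
n = len(xs)
on_target = np.isclose(xs, 0.0)

v = np.zeros(n)
for _ in range(10 * n):                   # semi-Lagrangian value iteration
    v_new = np.empty(n)
    for i in range(n):
        if on_target[i]:
            v_new[i] = 0.0                # side condition: v vanishes on the target
            continue
        # Pay cost h for one step, then continue optimally from x + h*a,
        # which lands exactly on a neighboring grid node.
        steps = [h + v[j] for j in (i - 1, i + 1) if 0 <= j < n]
        v_new[i] = min(steps)
    if np.max(np.abs(v_new - v)) < 1e-12:
        break                             # fixed point reached
    v = v_new

i_half = int(np.argmin(np.abs(xs - 0.5)))
print(abs(v[i_half] - 0.5) < 1e-9)        # scheme recovers v(0.5) = 0.5
```

Because the time step equals the grid spacing, each controlled step lands on a grid node and the iteration converges to the exact discrete value function; on general problems, convergence to the true value function as the mesh is refined is exactly the kind of guarantee the uniqueness theorem supports.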
Publication Source: Proceedings of the IEEE Conference on Decision and Control
Malisoff, M. (2002). Viscosity solutions of the Bellman equation for perturbed optimal control problems with exit times. Proceedings of the IEEE Conference on Decision and Control, 2, 2348-2353. Retrieved from https://digitalcommons.lsu.edu/mathematics_pubs/1026