New research characterizes the value functions of a broad class of optimal control problems, extending dynamic programming methods to unrestricted state spaces of the kind that arise in real-world applications.
The article studies optimal control problems whose value functions are continuous on an unrestricted state space. The researchers establish the Dynamic Programming Principle and derive the associated Hamilton-Jacobi-Bellman equation. Within this framework, they prove that the value function is uniquely characterized as the viscosity solution of that equation.
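The summary above does not reproduce the equations themselves; as a sketch, a standard infinite-horizon discounted formulation of this setting reads as follows, where the dynamics $f$, running cost $\ell$, discount rate $\lambda > 0$, and control set $A$ are generic placeholders rather than the authors' exact hypotheses. For a trajectory $y_x(\cdot)$ solving $\dot{y}_x(t) = f\big(y_x(t), a(t)\big)$ with $y_x(0) = x$, the value function is

$$v(x) = \inf_{a(\cdot)} \int_0^{\infty} e^{-\lambda t}\, \ell\big(y_x(t), a(t)\big)\, dt.$$

The Dynamic Programming Principle states that for every $\tau > 0$,

$$v(x) = \inf_{a(\cdot)} \left\{ \int_0^{\tau} e^{-\lambda t}\, \ell\big(y_x(t), a(t)\big)\, dt + e^{-\lambda \tau}\, v\big(y_x(\tau)\big) \right\},$$

and sending $\tau \to 0$ formally yields the Hamilton-Jacobi-Bellman equation

$$\lambda v(x) + \sup_{a \in A} \big\{ -f(x,a) \cdot Dv(x) - \ell(x,a) \big\} = 0.$$

Viscosity solutions matter here because $v$ is typically only continuous, not differentiable, so the equation is interpreted through smooth test functions touching $v$ from above and below; the uniqueness result then singles out the value function among all such generalized solutions.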