Bellman's Equation Revolutionizes Optimal Control Theory with Discontinuous Value Functions
The article surveys the dynamic programming approach to optimal control problems. It shows that the Bellman (value) function, which assigns to each initial state the optimal cost of steering the system from that state, is often nonsmooth. As a result, the classical Hamilton–Jacobi–Bellman equation, which presumes differentiability, cannot always be applied directly. Researchers are developing tools from nonsmooth analysis, such as generalized gradients, to handle these cases, characterizing the value function through its behavior at specific points and along specific directions, for example via one-sided directional derivatives.
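A standard textbook illustration (not drawn from the article itself) shows both the classical equation and how smoothness fails. For a control system $\dot{x} = f(x,u)$ with running cost $L(x,u)$, the value function $V$, where differentiable, satisfies the Hamilton–Jacobi–Bellman equation; yet even the simplest minimum-time problem produces a nondifferentiable $V$:

```latex
% Hamilton--Jacobi--Bellman equation (stationary form), valid where V is differentiable:
%   min over admissible controls u of the running cost plus the rate of change of V
\[
  \min_{u \in U} \Bigl[ L(x,u) + \nabla V(x) \cdot f(x,u) \Bigr] = 0 .
\]
% Classical example of nonsmoothness: minimum-time problem on the line,
%   dynamics  \dot{x} = u,  |u| \le 1,  target: reach the origin.
% The optimal control is u = -\mathrm{sign}(x), giving the value function
\[
  V(x) = |x| ,
\]
% which is continuous everywhere but not differentiable at x = 0,
% so the HJB equation above cannot hold there in the classical sense.
% The one-sided directional derivatives, however, still exist at x = 0:
\[
  V'(0; d) = \lim_{t \downarrow 0} \frac{V(td) - V(0)}{t} = |d| ,
\]
% and generalized-gradient (nonsmooth analysis) formulations of the HJB
% equation are stated in terms of such directional objects.
```

This is the basic obstruction the article describes: the value function remains well defined and continuous, but the classical partial differential equation breaks down exactly at the states where optimal trajectories merge, which is where generalized derivatives take over.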