The solution of optimal control problems through the Hamilton-Jacobi-Bellman (HJB) equation offers guaranteed satisfaction of both the necessary and sufficient conditions for optimality. However, finding an exact solution to the HJB equation is a nearly impossible task for many optimal control problems. This thesis presents an approximation method for solving finite-horizon optimal control problems involving nonlinear dynamical systems. The method uses finite-order approximations of the partial derivatives of the cost-to-go function and successive higher-order differentiations of the HJB equation. Natural byproducts of the proposed method provide sensitivities of the controls to changes in the initial states, which can be used to approximate the solution to neighboring optimal control problems. For highly nonlinear problems, the method is modified to calculate control sensitivities about a nominal trajectory. In this framework, the method is shown to provide accurate control sensitivities at much lower orders of approximation. Several numerical examples are presented to illustrate both applications of the approximation method.
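For context, the finite-horizon HJB equation referenced in the abstract takes the following standard form; the notation (value function V, dynamics f, running cost L, terminal cost phi) is the conventional one and is assumed here rather than taken from the thesis itself:

\[
-\frac{\partial V}{\partial t}(x,t)
  = \min_{u}\Bigl[\, L(x,u,t) + \nabla_x V(x,t)^{\top} f(x,u,t) \,\Bigr],
\qquad
V\bigl(x,t_f\bigr) = \phi\bigl(x(t_f)\bigr),
\]

with system dynamics \(\dot{x} = f(x,u,t)\). In the approach described above, the partial derivatives of the cost-to-go function V are replaced by finite-order approximations, and repeated differentiation of this equation supplies the conditions used to determine them; the sensitivities of the controls to the initial states arise as byproducts of the same expansion.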
Identifier | oai:union.ndltd.org:tamu.edu/oai:repository.tamu.edu:1969.1/ETD-TAMU-2010-05-7898
Date | 2010 May
Creators | McCrate, Christopher M.
Contributors | Vadali, Srinivas R.
Source Sets | Texas A and M University
Language | English
Detected Language | English
Type | Book, Thesis, Electronic Thesis, text
Format | application/pdf