The role of vision as an additional sensing mechanism has received considerable attention in recent years in the context of autonomous flight applications. Modern Unmanned Aerial Vehicles (UAVs) are equipped with vision sensors because they are lightweight and low-cost, and because they provide rich information about the environment in which the UAVs navigate. Vision-based autonomous flight is a challenging problem because it requires bringing together concepts from image processing and computer vision, target tracking and state estimation, and flight guidance and control.
This thesis focuses on the adaptive state estimation, guidance and control problems involved in vision-based formation flight. Specifically, the thesis presents a composite adaptation approach to the partial state estimation of a class of nonlinear systems with unmodeled dynamics. In this approach, a linear time-varying Kalman filter serves as the nominal state estimator and is augmented by the output of an adaptive neural network (NN) that is trained with two error signals. The proposed approach adapts to modeling errors faster and more accurately than a conventional approach.
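As a rough illustration of how such a composite scheme can be organized, the following Python sketch pairs a linear time-varying Kalman filter with an adaptive correction term whose weights are driven by two error signals: the measurement residual and an externally supplied prediction error. The RBF feature basis, the adaptation gain, and the way the second error signal is formed are assumptions for illustration, not the formulation developed in the thesis.

```python
import numpy as np

class CompositeAdaptiveEstimator:
    """Sketch: LTV Kalman filter augmented by an adaptive correction term.

    The nominal filter handles the modeled linear time-varying dynamics;
    a single-hidden-layer network (Gaussian RBF features here, an
    assumption) supplies a correction adapted with a composite law
    driven by two error signals.
    """

    def __init__(self, n, n_features, gamma=5.0):
        self.x_hat = np.zeros(n)              # state estimate
        self.P = np.eye(n)                    # estimate covariance
        self.W = np.zeros((n_features, n))    # adaptive NN weights
        self.centers = np.random.randn(n_features, n)
        self.gamma = gamma                    # adaptation gain (assumed value)

    def _phi(self, x):
        # Gaussian RBF features evaluated at the current estimate (assumed basis).
        d = self.centers - x
        return np.exp(-0.5 * np.sum(d * d, axis=1))

    def step(self, A, B, u, H, z, Q, R, e_pred):
        """One predict/update cycle with composite adaptation.

        A, B, H : time-varying model matrices at this step
        z       : measurement
        e_pred  : second (prediction/modeling) error signal, supplied by
                  an auxiliary filtered-regressor loop (assumed), same
                  dimension as the state.
        """
        phi = self._phi(self.x_hat)
        nn_correction = self.W.T @ phi

        # --- Predict with the nominal model plus the adaptive correction ---
        x_pred = A @ self.x_hat + B @ u + nn_correction
        P_pred = A @ self.P @ A.T + Q

        # --- Kalman measurement update ---
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        residual = z - H @ x_pred                      # first error signal
        self.x_hat = x_pred + K @ residual
        self.P = (np.eye(len(self.x_hat)) - K @ H) @ P_pred

        # --- Composite adaptation: both error signals drive the weights ---
        est_error = H.T @ residual                     # mapped to state space
        self.W += self.gamma * np.outer(phi, est_error + e_pred)
        return self.x_hat
```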
The thesis also presents two approaches to the design of adaptive guidance and control (G&C) laws for line-of-sight formation flight. In the first approach, the guidance and autopilot systems are designed separately and then combined under a time-scale separation assumption. The second approach integrates the guidance and autopilot design process. The G&C laws developed with both approaches adapt to unmodeled leader-aircraft acceleration and to own-aircraft aerodynamic uncertainties.
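The sketch below illustrates, in simplified form, the structure of the separated (time-scale separation) design: an outer-loop guidance law regulates the line-of-sight range error and adds a feedforward term for the estimated leader acceleration, which is adapted online, while a faster inner-loop autopilot is assumed to track the resulting acceleration command. The gains, error signals, and adaptation law are illustrative assumptions rather than the NN-based laws developed in the thesis.

```python
import numpy as np

def los_guidance_command(r_err, r_err_dot, a_leader_hat, k_p=1.0, k_d=2.0):
    """Outer-loop guidance sketch under time-scale separation.

    r_err        : range error along the line of sight (desired - actual)
    r_err_dot    : range-error rate
    a_leader_hat : current estimate of the unmodeled leader acceleration
    Returns a commanded acceleration along the line of sight, assumed to
    be tracked by the inner-loop autopilot. Gains are placeholder values.
    """
    return k_p * r_err + k_d * r_err_dot + a_leader_hat

def adapt_leader_accel(a_hat, r_err, r_err_dot, gamma=0.5, dt=0.02):
    """Gradient-type update of the leader-acceleration estimate,
    driven by a composite tracking-error signal (an assumed law)."""
    s = r_err_dot + 2.0 * r_err          # filtered tracking error (assumed)
    return a_hat + gamma * s * dt
```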
The thesis also presents theoretical justification, based on Lyapunov-like stability analysis, for integrating the adaptive state estimation and adaptive G&C designs. All the developed designs are validated in nonlinear, six-degree-of-freedom (6-DOF) fixed-wing aircraft simulations.
Finally, the thesis presents a decentralized coordination strategy for vision-based multiple-aircraft formation control. In this approach, each aircraft in the formation regulates its range to up to two of its nearest neighboring aircraft while simultaneously tracking nominal desired trajectories common to all aircraft and avoiding static obstacles.
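A minimal sketch of such a decentralized rule is given below, assuming a proportional range-regulation term toward the two nearest neighbors, a trajectory-tracking term, and a potential-field style repulsion from static obstacles. The gains, desired range, and obstacle model are illustrative assumptions, not the coordination strategy developed in the thesis.

```python
import numpy as np

def formation_command(p, v, p_nom, v_nom, neighbors, obstacles,
                      r_des=50.0, k_t=0.5, k_f=0.3, k_o=200.0, d_safe=30.0):
    """Sketch of a decentralized coordination command for one aircraft.

    p, v         : own position and velocity (3-vectors)
    p_nom, v_nom : common nominal desired trajectory state
    neighbors    : list of neighbor positions (only the two nearest are used)
    obstacles    : list of static obstacle positions
    All gains, the desired range r_des, and the repulsion term are
    illustrative assumptions.
    """
    # Track the nominal trajectory shared by all aircraft.
    a_cmd = k_t * (p_nom - p) + k_t * (v_nom - v)

    # Regulate range to at most the two nearest neighboring aircraft.
    nearest = sorted(neighbors, key=lambda q: np.linalg.norm(q - p))[:2]
    for q in nearest:
        d = q - p
        dist = np.linalg.norm(d)
        if dist > 1e-6:
            a_cmd += k_f * (dist - r_des) * (d / dist)

    # Repel from static obstacles inside a safety radius.
    for o in obstacles:
        d = p - o
        dist = np.linalg.norm(d)
        if 1e-6 < dist < d_safe:
            a_cmd += k_o * (1.0 / dist - 1.0 / d_safe) * (d / dist**2)

    return a_cmd
```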
Identifier | oai:union.ndltd.org:GATECH/oai:smartech.gatech.edu:1853/16272 |
Date | 17 May 2007 |
Creators | Sattigeri, Ramachandra Jayant |
Publisher | Georgia Institute of Technology |
Source Sets | Georgia Tech Electronic Thesis and Dissertation Archive |
Detected Language | English |
Type | Dissertation |