Master's thesis / National Taiwan University of Science and Technology (國立臺灣科技大學) / Graduate Institute of Automation and Control (自動化及控制研究所) / Academic Year 98 / In recent years, small Unmanned Aerial Vehicles (UAVs) have experienced a strong boost in performance, opening the prospect of several military and civil applications such as surveillance, monitoring, and inspection. However, the lack of effective autonomous navigation capabilities has severely limited the opportunities for deployment. Visual navigation methods are attractive candidates because of the low weight of video cameras. The major issues in the development of a visual navigation system for small UAVs are: 1) technical constraints, 2) robust image feature matching, and 3) an efficient and precise method for visual navigation. This thesis addresses these three issues, provides methods for their solution, and evaluates their feasibility and effectiveness.
The technical constraints of small UAVs inhibit on-board computation for visual navigation. This limitation can be overcome with the proposed wireless networked control system, which offloads the data processing from the UAV to a ground-based process computer. Feature matching, which constitutes the front-end of all feature-based visual navigation methods, is addressed with a robust method based on SIFT feature descriptors that achieves real-time performance by discarding the explicit scale invariance of the image features. The presented navigation concept implements a visual odometry system with a single calibrated camera. The proposed method uses a framework for incremental reconstruction of the camera path and the structure of the environment based on two-view epipolar geometry, followed by sparse bundle adjustment.
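As a minimal illustration of the two-view geometry step (a sketch, not the thesis implementation), the essential matrix relating two calibrated views can be estimated from already-matched, normalized correspondences with the linear 8-point algorithm; relative rotation and translation are then recoverable from its decomposition:

```python
import numpy as np

def essential_from_correspondences(x1, x2):
    """Estimate the essential matrix E from >= 8 normalized image
    correspondences (N x 2 arrays) with the linear 8-point algorithm,
    then project onto the essential-matrix manifold (singular values
    s, s, 0) so that x2_h^T E x1_h ~= 0 for all correspondences."""
    n = x1.shape[0]
    # Each correspondence gives one linear constraint on the 9 entries of E.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0],            x1[:, 1],            np.ones(n),
    ])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)          # null-space vector, row-major E
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0           # enforce two equal singular values
    return U @ np.diag([s, s, 0.0]) @ Vt
```

In a full incremental pipeline such as the one described above, this two-view estimate would seed the camera path and triangulated structure, which sparse bundle adjustment then refines jointly.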
The concept for a wireless networked control system was evaluated with latency and throughput measurements in different environments. The experimental setup, conforming to the IEEE 802.11n standard, achieves an average latency of 1.3 ms and a data throughput of 3,000 kB/s at distances of up to 70 m. The results demonstrate the feasibility of real-time closed-loop navigation control with the proposed concept.
The presented feature matching method was tested on ten frames of a benchmark image sequence. The evaluation shows results comparable to SIFT in the number of feature correspondences, and superior performance with respect to the number of false feature matches when applied to visual navigation. The proposed method achieves up to 8.4 times faster computation than SIFT on images of 640×480 pixels.
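A common way to suppress false matches between SIFT-style descriptors is Lowe's nearest-neighbour distance-ratio test, accepting a match only when the best candidate is clearly closer than the second best. The sketch below illustrates that idea; the `ratio` threshold of 0.8 is an illustrative assumption, not a value taken from the thesis:

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.8):
    """Match two descriptor sets (N1 x D and N2 x D arrays) with the
    nearest-neighbour distance-ratio test. Returns a list of index
    pairs (i, j) where desc1[i] matches desc2[j]."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # L2 distances to all candidates
        j, k = np.argsort(dists)[:2]                # best and second-best candidate
        if dists[j] < ratio * dists[k]:             # accept only unambiguous matches
            matches.append((i, j))
    return matches
```

The ratio test trades a small loss of true correspondences for a large reduction in false matches, which is what matters most for downstream pose estimation.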
The visual odometry was evaluated with real-world image sequences. The proposed method achieved an error of 1.65% with respect to the total path length of 9.43 m on a circular trajectory. The reconstruction from 840 images comprises 42 camera positions and 2113 3D world points.
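The reported figure is a standard relative-drift metric: terminal position error expressed as a percentage of the travelled path length. The back-computed drift of roughly 0.156 m is inferred from the reported numbers, not stated explicitly:

```python
def drift_percent(endpoint_error_m, path_length_m):
    """Relative drift of a visual-odometry estimate: terminal position
    error as a percentage of the travelled path length."""
    return 100.0 * endpoint_error_m / path_length_m

# A terminal drift of about 0.156 m over the reported 9.43 m loop
# reproduces the stated 1.65 % error.
print(round(drift_percent(0.156, 9.43), 2))  # → 1.65
```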
Identifier | oai:union.ndltd.org:TW/098NTUS5146004 |
Date | January 2010 |
Creators | Christian Ivancsits, 尹克清 |
Contributors | Min-Fan Ricky Lee, 李敏凡 |
Source Sets | National Digital Library of Theses and Dissertations in Taiwan |
Language | en_US |
Detected Language | English |
Type | 學位論文 ; thesis |
Format | 136 |