Autonomous visual navigation algorithms for ground mobile robotic systems operating in unstructured environments have been studied extensively for decades. In this body of work, performance evaluation across different design configurations mainly relies on benchmark datasets containing a limited number of real-world trials. Such evaluations struggle to provide sufficient statistical power for performance quantification, and they cannot independently assess an algorithm's robustness to individual realistic uncertainty sources, including environmental variations and processing errors.

This research presents a quantitative approach to evaluating and optimising the performance and robustness of autonomous visual navigation algorithms using large-scale Monte Carlo analyses. The Monte Carlo analyses are supported by a simulation environment designed to reproduce a real-world level of visual information, using perturbations drawn from realistic visual uncertainties and processing errors. With the proposed evaluation method, a stereo-vision-based autonomous visual navigation algorithm is designed and iteratively optimised. The algorithm encodes edge-based 3D patterns into a topological map and uses them for subsequent global localisation and navigation.

An evaluation of the performance perturbations caused by individual uncertainty sources indicates that stereo matching error imposes a significant limitation on the current system design. An optimisation approach is therefore proposed to mitigate this error: it maximises the Fisher information available in stereo image pairs by manipulating the stereo geometry. The simulation environment is further updated alongside the algorithm design, including quantitative modelling and simulation of how localisation error propagates into subsequent navigation behaviour. Over a long-term Monte Carlo evaluation and optimisation, the algorithm's performance improves significantly. Simulation experiments demonstrate that a 3-DoF robotic system can navigate an unstructured environment while remaining sufficiently robust to realistic visual uncertainty sources and systematic processing errors.
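The record itself does not reproduce the thesis's simulation or optimisation code. As a rough, hypothetical illustration of the two ideas the abstract combines, namely Monte Carlo propagation of stereo-matching error and the effect of stereo geometry on depth accuracy, the Python sketch below samples Gaussian disparity noise and measures the resulting RMS depth error for several baselines. All function names, camera parameters, and noise levels are illustrative assumptions, not values taken from the thesis. To first order the depth error behaves as sigma_Z ~ Z^2 * sigma_d / (f * B), so widening the baseline B increases the Fisher information about depth and shrinks the error, which is the intuition behind the geometry-based optimisation mentioned above.

import numpy as np

def stereo_depth(disparity_px, focal_px, baseline_m):
    # Triangulated depth from a rectified stereo pair: Z = f * B / d
    return focal_px * baseline_m / disparity_px

def monte_carlo_depth_error(true_depth_m, focal_px, baseline_m,
                            match_sigma_px=0.5, n_trials=100_000, rng=None):
    # Propagate a Gaussian stereo-match (disparity) error into depth error
    # by Monte Carlo sampling, and return the empirical RMS depth error.
    rng = np.random.default_rng() if rng is None else rng
    true_disparity = focal_px * baseline_m / true_depth_m
    noisy_disparity = true_disparity + rng.normal(0.0, match_sigma_px, n_trials)
    noisy_disparity = np.clip(noisy_disparity, 1e-3, None)  # avoid divide-by-zero
    depth_samples = stereo_depth(noisy_disparity, focal_px, baseline_m)
    return np.sqrt(np.mean((depth_samples - true_depth_m) ** 2))

if __name__ == "__main__":
    # Hypothetical camera: 700 px focal length, target 10 m away, 0.5 px
    # matching noise; sweep the baseline to see how widening the stereo
    # geometry reduces the reconstructed depth error.
    for baseline in (0.05, 0.10, 0.20, 0.40):
        rms = monte_carlo_depth_error(true_depth_m=10.0, focal_px=700.0,
                                      baseline_m=baseline)
        print(f"baseline {baseline:.2f} m -> RMS depth error {rms:.3f} m")

A sweep of this kind, repeated over many sampled scenes and uncertainty sources, is one plausible shape for the large-scale Monte Carlo evaluations the abstract describes; the actual thesis implementation may differ substantially.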
Identifier | oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:756829
Date | January 2017
Creators | Tian, Jingduo |
Contributors | Thacker, Neil |
Publisher | University of Manchester |
Source Sets | Ethos UK |
Detected Language | English |
Type | Electronic Thesis or Dissertation |
Source | https://www.research.manchester.ac.uk/portal/en/theses/quantitative-performance-evaluation-of-autonomous-visual-navigation(be6349b5-3b38-4ac5-aba2-fc64597fd98a).html |