
Analysis of Optical Flow for Indoor Mobile Robot Obstacle Avoidance.

This thesis investigates the use of visual-motion information, sampled through optical flow, for indoor obstacle avoidance on autonomous mobile robots. The methods focus on the practical use of optical flow and visual-motion information for obstacle avoidance in real indoor environments, and serve to identify visual-motion properties that must be used in synergy with visual-spatial properties toward the goal of a complete, robust visual-only obstacle avoidance system, as is evident in nature. A review of vision-based obstacle avoidance techniques shows that early research focused mainly on visual-spatial techniques, which rely heavily on assumptions about their environments to function successfully. More recent research that uses visual-motion information (sampled through optical flow) tends to employ optical flow in a subsidiary manner and does not take full advantage of the information encoded within an optical flow field.

In light of these limitations, this thesis describes two different approaches and evaluates their use of optical flow for obstacle avoidance. The first approach constructs a conventional range map from optical flow; it stems from the structure-from-motion domain and the theory that optical flow encodes 3D environmental information under certain conditions. The second approach treats optical flow in a causal, mechanistic manner, using machine learning of motor responses directly from optical flow, motivated by physical and behavioural evidence observed in biological creatures. The second approach is designed with three main objectives: 1) to investigate whether optical flow can be learnt for obstacle avoidance; 2) to create a system capable of repeatable obstacle avoidance in real-life environments; and 3) to analyse the system to determine which optical flow properties are actually used for the motor control task.

The range-map reconstruction results demonstrated some good distance estimations using a feature-based optical flow algorithm, but the flow points were too sparse to provide adequate obstacle detection. Results from a differential-based optical flow algorithm increased the density of flow points, but highlighted the high sensitivity of the optical flow field to the rotational errors and outliers that plague the majority of frames in real-life robot situations. Final results demonstrated that current optical flow algorithms are ill-suited to estimating obstacle distances consistently, as range-estimation techniques require an extremely accurate optical flow field with adequate density and coverage; this remains a difficult problem within the optical flow estimation domain itself.

In the machine learning approach, an initial study examining whether optical flow can be machine-learnt for obstacle avoidance and control in a simple environment was successful. However, it highlighted several critical issues that arise with a machine learning approach, including sample-set completeness, sample-set biases, and control-system instability. Consequently, an extended neural network with several improvements was proposed to overcome these initial problems.
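For context, the geometric relation underlying the first (range-map) approach described above can be sketched as follows: under the simplifying assumptions of pure forward translation and a known focus of expansion, the depth of a tracked point is proportional to its radial distance from the FOE divided by its flow magnitude. The sketch below is illustrative only; the function name, parameters, and NumPy implementation are assumptions for exposition and are not taken from the thesis.

```python
import numpy as np

def depth_from_translational_flow(points, flows, foe, speed):
    """Estimate depth for tracked image points under a pure-translation model.

    points : (N, 2) image coordinates in pixels
    flows  : (N, 2) optical flow vectors in pixels per frame
    foe    : (2,)   focus of expansion in pixels (assumed known here)
    speed  : forward translation per frame (e.g. metres per frame, from odometry)

    For forward motion along the optical axis, a point at radial distance r
    from the FOE flows radially with magnitude |flow| = r * speed / Z,
    so Z = speed * r / |flow|.
    """
    radial = points - foe                  # vectors from the FOE to each point
    r = np.linalg.norm(radial, axis=1)     # radial distance (pixels)
    mag = np.linalg.norm(flows, axis=1)    # flow magnitude (pixels per frame)
    mag = np.maximum(mag, 1e-6)            # guard against division by zero
    return speed * r / mag                 # depth in the units of `speed`

# Example: a point 100 px from the FOE flowing at 2 px/frame while the robot
# advances 0.05 m per frame is estimated to lie 0.05 * 100 / 2 = 2.5 m away.
```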
Designing an automated system for gathering training data eliminated most of the sample-set problems, and key changes to the neural network architecture, the optical flow filters, and the navigation technique vastly improved control-system stability. As a result, the extended neural network system successfully performed multiple obstacle avoidance loops in both familiar and unfamiliar real-life environments without collisions. The lap times of the machine learning approach were comparable to those of the laser-based navigation technique: the machine learning approach was 13% slower in the familiar environment and 25% slower in the unfamiliar environment.

Furthermore, analysis of the neural network revealed that flow magnitudes were learnt as absolute range information, while flow directions were used to detect the focus of expansion (FOE) in order to predict critical collision situations and improve control stability. In addition, the precision of the flow fields, rather than the high accuracy of individual flow vectors, was highlighted as an important requirement. For robot control purposes, image-processing techniques such as region finding and object boundary detection were employed to detect changes between optical flow vectors in image space.
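As an illustration of how flow directions alone can localise the focus of expansion mentioned above, the following is a minimal least-squares sketch, offered as an assumption for exposition rather than the thesis' implementation: under pure translation every flow vector lies on a line through the FOE, so the FOE can be estimated as the point minimising the summed squared perpendicular distance to those lines.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares estimate of the focus of expansion (FOE) from flow directions.

    Under pure translation every flow vector lies on a line through the FOE,
    so the FOE is taken as the point minimising the summed squared
    perpendicular distance to all of those lines.

    points : (N, 2) image coordinates in pixels
    flows  : (N, 2) optical flow vectors in pixels per frame
    returns: (2,)   estimated FOE in pixels
    """
    d = flows / np.maximum(np.linalg.norm(flows, axis=1, keepdims=True), 1e-6)
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)   # unit normals to each flow line
    # Normal equations: (sum_i n_i n_i^T) foe = sum_i n_i n_i^T p_i
    A = np.einsum('ki,kj->ij', n, n)
    b = np.einsum('ki,kj,kj->i', n, n, points)
    # lstsq tolerates the near-singular case of almost-parallel flow vectors
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

In a controller, the estimated FOE could be compared with regions of large flow magnitude to judge whether the current heading drives the robot toward an obstacle; this is offered only as a plausible use, not as the thesis' control law.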

Identifier: oai:union.ndltd.org:ADTP/285433
Creators: Tobias Low
Source Sets: Australasian Digital Theses Program
Detected Language: English
