
Low-level image features and navigation systems for visually impaired people

This thesis is concerned with the development of a computer-aided autonomous navigation system for visually impaired people. The system is intended to work in both indoor and outdoor locations and is based around the use of camera systems and computer vision. Following a review of the literature on navigation systems for the blind, the accurate location of image features is shown to be of vital importance for a vision-based navigation system. There are many operators that identify image features, and it is shown that existing methods for determining which of them performs best are inconsistent. A statistically valid evaluation and comparison methodology is established, centred on the use of McNemar's test and ANOVA. It is shown that these statistical tests require a larger number of test images than is commonly used in the literature to establish which feature operators perform best. A ranking of feature operators is produced using this rigorous statistical approach and compared with similar rankings in the literature. Corner detectors are especially useful for a navigation system because they identify the boundaries of obstacles. However, the results from our testing suggest that the internal angle of a corner is one factor in determining whether it is detected correctly, so an in-depth study of the angular sensitivity of corner detection is presented. This leads to the development of a pair of corner descriptors, known as CMIE and AMIE. Experiments show that these descriptors can be computed at video rate and are effective at matching corners in successive frames of video sequences. Finally, a complete navigation system is presented. It makes use of both a conventional colour camera and a depth sensor, combined in a device known as the Microsoft Kinect. The system is shown to perform robustly in both indoor and outdoor environments, giving audio feedback to the user when an obstacle is detected; audio instructions for obstacle avoidance are also given. Testing of the system by both blindfolded and blind users demonstrates its effectiveness.
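The comparison methodology summarised above pairs the success/failure outcome of two feature operators on each test image and applies McNemar's test to the discordant pairs. As a rough illustration of that idea only (a minimal sketch, not drawn from the thesis itself, and using hypothetical outcome data), the continuity-corrected McNemar statistic can be computed as follows:

```python
# Minimal sketch of McNemar's test for comparing two feature detectors on the
# same set of test images. Each detector is scored 1 (success) or 0 (failure)
# per image; the test uses only the discordant pairs.

def mcnemar_statistic(outcomes_a, outcomes_b):
    """Chi-squared statistic (with continuity correction) for McNemar's test.

    outcomes_a, outcomes_b: sequences of 0/1 results for detectors A and B on
    the same images. Compare the returned value against the chi-squared
    distribution with 1 degree of freedom (e.g. 3.84 at p = 0.05).
    """
    b = sum(1 for a, bb in zip(outcomes_a, outcomes_b) if a == 1 and bb == 0)
    c = sum(1 for a, bb in zip(outcomes_a, outcomes_b) if a == 0 and bb == 1)
    if b + c == 0:
        return 0.0  # no discordant pairs: no evidence of a difference
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical per-image outcomes for two detectors on ten test images.
detector_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
detector_b = [1, 0, 0, 1, 0, 0, 1, 1, 0, 1]
print(f"McNemar chi-squared statistic: {mcnemar_statistic(detector_a, detector_b):.3f}")
```

Because the statistic depends only on the images where the two detectors disagree, a large image set is needed before differences between closely matched operators become significant, which is consistent with the abstract's observation that more test images are required than is common in the literature.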

Identifier: oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:605555
Date: January 2013
Creators: Kanwal, Nadia
Publisher: University of Essex
Source Sets: Ethos UK
Detected Language: English
Type: Electronic Thesis or Dissertation