According to the National Federation of the Blind, an estimated 10 million people in the United States are visually impaired, of whom 1.3 million are legally blind. Many people with severe vision loss receive orientation and mobility training to help them learn skills for traveling and navigating many types of indoor and outdoor environments. Even with this training, a fundamental problem they face is learning new routes, especially in unfamiliar environments. Although the research community has developed a number of localization and navigation aids meant to provide navigation assistance, only a handful have reached the marketplace, and the adoption rate for these devices remains low.

Most assistive navigation devices take responsibility for the navigation and localization processes, leaving the user only to respond to the devices' commands. This thesis takes a different approach: because of the high level of navigation ability achieved through years of training and everyday travel, the navigation skills of people with visual impairments should be considered an integral part of the navigation system. People with visual impairments are capable of following natural language route instructions, similar to those one person with visual impairments would give another over the phone. Devices based on this premise can be built that deliver only verbal route descriptions, so it is not necessary to install complex sensors in the environment.

This thesis has four hypotheses, addressed by two systems. The first hypothesis is that a navigational assistance system for the blind can leverage the skills and abilities of visually impaired travelers and does not necessarily need complex sensors embedded in the environment to succeed. The second hypothesis is that verbal route descriptions are adequate for guiding a person with visual impairments who is shopping in a supermarket for products located on shelves in aisles. These two hypotheses are addressed by ShopTalk, a system that helps blind users shop independently in a grocery store using verbal route descriptions. The third hypothesis is that information extraction techniques can be used to extract landmarks from natural language route descriptions. The fourth and final hypothesis is that new natural language route descriptions can be inferred from a set of landmarks and a set of natural language route descriptions whose statements have been tagged with landmarks from that set. These last two hypotheses are addressed by the Route Analysis Engine, an information extraction-based system for analyzing natural language route descriptions.
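The abstract does not describe how the Route Analysis Engine is implemented. As an illustration only, the following Python sketch shows one simple way the two ideas could fit together: a gazetteer-based matcher tags the landmarks mentioned in each route statement, and a new route is inferred by chaining statements whose tagged landmarks connect a start landmark to a goal landmark. The landmark names, route statements, and function names are hypothetical and are not taken from the thesis.

# A minimal sketch (not the thesis's implementation) of landmark extraction and
# route inference over natural language route statements.

LANDMARKS = {"entrance", "aisle 3", "dairy cooler", "checkout"}  # hypothetical gazetteer

def extract_landmarks(statement: str) -> list[str]:
    """Return the known landmarks mentioned in a statement, in order of appearance."""
    lowered = statement.lower()
    found = [(lowered.find(lm), lm) for lm in LANDMARKS if lm in lowered]
    return [lm for _, lm in sorted(found)]

def infer_route(statements: list[str], start: str, goal: str) -> list[str]:
    """Chain statements whose first tagged landmark matches the current position
    until the goal landmark is reached; return the chained statements."""
    route, current, remaining = [], start, list(statements)
    while current != goal and remaining:
        for stmt in remaining:
            tags = extract_landmarks(stmt)
            if tags and tags[0] == current and len(tags) > 1:
                route.append(stmt)
                current = tags[-1]
                remaining.remove(stmt)
                break
        else:
            break  # no statement continues the route from the current landmark
    return route if current == goal else []

if __name__ == "__main__":
    stmts = [
        "From the entrance, walk straight until you reach aisle 3.",
        "At aisle 3, turn left and continue to the dairy cooler.",
        "From the dairy cooler, head toward the front of the store to the checkout.",
    ]
    for step in infer_route(stmts, "entrance", "checkout"):
        print(step)

Running the example prints the three statements in walking order, i.e., a verbal route description from the entrance to the checkout assembled from previously tagged statements.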
Identifier | oai:union.ndltd.org:UTAHS/oai:digitalcommons.usu.edu:etd-1668 |
Date | 01 May 2010 |
Creators | Nicholson, John |
Publisher | DigitalCommons@USU |
Source Sets | Utah State University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | All Graduate Theses and Dissertations |
Rights | Copyright for this work is held by the author. Transmission or reproduction of materials protected by copyright beyond that allowed by fair use requires the written permission of the copyright owners. Works not in the public domain cannot be commercially exploited without permission of the copyright owner. Responsibility for any use rests exclusively with the user. For more information contact Andrew Wesolek (andrew.wesolek@usu.edu). |