There have been substantial improvements in the area of rehabilitation robotics in the recent past. However, these advances remain inaccessible to a large number of people with disabilities who are most in need of such assistance. This group includes people who are so severely paralyzed that they are completely "locked in" their own bodies. Such persons usually retain full cognitive abilities, but have no voluntary muscle control.
For these persons, a Brain Computer Interface (BCI) is often the only way to communicate with the outside world and/or control an assistive device. One major drawback of BCI devices is their low information transfer rate: selecting a single command can take as long as 30 seconds. This can cause mental fatigue in the user, especially if it is necessary to make multiple selections over the BCI to complete a single task. Therefore, P300 based BCI control alone is not efficient for controlling an assistive robotic device such as a robotic arm.
To address this shortcoming, a novel vision based Brain Robot Interface (BRI) is presented in this thesis. This visual user interface allows the user to select an object from an unstructured environment and then perform an action on the selected object using a robotic arm mounted to a power wheelchair. Because issuing commands through a BCI is slow, the system was designed to let the user complete an entire task via the BCI and an autonomous robotic system while issuing as few commands as possible. Furthermore, the new visual interface allows the user to perform the task without losing concentration on the stimuli or the task. In our interface, a scene image is captured by a camera mounted on the wheelchair, from which a dynamically sized, non-uniform stimulus grid is created using edge information; dynamically sized grids improve object selection efficiency. The oddball paradigm and P300 Event Related Potentials (ERPs) are used to select stimuli, where each cell in the grid serves as a stimulus. Once a cell is selected, object segmentation and matching are used to identify the object. The user then chooses, via the BRI, an action to be performed on the object by the wheelchair mounted robotic arm (WMRA). Tests on 8 healthy human subjects validated the functionality of the system. An average stimulus selection accuracy of 85.56% was achieved over all subjects. With the proposed system, users required an average of 5 commands to perform a task on an object. The system will eventually be useful for completely paralyzed or locked-in patients performing activities of daily living (ADL) tasks.
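The edge-driven, non-uniform grid described above can be illustrated as a recursive quadtree-style split: cells whose edge density is high are subdivided until a minimum cell size is reached, so that cluttered regions of the scene get smaller stimulus cells. This is only a minimal sketch under assumptions; the thesis abstract does not specify the exact algorithm, and the gradient-based edge map, thresholds, and function names here are illustrative, not the author's implementation.

```python
import numpy as np

def edge_map(gray):
    # Assumed edge measure: gradient magnitude via finite differences.
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def nonuniform_grid(gray, min_size=16, density_thresh=1.0):
    """Recursively split the image into stimulus cells.

    Cells whose mean edge density exceeds `density_thresh` are quartered
    until their height or width reaches `min_size`. Returns a list of
    (row, col, height, width) rectangles covering the whole image.
    """
    edges = edge_map(gray)
    cells = []

    def split(r, c, h, w):
        density = edges[r:r + h, c:c + w].mean()
        if density > density_thresh and h > min_size and w > min_size:
            h2, w2 = h // 2, w // 2
            # Quarter the cell; edge-rich regions recurse further.
            split(r, c, h2, w2)
            split(r, c + w2, h2, w - w2)
            split(r + h2, c, h - h2, w2)
            split(r + h2, c + w2, h - h2, w - w2)
        else:
            cells.append((r, c, h, w))

    split(0, 0, gray.shape[0], gray.shape[1])
    return cells

# Example: a synthetic 64x64 scene with one object in the top-left quadrant.
scene = np.zeros((64, 64))
scene[8:24, 8:24] = 255.0          # bright square produces edges there
cells = nonuniform_grid(scene)
```

On this synthetic scene, the top-left quadrant (which contains all the edges) is subdivided into smaller cells, while the three empty quadrants each remain a single large cell, mirroring the idea that fewer, larger stimuli cover uninteresting background.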
Identifier | oai:union.ndltd.org:USF/oai:scholarcommons.usf.edu:etd-6488 |
Date | 15 July 2014 |
Creators | Pathirage, Don Indika Upashantha |
Publisher | Scholar Commons |
Source Sets | University of South Florida |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Graduate Theses and Dissertations |
Rights | default |