
Development of a Multimodal Human-computer Interface for the Control of a Mobile Robot

The recent advent of consumer-grade Brain-Computer Interfaces (BCIs) provides a revolutionary and accessible new way to control computers. BCIs translate cognitive electroencephalography (EEG) signals into computer or robotic commands using specially built headsets. Capable of enhancing traditional interfaces that require interaction with a keyboard, mouse, or touchscreen, BCI systems present tremendous opportunities to benefit various fields; movement-restricted users stand to benefit especially. In this thesis, we present a new way to interface a consumer-grade BCI solution to a mobile robot. A Red-Green-Blue-Depth (RGBD) camera is used to enhance the navigation of the robot with cognitive thoughts as commands. We introduce an interface offering three robot-control methods: 1) a fully manual mode, in which each cognitive signal is interpreted directly as a command; 2) a control-flow manual mode, which reduces the likelihood of false-positive commands; and 3) an automatic mode assisted by a remote RGBD camera. We evaluate this work by navigating the mobile robot on a planar surface with each control method while measuring the accuracy and usability of the system. Finally, we assess the role of the newly designed interface in the design of future generations of BCI solutions.
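As an illustration only (this sketch and every name in it are hypothetical, not taken from the thesis), the control-flow manual mode described above could be approximated by a confirmation gate that forwards a classified BCI reading to the robot only after it has appeared in several consecutive readings, suppressing one-off false positives:

```python
from collections import deque

class ControlFlowGate:
    """Hypothetical gate for a 'control-flow manual' mode: a cognitive
    command is forwarded to the robot only after it has been detected in
    `required` consecutive BCI readings, reducing false positives."""

    def __init__(self, required=3):
        self.required = required
        self.window = deque(maxlen=required)

    def update(self, detected_command):
        """Feed one classified BCI reading (e.g. 'forward', 'left', or None
        for no detection). Returns the command once confirmed, else None."""
        self.window.append(detected_command)
        if (len(self.window) == self.required
                and detected_command is not None
                and all(c == detected_command for c in self.window)):
            self.window.clear()  # require fresh evidence before the next command
            return detected_command
        return None

# Example: a single spurious 'left' reading is ignored; three consecutive
# 'forward' readings issue exactly one command.
gate = ControlFlowGate(required=3)
stream = ["forward", "left", "forward", "forward", "forward", None]
issued = [cmd for cmd in (gate.update(s) for s in stream) if cmd]
# issued == ["forward"]
```

The trade-off this sketch makes explicit is latency versus reliability: raising `required` lowers the false-positive rate at the cost of a slower response, which is the kind of balance the thesis measures as accuracy and usability.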

Identifier: oai:union.ndltd.org:uottawa.ca/oai:ruor.uottawa.ca:10393/22896
Date: January 2012
Creators: Jacques, Maxime
Contributors: Petriu, Emil
Publisher: Université d'Ottawa / University of Ottawa
Source Sets: Université d'Ottawa
Language: English
Detected Language: English
Type: Thesis
