51 |
Toward an Optical Brain-computer Interface based on Consciously-modulated Prefrontal Hemodynamic Activity. Power, Sarah Dianne, 19 December 2012.
Brain-computer interface (BCI) technologies allow users to control external devices through brain activity alone, circumventing the somatic nervous system and the need for overt physical movement. BCIs may benefit individuals with severe neuromuscular disorders who experience significant, and often total, loss of voluntary muscle control (e.g. amyotrophic lateral sclerosis, multiple sclerosis, brainstem stroke). Although the majority of BCI research to date has focused on electroencephalography (EEG) for brain signal acquisition, researchers have recently noted the potential of an optical imaging technology, near-infrared spectroscopy (NIRS), for BCI applications.
This thesis investigates the feasibility of a practical, online optical BCI based on conscious modulation of prefrontal cortex activity through the performance of different cognitive tasks, specifically mental arithmetic (MA) and mental singing (MS). The thesis comprises five studies, each representing a step toward the realization of a practical optical BCI. The first study demonstrates the feasibility of a two-choice synchronized optical BCI based on intentional control states corresponding to MA and MS. The second study explores a more user-friendly alternative - a two-choice system-paced BCI supporting a single intentional control state (either MA or MS) and a natural baseline, or "no-control (NC)", state. The third study investigates the feasibility of a three-choice system-paced BCI supporting both MA and MS, as well as the NC state. The fourth study examines the consistency with which the relevant mental states can be differentiated over multiple sessions. The first four studies involve healthy adult participants; in the final study, the feasibility of optical BCI use by a user with Duchenne muscular dystrophy is explored.
In the first study, MA and MS were classified with an average accuracy of 77.2% (n=10), while in the second, MA and MS were differentiated individually from the NC state with average accuracies of 71.2% and 62.7%, respectively (n=7). In the third study, an average accuracy of 62.5% was obtained for the MA vs. MS vs. NC problem (n=4). The fourth study demonstrated that the ability to classify mental states (specifically MA vs. NC) remains consistent across multiple sessions (p=0.67), but that there is intersession variability in the spatiotemporal characteristics that best discriminate the states. In the final study, a two-session average accuracy of 71.1% was achieved in the MA vs. NC classification problem for the participant with Duchenne muscular dystrophy.
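The abstract does not specify the classification pipeline, but the kind of two-choice discrimination it reports (e.g. MA vs. NC from prefrontal NIRS signals) can be illustrated with a minimal sketch. The feature choice (mean hemodynamic change per channel), the linear discriminant classifier, and all numbers below are assumptions for illustration, not the thesis's actual method or data.

```python
# Hypothetical sketch: two-class NIRS-BCI decoding (e.g. mental arithmetic vs. no-control).
# Features, classifier, and data are illustrative assumptions, not the thesis's pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels = 80, 8                     # assumed: 80 trials, 8 prefrontal NIRS channels

# Feature per trial and channel: mean change in oxygenated hemoglobin over the task window.
X_nc = rng.normal(0.0, 1.0, (n_trials // 2, n_channels))    # "no-control" baseline trials
X_ma = rng.normal(0.6, 1.0, (n_trials // 2, n_channels))    # mental-arithmetic trials (shifted mean)
X = np.vstack([X_nc, X_ma])
y = np.array([0] * (n_trials // 2) + [1] * (n_trials // 2))  # 0 = NC, 1 = MA

clf = LinearDiscriminantAnalysis()
accuracy = cross_val_score(clf, X, y, cv=5).mean()           # per-session accuracy estimate
print(f"cross-validated accuracy: {accuracy:.1%}")
```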
|
52 |
A Concept-based P300 Communication System. Smith, Colleen Denyse Desaulniers, 27 November 2012.
Severe motor impairments can drastically restrict interaction with one's surroundings. Brain-computer interfaces combined with text-based communication systems, such as the P300 Speller, have allowed individuals with motor disabilities to spell messages using their EEG signals. Although they provide full composition flexibility, such systems enable communication rates of only a few characters per minute. Utterance-based communication systems have been developed for individuals with disabilities and have greatly increased communication speeds, but they have yet to be applied to BCIs. This paper proposes an utterance-based communication system using the P300 BCI in which words are organized in a network structure that facilitates rapid retrieval. In trials with able-bodied participants, the proposed system achieved higher message speeds but was rated lower in effectiveness than the P300 Speller. Nonetheless, subject preferences and reports of self-perceived effectiveness suggested an inclination towards the proposed word system, and thus further investigation of word-based networks in brain-controlled AAC systems is warranted.
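The network structure used by the proposed system is not detailed in the abstract; the sketch below only illustrates the general idea of organizing words and utterances as a graph so that a few P300 selections can retrieve a complete message. The vocabulary, links, and traversal depth are hypothetical.

```python
# Hypothetical sketch: a tiny concept-word network for utterance retrieval.
# Vocabulary, links, and traversal depth are assumptions, not the proposed system's design.
from collections import deque

word_network = {
    "FOOD":    ["hungry", "thirsty"],
    "hungry":  ["I am hungry", "I would like a snack"],
    "thirsty": ["I would like water", "I would like coffee"],
    "COMFORT": ["pain"],
    "pain":    ["I am in pain", "Please call the nurse"],
}

def reachable_utterances(start, max_hops=2):
    """Collect candidate utterances reachable within a few selections of a starting concept."""
    seen, queue, found = {start}, deque([(start, 0)]), []
    while queue:
        node, depth = queue.popleft()
        for nxt in word_network.get(node, []):
            if nxt in seen or depth + 1 > max_hops:
                continue
            seen.add(nxt)
            if nxt in word_network:
                queue.append((nxt, depth + 1))   # intermediate concept: keep expanding
            else:
                found.append(nxt)                # leaf node: a complete utterance
    return found

print(reachable_utterances("FOOD"))   # a couple of selections retrieve complete messages
```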
|
53 |
Development of a Multimodal Human-computer Interface for the Control of a Mobile Robot. Jacques, Maxime, 07 June 2012.
The recent advent of consumer-grade Brain-Computer Interfaces (BCIs) provides a revolutionary and accessible new way to control computers. BCIs translate cognitive electroencephalography (EEG) signals into computer or robotic commands using specially built headsets. Capable of enhancing traditional interfaces that require interaction with a keyboard, mouse or touchscreen, BCI systems present tremendous opportunities to benefit various fields; movement-restricted users especially stand to benefit from these interfaces. In this thesis, we present a new way to interface a consumer-grade BCI solution with a mobile robot. A Red-Green-Blue-Depth (RGBD) camera is used to enhance the navigation of the robot, with cognitive signals serving as commands. We introduce an interface offering three different methods of robot control: 1) a fully manual mode, where a cognitive signal is interpreted directly as a command, 2) a control-flow manual mode, which reduces the likelihood of false-positive commands, and 3) an automatic mode assisted by a remote RGBD camera. We study the application of this work by navigating the mobile robot on a planar surface using the different control methods while measuring the accuracy and usability of the system. Finally, we assess the newly designed interface’s role in the design of future generations of BCI solutions.
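The abstract does not describe how the control-flow manual mode suppresses false positives; one common approach is to require a sustained run of positive classifications before a command is issued. The sketch below illustrates that debouncing idea under assumed parameters; the threshold and command mapping are not taken from the thesis.

```python
# Hypothetical sketch of a "control-flow" confirmation step: a cognitive detection only
# becomes a robot command after N consecutive positive classifications.
# The threshold and command set are assumptions, not the interface described in the thesis.

class ControlFlowGate:
    def __init__(self, required_consecutive=5):
        self.required = required_consecutive
        self.streak = 0

    def update(self, detected: bool):
        """Feed one classifier decision per time step; return a command only when confirmed."""
        self.streak = self.streak + 1 if detected else 0
        if self.streak >= self.required:
            self.streak = 0
            return "MOVE_FORWARD"      # hypothetical single-command mapping
        return None

gate = ControlFlowGate(required_consecutive=5)
stream = [True, True, False, True, True, True, True, True]   # raw per-step detections
commands = [gate.update(d) for d in stream]
print(commands)   # sporadic detections are filtered; only a sustained run emits a command
```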
|
54 |
Visual exploratory analysis of large data sets: evaluation and application. Lam, Heidi Lap Mun, 11 1900.
Large data sets are difficult to analyze. Visualization has been proposed to assist exploratory data analysis (EDA), as our visual systems can process signals in parallel to quickly detect patterns. Nonetheless, designing an effective visual analytic tool remains a challenge.
This challenge is partly due to our incomplete understanding of how common visualization techniques are used by human operators during analyses, either in laboratory settings or in the workplace.
This thesis aims to further our understanding of how visualizations can be used to support EDA. More specifically, we studied techniques that display multiple visual information resolutions (VIRs) for analysis, using a range of evaluation methods.
The first study is a summary synthesis conducted to obtain a snapshot of knowledge on multiple-VIR use and to identify research questions for the thesis: (1) low-VIR use and creation; and (2) spatial arrangements of VIRs. The next two studies are laboratory studies that investigate, respectively, the visual memory cost of image transformations frequently used to create low-VIR displays, and overview use with single-level data displayed in multiple-VIR interfaces.
For a more rounded evaluation, we needed to study these techniques in ecologically valid settings. We therefore selected the application domain of web session log analysis and applied the knowledge from our first three evaluations to build a tool called Session Viewer. Taking the multiple-coordinated-view and overview + detail approaches, Session Viewer displays multiple levels of web session log data and multiple views of session populations to support analysis ranging from high-level statistical summaries to low-level detailed session inspection.
Our fourth and final study is a field evaluation conducted at Google Inc. with seven session analysts who used Session Viewer to analyze their own data on their own tasks. Study observations suggested that displaying web session logs at multiple levels using the overview + detail technique helped bridge high-level statistical and low-level detailed session analyses, and that the simultaneous display of multiple session populations at all data levels using multiple views allowed quick comparisons between session populations. We also identified design and deployment considerations for meeting the needs of diverse data sources and analysis styles.
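As a rough illustration of the overview + detail idea applied to web session logs, the sketch below computes population-level statistics (the overview) and exposes a single session's event sequence (the detail) from the same records. The data model is an assumption, not Session Viewer's actual schema.

```python
# Hypothetical sketch of overview + detail for web session logs: a population-level
# statistical overview and a per-session detail view backed by the same records.
from statistics import mean

sessions = [
    {"id": "s1", "events": ["search", "click", "search", "click", "purchase"]},
    {"id": "s2", "events": ["search", "click"]},
    {"id": "s3", "events": ["search", "search", "search", "click"]},
]

def overview(population):
    """High-level statistics across a session population."""
    lengths = [len(s["events"]) for s in population]
    return {"sessions": len(population),
            "mean_events": mean(lengths),
            "purchase_rate": sum("purchase" in s["events"] for s in population) / len(population)}

def detail(population, session_id):
    """Low-level view: the full event sequence of one session."""
    return next(s["events"] for s in population if s["id"] == session_id)

print(overview(sessions))          # overview panel: compare populations at a glance
print(detail(sessions, "s3"))      # detail panel: drill into a single session
```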
|
55 |
Classifying responses to imagined movements in scalp and intracranial EEG for a brain computer interface. Zelmann, Rina. January 1900.
Thesis (M.Eng.). / Written for the Dept. of Biomedical Engineering. Title from title page of PDF (viewed 2008/07/29). Includes bibliographical references.
|
56 |
Ontology recapitulates phylogeny: design, implementation and potential for usage of a comparative anatomy information system. Travillian, Ravensara S. January 2006.
Thesis (Ph. D.)--University of Washington, 2006. / Vita. Includes bibliographical references (p. 114-124).
|
57 |
Assessing EEG neuroimaging with machine learning. Stewart, Andrew David. January 2016.
Neuroimaging techniques can give novel insights into the nature of human cognition. We do not wish only to label patterns of activity as potentially associated with a cognitive process, but also to probe them in detail, so as to better examine how they may inform mechanistic theories of cognition. A possible approach towards this goal is to extend EEG 'brain-computer interface' (BCI) tools, in which motor movement intent is classified from brain activity, to also investigate visual cognition experiments.
We hypothesised that, building on BCI techniques, information from visual object tasks could be classified from EEG data. This could allow novel experimental designs to probe visual information processing in the brain. This can be tested and falsified by applying machine learning algorithms to EEG data from a visual experiment, and quantified by scoring the accuracy with which trials can be correctly classified.
Further, we hypothesised that ICA can be used for source separation of EEG data to produce putative activity patterns associated with visual processing mechanisms. Detailed profiling of these ICA sources could be informative about the nature of visual cognition in a way that is not accessible through other means. While ICA has previously been used to remove 'noise' from EEG data, profiling the relation of common ICA sources to cognitive processing appears less well explored. This can be tested and falsified by using ICA sources as training data for the machine learning, quantified by scoring the accuracy with which trials can be correctly classified using these data, and compared with the equivalent EEG data.
We find that machine learning techniques can classify the presence or absence of visual stimuli at 85% accuracy (0.65 AUC) using a single optimised channel of EEG data, and this improves to 87% (0.7 AUC) using data from an equivalent single ICA source. We identify data from this ICA source in the period around 75-125 ms post-stimulus presentation as substantially more informative for decoding the trial label. The most informative ICA source is located in the central occipital region and typically has prominent 10-12 Hz synchrony and a -5 μV ERP dip at around 100 ms. This appears to be the best predictor of trial identity in our experiment.
With these findings, we then explore further experimental designs to investigate ongoing visual attention and perception, attempting online classification of vision using these techniques and IC sources. We discuss how these relate to standard EEG landmarks such as the N170 and P300, and compare their use.
With this thesis, we explore this methodology of quantifying EEG neuroimaging data with machine learning separation and classification, and discuss how it can be used to investigate visual cognition. We hope that the greater information from EEG analyses, with the predictive power of each ICA source quantified by machine learning, might give insight and constraints for macro-level models of visual cognition.
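As a rough sketch of the analysis style described (ICA source separation followed by per-source classification scored with AUC), the following uses synthetic epochs, scikit-learn's FastICA, and logistic regression; the epoch shape, component count, and classifier are illustrative assumptions rather than the thesis's actual pipeline.

```python
# Hypothetical sketch: decompose multi-channel EEG into ICA sources, then score how well
# a single source's post-stimulus window predicts trial labels (stimulus present vs. absent).
# Data are synthetic; the window, component count, and classifier are assumptions.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_samples = 200, 16, 64      # assumed epoch shape
labels = rng.integers(0, 2, n_trials)              # 1 = stimulus present

# Synthetic epochs: one latent source carries a label-dependent deflection.
epochs = rng.normal(0, 1, (n_trials, n_channels, n_samples))
mixing = rng.normal(0, 1, n_channels)
epochs += labels[:, None, None] * mixing[None, :, None] * 0.8

# Fit ICA with time samples as observations, then project each epoch onto the sources.
ica = FastICA(n_components=8, random_state=0)
ica.fit(epochs.transpose(0, 2, 1).reshape(-1, n_channels))
sources = ica.transform(epochs.transpose(0, 2, 1).reshape(-1, n_channels))
sources = sources.reshape(n_trials, n_samples, 8)

# Score each ICA source by cross-validated AUC of a logistic regression on its time course.
clf = LogisticRegression(max_iter=1000)
aucs = [cross_val_score(clf, sources[:, :, k], labels, cv=5, scoring="roc_auc").mean()
        for k in range(8)]
print("best source AUC:", max(aucs))
```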
|
58 |
Visualization of microprocessor execution in computer architecture courses: a case study at Kabul University. Hedayati, Mohammad Hadi. January 2010.
Magister Scientiae - MSc / Computer architecture and assembly language programming are basic courses taught in every computer science department. Generally, however, students have difficulty mastering many of the concepts in these courses, particularly students whose first language is not English. In addition to their difficulties in understanding the purpose of given instructions, students struggle to mentally visualize the data movement, control and processing operations of microprocessor execution. To address this problem, this research proposed a graphical visualization approach and investigated visual illustrations of such concepts and of instruction execution by implementing a graphical visualization simulator as a teaching aid. The simulator developed during the course of this research was applied in a computer architecture course at Kabul University, Afghanistan. Results obtained from student evaluation of the simulator show significant levels of success using the visual simulation teaching aid: improved learning was achieved, suggesting that this approach could be useful in other computer science departments in Afghanistan and elsewhere where similar challenges are experienced.
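The simulator itself is not reproduced here; as a hypothetical illustration of the kind of step-by-step state trace such a teaching aid renders graphically, the sketch below executes a toy instruction sequence and prints register and memory contents after each step. The miniature instruction set is an assumption, not the one taught in the course.

```python
# Hypothetical sketch: a toy fetch-decode-execute loop that prints machine state after each
# instruction, the kind of data-movement trace a teaching simulator would render graphically.
program = [
    ("MOV", "R0", 5),        # R0 <- 5
    ("MOV", "R1", 7),        # R1 <- 7
    ("ADD", "R0", "R1"),     # R0 <- R0 + R1
    ("STORE", "R0", 0x10),   # memory[0x10] <- R0
]

registers = {"R0": 0, "R1": 0}
memory = {}

for pc, (op, dst, src) in enumerate(program):
    if op == "MOV":
        registers[dst] = src
    elif op == "ADD":
        registers[dst] = registers[dst] + registers[src]
    elif op == "STORE":
        memory[src] = registers[dst]
    print(f"step {pc}: {op:<5} registers={registers} memory={memory}")
```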
|