61.
User-Centered Design Strategies for Clinical Brain-Computer Interface Assistive Technology Devices. Light, Geraldine, 01 January 2019 (has links)
Although significant advances in brain-computer interface (BCI) research have occurred over the past 50 years, BCI assistive technology devices remain scarce at the consumer level. This multiple case study explored user-centered clinical BCI device design strategies used by computer scientists designing BCI assistive technologies to meet patient-centered outcomes. The study population encompassed computer scientists experienced with clinical BCI assistive technology design, located in the midwestern, northeastern, and southern regions of the United States, as well as western Europe. The multi-motive information systems continuance model was the conceptual framework for the study. Interview data were collected from 7 computer scientists and 28 archival documents. Guided by the concepts of user-centered design and patient-centered outcomes, thematic analysis was used to identify codes and themes related to computer science and the design of BCI assistive technology devices. Notable themes included customization of clinical BCI devices, consideration of patient/caregiver interaction, collective data management, and evolving technology. Implications for social change based on the findings include a focus on meeting individualized patient-centered outcomes; enhanced collaboration between researchers, caregivers, and patients in BCI device development; and a reduced likelihood of abandonment or disuse of clinical BCI assistive technology devices.
62.
A Brain Robot Interface for Autonomous Activities of Daily Living Tasks. Pathirage, Don Indika Upashantha, 15 July 2014 (has links)
There have been substantial improvements in the area of rehabilitation robotics in the recent past. However, these advances are inaccessible to a large number of people with disabilities who are most in need of such assistance. This group includes people who are so severely paralyzed that they are completely "locked-in" within their own bodies. Such persons usually retain full cognitive abilities, but have no voluntary muscle control.
For these persons, a brain-computer interface (BCI) is often the only way to communicate with the outside world and/or control an assistive device. One major drawback of BCI devices is their low information transfer rate: selecting a single command can take as long as 30 seconds. This can cause mental fatigue, especially if the user must make multiple selections over the BCI to complete a single task. Therefore, P300-based BCI control alone is not efficient for controlling an assistive robotic device such as a robotic arm.
To address this shortcoming, a novel vision-based Brain Robot Interface (BRI) is presented in this thesis. This visual user interface allows the user to select an object from an unstructured environment and then perform an action on it using a robotic arm mounted to a power wheelchair. Because issuing commands through a BCI is slow, the system was designed to let a user complete an entire task via the BCI, with an autonomous robotic system carrying out the details, while issuing as few commands as possible. Furthermore, the new visual interface allows the user to perform the task without losing concentration on the stimuli or the task. In our interface, a scene image is captured by a camera mounted on the wheelchair, from which a dynamically sized non-uniform stimulus grid is created using edge information; dynamically sized grids improve object selection efficiency. The oddball paradigm and P300 event-related potentials (ERP) are used to select stimuli, where each cell in the grid serves as a stimulus. Once a cell is selected, object segmentation and matching are used to identify the object. The user then chooses, via the BRI, an action to be performed on the object by the wheelchair-mounted robotic arm (WMRA). Tests on 8 healthy human subjects validated the functionality of the system. An average stimulus selection accuracy of 85.56% was achieved across all subjects. With the proposed system, users needed an average of 5 commands to perform a task on an object. The system will eventually be useful for completely paralyzed or locked-in patients performing activities of daily living (ADL) tasks.
63.
Characterising Evoked Potential Signals using Wavelet Transform Singularity Detection. McCooey, Conor Gerard (cmccooey@ieee.org), January 2008 (has links)
This research set out to develop a novel technique to decompose the electroencephalograph (EEG) signal into sets of constituent peaks in order to better describe the underlying nature of these signals. It began with the question: can a localised, single stimulation of sensory nervous tissue in the body be detected in the brain? Flash visual evoked potential (VEP) tests were carried out on 3 participants by presenting a flash and recording the response in the occipital region of the cortex. By focussing on analysis techniques that retain a perspective across different domains, namely temporal (time), spectral (frequency/scale), and epoch (multiple events), useful information was detected across multiple domains, which is not possible with single-domain transform techniques. A comprehensive set of algorithms to decompose evoked potential data into sets of peaks was developed and tested using wavelet transform singularity detection methods. The set of extracted peaks then forms the basis for a subsequent clustering analysis, which identifies sets of localised peaks that contribute the most towards the standard evoked response. The technique is quite novel, as no closely similar work has been identified in the research literature. New and valuable insights into the nature of an evoked potential signal have been identified. Although the number of stimuli required to calculate an evoked potential response has not been reduced, the amount of data contributing to this response has been effectively reduced by 75%, so better examination of a small subset of the evoked potential data is possible. Furthermore, the response has been meaningfully decomposed into a small number (circa 20) of constituent peaksets that are defined in terms of peak shape (time location, peak width, and peak height) and the number of peaks within the peak set. The question of why some evoked potential components appear more strongly than others is probed by this technique.
Delineation between individual peak sizes and how often they occur is possible for the first time, and this representation helps to provide an understanding of how particular evoked potential components are made up. A major advantage of this technique is that there are no pre-conditions, constraints, or limitations. These techniques are highly relevant to all evoked potential modalities and other brain signal response applications, such as brain-computer interface applications. Overall, a novel evoked potential technique has been described and tested. The results provide new insights into the nature of evoked potential peaks, with potential application across various evoked potential modalities.
64.
A Neural Network Based Brain-Computer Interface for Classification of Movement Related EEG. Forslund, Pontus, January 2003 (has links)
A brain-computer interface, BCI, is a technical system that allows a person to control the external world without relying on muscle activity. This thesis presents an EEG based BCI designed for automatic classification of two dimensional hand movements. The long-term goal of the project is to build an intuitive communication system for operation by people with severe motor impairments. If successful, such a system could for example be used by a paralyzed patient to control a word processor or a wheelchair.

The developed BCI was tested in an offline pilot study. In response to an external cue, a test subject moved a joystick in one of four directions. During the movement, EEG was recorded from seven electrodes mounted on the subject's scalp. An autoregressive model was fitted to the data, and the extracted coefficients were used as input features to a neural network based classifier. The classifier was trained to recognize the direction of the movements. During the first half of the experiment, real physical movements were performed. In the second half, subjects were instructed just to imagine the hand moving the joystick, but to avoid any muscle activity.

The results of the experiment indicate that the EEG signals do in fact contain extractable and classifiable information about the performed movements, during both physical and imagined movements.
65.
Localisation of brain functions: stimulating brain activity and source reconstruction for classification. Noirhomme, Quentin, 18 October 2006 (has links)
A key issue in understanding how the brain functions is the ability to
correlate functional information with anatomical localisation.
Functional information can be provided by a variety of techniques like
positron emission tomography (PET), functional MRI (fMRI),
electroencephalography (EEG), magnetoencephalography (MEG) or
transcranial magnetic stimulation (TMS). All these methods provide
different, but complementary, information about the functional areas of
the brain. PET and fMRI provide a spatially accurate picture of brain
regions involved in a given task. TMS makes it possible to infer the
contribution of the stimulated brain area to the task under
investigation. EEG and MEG, which reflect brain activity directly, have
temporal accuracy of the order of a millisecond. TMS, EEG and MEG are,
however, limited by their low spatial resolution. In this thesis, we
propose two methods to improve the spatial accuracy of methods based on
TMS and EEG.
The first part of this thesis presents an automatic method to improve
the localisation of TMS points. The method enables real-time
visualisation and registration of TMS evoked responses and MRI. A MF
digitiser is used to sample approximately 200 points on the subject's
head following a specific digitisation pattern. Registration is obtained
by minimising the RMS point-to-surface distance, computed efficiently
using the Euclidean distance transform. Functional maps are created from
TMS evoked responses projected onto the brain surface previously
segmented from MRI.
The second part presents the possibilities for setting up a
brain-computer interface (BCI) based on reconstructed sources of EEG
activity, and the parameters to adjust. Reconstructed sources could
improve the spatial accuracy of EEG as well as add biophysical
information on the origin of the signal. Both types of information could
improve the BCI classification step. Eight BCIs are built to enable
comparison between electrode-based and reconstructed-source-based BCIs.
Tests on detection of the laterality of an upcoming hand movement
demonstrate the value of reconstructed sources.
66.
Near-infrared Spectroscopy Signal Classification: Towards a Brain-computer Interface. Tai, Kelly, 04 March 2010 (links)
A brain-computer interface (BCI) allows individuals to communicate through the modulation of regional brain activity. Clinical near-infrared spectroscopy (NIRS) is used to monitor changes in cerebral blood oxygenation due to functional activation. It was hypothesized that visually-cued emotional induction tasks can elicit detectable activity in the prefrontal cortex. Data were collected from eleven participants as they performed positively and negatively-valenced emotional induction tasks. Baseline and activation trials were classified offline with accuracies from 75.0-96.7% after applying a feature selection algorithm to determine optimal performance parameters for each participant. Feature selection identified common discriminatory features across participants and relationships between performance parameters. Additionally, classification accuracy was used to quantify NIRS hemodynamic response latency. Significant increases in classification rates were found as early as 2.5 s after initial stimulus presentation. These results suggest the potential application of emotional induction as a NIRS-BCI control paradigm.
67.
Development of an Optical Brain-computer Interface Using Dynamic Topographical Pattern Classification. Schudlo, Larissa Christina, 26 November 2012 (has links)
Near-infrared spectroscopy (NIRS) is an imaging technique that has gained much attention in brain-computer interfaces (BCIs). Previous NIRS-BCI studies have primarily employed temporal features, derived from the time course of hemodynamic activity, despite the potential value contained in the spatial attributes of a response. In an initial offline study, we investigated the value of using joint spatial-temporal pattern classification with dynamic NIR topograms to differentiate intentional cortical activation from rest. With the inclusion of spatiotemporal features, we demonstrated a significant increase in achievable classification accuracies over those obtained using temporal features alone (p < 10^-4). In a second study, we evaluated the feasibility of implementing joint spatial-temporal pattern classification in an online system. We developed an online system-paced NIRS-BCI and were able to differentiate two cortical states with high accuracy (77.4±10.5%). Collectively, these findings demonstrate the value of including spatiotemporal features in the classification of functional NIRS data for BCI applications.
69.
Construction of an Electroencephalogram-Based Brain-Computer Interface Using an Artificial Neural Network. KOBAYASHI, Takeshi; HONDA, Hiroyuki; OGAWA, Tetsuo; SHIRATAKI, Tatsuaki; IMANISHI, Toshiaki; HANAI, Taizo; HIBINO, Shin; LIU, Xicheng, 01 September 2003 (has links)
No description available.