  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Exploring Empirical Guidelines for Selecting Computer Assistive Technology for People with Disabilities

Border, Jennifer January 2011 (has links)
No description available.
122

Development of Microcontroller-based Handheld Electroencephalography Device for use in Diagnostic Analysis of Acute Neurological Emergencies (E-Hand)

Jones, Brittany M.G. January 2015 (has links)
No description available.
123

ASSESSMENT OF FACTORS RELATED TO CHRONIC INTRACORTICAL RECORDING RELIABILITY

Jingle, Jiang 08 February 2017 (has links)
No description available.
124

Generalized Methods for User-Centered Brain-Computer Interfacing

Dhindsa, Jaskiret 11 1900 (has links)
Brain-computer interfaces (BCIs) create a new form of communication and control for humans by translating brain activity directly into actions performed by a computer. This new field of research, best known for its breakthroughs in enabling fully paralyzed or locked-in patients to communicate and control simple devices, has resulted in a variety of remarkable technological developments. However, the field is still in its infancy, and facilitating control of a computer application via thought in a broader context involves a number of challenges that have not yet been met. Advancing BCIs beyond the experimental phase continues to be a struggle. End-users have rarely been reached, except in the case of a few highly specialized applications which require continual involvement of BCI experts. While these applications are profoundly beneficial for the patients they serve, the potential for BCIs is much broader in scope and more powerful in effect. Unfortunately, the current approaches to brain-computer interfacing research have not been able to address the primary limitations in the field: the poor reliability of most BCIs and the highly variable performance across individuals. In addition, the modes of control available to users tend to be restrictive and unintuitive (e.g., imagining complex motor activities to answer "Yes" or "No" questions). This thesis presents a novel approach that addresses both of these limitations simultaneously. Brain-computer interfacing is currently viewed primarily as a machine learning problem, wherein the computer must learn the patterns of brain activity associated with a user's mental commands. In order to simplify this problem, researchers often restrict mental commands to those which are well characterized and easily distinguishable based on a priori knowledge about their corresponding neural correlates. 
However, this approach does not fully recognize two properties of a BCI which make it unlike other human-computer interfaces. First, individuals can vary widely with respect to the patterns of activation associated with how their brains generate similar mental activity and with respect to which kinds of mental activity have been most trained due to life experience. Thus, it is not surprising that BCIs based on predefined neural correlates perform inconsistently for different users. Second, for a BCI to perform well, the human and the computer must become a cohesive unit such that the computer can adapt as the user's brain naturally changes over time and while the user learns to make their mental commands more consistent and distinguishable given feedback from the computer. This not only implies that BCI use is a skill that must be developed, honed, and maintained in relation to the computer's algorithms, but that the human is the fundamental component of the system in a way that makes human learning just as important as machine learning. In this thesis it is proposed that, in the long term, a generalized BCI that can discover the appropriate neural correlates of individualized mental commands is preferable to the traditional approach. Generalization across mental strategies allows each individual to make better use of their own experience and cognitive abilities in order to interact with BCIs in a freer and more intuitive way. It is further argued that in addition to generalization, it is necessary to develop improved training protocols respecting the potential of the user to learn to effectively modulate their own brain activity for BCI use. 
It is shown through a series of studies exploring generalized BCI methods, the influence of prior non-BCI training on BCI performance, and novel methods for training individuals to control their own brain activity, that this new approach based on balancing the roles of the user and the computer according to their respective capabilities is a promising avenue for advancing brain-computer interfacing towards a broader array of applications usable by the general population. / Thesis / Doctor of Philosophy (PhD)
125

(r)Evolution in Brain-Computer Interface Technologies for Play: (non)Users in Mind

Cloyd, Tristan Dane 29 January 2014 (has links)
This dissertation addresses user responses to the introduction of Brain-Computer Interface technologies (BCI) for gaming and consumer applications in the early part of the 21st century. BCI technology has emerged from the contexts of interrelated medical, academic, and military research networks, including an established computer and gaming industry. First, I show that the emergence and development of BCI technology are based on specific economic, socio-cultural, and material factors, and secondly, by utilizing user surveys and interviews, I argue that the success of BCIs is not determined by these contextual factors but is dependent on user acceptance and interpretation. Therefore, this project contributes to user-technology studies by developing a model which illustrates the interrelations between producers, users, values, and technology, and how they contribute to acceptance, resistance, and modification in the technological development of emerging BCI technologies. This project focuses on human-computer interaction researchers, independent developers, the companies producing BCI headsets, and neuro-gadget companies who are developing BCIs for users as an alternative interface for the enhancement of human performance and gaming and computer-simulated experience. Moreover, BCI production and use as modes of enhancement align significantly with social practices of play, which allows an expanded definition of technology to include cultural dimensions of play. / Ph. D.
126

Sampling Controlled Stochastic Recursions: Applications to Simulation Optimization and Stochastic Root Finding

Hashemi, Fatemeh Sadat 08 October 2015 (has links)
We consider unconstrained Simulation Optimization (SO) problems, that is, optimization problems where the underlying objective function is unknown but can be estimated at any chosen point by repeatedly executing a Monte Carlo (stochastic) simulation. SO, introduced more than six decades ago through the seminal work of Robbins and Monro (and later by Kiefer and Wolfowitz), has recently generated much attention. Such interest is primarily because of SO's flexibility, allowing the implicit specification of functions within the optimization problem, thereby providing the ability to embed virtually any level of complexity. The result of such versatility has been evident in SO's ready adoption in fields as varied as finance, logistics, healthcare, and telecommunication systems. While SO has become popular over the years, Robbins and Monro's original stochastic approximation algorithm and its numerous modern incarnations have seen only mixed success in solving SO problems. The primary reason for this is stochastic approximation's explicit reliance on a sequence of algorithmic parameters to guarantee convergence. The theory for choosing such parameters is now well-established, but most such theory focuses on asymptotic performance. Automatically choosing parameters to ensure good finite-time performance has remained vexingly elusive, as evidenced by continuing efforts six decades after the introduction of stochastic approximation! The other popular paradigm to solve SO is what has been called sample-average approximation. Sample-average approximation, more a philosophy than an algorithm to solve SO, attempts to leverage advances in modern nonlinear programming by first constructing a deterministic approximation of the SO problem using a fixed sample size, and then applying an appropriate nonlinear programming method. 
Sample-average approximation is reasonable as a solution paradigm but again suffers from finite-time inefficiency because of the simplistic manner in which sample sizes are prescribed. It turns out that in many SO contexts, the effort expended to execute the Monte Carlo oracle is the single most computationally expensive operation. Sample-average approximation essentially ignores this issue since, irrespective of where in the search space an incumbent solution resides, prescriptions for sample sizes within sample-average approximation remain the same. Like stochastic approximation, notwithstanding beautiful asymptotic theory, sample-average approximation suffers from the lack of automatic implementations that guarantee good finite-time performance. In this dissertation, we ask: can advances in algorithmic nonlinear programming theory be combined with intelligent sampling to create solution paradigms for SO that perform well in finite time while exhibiting asymptotically optimal convergence rates? We propose and study a general solution paradigm called Sampling Controlled Stochastic Recursion (SCSR). Two simple ideas are central to SCSR: (i) use any recursion, particularly one that you would use (e.g., Newton and quasi-Newton, fixed-point, trust-region, and derivative-free recursions) if the functions involved in the problem were known through a deterministic oracle; and (ii) estimate objects appearing within the recursions (e.g., function derivatives) using Monte Carlo sampling to the extent required. The idea in (i) exploits advances in algorithmic nonlinear programming. The idea in (ii), with the objective of ensuring good finite-time performance and optimal asymptotic rates, minimizes Monte Carlo sampling by attempting to balance the estimated proximity of an incumbent solution with the sampling error stemming from Monte Carlo. This dissertation studies the theoretical and practical underpinnings of SCSR, leading to implementable algorithms to solve SO. 
We first analyze SCSR in a general context, identifying various sufficient conditions that ensure convergence of SCSR's iterates to a solution. We then analyze the nature of such convergence. For instance, we demonstrate that in SCSRs which guarantee optimal convergence rates, the speed of the underlying (deterministic) recursion and the extent of Monte Carlo sampling are intimately linked, with faster recursions permitting a wider range of Monte Carlo effort. With the objective of translating such asymptotic results into usable algorithms, we formulate a family of SCSRs called Adaptive SCSR (A-SCSR) that adaptively determines how much to sample as a recursion evolves through the search space. A-SCSRs are dynamic algorithms that identify sample sizes to balance the estimated squared bias and variance of an incumbent solution. This makes the sample size (at every iteration of A-SCSR) a stopping time, thereby substantially complicating the analysis of the behavior of A-SCSR's iterates. That A-SCSR works well in practice is not surprising: the use of an appropriate recursion and the careful sample size choice ensures this. Remarkably, however, we show that A-SCSRs are convergent to a solution and exhibit asymptotically optimal convergence rates under conditions that are no less general than what has been established for stochastic approximation algorithms. We end with the application of a certain A-SCSR to a parameter estimation problem arising in the context of brain-computer interfaces (BCI). Specifically, we formulate the problem of probabilistically deciphering the electroencephalograph (EEG) signals recorded from the brain of a paralyzed patient attempting to perform one of a specified set of tasks. Monte Carlo simulation in this context takes a more general form: the act of drawing an observation from a large dataset accumulated from the recorded EEG signals. 
We apply A-SCSR to nine such datasets, showing that in most cases A-SCSR achieves correct prediction rates that are between 5 and 15 percent better than competing algorithms. More importantly, due to the incorporated adaptive sampling strategies, A-SCSR tends to exhibit dramatically better efficiency rates for comparable prediction accuracies. / Ph. D.
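The core SCSR recipe in the abstract above, pairing a fast deterministic recursion with just enough Monte Carlo sampling, can be sketched in a few lines. The following is a minimal illustrative sketch, not the dissertation's actual A-SCSR rules: the quadratic toy objective, the known curvature, and the geometric sample-size schedule are all assumptions made for the example.

```python
import random

def noisy_gradient(x):
    # Hypothetical Monte Carlo oracle: a noisy observation of the
    # gradient of f(x) = (x - 2)^2, i.e. 2*(x - 2) plus Gaussian noise.
    return 2.0 * (x - 2.0) + random.gauss(0.0, 1.0)

def scsr(x0, iterations=50, growth=1.2):
    """Sampling Controlled Stochastic Recursion, minimally sketched:
    a deterministic Newton-like step (curvature 2 is assumed known)
    paired with a geometrically growing Monte Carlo sample size."""
    x, m = x0, 2  # m = current per-iteration sample size
    for _ in range(iterations):
        # Estimate the gradient by averaging m oracle calls.
        estimate = sum(noisy_gradient(x) for _ in range(m)) / m
        x -= estimate / 2.0              # deterministic recursion step
        m = max(m + 1, int(m * growth))  # increase sampling effort
    return x
```

Growing the sample size keeps early iterations cheap while driving the sampling error to zero at the rate the recursion needs, which is the balance idea (ii) above describes.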
127

Methods and Applications of Controlling Biomimetic Robotic Hands

Paluszek, Matthew Alan 06 February 2014 (has links)
Vast improvements in robotics and wireless communication have made teleoperated robots significantly more prevalent in industry, defense, and research. To help bridge the gap for these robots in the workplace, there has been a tremendous increase in research toward the development of biomimetic robotic hands that can simulate human operators. However, current methods of control are limited in scope and do not adequately represent human muscle memory and skills. The vision of this thesis is to provide a pathway for overcoming these limitations and open an opportunity for the development and implementation of a cost-effective methodology for controlling a robotic hand. The first chapter describes the experiments conducted using Flexpoint bend sensors in conjunction with a simple voltage divider to build a cost-effective data glove that is significantly less expensive than the commercially available alternatives. The data glove was able to provide a sensitivity of better than 0.1 degrees. The second chapter describes the molding process for embedding pressure sensors in silicone skin and data acquisition from them to control the robotic hand. The third chapter describes a method for parsing and observing the information from the data glove and translating the relevant control variables to the robotic hand. The fourth chapter focuses on the feasibility of brain-computer interfaces (BCI) and successfully demonstrates the implementation of a simple brain-computer interface in controlling a robotic hand. / Master of Science
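The bend-sensor voltage divider mentioned in the abstract above admits a simple worked example. The sketch below converts an ADC reading into an estimated bend angle; the supply voltage, resistor values, and the linear resistance-angle model are illustrative assumptions, not measured properties of the Flexpoint sensors used in the thesis.

```python
def bend_angle(adc_value, adc_max=1023, v_supply=5.0,
               r_fixed=10_000.0, r_flat=25_000.0, ohms_per_degree=150.0):
    """Estimate a bend angle from a voltage-divider reading of a flex
    sensor. All component values are hypothetical placeholders.

    Divider: Vout = Vs * R_fixed / (R_fixed + R_sensor),
    hence    R_sensor = R_fixed * (Vs / Vout - 1).
    """
    v_out = v_supply * adc_value / adc_max
    if v_out <= 0.0:
        raise ValueError("ADC reading out of range")
    r_sensor = r_fixed * (v_supply / v_out - 1.0)
    # Assume sensor resistance grows linearly with bend angle.
    return (r_sensor - r_flat) / ohms_per_degree
```

With these assumed values, an ADC reading near 292 corresponds to a flat sensor (about 25 kΩ), and readings drop as the sensor bends and its resistance rises.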
128

Methodology and Techniques for Building Modular Brain-Computer Interfaces

Cummer, Jason 05 January 2015 (has links)
Commodity brain-computer interfaces (BCI) are beginning to accompany everything from toys and games to sophisticated health care devices. These contemporary interfaces allow for varying levels of interaction with a computer. Not surprisingly, the more intimately BCIs are integrated into the nervous system, the better the control a user can exert on a system. At one end of the spectrum, implanted systems can enable an individual with full body paralysis to utilize a robot arm and hold hands with their loved ones [28, 62]. On the other end of the spectrum, the untapped potential of commodity devices supporting electroencephalography (EEG) and electromyography (EMG) technologies requires innovative approaches and further research. This thesis proposes a modularized software architecture designed to build flexible systems based on input from commodity BCI devices. An exploratory study using a commodity EEG provides a concrete assessment of the potential for the modularity of the system to foster innovation and exploration, allowing for a combination of a variety of algorithms for manipulating data and classifying results. Specifically, this study analyzes a pipelined architecture for researchers, starting with the collection of spatio-temporal brain data (STBD) from a commodity EEG device and correlating it with intentional behaviour involving keyboard and mouse input. Though classification proves troublesome in the preliminary dataset considered, the architecture demonstrates a unique and flexible combination of a liquid state machine (LSM) and a deep belief network (DBN). Research in methodologies and techniques such as these is required for innovation in BCIs, as commodity devices, processing power, and algorithms continue to improve. Limitations in terms of types of classifiers, their range of expected inputs, discrete versus continuous data, spatial and temporal considerations, and alignment with neural networks are also identified. 
/ Graduate
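The modular, pipelined architecture this thesis argues for can be sketched minimally: each processing stage is a plain function behind a common interface, so algorithms can be swapped and recombined. The stages below are hypothetical placeholders; real modules would wrap DSP code, an LSM, a DBN, or any other classifier behind the same interface.

```python
from typing import Callable, List

class Pipeline:
    """Minimal sketch of a modular BCI processing pipeline: stages
    (filters, feature extractors, classifiers) compose in order and
    can be replaced independently."""

    def __init__(self):
        self.stages: List[Callable] = []

    def add(self, stage: Callable) -> "Pipeline":
        self.stages.append(stage)
        return self  # allow chained construction

    def run(self, data):
        for stage in self.stages:
            data = stage(data)
        return data

# Hypothetical placeholder stages for illustration only.
def artifact_reject(samples):
    return [s for s in samples if abs(s) < 100.0]

def mean_power(samples):
    return sum(s * s for s in samples) / len(samples)

def threshold_classify(power):
    return "active" if power > 10.0 else "rest"

pipeline = (Pipeline().add(artifact_reject)
                      .add(mean_power)
                      .add(threshold_classify))
```

Swapping `threshold_classify` for a trained classifier, or inserting a band-pass filter before `mean_power`, requires no change to the surrounding system, which is the flexibility the exploratory study assesses.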
129

Adaptive Brain-Computer Interface Systems For Communication in People with Severe Neuromuscular Disabilities

Mainsah, Boyla O. January 2016 (has links)
Brain-computer interfaces (BCI) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electro-physiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electro-physiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel. 

This research focuses on P300-based BCIs which rely predominantly on event-related potentials (ERP) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially in individuals with ALS who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs have relatively slower speeds when compared to other commercial assistive communication devices, and this limits BCI adoption by their target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication. 

In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing. / Dissertation
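The dynamic data-collection idea above, collecting repeated measurements only until the system is confident, can be illustrated with a simple Bayesian update and stopping rule. This is a hedged sketch: the likelihood values are assumed inputs, not outputs of the classifiers studied in the dissertation, and the stopping threshold is arbitrary.

```python
def dynamic_stopping(likelihood_seq, n_chars=4, threshold=0.9):
    """Illustrative dynamic stopping for a P300 speller: maintain a
    posterior over candidate characters, update it after each stimulus
    flash, and stop as soon as one character is confident enough."""
    posterior = [1.0 / n_chars] * n_chars
    best = 0
    for likelihoods in likelihood_seq:
        # Bayes update: scale by the likelihood of the observed EEG
        # response under each candidate target, then renormalise.
        posterior = [p * l for p, l in zip(posterior, likelihoods)]
        total = sum(posterior)
        posterior = [p / total for p in posterior]
        best = max(range(n_chars), key=lambda i: posterior[i])
        if posterior[best] >= threshold:
            break  # confident enough: stop collecting data early
    return best, posterior[best]
```

When the evidence consistently favours one character, the loop exits after a handful of flashes instead of a fixed, worst-case number, which is the mechanism by which adaptive data collection speeds up communication.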
130

Development of Electroencephalography based Brain Controlled Switch and Nerve Conduction Study Simulator Software

Qian, Kai 08 December 2010 (has links)
This thesis investigated the development of an EEG-based brain-controlled switch and the design of software for nerve conduction studies. For the EEG-based brain-controlled switch, we proposed a novel paradigm for an online brain-controlled switch based on Event-Related Desynchronizations (ERDs) following external sync signals. Furthermore, the ERD feature was enhanced by a 3-event moving average and the performance was tested online. Subjects were instructed to perform an intended motor task following an external sync signal in order to turn on a virtual switch. Meanwhile, the beta-band (16-20Hz) relative ERD power (ERD in reverse value order) of a single EEG Laplacian channel from the primary motor area was calculated and filtered by a 3-event moving average in real time. The computer continuously monitored the filtered relative ERD power level until it exceeded a pre-set threshold, selected based on the observed ERD power range, to turn on the virtual switch. Four right-handed healthy volunteers participated in this study. The false positive rates encountered among the four subjects during the operation of the virtual switch were 0.8±0.4%, the response time delay was 36.9±13.0 s, and the subjects required approximately 12.3±4.4 s of active urging time to perform repeated attempts in order to turn on the switch in the online experiments. The aim of the nerve conduction study simulator software is to serve as a medical simulator and education tool to train novice physicians in nerve conduction study tests. The real response waveforms of 10 different upper-limb nerves in conduction studies were obtained from the equipment used in real patient studies. A waveform generation model was built to generalize the response waveform near the standard stimulus site within the region of interest, based on the extracted waveforms, the normal reference parameters of each study, and the stimulus site coordinates. 
Finally, based on the model, a software interface was created to simulate 10 different nerve conduction studies of the upper limb with 9 pathological conditions.
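The brain-controlled switch logic described in this abstract, smoothing the relative ERD power with a moving average over three events and triggering once it exceeds a preset threshold, can be sketched as follows. The signal values and threshold here are illustrative, not the thesis's recorded data.

```python
from collections import deque

def erd_switch(erd_powers, threshold, window=3):
    """Sketch of a moving-average threshold switch: smooth the
    relative ERD power over the last `window` events and fire once
    the smoothed value exceeds a preset threshold."""
    recent = deque(maxlen=window)
    for i, power in enumerate(erd_powers):
        recent.append(power)
        if len(recent) == window and sum(recent) / window > threshold:
            return i  # index of the event that turned the switch on
    return None       # switch never triggered
```

The moving average is what keeps the false positive rate low: a single noisy ERD spike cannot fire the switch, only a sustained run of high relative ERD power can.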
