  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Design and Evaluation of a Context-aware User-interface for Patient Rooms

Bhatnagar, Manas 21 November 2013 (has links)
The process of patient care relies on clinical data spread across specialized hospital departments. Powerful software is being designed to assimilate these disconnected patient data before treatment decisions are made. However, these data are often presented to clinicians on interfaces that do not fit clinical workflows, leading to poor operational efficiency and increased patient safety risks. This project relies on ethnographic design methods to create evidence of clinician preferences pertaining to the presentation and collection of information on user interfaces in patient rooms. Using data gathered in clinical observation, a prototype interface was designed to enable doctors to conduct clinical tasks through a usable patient room interface. The prototype evaluation with doctors identified clinical tasks that are relevant in the patient room and provided insight into the perceived usability of such an interface. The evaluation sessions also elucidated issues of patient-centeredness in technology design, effortless authentication, and interface customizability.
202

Non-Linear Adaptive Bayesian Filtering for Brain Machine Interfaces

Li, Zheng January 2010 (has links)
Brain-machine interfaces (BMI) are systems which connect brains directly to machines or computers for communication. BMI-controlled prosthetic devices use algorithms to decode neuronal recordings into movement commands. These algorithms operate using models of how recorded neuronal signals relate to desired movements, called models of tuning. Models of tuning have typically been linear in prior work, due to the simplicity and speed of the algorithms used with them. Neuronal tuning has been shown to slowly change over time, but most prior work does not adapt tuning models to these changes. Furthermore, extracellular electrical recordings of neurons' action potentials slowly change over time, impairing the preprocessing step of spike-sorting, during which the neurons responsible for recorded action potentials are identified.

This dissertation presents a non-linear adaptive Bayesian filter and an adaptive spike-sorting method for BMI decoding. The adaptive filter consists of the n-th order unscented Kalman filter and Bayesian regression self-training updates. The unscented Kalman filter estimates desired prosthetic movements using a non-linear model of tuning as its observation model. The model is quadratic with terms for position, velocity, distance from center of workspace, and velocity magnitude. The tuning model relates neuronal activity to movements at multiple time offsets simultaneously, and the movement model of the filter is an order-n autoregressive model.

To adapt the tuning model parameters to changes in the brain, Bayesian regression self-training updates are performed periodically. Tuning model parameters are stored as probability distributions instead of point estimates. Bayesian regression uses the previous model parameters as priors and calculates the posteriors of the regression between filter outputs, which are assumed to be the desired movements, and neuronal recordings. Before each update, filter outputs are smoothed using a Kalman smoother, and tuning model parameters are passed through a transition model describing how parameters change over time. Two variants of Bayesian regression are presented: one uses a joint distribution for the model parameters which allows analytical inference, and the other uses a more flexible factorized distribution that requires approximate inference using variational Bayes.

To adapt spike-sorting parameters to changes in spike waveforms, variational Bayesian Gaussian mixture clustering updates are used to update the waveform clustering used to calculate these parameters. This Bayesian extension of expectation-maximization clustering uses the previous clustering parameters as priors and computes the new parameters as posteriors. The use of priors allows tracking of clustering parameters over time and facilitates fast convergence.

To evaluate the proposed methods, experiments were performed with 3 Rhesus monkeys implanted with micro-wire electrode arrays in arm-related areas of the cortex. Off-line reconstructions and on-line, closed-loop experiments with brain control show that the n-th order unscented Kalman filter is more accurate than previous linear methods. Closed-loop experiments over 29 days show that Bayesian regression self-training helps maintain control accuracy. Experiments on synthetic data show that Bayesian regression self-training can be applied to other tracking problems with changing observation models. Bayesian clustering updates on synthetic and neuronal data demonstrate tracking of cluster and waveform changes. These results indicate the proposed methods improve the accuracy and robustness of BMIs for prosthetic devices, bringing BMI-controlled prosthetics closer to clinical use. / Dissertation
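The self-training update described above hinges on conjugate Bayesian linear regression: the previous posterior over the tuning-model weights becomes the prior for the next batch of (assumed) movements and recordings. A minimal sketch of that update follows; the variable names and the fixed noise variance are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

def bayesian_regression_update(mu_prior, Sigma_prior, X, y, noise_var=1.0):
    """One conjugate Gaussian update of a linear tuning model.

    mu_prior, Sigma_prior : previous posterior over the tuning weights
    X : (n_samples, n_features) movement covariates (smoothed filter outputs)
    y : (n_samples,) binned firing rates of one recorded unit
    Returns the new posterior mean and covariance of the weights.
    """
    prec_prior = np.linalg.inv(Sigma_prior)           # prior precision
    prec_post = prec_prior + X.T @ X / noise_var      # posterior precision
    Sigma_post = np.linalg.inv(prec_post)
    mu_post = Sigma_post @ (prec_prior @ mu_prior + X.T @ y / noise_var)
    return mu_post, Sigma_post
```

Because the output is a full distribution rather than a point estimate, each update's posterior can serve directly as the next update's prior, which is what allows the model to track slow changes in tuning.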
203

Brain-Computer Interface Control of an Anthropomorphic Robotic Arm

Clanton, Samuel T. 21 July 2011 (has links)
This thesis describes a brain-computer interface (BCI) system that was developed to allow direct cortical control of 7 active degrees of freedom in a robotic arm. Two monkeys with chronic microelectrode implants in their motor cortices were able to use the arm to complete an oriented grasping task under brain control. This BCI system was created as a clinical prototype to exhibit (1) simultaneous decoding of cortical signals for control of the 3-D translation, 3-D rotation, and 1-D finger aperture of a robotic arm and hand, (2) methods for constructing cortical signal decoding models based on only observation of a moving robot, (3) a generalized method for training subjects to use complex BCI prosthetic robots using a novel form of operator-machine shared control, and (4) integrated kinematic and force control of a brain-controlled prosthetic robot through a novel impedance-based robot controller. This dissertation describes each of these features individually, how their integration enriched BCI control, and results from the monkeys operating the resulting system.
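To make feature (1) concrete, simultaneous decoding of several degrees of freedom can be sketched as a single linear map from a vector of unit firing rates to a 7-D command (3-D translation, 3-D rotation, 1-D aperture). The thesis's actual decoder and training procedure are not reproduced here; the least-squares fit below is only an illustrative stand-in.

```python
import numpy as np

def fit_linear_decoder(rates, commands):
    """Least-squares fit of a linear map from firing rates to 7-D commands.

    rates    : (n_bins, n_units) binned firing rates
    commands : (n_bins, 7) observed robot velocities
               [vx, vy, vz, wx, wy, wz, aperture]
    Returns W of shape (n_units, 7).
    """
    W, *_ = np.linalg.lstsq(rates, commands, rcond=None)
    return W

def decode(W, rate_vector):
    """Map one bin of firing rates to a simultaneous 7-D velocity command."""
    return rate_vector @ W
```

Because all seven output dimensions come from one matrix multiply per bin, translation, rotation, and grasp aperture are decoded simultaneously rather than in separate modes.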
204

Mobility enhancement using simulated artificial human vision

Dowling, Jason Anthony January 2007 (has links)
The electrical stimulation of appropriate components of the human visual system can result in the perception of blobs of light (or phosphenes) in totally blind patients. By stimulating an array of closely aligned electrodes it is possible for a patient to perceive very low-resolution images from spatially aligned phosphenes. Using this approach, a number of international research groups are working toward developing multiple-electrode systems (called Artificial Human Vision (AHV) systems or visual prostheses) to provide a phosphene-based substitute for normal human vision. Despite their great promise, current AHV systems face a number of constraints. These include limitations in the number of electrodes which can be implanted and in the perceived spatial layout and display frequency of phosphenes. Therefore the development of computer vision techniques that can maximise the visualisation value of the limited number of phosphenes would be useful in compensating for these constraints. The lack of an objective method for comparing different AHV system displays, in addition to comparing AHV systems with other blind mobility aids (such as the long cane), has been a significant problem for AHV researchers. Finally, AHV research in Australia and many other countries relies strongly on theoretical models and animal experimentation due to the difficulty of prototype human trials. Because of this constraint, the experiments conducted in this thesis were limited to simulated AHV devices with normally sighted research participants, and the true impact on blind people can only be regarded as approximate. In light of these constraints, this thesis has two general aims. The first aim is to investigate, evaluate and develop effective techniques for mobility assessment which will allow the objective comparison of different AHV system phosphene presentation methods.
The second aim is to develop a useful display framework to guide the development of AHV information presentation, and use this framework to guide the development of an AHV simulation device. The first research contribution resulting from this work is a conceptual framework based on literature reviews of blind and low vision mobility, AHV technology, and computer vision. This framework incorporates a comprehensive number of factors which affect the effectiveness of information presentation in an AHV system. Experiments reported in this thesis have investigated a number of these factors using simulated AHV with human participants. It has been found that higher spatial resolution is associated with accurate walking (reduced veering), whereas higher display rate is associated with faster walking speeds. In this way it has been demonstrated that the conceptual framework supports and guides the development of an adaptive AHV system, with the dynamic adjustment of display properties in real-time. The second research contribution addresses mobility assessment which has been identified as an important issue in the AHV literature. This thesis presents the adaptation of a mobility assessment method from the blind and low vision literature to measure simulated AHV mobility performance using real-time computer based analysis. This method of mobility assessment (based on parameters for walking speed, obstacle contacts and veering) is demonstrated experimentally in two different indoor mobility courses. These experiments involved sixty-five participants wearing a head-mounted simulation device. The final research contribution in this thesis is the development and evaluation of an original real-time looming obstacle detector, based on coarse optical flow, and implemented on a Windows PocketPC based Personal Digital Assistant (PDA) using a CF card camera. 
PDA-based processors are a preferred main processing platform for AHV systems due to their small size, light weight and ease of software development. However, PDA devices are currently constrained by restricted random access memory, the lack of a floating point unit and slow internal bus speeds. Therefore any real-time software needs to maximise the use of integer calculations and minimise memory usage. The resulting device was evaluated through a set of experiments and subjective user feedback.
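The core of a simulated AHV display can be sketched as block-averaging a camera frame down to the electrode grid's resolution, then quantising each block to the few brightness levels a phosphene can convey. The grid size and number of levels below are illustrative assumptions, not the thesis's actual simulation parameters.

```python
import numpy as np

def phosphene_frame(gray, grid=(16, 16), levels=4):
    """Render a grayscale frame as a low-resolution phosphene grid.

    gray  : 2-D array of pixel intensities in [0, 255]
    grid  : number of phosphenes (rows, cols)
    levels: number of distinguishable phosphene brightness levels
    Returns a (rows, cols) array of quantised phosphene intensities.
    """
    h, w = gray.shape
    gh, gw = grid
    # Crop so the frame divides evenly into grid cells, then block-average.
    g = gray[: h - h % gh, : w - w % gw]
    blocks = g.reshape(gh, g.shape[0] // gh, gw, g.shape[1] // gw)
    means = blocks.mean(axis=(1, 3))
    # Quantise each cell to the available phosphene brightness levels.
    step = 255 / (levels - 1)
    return np.round(means / step) * step
```

Varying `grid` and the frame rate at which this function is called is exactly the kind of dynamic display-property adjustment the conceptual framework is meant to guide.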
205

The wind of change : individuals change when technology change /

Fridell, Kent. January 2007 (has links)
Licentiate thesis (with summary), Stockholm: Karolinska Institutet, 2007. / With 2 appended papers.
206

ECoG signal processing for Brain Computer Interface with multiple degrees of freedom for clinical application

Schaeffer, Marie-Caroline 06 June 2017 (has links)
Brain-Computer Interfaces (BCI) are systems that allow severely motor-impaired patients to use their brain activity to control external devices, for example upper-limb prostheses in the case of motor BCIs. The user's intentions are estimated by applying a decoder on neural features extracted from the user's brain activity. Signal processing challenges specific to the clinical deployment of motor BCI systems are addressed in the present doctoral thesis, namely asynchronous mono-limb or sequential multi-limb decoding and accurate decoding during active control states. A switching decoder, namely a Markov Switching Linear Model (MSLM), has been developed to limit spurious system activations, to prevent parallel limb movements and to accurately decode complex movements. The MSLM associates linear models with different possible control states, e.g. activation of a specific limb or specific movement phases. Dynamic state detection is performed by the MSLM, and the probability of each state is used to weight the linear models. The performance of the MSLM decoder was assessed for asynchronous wrist and multi-finger trajectory reconstruction from electrocorticographic signals. It outperformed previously reported decoders in limiting spurious activations during no-control periods and improved decoding accuracy during active periods.
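The state-weighted decoding described above can be sketched as a forward (HMM-style) update of the state probabilities followed by a probability-weighted mixture of per-state linear models. The diagonal-Gaussian emission model and all parameter names below are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def mslm_step(state_probs, z, A, Ws, mus, sigmas):
    """One decoding step of a Markov Switching Linear Model (sketch).

    state_probs : (K,) current state probabilities
    z           : (d,) neural feature vector for this time bin
    A           : (K, K) state transition matrix, A[i, j] = P(state j | state i)
    Ws          : list of K linear models, each (d, m), features -> movement
    mus, sigmas : per-state diagonal-Gaussian emission parameters
    Returns updated state probabilities and the movement estimate.
    """
    K = len(Ws)
    # Propagate state probabilities through the Markov chain.
    pred = A.T @ state_probs
    # Likelihood of the features under each state's emission model.
    lik = np.array([
        np.exp(-0.5 * np.sum(((z - mus[k]) / sigmas[k]) ** 2))
        / np.prod(sigmas[k])
        for k in range(K)
    ])
    post = pred * lik
    post /= post.sum()
    # Movement estimate: probability-weighted mixture of the linear models.
    movement = sum(post[k] * (z @ Ws[k]) for k in range(K))
    return post, movement
```

When one state (e.g. a no-control state with a zero model) dominates the posterior, the other models contribute almost nothing, which is how the switching structure suppresses spurious activations.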
207

Handheld augmented reality interaction : spatial relations

Vincent, Thomas 02 October 2014 (has links)
We explored interaction within the context of handheld Augmented Reality (AR), where a handheld device is used as a physical magic lens to 'augment' the physical surroundings. We focused, in particular, on the role of spatial relations between the on-screen content and the physical surroundings. On the one hand, spatial relations define opportunities for mixing environments, such as the adaptation of the digital augmentation to the user's location. On the other hand, spatial relations impose specific constraints on interaction, such as the impact of hand tremor on the stability of the on-screen camera image. The question is then: how can we relax spatial constraints while maintaining the feeling of digital-physical collocation? Our contribution is three-fold. First, we propose a design space for handheld AR on-screen content with a particular focus on the spatial relations between the different identified frames of reference. This design space defines a framework for systematically studying interaction with handheld AR applications. Second, we propose and evaluate different handheld AR pointing techniques to improve pointing precision. Indeed, with a handheld AR set-up, both touch-screen input and the spatial relations between the on-screen content and the physical surroundings impair the precision of pointing. Third, as part of a collaborative research project involving AIST-Tsukuba and Schneider France and Japan, we developed a toolkit supporting the development of handheld AR applications. The toolkit has been used to develop several demonstrators.
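One way to make the screen-to-world spatial relation concrete: back-project a touch point through the camera intrinsics into a viewing ray and intersect it with a known physical plane. The pinhole camera model and all parameter values below are illustrative assumptions, not the toolkit's actual API.

```python
import numpy as np

def touch_to_plane(u, v, K, plane_n, plane_d):
    """Back-project a screen touch (u, v) to a point on a physical plane.

    K       : (3, 3) camera intrinsics (pinhole model)
    plane_n : (3,) plane normal, expressed in camera coordinates
    plane_d : plane offset; points p on the plane satisfy plane_n . p = plane_d
    Returns the 3-D intersection point in camera coordinates.
    """
    # Ray direction through the pixel, in camera coordinates.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # The ray is t * ray; solve plane_n . (t * ray) = plane_d for t.
    t = plane_d / (plane_n @ ray)
    return t * ray
```

Because the result lives in camera coordinates, any hand tremor that moves the camera moves the estimated world point too, which is exactly the precision problem the pointing techniques above address.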
208

Transfer of Motor Learning from a Virtual to Real Task Using EEG Signals Resulting from Embodied and Abstract Thoughts

January 2013 (has links)
This research is focused on two separate but related topics. The first uses an electroencephalographic (EEG) brain-computer interface (BCI) to explore the phenomenon of motor learning transfer. The second takes a closer look at the EEG-BCI itself and tests an alternate way of mapping EEG signals into machine commands. We test whether motor learning transfer is more related to use of shared neural structures between imagery and motor execution or to more generalized cognitive factors. Using an EEG-BCI, we train one group of participants to control the movements of a cursor using embodied motor imagery. A second group is trained to control the cursor using abstract motor imagery. A third control group practices moving the cursor using an arm and finger on a touch screen. We hypothesized that if motor learning transfer is related to the use of shared neural structures then the embodied motor imagery group would show more learning transfer than the abstract imaging group. If, on the other hand, motor learning transfer results from more general cognitive processes, then the abstract motor imagery group should also demonstrate motor learning transfer to the manual performance of the same task. Our findings support that motor learning transfer is due to the use of shared neural structures between imaging and motor execution of a task. The abstract group showed no motor learning transfer despite being better at EEG-BCI control than the embodied group. The fact that more participants were able to learn EEG-BCI control using abstract imagery suggests that abstract imagery may be more suitable for EEG-BCIs for some disabilities, while embodied imagery may be more suitable for others. In Part 2, EEG data collected in the above experiment was used to train an artificial neural network (ANN) to map EEG signals to machine commands.
We found that our open-source ANN, using spectrograms generated from short-time FFTs (STFTs), is fundamentally different from and in some ways superior to Emotiv's proprietary method. Our use of novel combinations of existing technologies, along with abstract and embodied imagery, facilitates adaptive customization of EEG-BCI control to meet the needs of individual users. / Dissertation/Thesis / Ph.D. Psychology 2013
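The spectrogram features feeding the ANN can be sketched as a short-time FFT over windowed frames of one EEG channel. The window length, hop size, and sampling rate below are illustrative choices, not the study's actual settings.

```python
import numpy as np

def eeg_spectrogram(x, fs=128, win=64, hop=32):
    """Log-power spectrogram of one EEG channel via short-time FFTs.

    x   : 1-D signal
    fs  : sampling rate in Hz
    win : samples per analysis window (Hann-weighted)
    hop : samples between successive windows
    Returns (freqs, S) with S shaped (n_freqs, n_frames).
    """
    window = np.hanning(win)
    n_frames = 1 + (len(x) - win) // hop
    frames = np.stack([x[i * hop : i * hop + win] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # power per frame
    freqs = np.fft.rfftfreq(win, d=1 / fs)
    # Log power is a common input normalisation for a neural network.
    return freqs, np.log(spec.T + 1e-12)
```

Each column of `S` becomes one input vector to the network, so the classifier sees how band power evolves over time rather than raw voltage samples.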
209

Iris detection and tracking for the implementation of a human-computer interface

Fernandes Junior, Valmir 10 August 2010 (has links)
This work presents an iris detection and tracking technique for use in a human-computer interface that allows people with restricted mobility, including those with no mobility above the shoulders, to control the mouse pointer using only eye movements, without the need for expensive equipment. The only input device is an ordinary webcam, without optical zoom, special lighting, or restraint of the user's face. The mouse is moved directly: the cursor is positioned at the location estimated by the technique. For the iris-detection tests, 60 images were used: 90.83% of irises were identified correctly, 4.17% were missed, and 5% were false positives (irises estimated in the wrong place). Using images taken directly from the webcam, irises were found correctly in 87.5% of cases, missed in 11.11%, and falsely detected in 1.39%; the average time between positioning and a click is about 20 seconds.
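As a toy stand-in for the iris-detection step (the thesis's own technique is not reproduced here), the pupil can be approximated as the centroid of the darkest pixels in a grayscale eye image; the threshold fraction is an assumed parameter.

```python
import numpy as np

def pupil_centroid(gray, dark_fraction=0.05):
    """Estimate the pupil centre as the centroid of the darkest pixels.

    gray          : 2-D grayscale eye image
    dark_fraction : fraction of pixels treated as 'dark' (assumed threshold)
    Returns (row, col) of the estimated centre.
    """
    # Everything at or below this intensity quantile counts as pupil.
    thresh = np.quantile(gray, dark_fraction)
    rows, cols = np.nonzero(gray <= thresh)
    return rows.mean(), cols.mean()
```

Tracking then amounts to re-running the estimate on each webcam frame and mapping the centre's displacement to cursor coordinates.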
210

Force Compensation and Recreation Accuracy in Humans

Rigsby, Benjamin 22 June 2017 (has links)
As industry becomes increasingly reliant on robotic assistance and human-computer interfaces, the demand to understand the human sensorimotor system's characteristics intensifies. Although this field of research is more than a century old, new technologies push the limits of the human motor system and of our knowledge of it. With new technologies come new abilities, and, in medical care and rehabilitation, the need to expand our knowledge of the sensorimotor system comes from both patient and physician. Two studies relating to human force interaction are presented in this thesis. The first study focuses on humans' ability to bimanually recreate forces, that is, to feel a force on one hand and reproduce it with the other. This skill is exercised in everyday tasks, from a gardener using shears to trim a bush to a surgeon tying a delicate suture. These two tasks illustrate the factors in this study of force recreation: the effects of (1) occupational force dexterity, (2) force magnitude, and (3) the number of fingers used in the recreation task. Results showed statistical significance for force magnitude and number of fingers as factors in bimanual force recreation, but not for occupation. The second study examines how humans compensate for force perturbations in different directions with respect to the line of action, and the effects of restricting movement time. A dynamic tracking task was presented to participants, who were told to follow a moving target as accurately as possible. During a fixed interval along the target's path, a force field would perturb them in an undisclosed direction. Nine force conditions and three speeds were tested on both the left and right hands. Statistical analyses and comparison of error data indicate an effect of force direction on compensation accuracy.
Speed is demonstrated to be a statistically significant factor in accuracy, and a linear relationship between speed and error is posited.
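The per-condition error comparisons described above can be sketched as an RMS tracking error between the cursor and target paths of a trial; the array layout is an assumption about how such trial data might be organised, not the thesis's actual analysis pipeline.

```python
import numpy as np

def rms_tracking_error(cursor, target):
    """RMS Euclidean distance between cursor and target trajectories.

    cursor, target : (n_samples, 2) x-y positions sampled over one trial
    Returns a scalar error for the trial.
    """
    d = np.linalg.norm(cursor - target, axis=1)  # per-sample distance
    return np.sqrt(np.mean(d ** 2))
```

Averaging this scalar over trials within each of the nine force conditions and three speeds yields the error data on which the statistical comparisons operate.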
