211

Analýza a klasifikace dat ze snímače mozkové aktivity / Data Analysis and Classification from the Brain Activity Detector

Persich, Alexandr January 2020 (has links)
This thesis describes recording, processing, and classifying brain activity captured by a brain-computer interface (BCI) device manufactured by OpenBCI. It examines whether such a device can be used to control an application through brain activity, specifically by imagining left- or right-hand movement. Signal processing and machine learning methods are applied to this task, resulting in a program that records, processes, and classifies brain activity using an artificial neural network. The average classification accuracy is 99.156% on synthetic data and 73.71% on real data.
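The pipeline described here (record, band-pass filter, extract features, classify with an artificial neural network) can be sketched roughly as follows. This is a minimal illustration, not the thesis code: the sampling rate, frequency band, feature choice, and network size are assumptions, and the data are random placeholders.

```python
# Minimal sketch of a left/right motor-imagery classification pipeline.
# Parameters (sampling rate, band limits, network size) are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def bandpass(epochs, fs=250.0, lo=8.0, hi=30.0):
    """Band-pass each epoch to the mu/beta range often used for motor imagery."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

def band_power_features(epochs):
    """Log-variance of each channel as a simple band-power feature."""
    return np.log(np.var(epochs, axis=-1))

# epochs: (n_trials, n_channels, n_samples); labels: 0 = left hand, 1 = right hand
epochs = np.random.randn(120, 8, 500)          # placeholder data
labels = np.random.randint(0, 2, size=120)     # placeholder labels

X = band_power_features(bandpass(epochs))
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```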
212

EEG-Based Estimation of Human Reaction Time Corresponding to Change of Visual Event.

January 2019 (has links)
abstract: The human brain controls a person's actions and reactions. The main objective of this study is to quantify reaction time to a change in a visual event and to characterize the relationship between response time and the corresponding brain activity. A further question is which parts of the brain are responsible for reaction time. Because electroencephalogram (EEG) signals track changes in brain function over time, EEG signals from different scalp locations are used as indicators of brain activity; since different channels record from different parts of the brain, identifying the most relevant channels points to the responsible brain regions. Response time is estimated using EEG features from the time, frequency, and time-frequency domains. Regression-based estimation on the full data set yields a root mean square error (RMSE) of 99.5 milliseconds and a correlation of 0.57; adding non-EEG features to the existing features gives an RMSE of 101.7 ms and a correlation of 0.58, and the same analysis on a custom data set gives an RMSE of 135.7 ms and a correlation of 0.69. Classification-based estimation achieves 79% and 72% accuracy for binary and 3-class classification respectively, and classification of extremes (high vs. low) reaches 95% accuracy. Combining recursive feature elimination, tree-based feature importance, and mutual information, the most important channels and features are isolated from the best-performing model. Because human response time does not depend solely on brain activity, additional information about the subject is needed to further improve the estimate. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2019
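A rough sketch of the regression-based estimation and feature-relevance steps described above, using generic scikit-learn components; the feature matrix, model choice, and all parameters are illustrative assumptions rather than the study's actual setup.

```python
# Illustrative regression-based reaction-time estimation from EEG features,
# reporting RMSE and correlation, plus three views of feature relevance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE, mutual_info_regression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X = np.random.randn(400, 64)            # placeholder EEG features (trials x features)
y = 300 + 80 * np.random.randn(400)     # placeholder reaction times in ms

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = np.sqrt(mean_squared_error(y_te, pred))
corr = np.corrcoef(y_te, pred)[0, 1]
print(f"RMSE: {rmse:.1f} ms, correlation: {corr:.2f}")

# Three complementary relevance measures, combined in the study to isolate channels/features:
rfe_rank = RFE(RandomForestRegressor(n_estimators=50, random_state=0),
               n_features_to_select=10).fit(X_tr, y_tr).ranking_
tree_importance = model.feature_importances_
mi = mutual_info_regression(X_tr, y_tr)
```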
213

Brain Computer Interface (BCI) Applications: Privacy Threats and Countermeasures

Bhalotiya, Anuj Arun 05 1900 (has links)
In recent years, brain-computer interfaces (BCIs) have gained popularity in non-medical domains such as the gaming, entertainment, personal health, and marketing industries. A growing number of companies offer inexpensive consumer-grade BCIs, and some have recently introduced BCI "App stores" to facilitate the expansion of BCI applications, providing software development kits (SDKs) for other developers to create new applications for their devices. These BCI applications have access to users' unique brainwave signals, which allows them to make inferences about users' thoughts and mental processes. Because no specific standards govern the development of BCI applications, their users are at risk of privacy breaches. In this work, we perform the first comprehensive analysis of BCI App stores, including their software development kits (SDKs), application programming interfaces (APIs), and BCI applications, with respect to privacy. The goal is to understand how brainwave signals are handled by BCI applications and what threats to user privacy exist. Our findings show that most applications have unrestricted access to users' brainwave signals and can easily extract private information about their users without the users noticing. We discuss potential privacy threats posed by current practices in BCI App stores and describe countermeasures that could mitigate them. We also develop a prototype that gives BCI app users the ability to dynamically restrict access to their brain signals.
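The prototype mentioned at the end of the abstract is essentially a run-time permission gate between applications and the raw brainwave stream. The following is a purely conceptual sketch of that idea; the class and method names are hypothetical and not drawn from any real BCI SDK.

```python
# Conceptual sketch: apps only receive raw brainwave samples when the user has
# granted permission, and the grant can be revoked at any time.
class BrainwaveGate:
    def __init__(self):
        self._grants = set()   # app ids the user has approved

    def grant(self, app_id):
        self._grants.add(app_id)

    def revoke(self, app_id):
        self._grants.discard(app_id)

    def read_samples(self, app_id, raw_samples):
        if app_id in self._grants:
            return raw_samples                     # full-fidelity EEG stream
        # Otherwise expose only a coarse, less privacy-sensitive summary.
        return {"attention_level": sum(raw_samples) / len(raw_samples)}

gate = BrainwaveGate()
gate.grant("game_app")
print(gate.read_samples("game_app", [0.1, 0.3, 0.2]))   # raw samples
gate.revoke("game_app")
print(gate.read_samples("game_app", [0.1, 0.3, 0.2]))   # summary only
```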
214

Facial Feature Tracking and Head Pose Tracking as Input for Platform Games

Andersson, Anders Tobias January 2016 (has links)
Modern facial feature tracking techniques can automatically extract and accurately track multiple facial landmark points from faces in video streams in real time. Facial landmark points are defined as points distributed on a face relative to certain facial features, such as eye corners and the face contour. This opens the door to using facial feature movements as a hands-free human-computer interaction technique. These alternatives to traditional input devices can give a more interesting gaming experience. They also allow more intuitive controls and can potentially give greater access to computers and video game consoles for certain disabled users who have difficulty using their arms and/or fingers. This research explores using facial feature tracking to control a character's movements in a platform game. The aim is to interpret facial feature tracker data and convert facial feature movements into game input controls. The facial feature input is compared with other hands-free input methods as well as traditional keyboard input. The other hands-free input methods explored are head pose estimation and a hybrid of the facial feature and head pose estimation inputs. Head pose estimation is a method in which the application extracts the angles at which the user's head is tilted. The hybrid input method utilises both head pose estimation and facial feature tracking. The input methods are evaluated by user performance and subjective ratings from voluntary participants playing a platform game with each input method. Performance is measured by the time, the number of jumps, and the number of turns it takes a user to complete a platform level. Jumping is an essential part of platform games: to reach the goal, the player has to jump between platforms, and an inefficient input method can make this a difficult task. Turning is the action of changing the direction of the player character from facing left to facing right or vice versa; this measurement is intended to pick up difficulties in controlling the character's movements. If the player makes many turns, it indicates that the input method makes it difficult to control the character's movements efficiently. The results suggest that keyboard input is the most effective input method, while it is also the least entertaining. There is no significant difference in performance between facial feature input and head pose input. The hybrid input has the best overall results of the alternative input methods: it achieved significantly better performance than the head pose and facial feature input methods, while its results showed no statistically significant difference from keyboard input. Keywords: Computer Vision, Facial Feature Tracking, Head Pose Tracking, Game Control / Modern techniques can automatically extract and accurately track multiple landmarks from faces in video streams. Facial landmarks are defined as points placed on the face along facial features such as the eyes or the face contour. This opens up the use of facial feature movements as a technique for hands-free human-computer interaction. These alternatives to traditional keyboards and game controllers can be used to make computers and game consoles more accessible to certain users with motor impairments. This thesis explores the usefulness of facial feature tracking for controlling a character in a platform game.
The goal is to interpret data from an application that tracks facial features and translate facial feature movements into game controller input. Facial feature input is compared with head pose estimation input, a hybrid of facial feature tracking and head pose estimation, and traditional keyboard controls. Head pose estimation is a technique in which the application extracts the angles at which the user's head is tilted. The hybrid method uses both facial feature tracking and head pose estimation. The input methods are examined by measuring efficiency in terms of time, number of jumps, and number of turns, together with subjective ratings from volunteer test users who play a platform game with the different input methods. Jumping is important in a platform game: to reach the goal, the player must jump between platforms, and an inefficient input method can make this difficult. A turn is when the player character changes direction from facing right to facing left or vice versa; a high number of turns may indicate that it is difficult to control the character's movements efficiently. The results suggest that keyboard input is the most effective method for controlling platform games, while it received the lowest ratings for how much fun the user had while playing. There was no statistically significant difference between head pose input and facial feature input. The hybrid of facial feature input and head pose input achieved the best overall results of the alternative input methods. Keywords: Computer Vision, Facial Feature Tracking, Head Pose Tracking, Game Input
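As a rough illustration of how head pose estimation can drive a platform game, the sketch below maps yaw and pitch angles to left/right movement and jumping. The thresholds and control names are assumptions for demonstration, not the thesis implementation.

```python
# Hypothetical mapping from head-pose angles (degrees) to discrete game inputs.
def head_pose_to_controls(yaw_deg, pitch_deg, yaw_threshold=15.0, pitch_threshold=20.0):
    controls = {"left": False, "right": False, "jump": False}
    if yaw_deg < -yaw_threshold:
        controls["left"] = True       # head turned left -> walk left
    elif yaw_deg > yaw_threshold:
        controls["right"] = True      # head turned right -> walk right
    if pitch_deg > pitch_threshold:
        controls["jump"] = True       # head tilted up -> jump
    return controls

print(head_pose_to_controls(yaw_deg=-22.0, pitch_deg=25.0))
# {'left': True, 'right': False, 'jump': True}
```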
215

Brain-Computer Interface (BCI) Evaluation in People With Amyotrophic Lateral Sclerosis

McCane, Lynn M., Sellers, Eric W., Mcfarland, Dennis J., Mak, Joseph N., Carmack, C. Steve, Zeitlin, Debra, Wolpaw, Jonathan R., Vaughan, Theresa M. 01 January 2014 (has links)
Brain-computer interfaces (BCIs) might restore communication to people severely disabled by amyotrophic lateral sclerosis (ALS) or other disorders. We sought to: 1) define a protocol for determining whether a person with ALS can use a visual P300-based BCI; 2) determine what proportion of this population can use the BCI; and 3) identify factors affecting BCI performance. Twenty-five individuals with ALS completed an evaluation protocol using a standard 6 × 6 matrix and parameters selected by stepwise linear discrimination. With an 8-channel EEG montage, the subjects fell into two groups by BCI accuracy (chance accuracy 3%). Seventeen averaged 92 (± 3)% (range 71-100%), which is adequate for communication (the G70 group). Eight averaged 12 (± 6)% (range 0-36%), which is inadequate for communication (the L40 group). Performance did not correlate with disability: 11/17 (65%) of G70 subjects were severely disabled (i.e. ALSFRS-R < 5). All L40 subjects had visual impairments (e.g. nystagmus, diplopia, ptosis). The P300 was larger and more anterior in G70 subjects. A 16-channel montage did not significantly improve accuracy. In conclusion, most people severely disabled by ALS could use a visual P300-based BCI for communication. In those who could not, visual impairment was the principal obstacle. For these individuals, auditory P300-based BCIs might be effective.
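The core discrimination step of a visual P300 BCI is a binary target/non-target classification of the EEG epoch following each flash. This study used stepwise linear discriminant analysis; the sketch below substitutes ordinary LDA from scikit-learn as a simplified stand-in, with placeholder data shapes.

```python
# Simplified flash-level P300 classification (not the study's stepwise LDA).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

n_epochs, n_channels, n_samples = 600, 8, 200
epochs = np.random.randn(n_epochs, n_channels, n_samples)   # placeholder EEG epochs
is_target = np.random.randint(0, 2, size=n_epochs)          # did the flash contain the target?

# Decimate in time and flatten channels x time into one feature vector per flash.
features = epochs[:, :, ::10].reshape(n_epochs, -1)

lda = LinearDiscriminantAnalysis()
print("Flash-level accuracy:", cross_val_score(lda, features, is_target, cv=5).mean())
```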
216

The Effect of Binaural Tones on EEG Waveforms and Human Computational Performance

Diersing, Christina L. January 2021 (has links)
No description available.
217

Improving Brain-Computer Interface Performance: Giving the P300 Speller Some Color.

Ryan, David B. 17 August 2011 (has links) (PDF)
Individuals who suffer from severe motor disabilities face the possibility of losing speech. A brain-computer interface (BCI) can provide a means of communication through non-muscular control. Current BCI systems use characters that flash from gray to white (GW), making adjacent characters difficult to distinguish from the target. The current study implements two types of color stimuli (gray to color [GC] and color intensification [CI]) and hypothesizes that color stimuli will (1) reduce distraction from nontargets, (2) enhance the target response, and (3) reduce eye strain. Online results (n=21) show that GC yields a higher information transfer rate than CI. Mean amplitude analysis revealed that GC had an earlier positive latency than GW and a greater negative amplitude than CI, suggesting a faster perceptual process for GC. Offline performance with individually optimized channels showed significant improvement over the online standardized channels. The results suggest the importance of a color stimulus for enhanced response and ease of use.
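Information transfer rate, used here to compare the GC and CI conditions, is conventionally computed with Wolpaw's formula for an N-choice selection task. The sketch below uses illustrative numbers (a 6 × 6 matrix, 90% accuracy, 5 s per selection) that are not taken from the study.

```python
# Wolpaw information transfer rate for an N-choice speller (illustrative numbers).
import math

def bits_per_selection(n_choices, accuracy):
    p, n = accuracy, n_choices
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_minute(n_choices, accuracy, seconds_per_selection):
    return bits_per_selection(n_choices, accuracy) * (60.0 / seconds_per_selection)

print(itr_bits_per_minute(36, 0.90, 5.0))   # ~50 bits/min for a 6 x 6 matrix speller
```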
218

Hand (Motor) Movement Imagery Classification of EEG Using Takagi-Sugeno-Kang Fuzzy-Inference Neural Network

Donovan, Rory Larson 01 June 2017 (has links) (PDF)
Approximately 20 million people in the United States suffer from irreversible nerve damage and would benefit from a neuroprosthetic device modulated by a Brain-Computer Interface (BCI). These devices restore independence by replacing peripheral nervous system functions such as peripheral control. Although devices are currently under investigation, contemporary methods fail to offer adaptability and proper signal recognition for output devices. Human anatomical differences prevent a fixed model system from providing consistent classification performance across subjects. Furthermore, notoriously noisy signals such as electroencephalography (EEG) require complex measures for signal detection. Therefore, there remains a tremendous need to explore and improve new algorithms. This report investigates a signal-processing model that is better suited for BCI applications because it incorporates machine learning and fuzzy logic. Whereas traditional machine learning techniques use precise functions to map the input into the feature space, fuzzy-neuro systems apply imprecise membership functions to account for uncertainty and can be updated via supervised learning. This method is thus better equipped to tolerate uncertainty and improve performance over time. Moreover, the variation of the algorithm used in this study has a higher convergence speed. The proposed two-stage signal-processing model consists of feature extraction and feature translation, with an emphasis on the latter. The feature extraction phase includes Blind Source Separation (BSS) and the Discrete Wavelet Transform (DWT), and the feature translation stage includes the Takagi-Sugeno-Kang Fuzzy-Neural Network (TSKFNN). The proposed model achieves an average classification accuracy of 79.4% across 40 subjects, which is higher than the standard literature value of 75%, making this a superior model.
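A compressed sketch of the two-stage model described above: wavelet sub-band energies as features, followed by a heavily simplified Takagi-Sugeno-Kang inference step. The wavelet choice, rule count, and membership parameters are assumptions, and the learned parameters the thesis trains are replaced here by random placeholders.

```python
# Wavelet feature extraction plus a toy zero-order TSK fuzzy-inference step.
import numpy as np
import pywt

def dwt_features(epoch, wavelet="db4", level=4):
    """Energy of each wavelet sub-band as a compact feature vector."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

def tsk_inference(x, centers, widths, consequents):
    """Rule firing strengths from Gaussian memberships; output is the
    firing-strength-weighted average of the rule consequents."""
    firing = np.exp(-np.sum(((x - centers) / widths) ** 2, axis=1))
    return np.dot(firing, consequents) / (np.sum(firing) + 1e-12)

epoch = np.random.randn(512)                  # one channel, one trial (placeholder)
x = dwt_features(epoch)

n_rules = 4
centers = np.random.randn(n_rules, x.size)    # rule centers (would be learned)
widths = np.ones((n_rules, x.size))           # rule widths (would be learned)
consequents = np.array([0.0, 1.0, 0.0, 1.0])  # per-rule class scores (would be learned)

score = tsk_inference(x, centers, widths, consequents)
print("predicted class:", int(score > 0.5))   # 0 = left hand imagery, 1 = right hand
```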
219

A Comparison of Signal Processing and Classification Methods for Brain-Computer Interface

Renfrew, Mark E. January 2009 (has links)
No description available.
220

Cerebellar theta oscillations are synchronized during hippocampal theta-contingent trace conditioning

Hoffmann, Loren C. 03 September 2009 (has links)
No description available.
