111 |
Design and Evaluation of Affective Serious Games for Emotion Regulation Training. Jerčić, Petar. January 2015 (has links)
Emotions are thought to be one of the key factors that critically influence human decision-making. Emotion regulation can help mitigate emotion-related decision biases and eventually lead to better decision performance. Serious games have emerged as a new angle, introducing technological methods for learning emotion regulation in which meaningful biofeedback communicates the player's emotional state. Games are a series of interesting choices, and the design of those choices could support an educational platform for learning emotion regulation. Digital serious games could benefit from such design, as those choices can be informed in real time, through the player's physiology, about emotional states. This thesis explores design and evaluation methods for creating serious games in which emotion regulation can be learned and practiced. The design of a digital serious game using physiological measures of emotion was investigated and evaluated. Furthermore, the thesis investigates emotions and the effect of emotion regulation on decision performance in digital serious games. Its scope was limited to digital serious games for emotion regulation training that use psychophysiological methods to communicate the player's affective information. Using these psychophysiological methods in the design and evaluation of digital serious games, emotions and their underlying neural mechanisms were explored. The effects of emotion regulation were investigated, with decision performance measured and analyzed, and the proposed metrics for designing and evaluating such affective serious games were extensively evaluated. The research methods combined quantitative and qualitative aspects, with true experiments and evaluation research, respectively. A digital serious games approach to emotion regulation was investigated, in which the player's physiology of emotion informs the design of interactions through which regulation of those emotions can be practiced.
The results suggested that two different emotion regulation strategies, suppression and cognitive reappraisal, are optimal in different decision-task contexts. With careful design methods, valid serious games for training these different strategies can be produced. Moreover, using psychophysiological methods, the underlying neural mechanism of emotion can be mapped. This could inform a digital serious game about the optimal level of arousal for a given task, as evidence suggests that arousal is equally or more important than valence for decision-making. The results suggest that it is possible to design and develop digital serious game applications that provide a helpful learning environment in which decision makers can practice emotion regulation and subsequently improve their decision-making. If physiological arousal is assumed to be more important than physiological valence for learning purposes, the results show that the digital serious games designed in this thesis elicit high physiological arousal, making them suitable as an educational platform.
|
112 |
Detection of Movement Intention Onset for Brain-machine Interfaces. McGie, Steven. 15 February 2010 (has links)
The goal of the study was to use electrical signals from primary motor cortex to generate
accurate predictions of the movement onset time of performed movements, for potential
use in asynchronous brain-machine interface (BMI) systems. Four subjects, two with
electroencephalogram and two with electrocorticogram electrodes, performed various movements while activity from their primary motor cortices was recorded. An analysis program used several criteria (change point, fractal dimension, spectral entropy, sum of differences, bandpower, bandpower integral, phase, and variance), derived from the neural recordings, to generate predictions of movement onset time, which it compared to electromyogram activity onset time, determining prediction accuracy by receiver operating characteristic curve areas. All criteria, excepting phase and change-point analysis, generated accurate predictions in some cases.
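As a hedged illustration of the evaluation metric (not code from the study): the ROC curve area can be computed as the rank-based probability that a criterion value drawn near a true, EMG-confirmed onset exceeds one drawn from rest. All numbers below are synthetic.

```python
# Illustrative sketch of scoring an onset-detection criterion by ROC
# curve area. The "bandpower" values are synthetic stand-ins, not data
# from the study; the AUC routine is the standard rank-based estimator.

def roc_auc(pos_scores, neg_scores):
    """ROC curve area via the Mann-Whitney statistic: the probability
    that a random positive score outranks a random negative score."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Criterion values sampled around true EMG onsets (positives)
# versus rest periods (negatives) -- synthetic illustration only.
onset_bandpower = [0.9, 0.8, 0.75, 0.5]
rest_bandpower = [0.3, 0.4, 0.55, 0.2]

auc = roc_auc(onset_bandpower, rest_bandpower)
print(round(auc, 3))  # 0.938; an area of 1.0 would be perfect detection
```

An area near 0.5 would mean the criterion carries no onset information, which is one way a criterion such as phase could fail here.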
|
113 |
Development of Brain-machine Interfaces. Marquez Chin, Cesar. 31 August 2011 (has links)
A brain-machine interface (BMI) uses signals from the brain to control electronic devices. One application of this technology is the control of assistive devices to facilitate movement after paralysis. Ideally, the BMI would identify an intended movement and control an assistive device to produce the desired movement. To implement such a system, it is necessary to identify different movements involving a single limb and users must be able to issue commands at any instant instead of only during specific time windows determined by the BMI itself.
A novel processing technique to identify voluntary movements using only four electrodes is presented. Histograms containing the spectral components of intracranial neural signals displaying power changes correlated with movement were unique for each of three movements performed with one limb. Off-line classification of the histograms allowed the identification of the performed movement with an accuracy of 89%.
This movement identification system was interfaced with a neuroprosthesis for grasping, fitted to a tetraplegic individual. The user pressed a button triggering the random selection and classification of a brain signal previously recorded intracranially from a different person while performing specific arm movements. Correct identification of the movement triggered grasping functions. Movement identification accuracy was 94% allowing successful operation of the neuroprosthesis.
Finally, two BMIs for the real-time asynchronous control of two-dimensional movements were created using a single electrode. One EEG-based system was tested by a healthy participant. A second system was implemented and tested using recordings from an individual undergoing clinical intracranial electrode implantation. The users modulated their 7-13 Hz oscillatory rhythm through motor imagery. A power decrease below a threshold activated a "brain-switch". This switch was coupled with a novel asynchronous control strategy to control a miniature remotely-controlled vehicle as well as a computer cursor. Successful operation of the EEG system required 6 hours of training; ECoG control was achieved after only 15 minutes. The operation of the BMI was simple enough to allow users to focus on the task at hand rather than on the actual operation of the BMI.
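The brain-switch idea above can be sketched as a simple bandpower threshold. This is an illustrative reconstruction, not the thesis implementation; the signal, sampling rate, and threshold rule are assumptions.

```python
# Hedged sketch of a "brain-switch": motor imagery suppresses the
# 7-13 Hz rhythm, and a drop in bandpower below a threshold toggles
# the switch. Signal and threshold below are illustrative assumptions.
import numpy as np

def alpha_bandpower(x, fs, lo=7.0, hi=13.0):
    """Power in the lo-hi Hz band from a one-sided FFT."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].sum()

fs = 250                      # Hz, assumed sampling rate
t = np.arange(fs) / fs        # one second of signal

rest = np.sin(2 * np.pi * 10 * t)            # strong 10 Hz rhythm at rest
imagery = 0.2 * np.sin(2 * np.pi * 10 * t)   # rhythm suppressed by imagery

threshold = 0.5 * alpha_bandpower(rest, fs)  # e.g. half of resting power
switch_on = alpha_bandpower(imagery, fs) < threshold
print(switch_on)  # True: imagery suppressed the rhythm enough to trigger
```

In a real asynchronous system this comparison would run continuously on a sliding window, so the user can issue a command at any instant.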
|
115 |
Effects of feedback on recovery of pointing movements in two training environments in stroke: a pilot study. Subramanian, Sandeep. January 2007 (has links)
Virtual reality environments (VEs) are new tools for improving functional recovery in stroke survivors. Elements essential to maximizing motor learning can be optimized in VEs. The study objectives were: (a) to determine whether training in a VE with enhanced feedback about movement patterns leads to greater gains in arm movement quality and motor performance, and to decreased compensation, compared to training in a similarly designed physical environment (PE); and (b) to estimate whether impairments in cognitive functioning affected the changes observed after training. Twelve stroke survivors practiced 72 pointing movements in the VE or PE for 10 sessions with enhanced feedback. Kinematic analysis of the pointing task and evaluations of arm impairment and function were carried out pre- and post-training. After training, the VE group showed increased shoulder flexion (p < 0.05), increased shoulder horizontal adduction, and decreased compensation compared to the PE group. Use of feedback correlated with fewer deficits in cognitive functioning. Training in VEs may lead to greater gains in movement quality.
|
116 |
Separable Spatio-spectral Patterns in EEG Signals During Motor-imagery Tasks. Shokouh Aghaei, Amirhossein. 1 September 2014 (links)
Brain-computer interface (BCI) systems aim to provide a non-muscular channel through which the brain can control external devices using its electrical activity. BCI systems can be used in various applications, such as controlling a wheelchair, a neuroprosthesis, or a speech synthesizer for disabled individuals, navigating virtual environments, and assisting healthy individuals in performing highly demanding tasks. Motor-imagery BCI systems in particular are based on decoding the imagination of motor tasks, e.g., moving a wheelchair or a mouse cursor on the computer screen to the right or left by imagining right- or left-hand movement. During the past decade, there has been growing interest in the use of electroencephalogram (EEG) signals for non-invasive motor-imagery BCI systems, due to EEG's low cost, ease of use, and widespread availability.
During motor-imagery tasks, multichannel EEG signals exhibit task-specific features in both spatial domain and spectral (or frequency) domain. This thesis studies the statistical characteristics of the multichannel EEG signals
in these two domains and proposes a new approach for spatio-spectral feature extraction in motor-imagery BCI systems. This approach is based on the fact that due to the
multichannel structure of the EEG data, its spatio-spectral features have a matrix-variate structure. This structure, which has been overlooked in the literature, can be exploited to design more efficient feature extraction methods for motor-imagery BCIs.
Towards this end, this research work adopts a matrix-variate Gaussian model for the spatio-spectral features, which assumes a separable Kronecker product structure for the covariance of these features. This separable structure, together with the general properties of the Gaussian model, enables us to design new feature extraction schemes which can operate on the data in its inherent matrix-variate structure to reduce the computational cost of the BCI system while improving its performance. Throughout this thesis, the proposed matrix-variate model and its implications are studied in various feature extraction scenarios.
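The computational benefit of the separable structure can be illustrated generically (this is not the thesis's own feature extraction method): whitening a matrix-variate feature whose covariance is kron(S_spec, S_spat) needs only the two small Cholesky factors, never the full Kronecker matrix.

```python
# Illustrative sketch of the Kronecker-separable covariance idea.
# Dimensions and matrices are synthetic assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n_spat, n_spec = 4, 3

# Small symmetric positive-definite factors standing in for the
# spatial and spectral covariances of the matrix-variate feature.
A = rng.standard_normal((n_spat, n_spat))
S_spat = A @ A.T + n_spat * np.eye(n_spat)
B = rng.standard_normal((n_spec, n_spec))
S_spec = B @ B.T + n_spec * np.eye(n_spec)

X = rng.standard_normal((n_spat, n_spec))  # one matrix-variate feature

# Naive route: whiten vec(X) with the full Kronecker covariance.
Sigma = np.kron(S_spec, S_spat)            # (12 x 12)
L = np.linalg.cholesky(Sigma)
w_full = np.linalg.solve(L, X.flatten(order="F"))

# Separable route: the same whitening via the two small factors,
# using the identity kron(A, B) vec(X) = vec(B @ X @ A.T).
L_spat = np.linalg.cholesky(S_spat)        # (4 x 4)
L_spec = np.linalg.cholesky(S_spec)        # (3 x 3)
W = np.linalg.solve(L_spat, X) @ np.linalg.inv(L_spec).T
w_sep = W.flatten(order="F")

print(np.allclose(w_full, w_sep))  # True: the two whitenings agree
```

For realistic EEG dimensions (tens of channels times tens of frequency bins) the full Kronecker matrix has thousands of rows, so operating on the small factors is the source of the cost reduction the text refers to.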
|
117 |
A Study of a Feature Extraction Method Using the Slope of the Readiness Potential for Discriminating Motor Imagery of Hands and Feet from EEG. FURUHASHI, Takeshi; YOSHIKAWA, Tomohiro; TAKAHASHI, Hiromu; NAKAMURA, Shotaro (古橋, 武; 吉川, 大弘; 高橋, 弘武; 中村, 翔太郎). 15 November 2010 (has links)
No description available.
|
118 |
A semiotic approach to the use of metaphor in human-computer interfaces. Condon, Chris. January 1999 (has links)
Although metaphors are common in computing, particularly in human-computer interfaces, opinion is divided on their usefulness to users and little evidence is available to help the designer in choosing or implementing them. Effective use of metaphors depends on understanding their role in the computer interface, which in turn means building a model of the metaphor process. This thesis examines some of the approaches which might be taken in constructing such a model before choosing one and testing its applicability to interface design. Earlier research into interface metaphors used experimental psychology techniques which proved useful in showing the benefits or drawbacks of specific metaphors, but did not give a general model of the metaphor process. A cognitive approach based on mental models has proved more successful in offering an overall model of the process, although this thesis questions whether the researchers tested it adequately. Other approaches which have examined the metaphor process (though not in the context of human-computer interaction) have come from linguistic fields, most notably semiotics, which extends linguistics to non-verbal communication and thus could cover graphical user interfaces (GUIs). The main work described in this thesis was the construction of a semiotic model of human-computer interaction. The basic principle of this is that even the simplest element of the user interface will signify many simultaneous meanings to the user. Before building the model, a set of assertions and questions was developed to check the validity of the principles on which the model was based. Each of these was then tested by a technique appropriate to the type of issue raised. Rhetorical analysis was used to establish that metaphor is commonplace in command-line languages, in addition to its more obvious use in GUIs.
A simple semiotic analysis, or deconstruction, of the Macintosh user interface was then used to establish the validity of viewing user interfaces as semiotic systems. Finally, an experiment was carried out to test a mental model approach proposed by previous researchers. By extending their original experiment to more realistically complex interfaces and tasks and using a more typical user population, it was shown that users do not always develop mental models of the type proposed in the original research. The experiment also provided evidence to support the existence of multiple layers of signification. Based on the results of the preliminary studies, a simple means of testing the semiotic model's relevance to interface design was developed, using an interview technique. The proposed interview technique was then used to question two groups of users about a simple interface element. Two independent researchers then carried out a content analysis of the responses. The mean number of significations in each interview, as categorised by the researchers, was 15. The levels of signification were rapidly revealed, with the mean time for each interview being under two minutes, providing effective evidence that interfaces signify many meanings to users, a substantial number of which are easily retrievable. It is proposed that the interview technique could provide a practical and valuable tool for systems analysis and interface designers. Finally, areas for further research are proposed, in particular to ascertain how the model and the interview technique could be integrated with other design methods.
|
119 |
The design and evaluation of non-visual information systems for blind users. Morley, Sarah. January 1999 (has links)
This research was motivated by the sudden increase of hypermedia information (such as that found on CD-ROMs and on the World Wide Web), which was not initially accessible to blind people, although it offered significant advantages over traditional braille and audiotape information. Existing non-visual information systems for blind people had very different designs and functionality, but none of them provided what was required according to user requirements studies: an easy-to-use non-visual interface to hypermedia material with a range of input devices for blind students. Furthermore, there was no single suitable design and evaluation methodology which could be used for the development of non-visual information systems. The aims of this research were therefore: (1) to develop a generic, iterative design and evaluation methodology consisting of a number of techniques suitable for formative evaluation of non-visual interfaces; (2) to explore non-visual interaction possibilities for a multimodal hypermedia browser for blind students based on user requirements; and (3) to apply the evaluation methodology to non-visual information systems at different stages of their development. The methodology developed and recommended consists of a range of complementary design and evaluation techniques, and successfully allowed the systematic development of prototype non-visual interfaces for blind users by identifying usability problems and developing solutions. Three prototype interfaces are described: the design and evaluation of two versions of a hypermedia browser, and an evaluation of a digital talking book. Recommendations made from the evaluations for an effective non-visual interface include the provision of a consistent multimodal interface, non-speech sounds for information and feedback, a range of simple and consistent commands for reading, navigation, orientation and output control, and support features.
This research will inform developers of similar systems for blind users, and in addition, the methodology and design ideas are considered sufficiently generic, but also sufficiently detailed, that the findings could be applied successfully to the development of non-visual interfaces of any type.
|
120 |
The Automated Detection of Changes in Cerebral Perfusion Accompanying a Verbal Fluency Task: A Novel Application of Transcranial Doppler. Faulkner, Hayley. 7 December 2011 (has links)
Evidence suggests that cerebral blood flow patterns accompanying a mental activity are retained in many locked-in patients. Thus, real-time monitoring with functional transcranial Doppler (TCD) together with a specific mental task could control a brain-computer interface (BCI), thereby providing self-initiated interaction.
The objective of this study was to create an automatic detection algorithm to differentiate hemodynamic responses coincident with one's performance of verbal fluency (VF) versus counting tasks.
We recruited 10 healthy adults, each of whom silently performed up to 30 VF tasks and counted between tasks. Both middle cerebral arteries were simultaneously imaged using TCD. Linear discriminant analysis (LDA) successfully differentiated between VF and both the prior and subsequent counting tasks. For every participant, LDA achieved the 70% classification accuracy considered sufficient for BCIs. The results demonstrate automatic detection of a VF task by TCD and warrant further investigation of TCD as a BCI.
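A minimal sketch of the classification step, under assumed synthetic data (the feature definitions, cluster parameters, and class sizes are illustrative, not the study's): a Fisher linear discriminant separating verbal-fluency epochs from counting epochs, checked against the 70% criterion.

```python
# Illustrative Fisher LDA on synthetic stand-ins for left/right
# middle-cerebral-artery velocity features during verbal-fluency (VF)
# versus counting epochs. Data and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Two features per epoch (e.g., mean velocity change, left & right MCA).
vf = rng.normal(loc=[2.0, 1.0], scale=0.6, size=(30, 2))     # VF epochs
count = rng.normal(loc=[0.0, 0.0], scale=0.6, size=(30, 2))  # counting

X = np.vstack([vf, count])
y = np.array([1] * 30 + [0] * 30)

# Fisher discriminant: w = Sw^-1 (mu1 - mu0), with the decision
# threshold placed midway between the projected class means.
mu1, mu0 = vf.mean(axis=0), count.mean(axis=0)
Sw = np.cov(vf.T) + np.cov(count.T)   # pooled within-class scatter
w = np.linalg.solve(Sw, mu1 - mu0)
c = w @ (mu1 + mu0) / 2

pred = (X @ w > c).astype(int)
accuracy = (pred == y).mean()
print(accuracy >= 0.70)  # True: meets the BCI-sufficiency criterion here
```

In practice the study's classifier would be trained and tested on held-out epochs per participant; the in-sample accuracy above is only meant to show the mechanics of the discriminant.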
|