
Interaction between visual attention and the processing of visual emotional stimuli in humans : eye-tracking, behavioural and event-related potential experiments

Acunzo, David Jean Pascal January 2013
Past research has shown that the processing of emotional visual stimuli and visual attention are tightly linked. In particular, emotional stimulus processing can modulate attention and, reciprocally, can be facilitated or inhibited by attentional processes. However, our understanding of these interactions is still limited, and much work remains to be done to characterise this reciprocal interaction and the mechanisms at play. This thesis presents a series of experiments using eye-tracking, behavioural and event-related potential (ERP) methods to better understand these interactions from a cognitive and neuroscientific point of view. First, the influence of emotional stimuli on eye movements, reflecting overt attention, was investigated. While it is known that the emotional gist of images attracts the eye (Calvo and Lang, 2004), little is known about the influence of emotional content on eye movements in more complex visual environments. Using eye-tracking methods, and by adapting a paradigm originally used to study the influence of semantic inconsistencies in scenes (Loftus and Mackworth, 1978), we found that participants spent more time fixating emotional than neutral targets embedded in visual scenes, but did not fixate them earlier. Emotional targets in scenes were therefore found to hold, but not to attract, the eye. This suggests that, given the complexity of the scenes and the limited processing resources available, emotional information projected extra-foveally is not processed in a way that drives eye movements. Next, to better characterise the exogenous deployment of covert attention toward emotional stimuli, a sample of sub-clinically anxious individuals was studied. Anxiety is characterised by a reflexive attentional bias toward threatening stimuli. A dot-probe task (MacLeod et al., 1986) was designed to replicate and extend past findings of this attentional bias; in particular, it tested whether the bias was caused by faster reaction times to fear-congruent probes or slower reaction times to neutral-congruent probes. No attentional bias could be measured. A further analysis of the literature suggests that subliminal cue presentation, as used in our case, may not generate reliable attentional biases, unlike longer cue presentations. This would suggest that while emotional stimuli can be processed without awareness, further processing may be necessary to trigger reflexive attentional shifts in anxiety. The time course of emotional stimulus processing and its modulation by attention were then investigated. Modulations of the very early visual ERP C1 component by emotional stimuli (e.g. Pourtois et al., 2004; Stolarova et al., 2006), but also by visual attention (Kelly et al., 2008), have been reported in the literature. A series of three experiments was performed, investigating how endogenous covert spatial attention and object-based attention interact with emotional stimulus processing in the C1 time window (50–100 ms). Emotional stimuli modulated the C1 only when they were spatially attended and task-irrelevant. This suggests that whilst spatial attention gates emotional face processing from the earliest stages, only incidental processing triggers a specific response before 100 ms.
Additionally, the results suggest a very early modulation by feature-based attention that is independent of spatial attention. Finally, simulated and real electroencephalographic data were used to show that modulations of early ERP and event-related field (ERF) components are highly dependent on the high-pass filter applied at the pre-processing stage. A survey of the literature found that a large proportion of ERP/ERF reports (about 40%) use high-pass filters that may bias the results; in particular, many of the papers reporting very early modulations use such filters. Consequently, a substantial part of the literature may need to be re-assessed. The work described in this thesis contributes to a better understanding of the links between emotional stimulus processing and attention at several levels. Using various experimental paradigms, it confirms that emotional stimulus processing is not 'automatic' but highly dependent on the focus of attention, even at the earliest stages of visual processing. Furthermore, the potential filtering bias uncovered here should help improve the reliability and precision of research in the ERP/ERF field, particularly in studies examining early effects.
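The filtering result lends itself to a small demonstration. The sketch below is ours, not the thesis's own simulation; the sampling rate, component latency and filter cutoffs are all assumptions. It shows how an aggressive zero-phase high-pass filter smears a simulated late ERP component backwards into the pre-100 ms window where components such as the C1 are measured:

```python
# Sketch: how an aggressive high-pass filter can smear a late ERP component
# backwards in time, creating a spurious "early" effect. Illustrative only;
# all parameters are assumptions, not those used in the thesis.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                            # sampling rate (Hz), assumed
t = np.arange(-0.2, 0.8, 1 / fs)      # epoch from -200 ms to 800 ms

# Simulated ERP: a single positive component peaking at 300 ms.
erp = np.exp(-0.5 * ((t - 0.3) / 0.05) ** 2)

def highpass(x, cutoff_hz, fs, order=2):
    """Zero-phase Butterworth high-pass, as in typical ERP pre-processing."""
    b, a = butter(order, cutoff_hz / (fs / 2), btype="high")
    return filtfilt(b, a, x)

conservative = highpass(erp, 0.1, fs)  # mild filter: waveform barely changes
aggressive = highpass(erp, 2.0, fs)    # strong filter, as in many surveyed papers

# The aggressive filter pushes an artefactual deflection into the baseline
# and the early (50-100 ms) window, where the simulated signal was zero.
early = (t > 0.05) & (t < 0.1)
print("max |early activity|, 0.1 Hz filter:", np.abs(conservative[early]).max())
print("max |early activity|, 2.0 Hz filter:", np.abs(aggressive[early]).max())
```

With the 0.1 Hz filter the early window stays essentially at zero, while the 2 Hz filter introduces a clear artefactual deflection there, which is the kind of bias the survey flags.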

Anxiety, attention and performance variability in visuo-motor skills

Vine, Samuel James January 2010
The aims of the current program of research were to examine the impact of anxiety on performance and attentional control during the execution of two far aiming tasks, and to examine the efficacy of gaze training interventions in mediating these effects. Attentional control theory (ACT), which suggests that anxious individuals have impaired goal-directed attentional control, was adopted as the theoretical framework, and the Quiet Eye (QE), characterised by long final fixations on relevant locations, was adopted as an objective measure of overt attentional control. In Studies 1 and 2, increased pressure impaired goal-directed attentional control (QE) at the expense of stimulus-driven control (more fixations of shorter duration to various targets). The aim of Studies 3 and 4 was therefore to examine the efficacy of an intervention designed to train effective visual attentional control (QE training) in novices, and to determine whether such training protected against the attentional disruptions associated with performing under pressure. In both studies the QE-trained group maintained more effective visual attentional control and performed significantly better in a subsequent pressure test than the control group, supporting the efficacy of attentional training for visuo-motor skills. The aim of Study 5 was to examine the effectiveness of a brief QE training intervention for elite golfers and to examine whether the benefits shown for novices in Studies 3 and 4 transferred to competitive play. The QE-trained group maintained their optimal QE and performance under pressure, whereas the control group experienced reductions in both. Importantly, these advantages transferred to the golf course, where QE-trained golfers reduced their putts per round by 1.9 compared with pre-training, whereas the control group showed no change in their putting statistics. This series of studies has therefore implicated attention in the breakdown of performance under pressure, and suggests that visual attentional training regimes may be a useful means of alleviating this problem.
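For readers unfamiliar with the Quiet Eye measure, here is a minimal sketch of how a QE duration could be extracted from a fixation log, using the common operationalisation of the final target fixation that begins before movement onset. The `Fixation` record, the on-target flag and the sample values are illustrative assumptions, not the thesis's actual processing pipeline.

```python
# Sketch: extracting a Quiet Eye (QE) duration from a fixation log.
# Field names and the trial data are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Fixation:
    onset_ms: float
    offset_ms: float
    on_target: bool   # e.g. fixation within ~3 degrees of the target

def quiet_eye_duration(fixations, movement_onset_ms):
    """Duration of the last target fixation starting before movement onset."""
    candidates = [f for f in fixations
                  if f.on_target and f.onset_ms < movement_onset_ms]
    if not candidates:
        return 0.0
    qe = max(candidates, key=lambda f: f.onset_ms)
    return qe.offset_ms - qe.onset_ms

trial = [
    Fixation(0, 180, False),     # scanning the scene
    Fixation(200, 420, True),    # brief look at the target
    Fixation(450, 1250, True),   # final target fixation -> the QE period
]
print(quiet_eye_duration(trial, movement_onset_ms=600))  # -> 800.0 ms
```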

Emotion processing after childhood Acquired Brain Injury (ABI) : an eye tracking study

Oliphant, Jenna January 2012
Few studies have explored emotion processing abilities in children following Acquired Brain Injury (ABI). This study builds on previous research in this area by exploring emotion processing skills in children with focal ABI, using eye tracking technology. It was hypothesised that children with focal ABI would demonstrate impaired emotion recognition abilities relative to a control group and that, as in adult eye tracking studies, they would show an atypical pattern of eye movements when viewing faces. Sixteen participants with focal ABI (10-16 years) and 27 healthy controls (10-16 years) completed one novel and one adapted visual emotion processing task, presented using a Tobii T120 eye-tracker. The eye-tracker measured fixations in three areas of interest (AOIs: eyes, nose, mouth) as participants viewed the stimuli. Emotion perception accuracy was recorded. All participants in the ABI group also completed neuropsychological assessment of their immediate visual memory, visual attention, visuospatial abilities and everyday executive function. The results showed no significant difference in accuracy between the ABI and control groups, although on average children with ABI appeared slightly less accurate than controls on both emotion recognition tasks. Within-subjects analysis revealed no effect of lesion location, laterality or age at lesion onset on emotion recognition accuracy. Eye tracking analysis showed that children in the ABI group presented with an atypical pattern of eye movements relative to the control group, demonstrating significantly greater fixation times within the eye region when viewing disgusted, fearful, angry and happy faces. The ABI group also showed reduced mean percentage fixation duration within the nose and mouth regions relative to controls. Furthermore, the ABI group took longer on average to give an accurate response to sad, disgusted, happy and surprised faces, and this difference reached statistical significance for the accurate recognition of happy and surprised faces. It is suggested that the atypical fixation patterns noted in the ABI group may represent a difficulty with dividing visual attention rapidly across the whole of the face. This slowing may have an impact on functioning in everyday social situations, where rapid processing and appraisal of emotion is thought to be particularly important. Eye tracking technology may therefore be a valuable method for identifying subtle difficulties in facial emotion processing following focal ABI in childhood, and may also have an application in the rehabilitation of these difficulties in future.
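A small sketch of the dependent measure used above, percentage fixation duration per area of interest, assuming rectangular AOIs and a simple (x, y, duration) fixation log; the coordinates and sample values are invented for illustration.

```python
# Sketch: percentage fixation duration per area of interest (AOI).
# AOI rectangles and the sample log are illustrative assumptions.
aois = {                       # (x_min, y_min, x_max, y_max) in pixels
    "eyes":  (300, 200, 500, 260),
    "nose":  (360, 260, 440, 330),
    "mouth": (340, 330, 460, 390),
}

def aoi_of(x, y):
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"

def percent_dwell(fixations):
    """fixations: list of (x, y, duration_ms) -> % of total time per AOI."""
    totals = {name: 0.0 for name in [*aois, "other"]}
    for x, y, dur in fixations:
        totals[aoi_of(x, y)] += dur
    grand = sum(totals.values()) or 1.0
    return {name: 100.0 * t / grand for name, t in totals.items()}

sample = [(400, 230, 620), (390, 300, 150), (410, 350, 230)]
print(percent_dwell(sample))   # -> eyes 62%, nose 15%, mouth 23%, other 0%
```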

Visual Attention to Photograph and Cartoon Images in Social Stories™: A Comparison of Typically Developing Children and Children with ASD

Sedeyn, Chelsea Michelle 01 January 2017 (has links)
Autism Spectrum Disorder (ASD) is often accompanied by atypical attention to faces. Some previous studies have suggested that children with ASD demonstrate strengths when processing visual information from cartoons, whereas others have argued that photographic stimuli confer benefits. No previous studies have compared photograph and cartoon images of faces (i.e., Boardmaker [BM] images) in the context of a Social Story™ (Gray, 2010): a common intervention to support behavior and social cognition in children with ASD. In this study, we examined visual attention to static face stimuli in the context of Social Stories™. Participants were 19 typically developing (TD) children and 18 age-matched children with ASD. We addressed two questions: 1) Is there a difference between TD children and children with ASD in how they attend to cartoon and photographic stimuli in the context of a Social Story™? and 2) Do group differences in visual attention to BM and/or photographic stimuli correlate with age and indices of autism severity, executive function, intellectual functioning, and weak central coherence? With regard to question 1, and with one exception, we found no differences between groups when viewing images of faces. The exception involved a cartoon-photograph pair that differed in content from the other face images in that it represented a person's full body as well as a range of objects (i.e., a more complex scene). For these images an interaction was observed such that the TD and ASD groups did not differ in their looking patterns in the Boardmaker condition but did differ in the photograph condition. More specifically, we found that a shift toward more mouth-looking in the photograph condition among children with ASD was negatively associated with attention shifting and verbal IQ, and that a shift toward more "other"-looking (i.e., looking outside the eye and mouth regions of the face) was negatively associated with attention shifting, age, and central coherence. These findings suggest that children with ASD demonstrate typical visual attention patterns to both cartoon and photographic stimuli representing faces, but employ an atypical scanning strategy when presented with photographic stimuli representing more complex social scenes. The theoretical and clinical implications of the findings are discussed.

The Stare-In-The-Crowd Effect: Phenomenology, Psychophysiology, And Relations To Psychopathology

Crehan, Eileen Tara 01 January 2016 (has links)
The eyes are a valuable source of information for a range of social processes. The stare-in-the-crowd effect describes the ability to detect self-directed gaze. Impairment in gaze detection mechanisms, such as the stare-in-the-crowd effect, has implications for social interactions and the development of social relationships. Given the frequency with which humans rely on gaze detection in interactions, there is a need to better characterize the stare-in-the-crowd effect. This study used a previously validated dynamic visual paradigm to capture the stare-in-the-crowd effect. We compared typically-developing (TD) young adults and young adults with Autism Spectrum Disorder (ASD) on multiple measures of psychophysiology, including eye tracking and heart rate monitoring. Four conditions of visual stimuli were presented: averted gaze, mutual gaze, catching another staring, and getting caught staring. Eye tracking outcomes and arousal (pupil size and heart rate variability) were compared by diagnosis (TD or ASD) and condition using repeated-measures ANOVA. A significant interaction of diagnosis and condition was found for interest area (IA) dwell time, IA fixation count, and IA second fixation duration. Hierarchical regression was used to assess how dimensional behavioral measures predicted eye tracking outcomes and arousal; only two models, each with advanced theory of mind as a predictor, were significant. Overall, we demonstrated that individuals with ASD respond to the various gaze conditions in patterns similar to TD individuals, but to a lesser extent. This offers potential targets for social interventions that capitalize on this present but underdeveloped response to gaze. Implications and future directions are discussed.
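A sketch of the diagnosis-by-condition comparison described above, written as a mixed-design ANOVA (between-subject diagnosis, within-subject gaze condition) with the `pingouin` package; the column names, data shape and synthetic values are assumptions about one plausible analysis pipeline, not the study's actual code.

```python
# Sketch: mixed ANOVA for a between (diagnosis) x within (condition) design,
# the structure of the eye-tracking comparison above. Data are synthetic.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for i in range(20):                                   # 10 ASD + 10 TD subjects
    diagnosis = "ASD" if i < 10 else "TD"
    for condition in ["averted", "mutual", "catch_other", "get_caught"]:
        rows.append({"participant": f"p{i}", "diagnosis": diagnosis,
                     "condition": condition,
                     "dwell_time": rng.normal(1.2, 0.3)})  # seconds, fake
df = pd.DataFrame(rows)

# The diagnosis x condition interaction row is the effect of interest.
aov = pg.mixed_anova(data=df, dv="dwell_time", subject="participant",
                     within="condition", between="diagnosis")
print(aov[["Source", "F", "p-unc"]])
```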

Mobilní telefon prizmatem sociologie / Mobile Phone Through the Prism of Sociology

Kolářová, Kristina January 2012
The theoretical part of this thesis reviews recent studies of how the mobile phone affects individuals, communication and society as a whole. Particular attention is paid to behavioural differences between men and women in relation to mobile phones. The last section surveys the attention the mobile phone has received in marketing research. In the empirical part, the author uses methods more typical of marketing research (eye tracking, retrospective think-aloud (RTA) and interviews) to describe the selection process when buying a mobile phone through an e-shop and to identify possible differences between men and women. Eye-tracking measurements of 50 participants aged 18-29 years (25 men, 25 women) showed that the eye behaviour of men and women when choosing a mobile phone differed significantly, particularly in the time respondents devoted to particular information areas. The interviews revealed further differences, such as the differing importance of selection criteria for the two genders and unequal levels of independence in choosing the phone.

Behavioral and cognitive basis of sequential actions : can human intentions be revealed through movement kinematics? / Intentionnalité et interactions motrices : comment appréhender les intentions d'autrui à partir de la dynamique comportementale ?

Lewkowicz, Daniel 06 December 2013 (has links)
L'objectif de ma thèse est de participer à la construction d'un nouveau robot humanoïde qui peut réaliser des interactions intuitives avec l'humain à travers l'observation et l'imitation. Pour cela, j'ai conduit une série d'études expérimentales chez le jeune adulte pour caractériser les propriétés cinématiques des mouvements du bras réalisés pendant des interactions motrices et sociales, autant d'éléments qui seront les patterns de référence pour le futur robot. En se concentrant sur le comportement non-verbal, nous avons testé comment les contraintes externes et internes (difficulté, prédictibilité, temporalité) façonne la cinématique des mouvements du bras et de la main dans une simple action séquentielle de prise et de pose d'un objet (étude 1 et 2). Les résultats révèlent des modulations précoces dans la cinématique de la phase d'atteinte et de saisie, en fonction de la taille et de la stabilité du réceptacle terminal sur lequel l'objet devait être placé. Ces modulations observées dans le premier élément de la séquence sont en contradiction avec les modèles d'optimisation de trajectoire utilisés en robotique pour les séquences d'action. Ils suggèrent un couplage fort entre les paramètres moteurs dans une stratégie de planification encapsulée qui rétro-propage les contraintes contextuelles sur les éléments précoces de la séquence. Pour confirmer ces résultats, une seconde série d'étude a été conduite en utilisant des tâches cinématiques et vidéos pour montrer que les intentions motrices humaines pouvaient être lues à travers la détection de ces modulations cinématiques précoces. En utilisant un système de classification artificiel, nous avons testé si les indices de bas niveau pouvaient permettre une catégorisation des essais. Les résultats montrent qu'en absence de capacité cognitive particulière, le réseau de neurone pouvait catégoriser les intentions significativement au-dessus du niveau de la chance en observant les 500 premières millisecondes de l'action (étude 3). La troisième partie de mon travail de thèse s'est tournée vers les mesures en eye-tracking. Nous avons révélé ici que la stratégie proactive de fixations oculaires utilisée pendant l'observation de l'action était similaire à celle utilisée pendant son exécution (étude 4). De plus, les catégorisations correctes d'intentions motrices étaient caractérisées par des saccades plus précises et des fixations plus longues sur l'objet. Les mouvements oculaires sont connus pour jouer un rôle important dans les interactions sociales. Ainsi, dans une dernière expérience (étude 5), nous avons mis en place un jeu compétitif en face à face révélant des effets spécifiques du contexte social qui modifie la cinématique des mouvements d'atteinte selon le type de situations interactives. Dans le manuscrit de thèse je propose une discussion qui replace les résultats dans les modèles neuronaux et cognitifs de l'intégration sensori-mmotrice. Lorsque c'est le cas, des directions futures sont suggérées à la fois pour les modèles cognitifs de contrôle moteur et pour le développement e systèmes artificiels neuro-inspirés intégrant des capacités d'interaction sociale intuitive. / The aim of my PhD thesis was to participate in the construction of a new humanoid robot that can sustain intuitive interactions with humans through observation and imitation. 
As such, I conducted a series of experimental studies in young adults to better characterize the kinematic properties of those arm movements performed during motor and social interactions, elements that are the reference patterns for the to-come robot. Focusing on non-verbal behavior, we tested how external and internal constraints (difficulty, predictability, timing) shaped the kinematics of both arm and hand movements in a very simple pick and place sequential action (study 1 and 2). The results revealed early modulations in kinematics in the reach-to-grasp phase depending of the size and the stability of the target pad on which the object had to be placed. These modulations observed within the first element of the sequence were in contradiction with the current optimized trajectory models used in robotics for action sequences. They suggest in fact a strong coupling of the motor parameters within an encapsulated planning strategy that back-propagates the contextual constraints on to the early elements of the motor sequence. To confirm these findings, a second serie of studies were conducted using kinematic and video based tasks to show that human motor intentions can be read through the detection of these early kinematic modulations (study 3). Using basic artificial classification, we tested whether low-revel motor indices could afford trial categorization without the need for higher-level process such as motor imagery. results indicated that indeed without cognitive abilities the neural network could categorize the intention of an observed action within the first 500ms, significantly above chance level (study 4). The third place of my PhD work turned to eye tracking. Here, we revealed that the proactive strategy of eye-fixations used during action observation were similar to those made during executed actions. Additionally, good categorization of motor intention was characterized by more accurate saccades and longer object fixations. Eye movements are known to play an important role in social intercations. Hence, in a final experiment (study 5), we setup a face-to-face competitive game to reveal the specific effects thet the social context may play on the kinematic properties of reaching during different types of interactive situations. In the PhD mansucript, I propose a general discussion that sets these results within the current cognitive and neuronal models of sensori-motor integration. When appropriate, future directions are suggested both for cognitive models of motor control and for the development of neuro-inspired articicial systems constitued with intuitive social interaction skills.
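In the spirit of the artificial classifier described above, here is a minimal sketch of categorising motor intention from the first 500 ms of a reach; the synthetic velocity profiles, network size and sampling rate are all assumptions, since the thesis used its own motion-capture recordings and classifier.

```python
# Sketch: classifying intention from early (first 500 ms) reach kinematics.
# Data are synthetic: two intentions whose wrist-velocity profiles differ
# subtly in amplitude, plus trial-to-trial noise.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
n_trials, n_samples = 200, 50      # 50 samples ~ 500 ms at 100 Hz (assumed)

t = np.linspace(0, 0.5, n_samples)
labels = rng.integers(0, 2, n_trials)      # e.g. small vs large target pad
velocity = np.array([np.sin(np.pi * t / 0.5) * (1.0 + 0.1 * y)
                     + rng.normal(0, 0.08, n_samples)
                     for y in labels])

# A small feed-forward network, standing in for the thesis's classifier.
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, velocity, labels, cv=5)
print(f"mean accuracy: {scores.mean():.2f} (chance = 0.50)")
```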

The effect of context on the activation and processing of word meaning over time

Frassinelli, Diego January 2015
The aim of this thesis is to study the effect that linguistic context exerts on the activation and processing of word meaning over time. Previous studies have demonstrated that a biasing context makes it possible to predict upcoming words. The context causes the pre-activation of expected words and facilitates their processing when they are encountered. The interaction of context and word meaning can be described in terms of feature overlap: as the context unfolds, the semantic features of the processed words are activated, and words that match those features are pre-activated and thus processed more quickly when encountered. The aim of the experiments in this thesis is to test a key prediction of this account, viz., that the facilitation effect is additive and occurs together with the unfolding context. Our first contribution is to analyse the effect of an increasing amount of biasing context on the pre-activation of the meaning of a critical word. In a self-paced reading study, we investigate the amount of biasing information required to boost word processing: at least two biasing words are required to significantly reduce the reading time of the critical word. In a complementary visual world experiment we study the effect of context as it unfolds over time. We identify a ceiling effect after the first biasing word: once the expected word has been pre-activated, additional context does not produce any further significant facilitation. Our second contribution is to model the activation effect observed in the previous experiments using a bag-of-words distributional semantic model. The similarity scores generated by the model correlate significantly with association scores produced by humans. When we use point-wise multiplication to combine contextual word vectors, the model provides a computational implementation of feature overlap theory, successfully predicting reading times. Our third contribution is to analyse the effect of context on semantically similar words. In another visual world experiment, we show that semantically similar words generate similar eye movements towards a related object depicted on the screen. A coherent context pre-activates the critical word and therefore increases the expectations towards it. This experiment also tested the cognitive validity of the distributional semantic model by using it to generate the critical words for the experimental materials.
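The point-wise multiplication account is easy to make concrete. The toy sketch below (the words and vectors are invented; a real model would use corpus-derived vectors) combines two context words by element-wise multiplication, so only features active in both words survive, and then scores candidate words by cosine similarity:

```python
# Sketch of feature overlap via point-wise multiplication of word vectors.
# Toy vectors are assumptions; dimensions stand in for latent features.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vectors = {
    "pilot":    np.array([0.9, 0.1, 0.7, 0.0]),
    "airplane": np.array([0.8, 0.2, 0.9, 0.1]),
    "flies":    np.array([0.7, 0.1, 0.8, 0.2]),
    "banana":   np.array([0.1, 0.9, 0.0, 0.8]),
}

# Context "pilot ... flies": multiplication keeps only shared features,
# mimicking the narrowing of expectations as the context unfolds.
context = vectors["pilot"] * vectors["flies"]

print("airplane:", cosine(context, vectors["airplane"]))  # high -> fast reading
print("banana:  ", cosine(context, vectors["banana"]))    # low  -> slow reading
```

A high context-to-word similarity is then mapped onto shorter predicted reading times, which is the correlation the thesis tests.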

Measuring Immersion and Enjoyment in a 2D Top-Down Game by Replacing the Mouse Input with Eye Tracking

Fransson, Jonatan, Hiiriskoski, Teemu January 2019
Background. Eye tracking has been evaluated and tried in different 2D settings for research purposes. Most commercial games that use eye tracking treat it as an assistive extra input method and are built around a first- or third-person perspective; few 2D games have been developed with eye tracking as an input method. This thesis tests eye tracking as a replacement for the mouse, with a chosen set of mechanics, in a 2D top-down game. Objectives. To test eye tracking in a 2D top-down game, using it as a replacement input method for the mouse, in a novel effort to evaluate immersion and enjoyment. Method. The Tobii 4C eye tracker was used as the replacement peripheral in a 2D game prototype developed for the study. The prototype was built with the Unity game engine, and participants played through it twice with a different input mode each time: once with keyboard and mouse, and once with keyboard and eye tracker. Participants played the modes in alternating order so as not to sway the results. Three mechanics were implemented in the prototype: aiming, searching for hidden items, and removing shadows. To measure immersion and enjoyment, a controlled experiment was carried out in which participants played through the prototype and evaluated their experience. Participants answered a 12-question questionnaire on their perceived immersion and a short 5-question interview about their experience and perceived enjoyment. The study had a total of 12 participants. Results. The collected data indicate that participants enjoyed the game more and felt more involved, with 10 of 12 answering that they felt more involved using eye tracking than the mouse. In the interviews, participants stated that eye tracking made the game more difficult and less natural to control than the mouse. One potential issue might sway the results toward eye tracking: it was a new experience for most participants, and none had used it to play video games before. Conclusions. The questionnaire results support the hypothesis (p = 0.02 < 0.05) for both increased involvement and enjoyment using eye tracking, although the result might be biased by the participants' inexperience with eye tracking in video games. Most participants reacted positively to eye tracking, most commonly because it was a new experience for them.
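One way to picture the input replacement is a gaze-driven pointer. The sketch below is a generic illustration, not the study's Unity code: the screen size, smoothing factor and fake gaze samples are assumptions, and exponential smoothing stands in for whatever stabilisation the prototype used to tame gaze jitter.

```python
# Sketch: substituting gaze for the mouse pointer. Raw gaze is jittery, so a
# short exponential smoothing step stabilises the aim point.
SCREEN_W, SCREEN_H = 1920, 1080     # assumed display resolution

class GazePointer:
    def __init__(self, alpha=0.3):
        self.alpha = alpha          # 0..1: higher = more responsive, more jitter
        self.x, self.y = SCREEN_W / 2, SCREEN_H / 2

    def update(self, gaze_norm_x, gaze_norm_y):
        """Map normalised gaze (0..1) to pixels with exponential smoothing."""
        raw_x, raw_y = gaze_norm_x * SCREEN_W, gaze_norm_y * SCREEN_H
        self.x += self.alpha * (raw_x - self.x)
        self.y += self.alpha * (raw_y - self.y)
        return self.x, self.y

pointer = GazePointer()
for sample in [(0.52, 0.48), (0.55, 0.50), (0.53, 0.49)]:  # fake gaze samples
    print(pointer.update(*sample))  # used as the game's aim coordinate
```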

Utilisation de l'eye-tracking pour l'interaction mobile dans un environnement réel augmenté / Robust gaze tracking for advanced mobile interaction

Ju, Qinjie 09 April 2019 (has links)
Eye-tracking has strong potential as an input modality for human-computer interaction (HCI), particularly in mobile situations. In this thesis, we concentrate on demonstrating this potential by highlighting the scenarios in which eye-tracking has clear advantages over other interaction modalities. During our research, we found that this technology lacks convenient methods for triggering commands, which limits the usefulness of such devices. We therefore investigate the combination of eye-tracking and voluntary fixed-gaze head movements, which allows various commands to be triggered without using the hands or changing gaze direction. We propose a new algorithm for fixed-gaze head movement detection that uses only the images captured by the scene camera of a head-mounted eye-tracker, in order to reduce computation time. To test the performance of this algorithm, and the user acceptance of triggering commands by head movements when both hands are occupied by another task, we carried out systematic experiments with the EyeMusic application that we designed and developed. EyeMusic is a music-learning system that can play the notes of a measure in a score that the user does not understand. By making a voluntary head movement while fixing his or her gaze on a particular measure, the user obtains audio feedback. The design, development and usability testing of the first prototype of this application are presented in this thesis. The usability of EyeMusic is confirmed by the experimental results: 85% of participants were able to use all of the fixed-gaze head movements implemented in the prototype. The average success rate of the application is 70%, which is partly influenced by the intrinsic performance of the eye-tracker we used. The performance of our fixed-gaze head movement detection algorithm is 85%, with no significant differences between the head movements tested. Finally, we explored two further application scenarios based on the same control principles, EyeRecipe and EyePay, which are also presented in this thesis.
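To make the idea concrete, here is one plausible realisation of fixed-gaze head movement detection from scene-camera images, using dense optical flow: when the gaze point is stable but the scene image shifts coherently, the shift direction reveals a head movement. The flow-based approach and the thresholds are our assumptions, not necessarily the algorithm proposed in the thesis.

```python
# Sketch: detecting a fixed-gaze head movement from consecutive scene-camera
# frames. Thresholds and the overall approach are illustrative assumptions.
import cv2
import numpy as np

def dominant_scene_motion(prev_gray, gray):
    """Median optical-flow vector over the whole scene image (dx, dy)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.median(flow[..., 0]), np.median(flow[..., 1])

def classify_head_movement(prev_gray, gray, gaze_displacement_px,
                           gaze_still_thresh=5.0, motion_thresh=2.0):
    if gaze_displacement_px > gaze_still_thresh:
        return None                    # gaze moved: not a fixed-gaze gesture
    dx, dy = dominant_scene_motion(prev_gray, gray)
    if abs(dx) < motion_thresh and abs(dy) < motion_thresh:
        return None                    # head essentially still
    # The scene moves opposite to the head: leftward flow = head turned right.
    if abs(dx) >= abs(dy):
        return "right" if dx < 0 else "left"
    return "down" if dy < 0 else "up"

# Toy check: a textured frame shifted 6 px left mimics a rightward head turn.
frame = np.random.default_rng(1).integers(0, 255, (120, 160), np.uint8)
moved = np.roll(frame, -6, axis=1)
print(classify_head_movement(frame, moved, gaze_displacement_px=1.0))  # expected: "right"
```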
