81.
The relationship between gaze and information pickup during action observation: implications for motor skill (re)learning. D'Innocenzo, Giorgia. January 2018.
The aim of the present thesis was to investigate the relationship between individuals' allocation of overt visual attention during action observation and their consequent pickup of information. Four interrelated studies were conducted to achieve this. In Study 1 we examined the effects of visual guidance (colour highlighting of relevant aspects of the action) on observational learning of the golf swing. The results showed that the visual guides facilitated novices' intake of information pertaining to the model's posture, which was reflected in faster learning. In the remaining studies, transcranial magnetic stimulation and eye tracking data were acquired concurrently to measure the interaction between gaze behaviour and motor resonance, a neurophysiological index of the motor system's engagement with a viewed action and thus a correlate of information extraction. In Study 2, we directed observers' gaze to distinct locations of the display while they viewed thumb adduction/abduction movements. The results showed that directing gaze to a location that maximised the amount of thumb motion across the fovea maximised motor resonance relative to a free-viewing condition. In Study 3 we examined the link between gaze and motor resonance during the observation of transitive actions. Participants viewed reach-to-grasp actions with natural gaze, or while looking at a target-based or an effector-based visual guide. The results showed that the effector-based guide disrupted natural gaze behaviour, and this was associated with a reversal of the motor resonance response. In Study 4 we showed novice and skilled golfers videos of the golf swing and of a reach-grasp-lift action. The results revealed that, for both actions, the extent of motor resonance was related to the location of participants' fixations.
The present work provides the first evidence of a relationship between gaze and motor resonance and highlights the importance of appropriate gaze behaviour for observational learning.
82.
Gaze cues and language in communication. MacDonald, R. G. January 2014.
During collaboration, people communicate using verbal and non-verbal cues, including gaze cues. Spoken language is usually the primary medium of communication in these interactions, yet despite this co-occurrence of speech and gaze cueing, most experiments have used paradigms without language. Furthermore, previous research has shown that myriad social factors influence behaviour during interactions, yet most studies investigating responses to gaze have been conducted in a lab, far removed from any natural interaction. The aim of this thesis was to investigate the relationship between language and gaze cue utilisation in natural collaborations. For this reason, the initial study was largely observational, allowing for spontaneous natural language and gaze. Participants were found to rarely look at their partners, but to do so strategically, with listeners looking more at speakers when the latter were of higher social status. Eye movement behaviour also varied with the type of language used in instructions, so in a second study, a more controlled (but still real-world) paradigm was used to investigate the effect of language type on gaze utilisation. Participants used gaze cues flexibly, seeking and following gaze more when the cues were accompanied by distinct featural verbal information than by overlapping spatial verbal information. The remaining three studies built on these findings to investigate the relationship between language and gaze using a much more controlled paradigm. Gaze and language cues were reduced to equivalent artificial stimuli and the reliability of each cue was manipulated. Even in this artificial paradigm, language was preferred when cues were equally reliable, supporting the idea that gaze cues are supportive to language. Typical gaze cueing effects were still found; however, the size of these effects was modulated by gaze cue reliability.
Combined, the studies in this thesis show that although gaze cues may automatically and quickly affect attention, their use in natural communication is mediated by the form and content of concurrent spoken language.
83.
Receptive verb knowledge in the second year of life: an eye-tracking study. Valleau, Matthew James. 07 July 2016.
The growth of a child’s early vocabulary is one of the most salient indicators of progress in language development, but measuring a young child’s comprehension of words is non-trivial. Parental checklists are prone to underestimating a child’s vocabulary (Houston-Price et al., 2007; Brady et al., 2014), so more direct measures, such as tracking a child’s eye movements during comprehension, may provide a better assessment of children’s vocabulary. Prior research has found relationships between gaze patterns and vocabulary development (Fernald et al., 2006), and the present exploratory study investigates these relationships with verbs, along with a number of methodological considerations. In addition, recent research supports the idea that verbs may differ in difficulty of acquisition based on word class, with manner verbs being easier to learn than result verbs (Horvath et al., 2015). The present study has two aims: 1) to investigate the effect of dynamic stimuli on correlations with vocabulary scores and 2) to test experimentally the notion that manner verbs are easier to learn than result verbs.
Forty children (mean age = 22.97 months) were recruited and given a vocabulary test. While no significant correlations were found between vocabulary measures and accuracy or latency, several experimental measures proved to be related to vocabulary development, including fixation density and length of first fixation to the non-target. Additionally, results indicate that children knew the same number of manner and result verbs. Finally, these results could inform vocabulary tests using eye-tracking measures that specifically target verb knowledge.
84.
Quiet Eye Training and the Focus of Visual Attention in Golf Putting. January 2019.
Previous research has shown that training visual attention can improve golf putting performance. A technique called the Quiet Eye focuses on increasing the length of a player’s final fixation before the putting stroke. When putting, this final fixation is normally made on the ball before executing the stroke, leaving players to rely on their memory of the hole’s distance and location. The present study aimed to test the effectiveness of Quiet Eye training for a final fixation on the hole instead. Twelve Arizona State University (ASU) students with minimal golf experience putted while wearing eye-tracking glasses under four conditions: from three feet with final fixation on the ball, from six feet with final fixation on the ball, from three feet with final fixation on the hole, and from six feet with final fixation on the hole. Participants’ performance was measured before training, following Quiet Eye training, and under simulated pressure conditions. Putting performance was not significantly affected by final fixation in any condition. The number of total putts made was significantly greater when putting from three feet in all conditions. Future research should test the effects of this training with expert golfers, whose processes are more automatic than novices’ and who can afford to look at the hole while putting. / Dissertation/Thesis / Masters Thesis Human Systems Engineering 2019
85.
Design and cognitive processing of online information: an eye tracking study [Design e processamento cognitivo de informação online: um estudo de Eye Tracking]. Ferreira, Sofia da Natividade Pinto. January 2009.
Master's thesis. Multimedia. Faculdade de Engenharia, Universidade do Porto. 2009.
86.
A state machine representation of pilot eye movements. Harris, Artistee Shayna. 01 July 2009.
With the development of new interfaces such as the Next Generation Air Transportation System (NextGen), and the evolution of the United States National Airspace System (NAS) from a ground-based system of Air Traffic Control (ATC) to a satellite-based system of air traffic management (FAA, 2009), new evaluations of efficiency and safety are required. The associated tasks require visual behaviors such as search, fixation, tracking, and grouping. Designing and implementing a virtual eye movement application that generates gaze and action visualizations could therefore provide detailed data on the allocation of visual attention across interface entities. The goal is to develop state-machine representations of straight-and-level flight, turns, climbs, and descents within the Pilot Eye Flight Deck Application to simulate pilots' eye movements.
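As an illustration only, a state-machine representation of flight manoeuvres might look like the following minimal sketch. The state names follow the manoeuvres listed in the abstract, but the transition events are invented stand-ins for whatever eye-movement or flight-parameter cues the actual application detects; this is not the thesis's implementation.

```python
# Minimal flight-phase state machine. States follow the abstract; the
# transition events are hypothetical placeholders for detected cues.
TRANSITIONS = {
    ("straight_and_level", "bank_detected"): "turn",
    ("straight_and_level", "pitch_up"): "climb",
    ("straight_and_level", "pitch_down"): "descent",
    ("turn", "wings_level"): "straight_and_level",
    ("climb", "level_off"): "straight_and_level",
    ("descent", "level_off"): "straight_and_level",
}

def run(events, state="straight_and_level"):
    """Replay a sequence of events and return the list of visited states."""
    visited = [state]
    for ev in events:
        # Events with no defined transition leave the state unchanged.
        state = TRANSITIONS.get((state, ev), state)
        visited.append(state)
    return visited

# A turn followed by a climb, each returning to straight-and-level flight:
print(run(["bank_detected", "wings_level", "pitch_up", "level_off"]))
```

A real implementation would drive the transitions from classified gaze and instrument data rather than symbolic events, but the table-driven structure is the same.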
87.
Predicting Levels of Learning with Eye Tracking. Unknown date.
E-Learning is transforming the delivery of education. Today, millions of students take self-paced online courses. However, content and language complexity often hinder comprehension, and the lack of immediate help from an instructor leads to weaker learning outcomes. The ability to predict difficult content in real time would enable e-Learning systems to adapt content to each student's level of learning. The recent introduction of low-cost eye trackers has opened a new class of applications based on eye response. Eye tracking devices can record the eye's response to a visual element or concept in real time. That response, and its variation for the same concept over time, may be indicative of the level of learning.
In this study, we analyzed reading patterns using an eye tracker and derived 12 eye response features based on psycholinguistics, contextual information processing, anticipatory behavior analysis, recurrence fixation analysis, and pupillary response. We use eye responses to predict the level of learning for a term/concept. One of the main contributions is the spatio-temporal analysis of the eye response on a term/concept to derive relevant first-pass (spatial) and reanalysis (temporal) eye response features. A spatio-temporal model, built using these derived features, analyses slide images, extracts words (terms), maps the subject's eye response to words, and prepares a term-response map. A parametric baseline classifier, trained with labeled data (term-response maps), classifies a term/concept as novel (positive class) or familiar (negative class) using a majority voting method. Using only first-pass features for prediction, the baseline classifier shows 61% prediction accuracy; adding reanalysis features raises this to 66.92% for predicting difficult terms. However, not all of the proposed features respond in the same way to learning difficulties across subjects, since reading is an individual characteristic.
Hence, we developed a non-parametric, feature-weighted linguistics classifier (FWLC), which assigns weights to features based on their relevance. The FWLC classifier achieves a prediction accuracy of 90.54%, an increase of 23.62 percentage points over the baseline and 29.54 points over the first-pass variant of the baseline. Predicting novel terms as familiar is the more expensive error, because the content is adapted using this information. Hence, our primary goal is to increase the prediction rate for novel terms by minimizing the cost of false predictions. Compared with other frequently used machine learning classifiers, FWLC achieves the highest true positive rate (TPR) and the lowest ratio of false negative rate (FNR) to false positive rate (FPR). The high prediction performance of the proposed spatio-temporal eye response model for predicting levels of learning builds a strong foundation for eye-response-driven adaptive e-Learning. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2017. / FAU Electronic Theses and Dissertations Collection
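The contrast between the baseline's majority vote and the feature-weighted vote can be sketched as follows. The per-feature votes and relevance weights below are invented for illustration; the thesis's actual 12 features and FWLC weighting scheme are not reproduced.

```python
# Hedged sketch: unweighted majority voting (baseline) versus a
# feature-weighted vote (FWLC-style). All numbers here are invented.

def majority_vote(feature_votes):
    """feature_votes: +1 (novel) / -1 (familiar) decision per feature."""
    return "novel" if sum(feature_votes) > 0 else "familiar"

def weighted_vote(feature_votes, weights):
    """Scale each feature's vote by its relevance weight before summing."""
    score = sum(w * v for w, v in zip(weights, feature_votes))
    return "novel" if score > 0 else "familiar"

votes = [+1, -1, -1, -1, +1]           # hypothetical per-feature decisions
weights = [0.9, 0.1, 0.2, 0.3, 0.8]    # hypothetical relevance weights

print(majority_vote(votes))            # 2 of 5 features vote novel
print(weighted_vote(votes, weights))   # the two high-weight features dominate
```

The example shows why weighting matters: the same votes that lose an unweighted majority can win once relevance weights are applied, which is the mechanism the abstract credits for the accuracy gain.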
88.
Analysis of Eye Response to Video Quality and Structure. Unknown date.
Real-time eye tracking systems with a human-computer interaction mechanism are being adopted to advance user experience in smart devices and consumer electronic systems. Eye tracking systems measure eye gaze and pupil response non-intrusively. This research presents an analysis of eye pupil and gaze response to video structure and content. The experiments for this study involved presenting different video content to subjects and measuring eye response with an eye tracker. Results show that significant changes in video content and scene cuts led to sharp pupil constrictions. User response to videos can provide insights that improve subjective quality assessment metrics. This research also presents an analysis of the pupil and gaze response to quality changes in videos. The results show pupil constrictions for noticeable changes in perceived quality and higher fixations/saccades ratios at lower quality. Using real-time eye tracking systems for video analysis and quality evaluation can open a new class of applications for consumer electronic systems. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2017. / FAU Electronic Theses and Dissertations Collection
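A fixations/saccades ratio like the one reported above might be computed from raw gaze samples along these lines, using a simple velocity-threshold (I-VT) rule. The 30 deg/s threshold and the sample velocities are illustrative assumptions, not values from the dissertation.

```python
# Illustrative sketch: classify gaze samples by angular velocity, then
# form a fixation/saccade ratio. Threshold and data are invented.

def classify_samples(velocities, threshold=30.0):
    """Label each gaze sample (angular velocity in deg/s) by I-VT rule."""
    return ["saccade" if v > threshold else "fixation" for v in velocities]

def fixation_saccade_ratio(velocities, threshold=30.0):
    labels = classify_samples(velocities, threshold)
    saccades = labels.count("saccade")
    return labels.count("fixation") / saccades if saccades else float("inf")

# Hypothetical velocity trace: 5 fixation samples and 3 saccade samples.
velocities = [2.0, 5.0, 80.0, 120.0, 3.0, 4.0, 6.0, 95.0]
print(fixation_saccade_ratio(velocities))
```

Under the dissertation's finding, lower-quality video would yield a higher value of this ratio; production systems typically also merge adjacent samples into fixation events rather than counting raw samples.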
89.
Real-time computer interaction via eye tracking. Dubey, Premnath. January 2004.
Through eye tracking technology, scientists have explored the eye's diverse aspects and capabilities. Many potential applications benefit from eye tracking, and each benefits from advances in computer technology, which improve quality and decrease the cost of eye-tracking systems. This thesis presents a computer vision-based eye tracking system for human-computer interaction. The eye tracking system allows the user to indicate a region of interest in a large data space and to magnify that area, without using traditional pointer devices. Presented is an iris tracking algorithm adapted from Camshift, an algorithm originally designed for face or hand tracking. Although the iris is much smaller and highly dynamic, the modified Camshift algorithm efficiently tracks the iris in real time. Also presented are a method to map the iris centroid from video coordinates to screen coordinates, and two novel calibration techniques: four-point and one-point calibration. Results presented show that the accuracy of the proposed one-point calibration technique exceeds the accuracy obtained from calibrating with four points. The innovation behind the one-point calibration comes from using observed eye scanning behaviour to constrain the calibration process. Lastly, the thesis proposes a non-linear visualisation as an eye-tracking application, along with an implementation.
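The generic idea behind mapping iris-centroid video coordinates to screen coordinates can be sketched as an independent linear fit per axis over calibration samples. This is only a simplified illustration of calibration in general; the thesis's one-point technique, which constrains the fit using observed scanning behaviour, is not reproduced, and all coordinate values below are invented.

```python
# Hedged sketch: per-axis least-squares calibration, screen = a*iris + b.
# Values are invented; this is not the thesis's one-point method.

def fit_axis(iris, screen):
    """Least-squares fit of screen = a * iris + b along one axis."""
    n = len(iris)
    mx, my = sum(iris) / n, sum(screen) / n
    a = sum((x - mx) * (y - my) for x, y in zip(iris, screen)) \
        / sum((x - mx) ** 2 for x in iris)
    return a, my - a * mx

# Calibration: the user fixates known screen x-positions while the
# iris-centroid x-coordinate is recorded in video pixels.
iris_x = [210.0, 250.0, 290.0, 330.0]
screen_x = [0.0, 640.0, 1280.0, 1920.0]
a, b = fit_axis(iris_x, screen_x)

# An observed iris x of 270 (midway through the range) maps to mid-screen:
print(a * 270.0 + b)  # 960.0
```

A full tracker would fit both axes (and often a higher-order or homography model to handle eyeball curvature), but the calibration-then-map structure is the common core.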
90.
A Single-Camera Gaze Tracker using Controlled Infrared Illumination. Wallenberg, Marcus. January 2009.
Gaze tracking is the estimation of the point in space a person is “looking at”. It is widely used in both diagnostic and interactive applications, such as visual attention studies and human-computer interaction. The most common commercial solution used to track gaze today uses a combination of infrared illumination and one or more cameras. These commercial solutions are reliable and accurate, but often expensive. The aim of this thesis is to construct a simple single-camera gaze tracker from off-the-shelf components. The method used for gaze tracking is based on infrared illumination and a schematic model of the human eye. The user’s gaze point is estimated from images of the reflections of specific light sources on the surfaces of the eye. Evaluation is performed on the software and hardware components separately, and on the system as a whole. Accuracy is measured as spatial and angular deviation; the result is an average accuracy of approximately one degree on synthetic data and 0.24 to 1.5 degrees on real images at a range of 600 mm.
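The conversion between the spatial and angular accuracy figures reported above is the standard visual-angle relation: a spatial deviation s at viewing distance d subtends an angle of atan(s / d). The sketch below applies it at the 600 mm range mentioned in the abstract; the 10.5 mm example value is illustrative, not taken from the thesis.

```python
# Spatial-to-angular accuracy conversion for a gaze tracker evaluated
# at a fixed viewing distance. Example deviation value is illustrative.
import math

def angular_error_deg(spatial_mm, distance_mm=600.0):
    """Visual angle (degrees) subtended by a spatial error at a distance."""
    return math.degrees(math.atan2(spatial_mm, distance_mm))

# At 600 mm, a deviation of about 10.5 mm corresponds to roughly 1 degree,
# the average synthetic-data accuracy reported above:
print(round(angular_error_deg(10.5), 2))
```

This is why the same tracker's accuracy can be quoted either in millimetres on the screen or in degrees of visual angle, provided the viewing distance is fixed.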