11. Embedded eye-gaze tracking on mobile devices
Ackland, Stephen Marc, January 2017
The eyes are one of the most expressive non-verbal tools a person has, and they can communicate a great deal to the outside world about that person's intentions. Being able to decipher these communications through robust and non-intrusive gaze-tracking techniques is increasingly important as we look toward improving Human-Computer Interaction (HCI). Traditionally, devices that are able to determine a user's gaze are large, expensive and often restrictive. This work investigates the prospect of using common mobile devices such as tablets and phones as an alternative means of obtaining a user's gaze. Mobile devices now often contain high-resolution cameras, and their ever-increasing computational power allows increasingly complex algorithms to run in real time. A mobile solution allows us to turn that device into a dedicated portable gaze-tracking device for use in a wide variety of situations. This work specifically looks at where the challenges lie in transitioning current state-of-the-art gaze methodologies to mobile devices and suggests novel solutions to counteract the specific challenges of the medium. In particular, when the mobile device is held in the hands, rapid changes in the user's position and orientation relative to the device can occur. In addition, since these devices lack the technologies typically relied upon for gaze estimation, such as infra-red lighting, novel alternatives are required that work under common everyday conditions. A person's gaze can be determined from both their head pose and the orientation of the eye relative to the head. To meet the challenges outlined, a geometric approach is taken in which a new model is introduced for each, the two being completely synchronised by design through a common origin. First, a novel 3D head-pose estimation model called the 2.5D Constrained Local Model (2.5D CLM) is introduced that directly and reliably obtains the head-pose from a monocular camera. Then, a new model for gaze estimation is introduced -- the Constrained Geometric Binocular Model (CGBM), in which the visual rays representing the gaze from each eye are jointly optimised to intersect a known monitor plane in 3D space. The potential for both is that the burden of calibration is placed on the camera and monitor setup, which on mobile devices is fixed and can be determined during factory construction. In turn, the user requires either no calibration or, optionally, a one-time estimation of the visual offset angle. This work details the new models and specifically investigates their applicability and suitability for use on mobile platforms.
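The core geometric step of the CGBM described above is intersecting a visual ray from each eye with a known monitor plane in 3D space. A minimal sketch of that ray-plane intersection is given below in Python; the coordinate frame, function names, and example values are illustrative assumptions, not the thesis's implementation.

import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the 3D point where a gaze ray meets the monitor plane, or None if it misses."""
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:          # ray is parallel to the screen plane
        return None
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:                      # intersection would lie behind the eye
        return None
    return origin + t * direction

# Hypothetical example: an eye centre roughly 30 cm in front of a screen lying in the z = 0 plane.
eye_centre    = np.array([0.02, -0.05, 0.30])   # metres, camera coordinates (assumed)
gaze_ray      = np.array([-0.05, 0.10, -1.0])   # visual ray pointing towards the screen
screen_point  = np.array([0.0, 0.0, 0.0])
screen_normal = np.array([0.0, 0.0, 1.0])
print(ray_plane_intersection(eye_centre, gaze_ray, screen_point, screen_normal))

In a binocular formulation such as the CGBM, one such ray per eye would be intersected with the screen and the two estimates jointly constrained, but that optimisation is beyond this sketch.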
12. An examination of how existing technologies support mutual eye gaze between humans and an avatar
Stollenwerk, Per, January 2004
Future warfare concerns information superiority, i.e. supplying decision makers with better information than their opponents have. Their decisions will be based on information from various sensors such as radars, UAVs, satellites, or any other source that can supply the decision makers with information. A system like this will make use of a huge amount of data, and the decision makers may not be able to handle all the information presented to them. Because of this, they might make decisions that are not optimal for the task. To enhance their decision-making capability, the National Defence College will create an avatar located in a 3D presentation device called the Visioscope (tm). If the computer system detects that a person might have made an erroneous decision, the avatar will act and point out the error, i.e. there will be a dialogue between the decision makers and the avatar. One important factor when humans communicate with each other is mutual eye gaze. If mutual eye gaze can occur between the users and the avatar, and if the avatar can behave like a human, the communication process will be improved and the users will make fewer errors. This literature study aims to generate some ideas about how existing technology supports mutual eye gaze between the avatar and the users in the ROLF 2010 environment. The study partly concerns how a computer system can control an avatar so that it behaves like a human.
13. Eye-gaze in multimodal interactions involving children with autism spectrum disorders
Korkoakangas, Terhi Kirsi, January 2012
Autism is a neurodevelopmental disorder that characteristically involves an impaired capacity to engage in reciprocal social interaction and to use eye-gaze for social purposes. This collection of conversation analytic studies examines naturally occurring interactions involving Finnish children diagnosed with autism. The data consist of video-recorded interactions of four children, aged between 9 and 12 years, each engaged in dyadic or multiparty interactions with a range of familiar co-participants (teachers, parents, and siblings) at home, school, and music club. Comparative data from neurotypical interactions are also considered. The aim is to use conversation analysis to better understand how the children with autism interact in everyday settings. The study examines the organization of interactions as sequences of action, and how eye-gaze and other multimodal resources are involved in the orientation to and production of initiating and responsive actions (e.g. questions and answers). The analyses show (1) competencies with respect to using eye-gaze in relevant sequential environments to mobilise a response from a co-participant, and using smiling as an interactional resource while orienting to the response-implicativeness of eye-gaze; (2) that displays of self-consciousness (involving averted gaze and other conduct) can occur when the participants orient to the children's non-production of a response that has been made relevant; (3) that a child's gaze aversion can become problematic in particular sequential locations, namely, when the child's response is noticeably absent and treated as unforthcoming; (4) how the handling of material objects can provide a resource when eliciting interactional involvement from the child. The findings indicate areas of interactional competence and show how, on some occasions, the direction of eye-gaze and body orientation can become interactionally problematic. The merits of researching naturally occurring interactions, and the prospect of incorporating a conversation analytic component as part of clinical assessments, are discussed.
14. Not All Gaze Cues Are the Same: Face Biases Influence Object Attention in Infancy
Pickron, Charisse, 17 July 2015
In their first year, infants' ability to follow eye gaze to allocate attention shifts from being a response to low-level perceptual cues to reflecting a deeper understanding of social intent. By 4 months, infants look longer at uncued than at cued targets following a gaze-cuing event, suggesting that they better encode targets cued by shifts in eye gaze than targets not cued by eye gaze. From 6 to 9 months of age, infants develop biases in face processing such that they show increased differentiation of faces within highly familiar groups (e.g., own-race) and decreased differentiation of faces within unfamiliar or infrequently experienced groups (e.g., other-race). Although the development of cued object learning and face biases are both important social processes, they have primarily been studied independently. The current study examined whether early face-processing biases for familiar compared to unfamiliar groups influence object encoding within the context of a gaze-cuing paradigm. Five- and 10-month-old infants viewed videos of adults, who varied by race and sex, shifting their eye gaze towards one of two objects. The two objects were then presented side by side, and fixation duration for the cued and uncued objects was measured. Results revealed that 5-month-old infants looked significantly longer at the uncued than the cued object when the cuing face was female. Additionally, 10-month-old infants looked significantly longer at the uncued relative to the cued object when the cuing face was female and from the infant's own-race group. These findings are the first to demonstrate that perceptual narrowing based on sex and race shapes infants' use of social cues for allocating visual attention to objects in their environment.
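As a hedged illustration of the looking-time measure used in studies like this one, a novelty-preference score can be computed as the proportion of total looking time spent on the uncued object; the Python function and example values below are hypothetical and are not data or analysis code from this study.

def novelty_preference(uncued_ms, cued_ms):
    """Proportion of total looking time spent on the uncued object; values above 0.5
    are typically read as evidence that the cued object was encoded (and is now familiar)."""
    total = uncued_ms + cued_ms
    return uncued_ms / total if total > 0 else float("nan")

# Example trial (invented numbers): 3.2 s on the uncued object, 1.8 s on the cued object.
print(f"novelty preference = {novelty_preference(3200, 1800):.2f}")   # 0.64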
15. Multimodal interface integrating eye gaze tracking and speech recognition
Mahajan, Onkar, January 2015
No description available.
16. An Evaluation of the Use of Eye Gaze to Measure Preference for Individuals with Multiple Disabilities
Wheeler, Geoffrey M., 29 September 2009
No description available.
17. The effects of eye gaze and emotional facial expression on the allocation of visual attention
Cooper, Robbie Mathew, January 2006
This thesis examines the way in which meaningful facial signals (i.e., eye gaze and emotional facial expressions) influence the allocation of visual attention. These signals convey information about the likely imminent behaviour of the sender and are, in turn, potentially relevant to the behaviour of the viewer. It is already well established that different signals influence the allocation of attention in different ways that are consistent with their meaning. For example, direct gaze (i.e., gaze directed at the viewer) is considered both to draw attention to its location and to hold attention when it arrives, whereas observing averted gaze is known to create corresponding shifts in the observer's attention. However, the circumstances under which these effects occur are not yet fully understood. The first two sets of experiments in this thesis tested directly whether direct gaze is particularly difficult to ignore when the task is to ignore it, and whether averted gaze will shift attention when it is not relevant to the task. Results suggest that direct gaze is no more difficult to ignore than closed eyes, and that the shifts in attention associated with viewing averted gaze are not evident when the gaze cues are task-irrelevant. This challenges the existing understanding of these effects. The remaining set of experiments investigated the role of gaze direction in the allocation of attention to emotional facial expressions. Without exception, previous work on this issue has measured the allocation of attention to such expressions when gaze is directed at the viewer. Results suggest that while the type of emotional expression (i.e., angry or happy) does influence the allocation of attention, the associated gaze direction does not, even when the participants are divided in terms of anxiety level (a variable known to influence the allocation of attention to emotional expressions). These findings are discussed in terms of how the social meaning of the stimulus can influence preattentive processing. This work also highlights the need for general theories of visual attention to incorporate such data; not to do so risks fundamentally misrepresenting the nature of attention as it operates outside the laboratory setting.
18. Automatic Eye-Gaze Following from 2-D Static Images: Application to Classroom Observation Video Analysis
Aung, Arkar Min, 23 April 2018
In this work, we develop an end-to-end neural network-based computer vision system to automatically identify where each person within a 2-D image of a school classroom is looking (“gaze following”), as well as whom she/he is looking at. Automatic gaze following could help facilitate data-mining of large datasets of classroom observation videos that are collected routinely in schools around the world, in order to understand social interactions between teachers and students. Our network is based on the architecture by Recasens et al. (2015) but is extended to (1) predict not only where, but also whom, the person is looking at; and (2) predict whether each person is looking at a target inside or outside the image. Since our focus is on classroom observation videos, we collect a gaze dataset (48,907 gaze annotations over 2,263 classroom images) for students and teachers in classrooms. Results of our experiments indicate that the proposed neural network can estimate the gaze target - either the spatial location or the face of a person - with substantially higher accuracy compared to several baselines.
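As a rough sketch of the two-pathway design this abstract describes (a saliency pathway over the full scene and a gaze pathway over the head crop, in the spirit of Recasens et al., plus an added in-frame/out-of-frame classifier), the PyTorch code below is illustrative only: layer sizes, grid resolution, and all names are assumptions rather than the network reported in this work.

import torch
import torch.nn as nn

class GazeFollowSketch(nn.Module):
    def __init__(self, grid=13):
        super().__init__()
        self.grid = grid
        # Saliency pathway: full scene image -> spatial feature map.
        self.scene = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(grid),
        )
        # Gaze pathway: cropped head image -> compact head feature vector.
        self.head = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head feature + normalised head position -> gaze mask over the grid.
        self.gaze_fc = nn.Sequential(
            nn.Linear(16 + 2, 64), nn.ReLU(),
            nn.Linear(64, grid * grid), nn.Sigmoid(),
        )
        # Output heads: gaze-target heatmap and in-frame vs out-of-frame logits.
        self.heatmap = nn.Conv2d(32, 1, 1)
        self.inframe = nn.Linear(16 + 2, 2)

    def forward(self, scene_img, head_img, head_xy):
        sal = self.scene(scene_img)                              # (B, 32, G, G)
        h = torch.cat([self.head(head_img), head_xy], dim=1)     # (B, 18)
        mask = self.gaze_fc(h).view(-1, 1, self.grid, self.grid)
        heat = self.heatmap(sal * mask)                          # gate saliency by the gaze mask
        return heat, self.inframe(h)

# Hypothetical forward pass on random tensors.
net = GazeFollowSketch()
heat, inout = net(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 64, 64), torch.rand(2, 2))
print(heat.shape, inout.shape)   # torch.Size([2, 1, 13, 13]) torch.Size([2, 2])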
19. The Influence of Aging, Gaze Direction, and Context on Emotion Discrimination Performance
Minton, Alyssa Renee, 1 April 2019
This study examined how younger and older adults differ in their ability to discriminate between pairs of emotions of varying degrees of similarity when presented with an averted or direct gaze in either a neutral, congruent, or incongruent emotional context. For Task 1, participants were presented with three blocks of emotion pairs (i.e., anger/disgust, sadness/disgust, and fear/disgust) and were asked to indicate which emotion was being expressed. The actors' gaze direction was manipulated such that emotional facial expressions were depicted with a direct gaze or an averted gaze. For Task 2, the same stimuli were placed into emotional contexts (e.g., evocative backgrounds and expressive body postures) that were either congruent or incongruent with the emotional facial expression. Participants made emotion discrimination judgments for two emotion pairings: anger/disgust (High Similarity condition) and fear/disgust (Low Similarity condition). Discrimination performance varied as a function of age, gaze direction, degree of similarity of the emotion pairs, and the congruence of the context. Across tasks, performance was best when evaluating less similar emotion pairs and worst when evaluating more similar emotion pairs. In addition, evaluating emotion in stimuli with an averted eye gaze generally led to poorer performance than evaluating stimuli communicating emotion with a direct eye gaze. These outcomes held for both age groups. When participants observed emotional facial expressions in the presence of congruent or incongruent emotional contexts, age differences in discrimination performance were most pronounced when the context did not support one's estimation of the emotion expressed by the actors.
20. What Do We Know About Joint Attention in Shared Book Reading? An Eye-tracking Intervention Study
Guo, Jia, January 2011
Joint attention is critical for social learning activities such as parent-child shared book reading. However, there is a potential dissociation of attention when the adult reads the text while the child looks at the pictures. I hypothesize that this lack of joint attention limits children's opportunity to learn print-related skills. The current study tests the hypothesis with interventions that enhance real-time joint attention. Eye movements of parents and children were simultaneously tracked while they read books together on computer screens. I also provided real-time feedback to the parent regarding where the child was looking, and vice versa. Changes in dyads' reading behaviors before and after the joint attention intervention were measured from both eye movements and video records. Baseline data showed little joint attention in parent-child shared book reading. The real-time attention feedback significantly increased joint attention and children's print-related learning. These findings supported my hypothesis that engaging in effective joint attention is critical for children to acquire knowledge and skills during shared reading and other collaborative learning activities.
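A minimal sketch of how joint attention might be scored from two time-aligned gaze streams on a shared screen is given below; the distance threshold, data, and function name are illustrative assumptions, not the study's actual analysis.

import math

def joint_attention_ratio(parent_gaze, child_gaze, radius_px=150):
    """Fraction of time-aligned samples in which the parent's and child's gaze points
    fall within radius_px of each other; None marks a lost sample (blink or track loss)."""
    joint = valid = 0
    for p, c in zip(parent_gaze, child_gaze):
        if p is None or c is None:
            continue
        valid += 1
        if math.dist(p, c) <= radius_px:
            joint += 1
    return joint / valid if valid else float("nan")

# Hypothetical 4-sample excerpt: parent mostly on the text region, child drifting over a picture.
parent = [(120, 300), (130, 305), (500, 420), None]
child  = [(480, 410), (135, 310), (515, 430), (200, 200)]
print(joint_attention_ratio(parent, child))   # 2 of 3 valid samples overlap -> ~0.67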