161. Coordinating speech-related eye movements between comprehension and production. Kreysa, Helene. January 2009.
Although language usually occurs in an interactive and world-situated context (Clark, 1996), most research on language use to date has studied comprehension and production in isolation. This thesis combines research on comprehension and production, and explores the links between them. Its main focus is on the coordination of visual attention between speakers and listeners, as well as the influence this has on the language they use and the ease with which they understand it. Experiment 1 compared participants’ eye movements during comprehension and production of similar sentences: in a syntactic priming task, they first heard a confederate describe an image using active or passive voice, and then described the same kind of picture themselves (cf. Branigan, Pickering, & Cleland, 2000). As expected, the primary influence on eye movements in both tasks was the unfolding sentence structure. In addition, eye movements during target production were affected by the structure of the prime sentence. Eye movements in comprehension were linked more loosely with speech, reflecting the ongoing integration of listeners’ interpretations with the visual context and other conceptual factors. Experiments 2-7 established a novel paradigm to explore how seeing where a speaker was looking during unscripted production would facilitate identification of the objects they were describing in a photographic scene. Visual coordination in these studies was created artificially through an on-screen cursor which reflected the speaker’s original eye movements (cf. Brennan, Chen, Dickinson, Neider, & Zelinsky, 2007). A series of spatial and temporal manipulations of the link between cursor and speech investigated the respective influences of linguistic and visual information at different points in the comprehension process. Implications and potential future applications are discussed, as well as the relevance of this kind of visual cueing to the processing of real gaze in face-to-face interaction.
162. Remote distractor effects in saccadic, manual and covert attention tasks. Buonocore, Antimo. January 2010.
The Remote Distractor Effect (RDE) is a robust phenomenon where a saccade to a lateralised target is delayed by the appearance of a distractor in the contralateral hemifield (Walker, Kentridge, & Findlay, 1995). The main aim of this thesis was to test whether the RDE generalises to response modalities other than the eyes. In Chapter 2, the RDE was tested on saccadic and simple manual keypress responses, and on a choice discrimination task requiring a covert shift of attention. The RDE was observed for saccades, but not simple manual responses, suggesting that spatially oriented responses may be necessary for the phenomenon. However, it was unclear whether distractor interference occurred in the covert task. Chapter 4 compared the effects of distractors between spatially equivalent tasks requiring saccadic and manual aiming responses respectively. Again, the RDE was observed for the eyes but not for the hands. This dissociation was also replicated in a more naturalistic task in which participants were free to move their eyes during manual aiming. In order to examine the time-course of distractor effects for the eyes and the hands, a third experiment investigated distractor effects across a wider range of target-distractor delays, finding no RDE for manual aiming responses at distractor delays of 0, 100, or 150 ms. The failure of the RDE to generalise to manual aiming suggests that target selection mechanisms are not shared between hand and eye movements. Chapter 5 further investigated the role of distractors during covert discrimination. The first experiment showed that distractor appearance did not interfere with discrimination performance. A second experiment, in which participants were also asked to saccade toward the target, confirmed the lack of RDE for covert discrimination, while saccades were slower in distractor trials.
The dissociation between covert and overt orienting suggests important differences between shifts of covert attention and preparation of eye movements. Finally, Chapter 6 investigated the mechanism driving the RDE. In particular it was assessed whether saccadic inhibition (Reingold & Stampe, 2002) is responsible for the increase in saccadic latency induced by remote distractors. Examination of the distributions of saccadic latencies at different distractor delays showed that each distractor produced a discrete dip in saccadic frequency, time-locked to distractor onset, conforming closely to the character of saccadic inhibition. It is concluded that saccadic inhibition underlies the remote distractor effect.
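The dip analysis described here (comparing saccadic latency distributions with and without a distractor to find a deficit in saccade frequency time-locked to distractor onset) can be sketched roughly as follows; the function and variable names and the synthetic latency data are illustrative assumptions, not the thesis's actual analysis code:

```python
import numpy as np

def latency_histogram_dip(latencies_distractor, latencies_baseline, bins):
    """Compare saccade-latency histograms with and without a distractor;
    the 'dip' is the bin where the distractor condition falls furthest
    below baseline (saccades 'missing' around a fixed time after onset)."""
    h_d, _ = np.histogram(latencies_distractor, bins=bins, density=True)
    h_b, _ = np.histogram(latencies_baseline, bins=bins, density=True)
    diff = h_b - h_d                 # positive where saccades are missing
    dip_bin = int(np.argmax(diff))   # bin with the largest deficit
    return dip_bin, diff

# Synthetic latencies: baseline ~ N(180 ms, 20 ms); in distractor trials,
# most saccades with latencies near 190 ms are suppressed.
rng = np.random.default_rng(1)
base = rng.normal(180, 20, 5000)
dist = rng.normal(180, 20, 5000)
dist = dist[~((dist > 185) & (dist < 205) & (rng.random(5000) < 0.7))]
bins = np.arange(100, 300, 10)
dip_bin, diff = latency_histogram_dip(dist, base, bins)
print(bins[dip_bin])   # left edge of the dip bin
```

In this toy example the recovered dip sits where the synthetic suppression was placed, which is the signature of saccadic inhibition the chapter tests for.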
163. A Study of Saccade Dynamics and Adaptation in Athletes and Non-Athletes. Babu, Raiju Jacob. January 2004.
Purpose: The aim of the study was to delineate differences in saccade characteristics between a population of athletes and non-athletes. Aspects specifically investigated were latency, accuracy, peak velocity, and gain adaptation of saccades using both increasing and decreasing paradigms. Methods: A sample of 28 athletes (varsity badminton and squash players) and 18 non-athletes (< 3 hours/week in sports) were studied. Eye movements were recorded at 120 Hz using a video-based eye tracker (ELMAR 2020). Each subject participated in 2 sessions on separate days. Baseline saccade responses to dot stimuli were measured in both sessions (stimulus size: 5-25 deg). The first session involved a gain-decreasing paradigm, induced by displacing the stimulus backwards by 3 degrees from the initial target step (12 deg) for 500 trials. In the 2nd session a gain increase was induced by displacing the stimuli by 3 degrees in the forward direction. The latency and accuracy were calculated from the baseline. The asymptotic peak velocity was calculated from the main sequence (amplitude vs. peak velocity). The amplitude gains, calculated from the adaptation phase, were averaged for every 100 saccade responses. The averaged gains were normalized with respect to the baseline, fitted with a 3rd-order polynomial, and differentiated to obtain the rate of change. Differences between the groups were compared using a regression analysis. Results: There were no significant differences in latency, accuracy, or asymptotic peak velocity between athletes and non-athletes. No significant differences were seen between the two groups in the magnitude of saccadic adaptation, for either the decreasing (-15% in both groups) or the increasing (athletes +7%, non-athletes +5%) paradigm. However, athletes showed a significantly faster rate of adaptation for the gain-increasing paradigm (F(3,6) = 17.96, p = 0.002). A significant difference was not observed in the rate of adaptation for the gain-decreasing paradigm (F(3,6) = 0.856, p = 0.512). Conclusions: The study showed that athletes do not respond better in terms of reaction time or accuracy of saccades. The significant difference in the rate of change of adaptation between the groups shows that online modification of saccades in the positive direction, although not greater in magnitude, occurs more quickly in athletes than in non-athletes.
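The gain-adaptation analysis described in the Methods (block-average the gains per 100 saccades, normalize to baseline, fit a 3rd-order polynomial, differentiate for the rate of change) can be sketched as follows; the function name, block size handling, and synthetic gain data are illustrative assumptions, not the study's actual code:

```python
import numpy as np

def adaptation_rate(gains, baseline_gain, block_size=100):
    """Block-averaged, baseline-normalized gains and their rate of change
    from a 3rd-order polynomial fit, per the abstract's description."""
    gains = np.asarray(gains, dtype=float)
    n_blocks = len(gains) // block_size
    blocks = gains[:n_blocks * block_size].reshape(n_blocks, block_size)
    mean_gain = blocks.mean(axis=1) / baseline_gain   # normalized block means
    x = np.arange(1, n_blocks + 1)                    # block index
    coeffs = np.polyfit(x, mean_gain, 3)              # 3rd-order polynomial fit
    rate = np.polyval(np.polyder(coeffs), x)          # derivative = rate of change
    return mean_gain, rate

# Synthetic gain-increase data: gain creeps from 1.0 toward ~1.07 over 500 trials
rng = np.random.default_rng(0)
trial = np.arange(500)
gains = 1.0 + 0.07 * (1 - np.exp(-trial / 200)) + rng.normal(0, 0.01, 500)
mean_gain, rate = adaptation_rate(gains, baseline_gain=1.0)
print(mean_gain.round(3), rate.round(4))
```

Comparing the fitted `rate` curves between groups is the kind of regression contrast the abstract reports.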
164. The influence of socio-biological cues on saccadic orienting. Gregory, Nicola Jean. January 2011.
Previous research has suggested that viewing of another’s averted eye gaze causes automatic orienting of attention and eye movements in observers due to the importance of eye gaze for effective social interaction. Other types of visual cues with no social or biological relevance, such as arrows, are claimed not to produce such a direct effect on orienting behaviour. The finding that processing of eye gaze is reduced in individuals with Autistic Spectrum Disorders as well as following damage to the orbitofrontal cortex of the brain, suggests that gaze processing is indeed critical for effective social behaviour and therefore eye gaze may constitute a “special” directional cue. This thesis tested these ideas by examining the influence of socio-biological (eye gaze and finger pointing) and non-social cues (arrows and words) on eye movement responses in both healthy control participants and those with damage to the frontal lobes of the brain. It further investigated the relationship between orienting to gaze and arrow cues and autistic traits in a healthy population. Important differences between the effects of socio-biological and non-social cues were found on saccadic eye movements. Although in the pro-saccade tasks, arrow cues caused a similar facilitation of responses in the cued direction as eye gaze and pointing cues, in the anti-saccade tasks (in which participants have to respond away from the location of a peripheral onset), arrows had a greatly reduced effect on oculomotor programming relative to the biologically relevant cues. Importantly, although the socio-biological cues continued to influence saccadic responses, the facilitation was in the opposite direction to the cues. This finding suggests that the cues were being processed within the same "anti-response" task set (i.e. "go opposite") as the target stimulus. Word cues had almost no effects on saccadic orienting in either pro- or anti-saccade tasks. 
Schematicised eye gaze cues had a smaller magnitude effect than photographic gaze cues suggesting that ecological validity ("biological-ness") is an important factor in influencing oculomotor responses to social cues. No relationship was found between autistic traits and orienting to gaze or arrow cues in a large sample of males. However, findings from the neurological patients point to a possible double-dissociation between the neural mechanisms subserving processing of socio-biological and non-social cues, with the former reliant on the orbitofrontal cortex, and the latter on lateral frontal cortex. Taken together, these results suggest that biologically relevant cues have privileged access to the oculomotor system. The findings are interpreted in terms of a neurocognitive model of saccadic orienting to socio-biological and non-social cues, and an extension to an existing model of saccade generation is proposed. Finally, limitations of the research, its wider impact and directions for future work are discussed.
165. Attention shift and remapping across saccades. Yao, Tao. 19 December 2016.
No description available.
166. Visual working memory and saccadic eye movements. Notice, Keisha Joy. January 2013.
Saccadic eye movements, produced by the oculomotor system, are used to bring salient information in line with the high-resolution fovea. It has been suggested that visual working memory, the cognitive system that temporarily stores and manipulates visual information (Baddeley & Hitch, 1974), is utilised by the oculomotor system in order to maintain saccade programmes across temporal delays (Belopolsky & Theeuwes, 2011). Saccadic eye movements have been found to deviate away from information stored in visual working memory (Theeuwes and colleagues, 2005, 2006). Saccadic deviation away from presented visual stimuli has been associated with top-down suppression (McSorley, Haggard, & Walker, 2006). This thesis examines the extent to which saccade trajectories are influenced by information held in visual working memory. Through a series of experiments, behavioural memory data and saccade trajectory data were explored and evidence for visual working memory-oculomotor interaction was found. Other findings included specific interactions with the oculomotor system for the dorsal and ventral pathways, as well as evidence for both bottom-up and top-down processing. Evidence of further oculomotor interaction with manual cognitive mechanisms was also illustrated, suggesting that visual working memory does not uniquely interact with the oculomotor system to preserve saccade programmes. The clinical and theoretical implications of this thesis are explored. It is proposed that the oculomotor system may interact with a variety of sensory systems to inform accurate and efficient visual processing.
167. Bayesovske modely očných pohybov / Bayesian models of eye movements. Lux, Erik. January 2014.
Attention allows us to monitor objects or regions of visual space and extract information from them for report or storage. Classical theories of attention assumed a single focus of selection, but many everyday activities, such as playing video games, suggest otherwise. Nonetheless, the mechanism underlying the ability to divide attention has not been well established. Numerous attempts have been made to clarify divided attention, including analytical strategies, methods working with visual phenomena, and more sophisticated predictors that incorporate information about past selection decisions. Virtually all of these attempts approach the problem by constructing a simplified model of attention. In this study, we develop a version of an existing Bayesian framework to propose such models, and evaluate their ability to generate eye movement trajectories. For the comparison of models, we use the eye movement trajectories generated by several analytical strategies. We measure the similarity between...
169. Modelling eye movements during Multiple Object Tracking. Děchtěrenko, Filip. January 2012.
In everyday situations people have to track several objects at once (e.g. when driving or playing team sports). The multiple object tracking (MOT) paradigm plausibly simulates tracking of several targets under laboratory conditions. When we track targets in tasks with many other objects in the scene, it becomes difficult to discriminate objects in the periphery (crowding). Although tracking could be done using attention alone, it is an interesting question how humans plan their eye movements during tracking. In our study, we conducted a MOT experiment in which we repeatedly presented participants with several trials with a varied number of distractors; we recorded eye movements and measured their consistency using the Normalized scanpath saliency (NSS) metric. We created several analytical strategies employing crowding avoidance and compared them with the eye data. Besides the analytical models, we trained neural networks to predict eye movements in MOT trials. The performance of the proposed models and neural networks was evaluated in a new MOT experiment. The analytical models explained the variability of eye movements well (results comparable to the intraindividual noise in the data); predictions based on neural networks were less successful.
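A minimal sketch of the NSS metric mentioned above: z-score a predicted fixation map, then average its values at the observed fixation locations, so that chance performance scores near zero. The function name, array layout, and toy map below are assumptions for illustration, not the study's actual implementation:

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized scanpath saliency: z-score the predicted map (zero mean,
    unit variance), then average it at the observed fixation locations."""
    s = np.asarray(saliency_map, dtype=float)
    z = (s - s.mean()) / s.std()
    rows, cols = zip(*fixations)      # (row, col) fixation coordinates
    return z[list(rows), list(cols)].mean()

# Toy example: a map peaked where the fixations land scores well above 0,
# while a fixation on the background scores below 0.
m = np.zeros((10, 10))
m[4:6, 4:6] = 1.0
print(nss(m, [(4, 4), (5, 5)]))   # positive: fixations fall on the z-scored peak
print(nss(m, [(0, 0)]))           # negative: fixation misses the predicted region
```

Averaging NSS across trials and participants gives the consistency measure the abstract refers to.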
170. Recognizing the setting before reporting the action: investigating how visual events are mentally constructed from scene images. Larson, Adam M. January 1900.
Doctor of Philosophy / Department of Psychology / Lester C. Loschky / While watching a film, the viewer begins to construct mental representations of it, which are called events. During the opening scene of a film, the viewer is presented with two distinct pieces of information that can be used to construct the event, namely the setting and an action by the main character. But which of these two constructs is first cognitively represented by the viewer? Experiment 1 examined the time-course of basic level action categorization against superordinate and basic level scene categorization using masking. The results indicated that categorization occurred in a coarse-to-fine manner, inconsistent with Rosch et al.'s (1976) basic level theory. Interestingly, basic level action categorization performance did not reach ceiling even at a 367 ms SOA, suggesting that additional scene information and processing time were required. Thus, Experiment 2 examined scene and action categorization performance over multiple fixations, and the scene information that was fixated for each categorization task. Both superordinate and basic level scene categorization required only a single fixation to reach ceiling performance, inconsistent with basic level primacy, whereas basic level action categorization took two to three fixations and led to more object fixations than either scene categorization task. Eye movements showed evidence of a person bias across all three categorization tasks. Additionally, the categorization task did produce differences in the scene information that was fixated (Yarbus, 1967). However, could basic level theory still be correct when subjects are given a different task? When the same scene images were named, basic level action terms were used more often than basic level scene category terms, while superordinate level action terms were used relatively less often, and superordinate level scene category terms were hardly ever used.
This shows that linguistic categorization (naming) is sensitive to informative, middle-level categories, whereas early perceptual categorization makes use of coarse high level distinctions. Additionally, the early perceptual advantage for scene categorization over basic level action categorization suggests that the scene category is the first construct that is used to represent events in scene images, and maybe even events in visual narratives like film.