111

The role of uncertainty and reward on eye movements in natural tasks

Sullivan, Brian Thomas 18 July 2012 (has links)
The human visual system is remarkable for the variety of functions it can be used for and the range of conditions under which it can perform, from the detection of small brightness changes to the guidance of actions in complex movements. The human eye is foveated, and humans continually make eye and body movements to acquire new visual information. The mechanisms that control this acquisition and the associated sequencing of eye movements in natural circumstances are not well understood. While the visual system has highly parallel inputs, the fovea must be moved in a serial fashion: a decision process continually occurs in which peripheral information is evaluated and a subsequent fixation target is selected. Prior explanations for fixation selection have largely focused on computer vision algorithms that find image areas with high salience, on models that incorporate reduction of uncertainty or entropy of visual features, and on heuristic models. However, these methods are not well suited to modeling natural circumstances in which humans are mobile and eye movements are closely coordinated to gather ongoing task information. Following the computational model of gaze scheduling proposed by Sprague and Ballard (2004), I argue that a systematic explanation of human gaze behavior in complex natural tasks needs to represent task goals, a reward structure for those goals, and a representation of uncertainty concerning progress towards those goals. If these variables are represented, it is possible to formulate a decision computation for choosing fixation targets based on an expected value derived from uncertainty-weighted reward. I present two studies of human gaze behavior in a simulated driving task that provide evidence of the human visual system's sensitivity to uncertainty and reward. In these experiments, observers tended to monitor an information source more closely if it had a high level of uncertainty, but only when that information was also associated with high reward. Given this behavioral finding, I then present a set of simple candidate models in an attempt to explain how humans schedule the acquisition of information over time. These simple models are shown to be inadequate in describing the process of coordinated information acquisition in driving. I present an extended version of the gaze scheduling model adapted to our particular driving task. This formulation allows ordinal predictions about how humans use reward and uncertainty in the control of eye movements and is generally consistent with observed human behavior. I conclude by reviewing the main results and discussing the merits and benefits of the computational models used, possible future behavioral experiments that would serve to more directly test the gaze scheduling model, and revisions to future implementations of the model to more appropriately capture human gaze behavior.
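As a purely illustrative sketch of the kind of decision computation described in this abstract (not the thesis's model or the Sprague and Ballard scheduler; the information sources, reward values, and update constants below are all invented), gaze can be allocated on each step to the information source with the highest uncertainty-weighted reward:

```python
import numpy as np

def select_fixation(rewards, uncertainties):
    """Choose the information source with the highest expected value,
    where expected value is task reward weighted by current uncertainty."""
    expected_value = rewards * uncertainties
    return int(np.argmax(expected_value))

# Hypothetical driving-like scenario (all numbers invented for illustration):
# sources: 0 = lead car, 1 = speedometer, 2 = roadside sign
rewards = np.array([10.0, 5.0, 1.0])        # task-relevant reward per source
uncertainties = np.array([0.2, 0.9, 0.9])   # current uncertainty per source

for step in range(3):
    chosen = select_fixation(rewards, uncertainties)
    print(f"step {step}: fixate source {chosen}")
    uncertainties[chosen] = 0.05   # a fixation sharply reduces that source's uncertainty
    uncertainties += 0.1           # unobserved state drifts, so uncertainty grows everywhere
```

The sketch makes the tested interaction visible: a high-uncertainty source only wins the competition for gaze when its reward is also non-trivial.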
112

From vision to drawn metaphor : an artistic investigation into the relationship between eye-tracking and drawing

Baker, Catherine January 2012 (has links)
At its most essential, drawing consists of making marks on a surface; however, such an interpretation does not necessarily reflect the diverse practice of artists whose work seeks to challenge the conventions of drawing and establish new boundaries. This thesis documents a practice involving a new consideration of drawing, one which focuses on the active process of drawing as a physical and perceptual encounter. It proposes that eye movements and their associated cognitive processing can be considered a drawing-generating process. It does not seek to undermine the conventional three-way process of drawing involving eye, hand and brain, but presents ideas that push against the established boundaries of drawing practice, investigating new ways of making and new ways of considering the practice of drawing as a phenomenological contemplation. The proposition for drawing presented in this document has been developed through a practice-led enquiry over the last eight years and involves using scientific methodologies found within the area of Active Vision. By examining artworks produced within the early part of the period defined within this thesis, emergent ideas relating to the act of making in-situ drawings and the recollection of such experiences brought about a series of questions regarding the process of generating a drawing. As the practice developed, using data obtained from different eye-tracking experiments, the author has explored the possibilities for drawing by using scientific methods of tracking the act of looking to investigate the relationship between the observer and the observed entity. Using the relationship between the drawn mark and visual responses to it as the basis for a practice-led period of research, this thesis presents the notion that, by using technologies designed for other disciplines, artists can explore the potential for drawing beyond the conventions cited above. Through the use of eye-tracking data, the artist and author seeks to firmly establish the use of this scientific methodology within an artistic framework. It is a framework that responds to new ways of thinking about spatiality and the relations between sight and thought, taking into account the value of experience within the production of art: how the physical act itself becomes the manifestation of a process of drawing, understanding and knowledge of the world around us.
113

Evidence of intelligent neural control of human eyes

Najemnik, Jiri 22 June 2011 (has links)
Nearly all imaginable human activities rest on a context-appropriate dynamic control of the flow of retinal data into the nervous system via eye movements. The brain's task is to move the eyes so as to exert intelligent predictive control over the informational content of the retinal data stream. An intelligent oculomotor controller would first model the future contingent upon each possible next action in the oculomotor repertoire, then rank-order the repertoire by assigning a value v(a,t) to each possible action a at each time t, and execute the oculomotor action with the highest predicted value each time. We present striking evidence of such intelligent neural control of human eyes in a laboratory task of visual search for a small target camouflaged by a natural-like stochastic texture, a task in which the value of fixating a given location naturally corresponds to the expected information gain about the unknown location of the target. Human searchers behave as if maintaining a map of beliefs (represented as probabilities) about the target location, optimally updating their beliefs with the visual data obtained on each fixation using Bayes' rule. On average, human eye movement patterns appear remarkably consistent with an intelligent strategy of moving the eyes to maximize the expected information gain, but inconsistent with the strategy of always foveating the currently most likely location of the target (a prevalent intuition in existing theories). We derive principled, simple, accurate, and robust mathematical formulas to compute belief and information value maps across the search area on each fixation (or time step). The formulas are exact expressions in the limiting cases of a small amount of information extracted, which occurs when the number of potential target locations is infinite or when the time step is vanishingly small (used for online control of fixation duration). Under these circumstances, the computation of the information value map reduces to a linear filtering of beliefs on each time step, and beliefs can be maintained simply as running weighted averages. A model algorithm employing these simple computations captures many statistical properties of human eye movements in our search task.
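A toy, one-dimensional sketch of the belief-map bookkeeping described in this abstract is given below. It is an illustration under simplified assumptions rather than the formulas derived in the thesis: per-location evidence is unit-variance Gaussian with a signal-to-noise ratio that falls off with eccentricity, and the next fixation is chosen with a crude proxy for expected information gain; the helper names (`visibility`, `observe`) and all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_loc = 25                                  # candidate target locations
target = int(rng.integers(n_loc))           # hidden target location
belief = np.full(n_loc, 1.0 / n_loc)        # uniform prior over target location

def visibility(fixation):
    """Hypothetical foveated acuity: signal-to-noise ratio falls off
    with distance from the fixated location."""
    d = np.abs(np.arange(n_loc) - fixation)
    return 3.0 / (1.0 + d)

def observe(fixation):
    """Noisy per-location evidence from one fixation: unit-variance Gaussian
    with mean snr at the target location and mean 0 elsewhere."""
    snr = visibility(fixation)
    signal = (np.arange(n_loc) == target).astype(float)
    return rng.normal(signal * snr, 1.0), snr

fixation = n_loc // 2
for step in range(50):
    obs, snr = observe(fixation)
    log_lr = snr * obs - 0.5 * snr**2       # log likelihood ratio: target here vs. not
    belief *= np.exp(log_lr)                # Bayes' rule update, then renormalise
    belief /= belief.sum()
    if belief.max() > 0.99:
        break
    # Greedy proxy for expected information gain: favour fixations that put
    # high-belief locations under high visibility.
    score = np.array([np.sum(belief * visibility(f)) for f in range(n_loc)])
    fixation = int(np.argmax(score))

print(f"true target {target}, reported {int(belief.argmax())} after {step + 1} fixations")
```

The thesis replaces the crude `score` above with exact limiting-case expressions for the expected information gain, which is where the linear filtering of beliefs and the running weighted averages come in.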
114

Averaged evoked response and reflex blink to visual stimuli

Antinoro, Norla Marie Walser January 1968 (has links)
No description available.
115

  • Resilience and Attentional Biases: What You See May Be What You Get

Valcheff, Danielle 17 March 2014 (has links)
Research suggests that, during stress, resilient individuals use positive emotion regulation strategies and experience a greater number of positive emotions than those who are less resilient. Therefore, differences could be expected in attentional biases towards emotional stimuli based on resilience. The current study investigated attentional biases towards neutral, negative and positive images in response to varying levels of resilience and mood induction conditions (neutral, negative and positive). Sixty participants viewed a series of pre- and post-mood-induction slides in order to measure attentional biases to emotional stimuli. The study provided evidence for the presence of trait- and state-congruent attentional biases. More resilient individuals demonstrated an initial bias towards positive stimuli and, once emotion was aroused, a bias away from negative stimuli. Additionally, mood-congruent attentional biases were observed for participants induced into positive and negative mood states. Implications as they apply to research and clinical practice are discussed.
116

Are There Age Differences in Shallow Processing of Text?

Burton, Christine Millicent 06 December 2012 (has links)
There is growing evidence that young adult readers frequently fail to create exhaustive text-based representations as they read. Although there has been a significant amount of research devoted to age-related effects on text processing, there has been little research concerning this so-called shallow processing by older readers. This dissertation uses eye tracking to explore age-related effects in shallow processing across different levels of text representation. Experiment 1 investigated shallow processing by older readers at the textbase level by inserting semantic anomalies into passages read by participants. Older readers frequently failed to report the anomalies, but no more frequently than did younger readers. The eye-fixation behaviour revealed that older readers detected some of the anomalies sooner than did younger readers, but had to allocate disproportionately more processing resources to looking back at the anomalies to achieve levels of detection success comparable to those of their younger counterparts. Experiment 2 examined age-related effects of shallow processing at the surface form by inserting syntactic anomalies into passages read by older and younger adults. Older readers were less likely to detect syntactic anomalies when first encountering them relative to younger readers and engaged in increased regressive fixations to the anomalies. However, older readers with high reading comprehension skill were able to use their familiarity with text content to increase their likelihood of syntactic anomaly detection. Experiment 3 investigated the role of aging in shallow processing of the temporal dimension of the situation model. No age-related differences in reporting the anomalies were found. The eye-fixation behaviour revealed that older readers with high working memory capacity detected some anomalies sooner than did younger readers; however, they had to allocate increased processing resources to looking back at the anomalies to achieve levels of detection comparable to those of younger readers. Together, the results demonstrate that older readers are susceptible to shallow processing, but no more so than younger readers when they can rely on their linguistic skill or their existing knowledge to help reduce processing demands. However, older readers appear to require additional processing time to achieve levels of anomaly detection comparable to those of younger readers.
117

Aspects of time-varying and nonlinear systems theory, with biological applications.

Korenberg, Michael John January 1972 (has links)
No description available.
118

Language learning : a study on cognitive style, lateral eye-movement and deductive vs. inductive learning of foreign language structures

Stieblich, Christel H. January 1983 (has links)
This study verifies Hartnett's (1975) claim of a correlation between cognitive style, lateral eye-movement, and success in deductive versus inductive foreign language learning. Subjects were 123 native English- or French-speaking students in two different types of beginners' German language classes. Results show that subjects exhibiting a global cognitive style learn well with an inductive method and not as well with a deductive method. Students exhibiting an analytic cognitive style learn well with a deductive method but also well with an inductive method. The subjects' cognitive styles were measured by the Group Embedded Figures Test. There is no correlation between handedness and cognitive style or language proficiency. If we assume that non-right-handers are less lateralized for language functions than right-handers, then the results suggest that cognitive style is independent of language lateralization. Results do not support the validity of lateral eye-movement as a measure of cognitive style.
119

Using Eye Movements to Investigate Insight Problem Solving

Ellis, Jessica J. 11 December 2012 (has links)
In four experiments on insight problem solving, we investigated the time course of the development of solution knowledge prior to response, as well as the impact of stimulus familiarity on task performance and eye movement measures. In each experiment, participants solved anagram problems while their eye movements were monitored. In Experiments 1a and 1b, each anagram problem consisted of a circular array of letters: a scrambled four-letter solution word containing three consonants and one vowel, and an additional randomly-placed distractor consonant. Viewing times on the distractor consonant compared to the solution consonants provided an online measure of knowledge of the solution. Viewing times on the distractor consonant and the solution consonants were indistinguishable early in the trial. In contrast, several seconds prior to the response, viewing times on the distractor consonant decreased in a gradual manner compared to viewing times on the solution consonants. Importantly, this pattern was obtained across both trials in which participants reported the subjective experience of insight and trials in which they did not. These findings are consistent with the availability of partial knowledge of the solution prior to such information being accessible to subjective phenomenal awareness. In Experiments 2 and 3, each anagram problem consisted of a centrally located three-letter string plus three additional individual letters located above and to the side of the central letter string. All the letters in the central letter string were members of the five-letter solution word, while one of the individual letters was a randomly placed distractor. In Experiment 2, we replicated our findings of the gradual development of solution knowledge using this more complex stimulus display. In Experiment 3, we manipulated the familiarity of the central letter string by presenting it either in the form of a three-letter word, or as a meaningless string of letters. Behavioural measures showed an overall negative impact of familiarity on task performance, while eye movement measures revealed a more complex pattern of effects, including both interference and facilitation. Critically, the effects of familiarity on problem solving did not interact with the development of solution knowledge prior to response.
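A minimal sketch of how a viewing-time measure of this kind could be computed from raw fixation data is shown below. It is not code from the thesis; the tuple format, bin width, and function name are assumptions made for illustration. The idea is simply to track the proportion of viewing time spent on the distractor letter in time bins counted backwards from the response (chance is 0.2 for a five-letter display).

```python
import numpy as np

def distractor_viewing_proportion(fixations, response_time_ms, bin_ms=1000):
    """Proportion of viewing time on the distractor letter per time bin,
    with bin 0 ending at the response and later indices reaching back in time.

    fixations -- iterable of (onset_ms, duration_ms, on_distractor) tuples
    """
    n_bins = int(np.ceil(response_time_ms / bin_ms))
    on_distractor = np.zeros(n_bins)
    total = np.zeros(n_bins)
    for onset, duration, is_distractor in fixations:
        b = min(int((response_time_ms - onset) // bin_ms), n_bins - 1)
        total[b] += duration
        if is_distractor:
            on_distractor[b] += duration
    with np.errstate(invalid="ignore"):   # guard against bins with no fixations
        return on_distractor / total

# Fabricated trial: early fixations sample the distractor, later ones avoid it.
fixations = [(0, 250, False), (300, 300, True), (650, 300, False),
             (1000, 350, False), (1400, 300, True), (1750, 400, False),
             (2200, 500, False), (2750, 550, False)]
print(distractor_viewing_proportion(fixations, response_time_ms=3400))
# -> approximately [0.0, 0.0, 0.32, 0.55]; bin 0 is the final second before the response
```

A gradual decline in this proportion in the seconds before the response, rather than an abrupt drop, is the pattern reported above.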
120

The Relationship Between Bottom-Up Saliency and Gaze Behaviour During Audiovisual Speech Perception

Everdell, Ian 12 January 2009 (has links)
Face-to-face communication is one of the most natural forms of interaction between humans. Speech perception is an important part of this interaction. While speech could be said to be primarily auditory in nature, visual information can play a significant role in influencing perception. It is not well understood what visual information is important or how that information is collected. Previous studies have documented the preference to gaze at the eyes, nose, and mouth of the talking face, but physical saliency, i.e., the unique low-level features of the stimulus, has not been explicitly examined. Two eye-tracking experiments are presented to investigate the role of physical saliency in the guidance of gaze fixations during audiovisual speech perception. Experiment 1 quantified the physical saliency of a talking face and examined its relationship with the gaze behaviour of participants performing an audiovisual speech perception task and an emotion judgment task. The majority of fixations were made to locations on the face that exhibited high relative saliency, but not necessarily the maximally salient location. The addition of acoustic background noise resulted in a change in gaze behaviour and a decrease in correspondence between saliency and gaze behaviour, whereas changing the task did not alter this correspondence despite changes in gaze behaviour. Experiment 2 manipulated the visual information available to the viewer by using animated full-feature and point-light talking faces. Removing static information, such as colour, intensity, and orientation, from the stimuli elicited both a change in gaze behaviour and a decrease in correspondence between saliency and gaze behaviour. Removing dynamic information, particularly head motion, resulted in a decrease in correspondence between saliency and gaze behaviour without any change in gaze behaviour. The results of these experiments show that, while physical saliency is correlated with gaze behaviour, it cannot be the single factor determining the selection of gaze fixations. Interactions within and between bottom-up and top-down processing are suggested to guide the selection of gaze fixations during audiovisual speech perception. / Thesis (Master, Neuroscience Studies) -- Queen's University, 2008.
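One common way to quantify the correspondence between saliency and gaze examined here is a normalized scanpath saliency (NSS)-style score: how salient the fixated locations are, expressed in standard-deviation units of the saliency map. The sketch below illustrates that general idea with fabricated data; it is not the thesis's analysis pipeline, and the saliency map, fixation coordinates, and function name are invented.

```python
import numpy as np

def fixation_saliency_score(saliency_map, fixations):
    """NSS-style score: mean z-scored saliency at the fixated pixels.
    `fixations` is a list of (row, col) pixel coordinates."""
    z = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(np.mean([z[r, c] for r, c in fixations]))

# Toy example with a fabricated saliency map and fixations (illustration only).
rng = np.random.default_rng(1)
saliency = rng.random((120, 160))
saliency[40:60, 70:90] += 2.0          # a high-saliency patch (e.g. the mouth region)

fixations_on_patch = [(50, 80), (45, 75), (55, 85)]
fixations_random = [tuple(rng.integers((120, 160))) for _ in range(3)]

print("score near salient patch:", fixation_saliency_score(saliency, fixations_on_patch))
print("score for random fixations:", fixation_saliency_score(saliency, fixations_random))
# A score near 0 means fixated locations are no more salient than chance; larger
# positive values indicate closer correspondence between saliency and gaze.
```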
