
Effect of Color Overlays on Reading Efficiency

Morrison, Rhonda 01 September 2011 (has links)
Reading is a skill that unlocks the doors of learning and success. It is commonly accepted that reading is a foundational skill that plays a major role in a child's academic success. The history of teaching reading includes many theories about the development of reading, the sources of reading difficulties, and interventions for remediation. A large body of research has demonstrated that reading difficulties stem from a phonological basis and that interventions targeting this area are generally beneficial in helping to improve reading skills (National Reading Panel, 2000; Shaywitz, 2003; Stanovich, 1986). However, some readers continue to struggle even after extensive intervention. Helen Irlen (2005) proposed that these readers may experience visual-perceptual distortions when reading high-contrast text (black on a white background). Irlen claims that symptoms of this disorder, termed Scotopic Sensitivity or Irlen Syndrome (IS), can be alleviated by the use of color overlays or filters (tinted glasses). Research into the existence of this syndrome and the effectiveness of overlays and filters in remediating reading problems has been inconsistent and criticized for lacking scientific rigor and for relying heavily on subjects' self-reports of improvement. The present study evaluates differences in eye movements and reading fluency when subjects diagnosed with IS read text with and without color overlays. Participants were screened with the Irlen Reading Perceptual Scale (IRPS) to determine whether they suffered from the syndrome. From this screening, participants chose an overlay reported to alleviate the distortions or discomfort they experienced when reading. They were then asked to read 18 passages under three conditions--with a clear overlay, with their chosen overlay, and with a random overlay--while their eye movements were recorded.
Results indicated that participants showed no improvement in eye movements or reading fluency when they read passages with their optimum (chosen) overlay versus a clear or a random overlay.
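The eye-movement measures compared across overlay conditions can be made concrete with a minimal sketch. The fixation record format (word index, duration in milliseconds) and the measures chosen here are illustrative assumptions, not the study's actual analysis pipeline:

```python
# Minimal sketch of eye-movement reading-efficiency measures of the kind
# compared across overlay conditions. The fixation record format
# (word index, duration in ms) is an illustrative assumption.

def reading_efficiency(fixations, n_words):
    """fixations: list of (word_index, duration_ms) in temporal order."""
    if not fixations:
        return {"fix_per_word": 0.0, "mean_dur": 0.0, "regression_rate": 0.0}
    durations = [d for _, d in fixations]
    # A regression is a fixation on an earlier word than the previous fixation.
    regressions = sum(
        1 for (prev, _), (cur, _) in zip(fixations, fixations[1:]) if cur < prev
    )
    return {
        "fix_per_word": len(fixations) / n_words,
        "mean_dur": sum(durations) / len(durations),
        "regression_rate": regressions / max(len(fixations) - 1, 1),
    }

# Four fixations over a three-word span, one of them a regression.
clear_overlay = reading_efficiency([(0, 220), (1, 240), (0, 180), (2, 200)], n_words=3)
```

Comparing conditions (clear vs. chosen vs. random overlay) then amounts to comparing measures like these per passage; the study's null result corresponds to no reliable difference between the conditions.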

Why is it Difficult to Search for Two Colors at Once? How Eye Movements Can Reveal the Nature of Representations During Multi-target Visual Search

Stroud, Michael John 01 May 2010 (has links)
Visual search consists of locating a known target amongst a field of distractors. Often, observers must search for more than one object at once. Eye movements were monitored in a series of visual search experiments examining search efficiency and how color is represented in order to guide search for multiple targets. The results demonstrated that observers were very color selective when searching for a single color. However, when searching for two colors at once, the degree of similarity between the two target colors had varying effects on fixation patterns. Search for two very similar colors was almost as efficient as search for a single color. As the similarity between the targets decreased, search efficiency suffered, resulting in more fixations on objects dissimilar to both targets. In terms of representation, the results suggest that the guiding template or templates prevailed throughout search and were relatively unaffected by the objects encountered. Fixation patterns revealed that two similarly colored objects may be represented as a single, unitary range containing the target colors as well as the colors in between them in color space. As the degree of similarity between the targets decreased, the two targets were more likely to be represented as discrete, separate templates.
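The unitary-range versus discrete-template distinction described above can be illustrated with a toy hue model. The hue scale, the tolerance, and the merge threshold below are all arbitrary assumptions for illustration, not values from the thesis:

```python
# Toy illustration of the two guidance regimes described above, on a
# 0-360 degree hue circle. All threshold values are arbitrary assumptions.

def matches_template(probe, t1, t2, tolerance=15.0, merge_gap=40.0):
    """Return True if a probe hue would attract fixations under guidance.

    If the two target hues are close (within merge_gap), treat them as one
    unitary range spanning both targets plus the colors in between;
    otherwise treat them as two discrete, separate templates.
    """
    def hue_dist(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    if hue_dist(t1, t2) <= merge_gap:
        lo, hi = sorted((t1, t2))  # assume no wrap-around for simplicity
        return lo - tolerance <= probe <= hi + tolerance
    return hue_dist(probe, t1) <= tolerance or hue_dist(probe, t2) <= tolerance

# Similar targets (20 and 50 deg): an in-between hue (35 deg) is captured
# by the unitary range, as the fixation data suggest.
similar_capture = matches_template(35.0, 20.0, 50.0)      # True
# Dissimilar targets (20 and 180 deg): an in-between hue (100 deg) is not.
dissimilar_capture = matches_template(100.0, 20.0, 180.0)  # False
```

The design choice mirrors the finding: under the unitary range, colors between the two targets attract fixations; under discrete templates, only colors near either target do.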

The Role of Hearing in Central Cueing of Attention

Bonmassar, Claudia 09 December 2019 (has links)
Our ability to be active agents in the world depends on our cognitive system's capacity to collect complex multisensory information, i.e. information coming from different senses, and to integrate it appropriately. One fundamental topic of interest in the study of cognition is understanding the consequences of deafness for the organization of brain functions, specifically when one sensory modality is lost or the information coming from that modality is limited. In my work I used the spatial cueing paradigm to study how visual attention and selection are affected by diverse grades of congenital or acquired deafness at different life stages. The goal of the first study was to validate an integrated approach of covert and overt orienting to study social and non-social cueing of attention in hearing adults. Specifically, I examined manual and oculomotor performance of hearing observers performing a peripheral discrimination task with uninformative social (gaze) and non-social (arrow) cues. In Experiment 1 the discrimination task was easy and eye movements were not necessary, whereas in Experiment 2 they were instrumental in identifying the target. Validity effects on manual response time (RT) were similar for the two cues in Experiments 1 and 2, though in the presence of eye movements observers were overall slower to respond to the arrow cue than to the gaze cue. Cue direction had an effect on saccadic performance before the discrimination target was presented and throughout the duration of the trial. Furthermore, I found evidence of a distinct impact of the type of cue on diverse oculomotor components. While saccade latencies were affected by whether the cue was social or not, saccade landing positions were not affected by cue type. Critically, the manual validity effect was predicted by the landing position of the initial eye movement. This work suggests that the relationship between eye movements and attention is not straightforward.
In hearing adults, in the presence of eye movements, saccade latency was related to the overall speed of manual response, while eye-movement landing position was closely related to manual performance in response to the validity of the cues. In the second study, I used the above-mentioned approach to investigate the impact of early profound deafness on oculomotor control and the orienting of attention to social and non-social cues. Previous research on covert orienting to the periphery suggests that early deaf adults are less sensitive to uninformative gaze cues, though they are equally or more affected by non-social arrow cues. The aim of this second study was to investigate whether spontaneous eye movement behavior helps explain the reduced contribution of this social cue in deaf adults. Twenty-five deaf and twenty-five age-matched hearing observers took part in the experiment. In both groups, the cueing effect on RT was comparable for the gaze and arrow cues, although deaf observers responded significantly more slowly than hearing controls. While deaf and hearing observers responded equally to the cue presented in isolation, deaf participants relied significantly more on eye movements than hearing controls once the discrimination target was presented. Saccade landing position in the deaf group was affected by validity but not by cue type, while latency was not modulated by these factors. Saccade landing position was also strongly related to the magnitude of the validity effect on RT, such that the greater the difference in saccade landing position between invalid and valid trials, the greater the difference in manual RT between invalid and valid trials. This work suggests that the contribution of overt selection in central cueing of attention is more prominent in deaf adults and determines manual performance.
The increase in eye movements and overall slower responses in deaf observers may reflect an adaptive strategy to balance the need for accuracy in a context where vision and visual attention are used to monitor the surrounding environment in the absence of auditory input. This tendency to emphasize response accuracy at the cost of responding more slowly seems to allow them to maintain the same level of cue-driven performance as their hearing peers. In the third study I focused on partial hearing loss. Little is known about the consequences of pure presbycusis, which is usually associated with aging (age-related hearing loss, ARHL). In this case, auditory information is still present, although with some uncertainty about its usefulness. In this study I began to investigate the role of ARHL in cognition, considering covert orienting of attention, selective attention, and executive control. I compared older adults with and without mild to moderate presbycusis (26-60 dB) performing 1) a spatial cueing task with uninformative central cues (social vs. non-social), 2) a flanker task, and 3) a neuropsychological assessment of attention. Notably, while hearing-impaired individuals responded as fast as their normally hearing peers, they showed reduced validity effects on spatial cueing of attention, though no additional group differences were found between the impact of social and non-social cues. Hearing-impaired individuals also showed diminished performance on the Montreal Cognitive Assessment (MoCA) and on tasks requiring divided attention and flexibility. Conversely, overall response times and flanker interference effects were comparable across groups. This work indicates that while response speed and response inhibition appear to be preserved following mild to moderate presbycusis, orienting of attention, divided attention, and the ability to flexibly allocate attention are more deteriorated in older adults with ARHL.
These findings suggest that presbycusis might exacerbate the detrimental influence of aging on visual attention. Taken together, the findings of my research project highlight the different roles hearing loss may play at different life stages. On the one hand, congenital and early deafness seems to induce cognitive and behavioral compensations, which may encompass oculomotor behavior as well; these changes occur progressively during development and may reflect experience-dependent plasticity. On the other hand, late-life compensations in vision and visual attention in older adults with presbycusis may not take place, or may not effectively reduce the negative impact of the auditory impairment. Rather, my data suggest that in this population a deficit in audition may lead to a deficit in visual attention. Future lines of research could aim to better characterize other aspects of attention in the aging population with presbycusis, e.g. peripheral visual attention and the relationship between covert and overt attention. Finally, future research may also consider intervention through early diagnosis and treatment by means of hearing aids, which can benefit cognitive functions and might delay or even prevent cognitive decline in this population, in which sensory compensation may not be sufficient.
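The validity (cueing) effect that recurs throughout these studies is a simple difference score: mean RT on invalid trials minus mean RT on valid trials. A minimal sketch, with made-up RTs for illustration:

```python
# The validity (cueing) effect is the mean RT difference between invalid
# and valid trials; a positive value means the cue sped up responses.
# The trial data below are made up for illustration only.

def validity_effect(trials):
    """trials: list of (condition, rt_ms) with condition 'valid' or 'invalid'."""
    def mean_rt(cond):
        rts = [rt for c, rt in trials if c == cond]
        return sum(rts) / len(rts)
    return mean_rt("invalid") - mean_rt("valid")

trials = [("valid", 420), ("valid", 440), ("invalid", 470), ("invalid", 490)]
effect_ms = validity_effect(trials)  # 50.0 ms cueing benefit
```

The same difference score computed on saccade landing positions instead of RTs gives the oculomotor validity effect that, in the second study, tracked the manual one.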

Human torsional eye movements in response to visual, mechanical and vestibular stimuli

Seidman, Scott Howard January 1993 (has links)
No description available.

Does vergence influence the vestibulo-ocular reflex in human subjects rotating in the dark?

Fajardo, Ann B. 17 December 2008 (has links)
In recent experiments involving acceleration stimuli, researchers instructed subjects to focus on a visual target while measuring the vestibulo-ocular reflex (VOR) in one eye. These experiments showed conclusively that the VOR is influenced by target distance. We, on the other hand, were interested in investigating the VOR of subjects accelerated in complete darkness. Specifically, we wished to determine the subject's vergence point, which cannot be accomplished using data obtained from only one eye. Hence, a binocular eye-tracking system that works in the dark was required. In the experiment described in this thesis, the subject was rotated in the dark on NAMRL's Coriolis Acceleration Platform. The position of each pupil center was tracked and recorded by two helmet-mounted infrared cameras connected to a computer-controlled data acquisition system. The position data were used to calculate the angles through which the eyes rotated, and trigonometric principles were then applied to construct the line of sight for each eye at any moment in time; the intersection of these two lines is the vergence point. With the NAMRL binocular eye-tracking system, an accelerating subject's vergence point can be accurately determined if it is less than 1.5 meters away. The vergence data obtained from this experiment suggest that vergence distance does not exclusively drive the VOR in the dark. / Master of Science
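The triangulation step the abstract describes, intersecting the two lines of sight, can be sketched in a 2-D top-down view. The eye positions, angle conventions, and units below are illustrative assumptions, not NAMRL's actual calibration:

```python
import math

# 2-D (top-down) sketch of the vergence computation described above: each
# eye's line of sight is built from its horizontal rotation angle, and the
# vergence point is the intersection of the two lines. Eye positions and
# angle conventions are illustrative assumptions.

def vergence_point(left_eye, right_eye, left_angle_deg, right_angle_deg):
    """Eyes at (x, 0) in meters; angles measured from straight ahead (+y),
    positive toward the nose. Returns (x, y) of the intersection, or None
    if the lines of sight are parallel (gaze at optical infinity)."""
    la = math.radians(left_angle_deg)
    ra = math.radians(right_angle_deg)
    # Direction vectors: left eye rotates rightward (nasally), right eye leftward.
    ld = (math.sin(la), math.cos(la))
    rd = (-math.sin(ra), math.cos(ra))
    denom = ld[0] * rd[1] - ld[1] * rd[0]
    if abs(denom) < 1e-12:
        return None
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    # Solve left_eye + t*ld = right_eye + s*rd for t (Cramer's rule).
    t = (dx * rd[1] - dy * rd[0]) / denom
    return (left_eye[0] + t * ld[0], left_eye[1] + t * ld[1])

# Eyes 6 cm apart, each rotated ~3.43 deg nasally: vergence at about 0.5 m.
p = vergence_point((-0.03, 0.0), (0.03, 0.0), 3.43, 3.43)
```

With both gaze lines parallel (zero rotation), the function returns None, matching the limit in which the vergence point recedes beyond measurable range.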

Eye Movements and Hemodynamic Response during Emotional Scene Processing: Exploring the Role of Visual Perception in Intrusive Mental Imagery

Roldan, Stephanie Marie 05 June 2017 (has links)
Unwanted and distressing visual imagery is a persistent and emotionally taxing symptom characteristic of several mental illnesses, including depression, schizophrenia, and posttraumatic stress disorder. Intrusive imagery symptoms have been linked to maladaptive memory formation, abnormal visual cortical activity during viewing, gaze pattern deficits, and trait characteristics of mental imagery. Emotional valence of visual stimuli has been shown to alter perceptual processes that influence the direction of attention to visual information, which may result in enhanced attention to suboptimal and generalizable visual properties. This study tested the hypothesis that aberrant gaze patterns to central and peripheral image regions influence the formation of decontextualized visual details which may facilitate involuntary and emotionally negative mental imagery experiences following a stressful or traumatic event. Gaze patterns and hemodynamic response from occipital cortical locations were recorded while healthy participants (N = 39) viewed and imagined scenes with negative or neutral emotional valence. Self-report behavioral assessments of baseline vividness of visual imagery and various cognitive factors were combined with these physiological measures to investigate the potential relationship between visual perception and mental recreation of negative scenes. Results revealed significant effects of task and valence conditions on specific fixation measures and hemodynamic response patterns in ventral visual areas, which interacted with cognitive factors such as imagery vividness and familiarity. Findings further suggest that behaviors observed during mental imagery reveal processes related to representational formation over and above perceptual performance and may be applied to the study of disorders such as PTSD. / Ph. D. 
/ Intrusive imagery describes the visual components of flashbacks that are common to mental disorders such as posttraumatic stress disorder (PTSD), obsessive-compulsive disorder, and schizophrenia. Several explanations for this symptom have been suggested, including incomplete memories, changes in visual brain structures and function, inappropriate viewing patterns, and an individual’s ability to imagine visual scenes in detail. The emotional tone of a scene has also been shown to affect viewing patterns, which may lead to attention being narrowly directed toward specific visual details while ignoring surrounding information. This study tested whether inappropriate viewing patterns to central and outer image regions in negative images influence narrow focus to emotional details, thereby allowing flashback-type imagery to occur following a traumatic or stressful event. Viewing patterns and blood flow in brain regions were measured while participants (N = 39) viewed and imagined scenes with negative or neutral emotional tone. Self-reported detail of voluntary mental imagery and other cognitive factors such as content familiarity and pleasantness were used to investigate a relationship between viewing and imagery of emotionally negative scenes. Results showed that certain cognitive factors as well as the type of visual task significantly affected particular eye movements and patterns of blood flow in visual regions of the brain. These measures interacted with cognitive factors such as imagery detail and content familiarity. Findings further suggest that behaviors observed during mental imagery reveal cognitive processes over and above those during viewing and may be useful in the study of disorders such as PTSD.

Factors Associated with Saccade Latency

Hardwick, David R. January 2008 (has links)
Part of the aim of this thesis was to explore a model for producing very fast saccade latencies in the 80-120 ms range. Its primary motivation was to explore a possible interaction by uniquely combining three independent saccade factors: the gap effect, target-feature discrimination, and saccadic inhibition of return (IOR). Its secondary motivation was to replicate (in a more conservative and tightly controlled design) the surprising findings of Trottier and Pratt (2005), who found that requiring a high-resolution task at the saccade target location speeded saccades, apparently by disinhibition. Trottier and Pratt’s finding was so surprising it raised the question: could the oculomotor braking effect of saccadic IOR to previously viewed locations be reduced or removed by requiring a high-resolution task at the target location? Twenty naïve, untrained undergraduate students participated in exchange for course credit. Multiple randomised temporal and spatial target parameters were introduced in order to increase the probability of exogenous responses. The primary measured variable was saccade latency in milliseconds, with the expectation of a higher probability of very fast saccades (i.e. 80-120 ms). Previous research suggested that these very fast saccades could be elicited in special testing circumstances with naïve participants, such as during the gap task, or in highly trained observers in non-gap tasks (Fischer & Weber, 1993). Trottier and Pratt (2005) found that adding a task demand that required naïve, untrained participants to obtain a feature of the target stimulus (and then make a discriminatory decision) also produced a higher probability of very fast saccade latencies. They stated that these saccades were not the same as the express saccades previously reported in the gap paradigm, and proposed that such very fast saccades were normal.
Carpenter (2001) found that in trained participants the probability of very fast saccades during the gap task increased when the horizontal direction of the current saccade continued in the same direction as the previous saccade (as opposed to reversing direction), giving a distinct bimodality in the distribution of latencies in five of seven participants; he likened his findings to the well-known IOR effect. The IOR effect has previously been found in both manual key-press RT and saccadic latency paradigms. Hunt and Kingstone (2003) stated that there are both cortical top-down and oculomotor hard-wired aspects to IOR. An experiment was designed that included obtain-target-feature and oculomotor-prior-direction factors, crossed with two gap offsets (0 ms and 200 ms). Target-feature discrimination accuracy was high (97%). Under-additive main effects were found for each factor, with a three-way interaction of gap by obtain-feature by oculomotor-prior-direction. Another new three-way interaction was found for anticipatory saccade type: anticipatory saccades became significantly more likely under obtain-target-feature for the continuing oculomotor direction. This appears to be similar to the increased anticipatory direction-error rate in the antisaccade task. These findings add to the saccadic latency knowledge base and, in agreement with both Carpenter and Trottier and Pratt, show that laboratory testing paradigms can affect saccadic latency distributions. That is, salient (meaningful) targets that follow more natural oculomotor trajectories produce a higher probability of very fast latencies in the 80-120 ms range. In agreement with Hunt and Kingstone, there appears to be an oculomotor component to IOR.
Specifically, saccadic target-prior-location interacts differently with obtain-target-feature under a 200 ms gap than under a 0 ms gap, most likely due predominantly to a predictive, disinhibitory oculomotor momentum effect rather than to the attentional inhibitory effect proposed for key-press IOR. A new interpretation of the paradigm previously referred to as IOR is offered that includes a link to the smooth pursuit system. Additional studies are planned to explore saccadic interactions in more detail.
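The latency bands this abstract works with can be captured in a small classifier. The 80-120 ms "very fast" band is taken from the text; the 80 ms anticipatory cutoff is a common convention assumed here for illustration:

```python
# Sketch of binning saccade latencies into the classes discussed above.
# The 80-120 ms "very fast" band is from the text; the 80 ms anticipatory
# cutoff is an assumed, though common, convention.

def classify_latency(latency_ms):
    if latency_ms < 80:
        return "anticipatory"
    if latency_ms <= 120:
        return "very_fast"
    return "regular"

def latency_profile(latencies):
    """Proportion of each latency class in a list of saccade latencies (ms)."""
    counts = {"anticipatory": 0, "very_fast": 0, "regular": 0}
    for lat in latencies:
        counts[classify_latency(lat)] += 1
    n = len(latencies)
    return {k: v / n for k, v in counts.items()}

# A made-up distribution: the "very_fast" proportion is the quantity the
# thesis predicts should rise under the combined factors.
profile = latency_profile([65, 95, 110, 140, 180, 90, 75, 200])
```

Comparing such profiles across conditions (gap vs. no gap, obtain-feature vs. not, continuing vs. reversing direction) is the shape of the analysis the abstract describes, including the anticipatory-rate effect.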

Structured vs. unstructured scan path in static visual search performance

Sequeira, Eric G. January 1979 (has links)
Call number: LD2668 .T4 1979 S46 / Master of Science

The eyes as a window to the mind: inferring cognitive state from gaze patterns

Boisvert, Jonathan 22 March 2016 (has links)
In seminal work, Yarbus examined the characteristic scanpaths that result when viewing an image, observing that scanpaths varied significantly depending on the question posed to the observer. While early efforts to test this hypothesis were equivocal, it has since been established that aspects of an observer’s assigned task can be inferred from their gaze. In this thesis we examine two datasets that have not previously been considered, involving prediction of task and of observer sentiment respectively. The first involves predicting general tasks assigned to observers viewing images, and the other predicting subjective ratings recorded after viewing advertisements. The results present interesting observations on task groupings and affective dimensions of images, and on the value of various measurements (gaze- or image-based) in making these predictions. Analysis also demonstrates the importance of how data is partitioned for predictive analysis, and the complementary nature of gaze-specific and image-derived features. / May 2016
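Gaze-based prediction of this kind starts by summarizing a scanpath into a feature vector. The particular features and fixation format below are illustrative assumptions, not the thesis's actual feature set:

```python
import math

# Sketch of gaze-based features of the kind used to predict an observer's
# task or rating from a scanpath. The feature set and fixation format
# (x, y in pixels, duration in ms) are illustrative assumptions.

def gaze_features(scanpath):
    """scanpath: list of (x, y, duration_ms) fixations in temporal order."""
    durations = [d for _, _, d in scanpath]
    # Saccade amplitudes: Euclidean distance between successive fixations.
    amplitudes = [
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1, _), (x2, y2, _) in zip(scanpath, scanpath[1:])
    ]
    return {
        "n_fixations": len(scanpath),
        "mean_fix_dur": sum(durations) / len(durations),
        "mean_saccade_amp": sum(amplitudes) / len(amplitudes) if amplitudes else 0.0,
    }

feats = gaze_features([(100, 100, 250), (200, 100, 300), (200, 220, 200)])
```

Feature vectors like these, possibly concatenated with image-derived features, would then feed a standard classifier trained to predict the assigned task, with the train/test partitioning chosen carefully as the abstract emphasizes.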

Coordinating speech-related eye movements between comprehension and production

Kreysa, Helene January 2009 (has links)
Although language usually occurs in an interactive and world-situated context (Clark, 1996), most research on language use to date has studied comprehension and production in isolation. This thesis combines research on comprehension and production, and explores the links between them. Its main focus is on the coordination of visual attention between speakers and listeners, as well as the influence this has on the language they use and the ease with which they understand it. Experiment 1 compared participants’ eye movements during comprehension and production of similar sentences: in a syntactic priming task, they first heard a confederate describe an image using active or passive voice, and then described the same kind of picture themselves (cf. Branigan, Pickering, & Cleland, 2000). As expected, the primary influence on eye movements in both tasks was the unfolding sentence structure. In addition, eye movements during target production were affected by the structure of the prime sentence. Eye movements in comprehension were linked more loosely with speech, reflecting the ongoing integration of listeners’ interpretations with the visual context and other conceptual factors. Experiments 2-7 established a novel paradigm to explore how seeing where a speaker was looking during unscripted production would facilitate identification of the objects they were describing in a photographic scene. Visual coordination in these studies was created artificially through an on-screen cursor which reflected the speaker’s original eye movements (cf. Brennan, Chen, Dickinson, Neider, & Zelinsky, 2007). A series of spatial and temporal manipulations of the link between cursor and speech investigated the respective influences of linguistic and visual information at different points in the comprehension process. Implications and potential future applications are discussed, as well as the relevance of this kind of visual cueing to the processing of real gaze in face-to-face interaction.
