  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Tracking the invisible requires prediction and internal models

Orban de Xivry, Jean-Jacques 14 December 2007 (has links)
In order to grasp an object in their visual field, humans orient their visual axis to targets of interest. While scanning their environment, humans perform multiple saccades (rapid eye movements that correct for a position error between eye and target) to align their visual axis with objects of interest. Humans are also able to track objects that move in their environment by means of smooth pursuit eye movements (slow eye movements that correct for any velocity error between eye and target, i.e. for any retinal slip). The appearance of a moving stimulus in the environment elicits smooth pursuit eye movements with a latency of around 100 ms. Accordingly, the smooth pursuit system accounts for a change in the trajectory of a moving target with a similar delay. Because of this delay, the oculomotor system needs strategies to avoid the build-up of position error during tracking of a moving target. To do so, the oculomotor system uses prediction to anticipate the future target trajectory. However, this strategy is limited to conditions where the target trajectory is predictable. Otherwise, primates have to combine pursuit and saccades when tracking unpredictable moving targets to avoid large position errors. This thesis focuses on both the prediction mechanisms and the interactions between saccades and pursuit. In order to investigate prediction mechanisms, we asked human subjects to pursue a moving target that was transiently occluded. During occlusions, subjects continued to pursue the invisible target. This thesis demonstrates that this predictive pursuit response is based on a dynamic internal representation of target motion, i.e. a representation that evolves with time. This internal representation could be either built up by repetition of the same target motion or extrapolated on the basis of the pre-occlusion target motion.
In addition, it is shown that during occlusions, saccades are adjusted to account for the large variability of the smooth pursuit response. This implies that internal models use the smooth pursuit command to predict the future smooth pursuit response. Together, these results demonstrate that both prediction and internal models are necessary to track the invisible as well as the visible.
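The delay-versus-prediction argument above can be caricatured in a few lines of code: a tracker that receives only delayed retinal input must extrapolate from a stored velocity estimate to keep position error bounded during an occlusion. The sketch below is illustrative only, not the author's model; the 100 ms latency, constant target velocity, and 500 ms occlusion are assumed parameters.

```python
# Toy comparison of a predictive vs. non-predictive tracker following a
# target that moves at constant velocity and is briefly occluded.
# All parameters are illustrative assumptions, not values from the thesis.

DT = 0.01          # simulation step (s)
DELAY = 0.10       # sensory latency (s), ~100 ms as in the abstract
V_TARGET = 20.0    # target velocity (deg/s)

def max_tracking_error(predictive, t_end=1.5, occlusion=(0.5, 1.0)):
    steps = int(t_end / DT)
    lag = int(DELAY / DT)
    target = [V_TARGET * i * DT for i in range(steps)]
    eye, est_v = 0.0, 0.0
    worst = 0.0
    for i in range(steps):
        occluded = occlusion[0] <= i * DT < occlusion[1]
        if not occluded and i >= lag:
            est_v = V_TARGET        # delayed retinal input refreshes the estimate
            eye += est_v * DT
        elif occluded and predictive:
            eye += est_v * DT       # extrapolate with the stored velocity estimate
        # otherwise (initial latency, or occluded without prediction): hold
        worst = max(worst, abs(target[i] - eye))
    return worst
```

With these numbers, the non-predictive tracker accumulates roughly ten degrees of position error over the occlusion, while the predictive tracker's error stays near the small offset introduced by the initial sensory latency.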
92

Saccadic eye movements and pause/articulation components during a letter naming speed task: Children with and without dyslexia

Al Dahhan, Noor 27 September 2013 (has links)
Naming speed (NS) tasks that measure how quickly and accurately participants can name visual stimuli (e.g., letters) are commonly used to predict reading ability. However, the link between NS and reading is poorly understood. Three methods were used to investigate how NS relates to reading and what cognitive processes are involved: (a) changing stimulus composition to emphasize phonological and/or visual aspects (Compton, 2003); (b) decomposing NS times into pause and articulation components; and (c) analyzing eye movements during a NS task. Participants were in three groups: dyslexics (aged 9, 10), chronological-age (CA) controls (aged 9, 10), and reading-level (RL) controls (aged 6, 7). We used a letter NS task and three variants that were phonologically and/or visually confusing while subjects' eye movements and articulations were recorded, and examined how these manipulations influenced NS performance and eye movements. For all groups, NS manipulations were associated with specific patterns of behaviour and saccadic performance, reflecting differential contributions of NS to reading. RL controls were less efficient, made more errors, saccades, and regressions, and had longer fixation durations, articulation times, and pause times than CA controls. Dyslexics consistently scored in between the two control groups, except for the number of saccades and regressions, of which they made more than either control group. Overall there were clear developmental changes in NS performance, NS components, and eye movements in controls from ages 6 to 10 that appear to occur more slowly for dyslexics. Furthermore, pause time and fixation duration were key features in the NS-reading relationship, and increasing the visual similarity of the letter matrix had the greatest effect on performance for all subjects.
This latter result was demonstrated by the decrease in efficiency and eye-voice span, the increase in naming errors, saccades, and regressions, and the longer pause times and fixation durations found for all subjects. We conclude that NS is related to reading via fixation durations and pause times: longer fixation durations reflect the greater amount of time needed to acquire visual/orthographic information from stimuli, and longer pause times in children with dyslexia reflect the greater amount of time needed to prepare to respond to stimuli. / Thesis (Master, Neuroscience Studies) -- Queen's University, 2013-09-26 12:24:53.951
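Method (b), decomposing total naming time into pause and articulation components, is essentially bookkeeping over voice onset/offset timestamps. A minimal sketch, with a hypothetical interval format (not the authors' code):

```python
# Decompose a naming-speed trial into articulation time and pause time.
# Input: (onset, offset) voicing intervals in seconds for each named letter,
# plus the total trial duration. Names and formats are illustrative.

def decompose(voicing, trial_end):
    """Return (total articulation time, total pause time) for one trial."""
    articulation = sum(off - on for on, off in voicing)
    pause = trial_end - articulation  # silence before, between, and after items
    return articulation, pause

# Example: three letters named during a 3.0 s trial
voicing = [(0.4, 0.7), (1.1, 1.5), (2.2, 2.6)]
art, pause = decompose(voicing, trial_end=3.0)
# about 1.1 s of articulation and 1.9 s of pause time
```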
93

Non-auditory Influences on the Auditory Periphery

Gruters, Kurtis G. January 2016 (has links)
Once thought to be predominantly the domain of cortex, multisensory integration has now been found at numerous sub-cortical locations in the auditory pathway. Prominent ascending and descending connections within the pathway suggest that the system may utilize non-auditory activity to help filter incoming sounds as they first enter the ear. Active mechanisms in the periphery, particularly the outer hair cells (OHCs) of the cochlea and middle ear muscles (MEMs), are capable of modulating the sensitivity of other peripheral mechanisms involved in the transduction of sound into the system. Through indirect mechanical coupling of the OHCs and MEMs to the eardrum, motion of these mechanisms can be recorded as acoustic signals in the ear canal. Here, we utilize this recording technique to describe three different experiments that demonstrate novel multisensory interactions occurring at the level of the eardrum. 1) In the first experiment, measurements in humans and monkeys performing a saccadic eye movement task to visual targets indicate that the eardrum oscillates in conjunction with eye movements. The amplitude and phase of the eardrum movement, which we dub the Oscillatory Saccadic Eardrum Associated Response or OSEAR, depended on the direction and horizontal amplitude of the saccade and occurred in the absence of any externally delivered sounds. 2) For the second experiment, we use an audiovisual cueing task to demonstrate a dynamic change to pressure levels in the ear when a sound is expected versus when one is not. Specifically, we observe a drop in frequency power and variability from 0.1 to 4 kHz around the time when the sound is expected to occur, in contrast to a slight increase in power at both lower and higher frequencies.
3) For the third experiment, we show that seeing a speaker say a syllable that is incongruent with the accompanying audio can alter the response patterns of the auditory periphery, particularly during the most relevant moments in the speech stream. These visually influenced changes may contribute to the altered percept of the speech sound. Collectively, we presume that these findings represent the combined effect of OHCs and MEMs acting in tandem in response to various non-auditory signals in order to manipulate the receptive properties of the auditory system. These influences may have a profound, and previously unrecognized, impact on how the auditory system processes sounds from initial sensory transduction all the way to perception and behavior. Moreover, we demonstrate that the entire auditory system is, fundamentally, a multisensory system. / Dissertation
94

Insect-Like Organization of the Stomatopod Central Complex: Functional and Phylogenetic Implications

Thoen, Hanne H., Marshall, Justin, Wolff, Gabriella H., Strausfeld, Nicholas J. 07 February 2017 (has links)
One approach to investigating functional attributes of the central complex is to relate its various elaborations to pancrustacean phylogeny, to taxon-specific behavioral repertoires, and to ecological settings. Here we review morphological similarities between the central complex of stomatopod crustaceans and the central complex of dicondylic insects. We discuss whether their central complexes possess comparable functional properties, despite the phyletic distance separating these taxa, with mantis shrimp (Stomatopoda) belonging to the basal branch of Eumalacostraca. Stomatopods possess the most elaborate visual receptor system in nature and display a fascinating behavioral repertoire, including refined appendicular dexterity and independently moving eyestalks. They are also unparalleled in their ability to maneuver during both swimming and substrate locomotion. Like other pancrustaceans, stomatopods possess a set of midline neuropils, called the central complex, which in dicondylic insects have been shown to mediate the selection of motor actions for a range of behaviors. As in dicondylic insects, the stomatopod central complex comprises a modular protocerebral bridge (PB) supplying decussating axons to a scalloped fan-shaped body (FB) and its accompanying ellipsoid body (EB), which is linked to a set of paired noduli and other recognized satellite regions. We consider the functional implications of these attributes in the context of stomatopod behaviors, particularly of their eyestalks that can move independently or conjointly depending on the visual scene.
95

Dissociating eye-movements and comprehension during film viewing

Hutson, John January 1900 (has links)
Master of Science / Department of Psychological Sciences / Lester Loschky / Film is a ubiquitous medium. However, the process by which we comprehend film narratives is not well understood. Reading research has shown a strong connection between eye-movements and comprehension. In four experiments we tested whether the eye-movement and comprehension relationship held for films. This was done by manipulating viewer comprehension by starting participants at different points in a film, and then tracking their eyes. Overall, the manipulation created large differences in comprehension, but produced only small differences in eye-movements. In one condition of the final experiment, a task manipulation was designed to prioritize different stimulus features. This task manipulation created large differences in eye-movements when compared to participants freely viewing the clip. These results indicate that with the implicit task of narrative comprehension, top-down comprehension processes have little effect on eye-movements. To allow for strong, volitional top-down control of eye-movements in film, task manipulations need to make features that are important to comprehension irrelevant to the task.
96

A study of the temporal relationship between eye actions and facial expressions

Rupenga, Moses January 2017 (has links)
A dissertation submitted in fulfillment of the requirements for the degree of Master of Science in the School of Computer Science and Applied Mathematics, Faculty of Science, August 15, 2017 / Facial expression recognition is one of the most common means of communication used for complementing spoken word. However, people have grown to master ways of exhibiting deceptive expressions. Hence, it is imperative to understand differences in expressions, mostly for security purposes among others. Traditional methods employ machine learning techniques in differentiating real and fake expressions. However, this approach does not always work, as human subjects can easily mimic real expressions with a bit of practice. This study presents an approach that evaluates the time-related distance that exists between eye actions and an exhibited expression. The approach gives insights on some of the most fundamental characteristics of expressions. The study focuses on finding and understanding the temporal relationship that exists between eye blinks and smiles. It further looks at the relationship that exists between eye closure and pain expressions. The study incorporates active appearance models (AAM) for feature extraction and support vector machines (SVM) for classification. It tests extreme learning machines (ELM) in both smile and pain studies, which in turn attain better results than predominant algorithms like the SVM. The study shows that eye blinks are highly correlated with the beginning of a smile in posed smiles, while eye blinks are highly correlated with the end of a smile in spontaneous smiles. A high correlation is observed between eye closure and pain in spontaneous pain expressions. Furthermore, this study brings about ideas that lead to potential applications such as lie detection systems, robust health care monitoring systems and enhanced animation design systems among others. / MT 2018
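The temporal relationship the study measures can be reduced to a lag feature: the signed time between each blink and the nearest smile landmark. The toy rule below mirrors the reported finding (blinks cluster near smile onset in posed smiles and near smile offset in spontaneous ones); the feature definition and decision rule are illustrative assumptions, whereas the thesis itself classifies AAM features with SVM and ELM.

```python
# Toy blink-timing classifier. Times are in seconds; all numbers and the
# decision rule are illustrative, not the author's trained model.

def nearest_lag(blink_t, landmarks):
    """Signed time from the nearest landmark to the blink."""
    return min((blink_t - t for t in landmarks), key=abs)

def classify_smile(blinks, smile_onset, smile_offset):
    """Label a smile by whether its blinks sit closer to onset or offset."""
    onset_dist = sum(abs(nearest_lag(b, [smile_onset])) for b in blinks)
    offset_dist = sum(abs(nearest_lag(b, [smile_offset])) for b in blinks)
    return "posed" if onset_dist < offset_dist else "spontaneous"

# Blinks at 0.1 s and 0.2 s during a smile lasting from 0.0 s to 2.0 s
label = classify_smile([0.1, 0.2], smile_onset=0.0, smile_offset=2.0)
# label == "posed": the blinks cluster at the start of the smile
```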
97

Comparison of visual performance with operational fatigue level based on eye tracking model

Cong, Yu Fang January 2018 (has links)
University of Macau / Faculty of Science and Technology. / Department of Electromechanical Engineering
98

Eye fixations during encoding of familiar and unfamiliar language

Unknown Date (has links)
This study examines gaze patterns of monolinguals and bilinguals encoding speech in familiar and unfamiliar languages. In condition 1, English monolinguals viewed videos in familiar and unfamiliar languages (English and Spanish or Icelandic). They performed a task to ensure encoding: on each trial, two videos of short sentences were presented, followed by an audio-only recording of one of those sentences. Participants chose whether the audio clip matched the first or second video. Participants gazed significantly longer at speakers' mouths when viewing unfamiliar languages. In condition 2, Spanish-English bilinguals viewed English and Spanish; no difference was found between the languages. In condition 3, the task was removed and English monolinguals viewed 20 English and 20 Icelandic videos; no difference in the gaze patterns was found, suggesting this phenomenon relies on encoding. Results indicate that people encoding unfamiliar speech attend to the mouth, presumably to extract more accurate audiovisually invariant and highly salient speech information. / by Lauren Wood Mavica. / Thesis (M.A.)--Florida Atlantic University, 2013. / Includes bibliography. / Mode of access: World Wide Web. / System requirements: Adobe Reader.
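The measure underlying these results, time spent gazing at the speaker's mouth, reduces to summing fixation durations that land inside a mouth area of interest (AOI). A hypothetical sketch; the fixation format and AOI rectangle are assumptions, not the study's actual analysis code:

```python
# Proportion of fixation time falling in a rectangular mouth AOI.
# Fixations are (x, y, duration_ms) tuples; the AOI is (x0, y0, x1, y1)
# in screen pixels. All coordinates here are illustrative.

def mouth_gaze_proportion(fixations, aoi):
    x0, y0, x1, y1 = aoi
    total = sum(d for _, _, d in fixations)
    in_aoi = sum(d for x, y, d in fixations
                 if x0 <= x <= x1 and y0 <= y <= y1)
    return in_aoi / total if total else 0.0

# Two of three fixations land inside the mouth region
fixations = [(310, 420, 200), (500, 100, 150), (320, 430, 250)]
p = mouth_gaze_proportion(fixations, aoi=(280, 400, 360, 460))
# p == 0.75: 450 of 600 ms of fixation time fell on the mouth
```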
99

A psychological study of reading comprehension in Chinese using the moving window and eye-monitoring techniques. / Paradigms in comprehension

January 1998 (has links)
Lau Wing Yin, Verena. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 74-78). / Abstract also in Chinese. / Acknowledgments --- p.2 / Abstract in English --- p.3 / Abstract in Chinese --- p.4 / Introduction --- p.5 / Major features of the Chinese language and processes in Chinese reading comprehension / Different paradigms in Chinese reading comprehension / Research questions of the present study / Research design of the present study / Experiment 1 --- p.24 / Experiment 2 --- p.32 / Experiment 3 --- p.39 / General Discussion --- p.57 / Conclusion --- p.73 / References --- p.74 / Appendix A --- p.79 / Appendix B --- p.84
100

Eye movements and driving : insights into methodology, individual differences and training

Mackenzie, Andrew K. January 2016 (has links)
Driving is a complex visuomotor task, and the study of eye movements can provide interesting and detailed insights into driving behaviour. The aim of this thesis was to understand (a) what methods are useful to assess driving behaviour, (b) the reasons we observe differences in eye movements when driving, and (c) offer a possible visual training method. The first experiment compared drivers' eye movements and hazard perception performance in an active simulated driving task and a passive video driving task. A number of differences were found, including an extended horizontal and vertical visual search and faster response to the hazards in the video task. It was concluded that when measuring driving behaviour in an active task, vision, attention and action interact in a complex manner that is reflected in a specific pattern of eye movements that is different to when driving behaviour is measured using typical video paradigms. The second experiment investigated how cognitive functioning may influence eye movement behaviour when driving. It was found that those with better cognitive functioning exhibited more efficient eye movement behaviour than those with poorer cognitive functioning. The third experiment compared the eye movement and driving behaviour of an older adult population and a younger adult population. There were no differences in the eye movement behaviour. However, the older adults drove significantly slower, suggesting attentional compensation. The final experiment investigated the efficacy of using eye movement videos as a visual training tool for novice drivers. It was found that novice drivers improved their visual search strategy when driving after viewing videos of an expert driver's eye movements. The results of this thesis help to provide insights into how the visual system is used for a complex behaviour such as driving. It also furthers the understanding of what may contribute to, and what may prevent, road accidents.
