211

Unveiling Objectification: The Gaze and its Silent Power in the Novels of Frances Burney

Wingfield, Jennifer Joanne 09 June 2006 (has links)
This thesis seeks to portray how an objectifying intra-diegetic gaze influences and constructs the plot devices Frances Burney uses in her four novels: Evelina, Cecilia, Camilla, and The Wanderer. Burney creates a literary reality within her four novels’ narratives and then breaks that reality down through the gazes and judgments that her characters direct at each of her heroines. The gaze is an almost microscopic examination that objectifies and depersonalizes all of Burney’s heroines. Burney shows how the gaze shifts perspectives and manipulates that which it objectifies. Burney places her audience and her heroines into unfamiliar situations and then shows the costs and benefits of reasserting one’s gaze. This thesis shows how Burney portrays the power that objectification exerts upon her heroines, and the consequences that arise from the tensions of bombarding social gazes in all their duplicitous forms.
212

Contributions of Central and Peripheral Vision to the Control of Reach-to-Grasp Reactions Evoked by Unpredictable Balance Perturbation

King, Emily Catherine 14 July 2009 (has links)
This thesis presents two studies that investigate how vision is used to control rapid, compensatory reach-to-grasp reactions. Compensatory grasping reactions were evoked in healthy young adults via unpredictable translations of large platforms on which the subjects stood or walked. The first study tracked natural gaze behaviour during responses to unexpected balance perturbations. It provided evidence that, unlike with voluntary movements, the eyes do not lead the hand during balance recovery – subjects relied on ‘stored’ information from central vision, continuously-available peripheral vision, or a combination of these sources to guide the hand. The second study investigated the efficacy of reliance on peripheral vision to guide rapid reach-to-grasp balance-recovery reactions. Peripheral vision was found to guide reach-to-grasp responses with sufficient accuracy to achieve a functional grasp of a relatively small handhold; however, peripherally-guided movements were slower when the handhold was in the extreme periphery.
213

Modulation of Gaze-oriented Attention with Facial Expressions: ERP Correlates and Influence of Autistic Traits

Lassalle, Amandine 09 September 2013 (has links)
The direction in which another person is looking triggers a spontaneous orienting of the viewer's attention towards that gaze direction. However, whether the facial expression displayed by the gazing individual modulates this attention orienting is unclear. In this thesis, the modulation of gaze-oriented attention with facial expressions was explored in non-anxious individuals at the behavioral level and at the neural level using Event-Related Potentials (ERP). In the gaze-cueing paradigm used, a dynamic face cue averting gaze and expressing an emotion was presented, followed by a lateral, to-be-localized target. At the behavioral level, a faster response to targets appearing at the gazed-at location (congruent targets) than to targets appearing opposite to the gazed-at location (incongruent targets) was observed (Chapters 3-5). This so-called Gaze Orienting Effect (GOE) was enhanced with fearful, angry and surprised expressions relative to neutral and happy expressions and was driven by emotional differences in response speed to congruent targets (Chapters 3-5). These effects could not be attributed to better discrimination of those emotions when presented with an averted gaze (Chapter 2). These results confirm the impact of fear and surprise on gaze-oriented attention in non-anxious individuals and demonstrate, for the first time, a similar impact for angry expressions. All the emotions enhancing the GOE signal an evolutionarily relevant stimulus in the periphery, are threat-related and carry a negative valence, which suggests that one of these attributes (or all combined) is driving the emotional modulation of gaze-oriented attention (surprise is treated like fear in the context of fearful expressions). In Chapter 4, the effect of the dynamic cue sequence on these GOE modulations was investigated. An emotional modulation of the GOE was found only when the gaze shift preceded the emotional expression, but not when the emotion was expressed before the gaze shift or when expression and gaze shift were simultaneous. These results highlight the importance of using a sequence closer to real-life situations (we usually orient attention before reacting to an object in the environment) when studying the modulation of the GOE with emotions. At the neural level, we investigated the ERPs associated with gaze-oriented attention at target presentation and at cue presentation (Chapters 3 and 5). Confirming previous reports, the amplitude of a target-triggered P1 ERP component was larger in the congruent than in the incongruent condition, reflecting enhanced processing of gaze-congruent targets. In addition, cue-triggered ERPs previously observed in response to arrow cues were investigated. An Early Directing Attention Negativity (EDAN) and an Anterior Directing Attention Negativity (ADAN) were found, indexing, respectively, attention orienting to the cued location and maintenance of attention at the cued location. This is the first study to report both EDAN and ADAN components in response to gaze cues. These results show clear markers of attention orienting by gaze at the neural level, during both cue and target processing. Neither EDAN nor ADAN was modulated by emotion. The congruency effect on P1 was enhanced for fearful, surprised and happy faces compared to neutral faces in Chapter 3, but no differences between the emotions were found in Chapter 5.
Thus, the emotional modulation of the brain processes involved in gaze-oriented attention is very weak and protracted, or occurs mainly between target onset and the response to the target. The relationships between participants’ autistic traits and their emotional modulation of gaze-oriented attention were also investigated. Results showed a negative correlation with the GOE to happy upright faces and with the P1 congruency effect, which suggests that individuals with more severe autistic traits are less sensitive to the impact of social emotions such as joy. The implications of these results for attention orienting in general and for individuals with Autism Spectrum Disorder are discussed. Together, the findings reported in this thesis clarify the behavioral and neural processes involved in gaze-oriented attention and its modulation by facial expression, in addition to demonstrating a relationship between gaze-oriented attention, its modulation by social emotions, and autistic traits.
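The Gaze Orienting Effect described in this abstract is simply the mean reaction-time difference between incongruent and congruent trials, computed per emotion condition. The sketch below is not code from the thesis; the column names (emotion, congruent, rt_ms) and the sample reaction times are assumptions for illustration only.

```python
import pandas as pd

def gaze_orienting_effect(trials: pd.DataFrame) -> pd.Series:
    """Compute the Gaze Orienting Effect (GOE) per emotion condition.

    GOE = mean RT on incongruent trials - mean RT on congruent trials,
    so a positive value reflects faster responses to gazed-at targets.
    Expects columns: 'emotion', 'congruent' (bool), 'rt_ms' (float).
    """
    mean_rt = trials.groupby(["emotion", "congruent"])["rt_ms"].mean().unstack("congruent")
    return mean_rt[False] - mean_rt[True]

# Example with made-up reaction times (ms)
trials = pd.DataFrame({
    "emotion":   ["fear", "fear", "neutral", "neutral", "fear", "neutral"],
    "congruent": [True,   False,  True,      False,     True,   False],
    "rt_ms":     [310.0,  345.0,  320.0,     338.0,     305.0,  341.0],
})
print(gaze_orienting_effect(trials))  # larger GOE for the fearful-face condition here
```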
214

Visual Scanning of Dynamic Affective Stimuli in Autism Spectrum Disorders

McManus, Susan M. 01 August 2012 (has links)
The accurate integration of audio-visual emotion cues is critical for social interactions and requires efficient processing of facial cues. Gaze behavior of typically developing (TD) individuals and individuals with autism spectrum disorders (ASD) was measured via eye-tracking during the perception of dynamic audio-visual emotion (DAVE) stimuli. This study provides information about the regions of the face sampled during an emotion perception task that is relatively more complex than those used in previous studies, providing both bimodal (auditory and visual) and dynamic (biological motion) cues. Results indicated that the ASD group was less accurate at emotion detection and demonstrated less of a visual-affective bias than TD individuals. Both groups displayed similar fixation patterns across regions during the perception of congruent audio-visual stimuli. However, between-group analyses revealed that fixation patterns differed significantly by facial regions during the perception of both congruent and incongruent movies together. In addition, fixation duration to critical regions (i.e., face, core, eyes) was negatively correlated with measures of ASD symptomatology and social impairment. Findings suggest weaknesses in the early integration of audio-visual information, automatic perception of emotion, and efficient detection of affective conflict in individuals with ASD. Implications for future research and social skills intervention programs are discussed.
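The region-based analysis summarized above reduces to two steps: measuring the proportion of fixation time falling in each facial area of interest (AOI), and correlating dwell on critical regions with symptom measures. The following sketch illustrates that kind of analysis; the AOI rectangles, fixation tuples, and symptom scores are invented for the example and are not data or code from the study.

```python
import numpy as np

def dwell_proportions(fixations, aois):
    """Proportion of total fixation time spent in each area of interest (AOI).

    fixations: list of (x, y, duration_ms); aois: dict name -> (x0, y0, x1, y1).
    """
    totals = {name: 0.0 for name in aois}
    grand_total = sum(d for _, _, d in fixations)
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
    return {name: t / grand_total for name, t in totals.items()} if grand_total else totals

# Hypothetical per-participant values: eye-region dwell proportion vs. a symptom score
eye_dwell = np.array([0.42, 0.35, 0.28, 0.18, 0.12])
symptom_score = np.array([55, 60, 68, 74, 82])
r = np.corrcoef(eye_dwell, symptom_score)[0, 1]
print(f"Pearson r = {r:.2f}")  # a negative r mirrors the reported direction of the relationship
```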
215

A robotic camera platform for evaluation of biomimetic gaze stabilization using adaptive cerebellar feedback / Robotplattform för utvärdering av adaptiv bildstabilisering av kamera

Landgren, Axel January 2010 (has links)
This thesis describes the development of a robotic platform for evaluation of gaze stabilization algorithms, built for the Sensorimotor Systems Laboratory at the University of British Columbia. The primary focus of the work was to measure the performance of a biomimetic vestibulo-ocular reflex controller for gaze stabilization using cerebellar feedback. A flexible robotic system was designed and built in order to run reproducible test sequences at high speeds, featuring three-dimensional linear movement and rotation around the vertical axis. On top of the robot head, a 1-DOF camera head can be independently controlled by a stabilization algorithm implemented in Simulink. Vestibular input is provided by a 3-axis accelerometer and a 3-axis gyroscope. The video feed from the camera head is fed into a workstation computer running a custom image-processing program that evaluates both the absolute and relative movement of the images in the sequence. The absolute angles of tracked regions in the image are continuously returned, as well as the movement of the image sequence across the sensor in full 3-DOF camera rotation. Due to dynamic downsampling and noise-suppression algorithms, very good performance was reached, enabling retinal slip estimation at 720 degrees per second. Two different controllers were implemented: one adaptive open-loop controller similar to Dean et al.'s work [12] and one reference implementation using closed-loop control and optimal linear estimation of reference angles. A sequence of tests was run in order to evaluate the performance of the two algorithms. The adaptive controller was shown to offer superior performance, dramatically reducing the movement of the image for all test sequences, and its performance improved further as it adapted over time.
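As a rough illustration of the control idea evaluated here, the sketch below implements a single-axis, feedforward (open-loop) gaze stabilizer whose gain is adapted using retinal slip as the error signal, loosely in the spirit of cerebellar adaptation. The gain, learning rate, and update rule are assumptions for illustration, not the thesis's Simulink implementation.

```python
class AdaptiveVORController:
    """Single-axis, feedforward gaze stabilizer with slip-driven gain adaptation.

    The camera is counter-rotated according to gyro-measured head velocity.
    Retinal slip (residual image motion) serves as the error signal that
    slowly adapts the feedforward gain, loosely mimicking cerebellar learning.
    """

    def __init__(self, gain: float = 0.8, learning_rate: float = 0.01):
        self.gain = gain
        self.learning_rate = learning_rate

    def command(self, head_velocity_dps: float) -> float:
        """Camera velocity command (deg/s) counteracting head rotation."""
        return -self.gain * head_velocity_dps

    def adapt(self, retinal_slip_dps: float, head_velocity_dps: float) -> None:
        """Nudge the gain to reduce measured retinal slip."""
        if abs(head_velocity_dps) > 1e-6:
            # Slip in the direction of head motion means the gain is too low.
            self.gain += self.learning_rate * retinal_slip_dps / head_velocity_dps


# Toy simulation: the required gain is 1.0, starting from 0.8
controller = AdaptiveVORController()
for step in range(200):
    head_vel = 100.0                      # deg/s head rotation (constant for simplicity)
    cam_vel = controller.command(head_vel)
    slip = head_vel + cam_vel             # residual image motion on the sensor
    controller.adapt(slip, head_vel)
print(round(controller.gain, 3))          # converges toward 1.0
```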
216

Determinants And Strategies For The Alternate Foot Placement

Moraes, Renato January 2005 (has links)
An undesirable landing area (e.g., a hole, a fragment of glass, a water puddle) creates the need to plan and execute an alternate foot placement. A previous study proposed that three determinants are used by the central nervous system (CNS) for planning an alternate foot placement: minimum foot displacement, stability, and maintenance of forward progression. However, validation of these determinants is lacking. Therefore, the general purpose of the series of studies presented here is to validate and test the generality of the decision algorithm for alternate foot placement selection developed previously. The first study was designed to validate the use of a virtual planar obstacle paradigm and the economy assumption behind the minimum foot displacement determinant. Participants performed two blocks of trials. In one block, they were instructed to avoid stepping on a virtual planar obstacle projected on the screen of an LCD monitor embedded in the ground. In another block, they were instructed to avoid stepping in a real hole present in the walkway. The behavioral response was unaffected by the presence of a real hole. In addition, it was suggested that minimum foot displacement results in minimal changes in EMG activity, which validates the economy determinant. The second study was designed to validate the stability determinant. Participants performed an avoidance task under two conditions: free and forced. In the free condition, participants freely chose where to land in order to avoid stepping on a virtual obstacle. In the forced condition, a green arrow was projected over the obstacle indicating the direction of the alternate foot placement. The data from the free condition were used to determine the preferred alternate foot placement, whereas the data from the forced condition were used to assess whole-body stability. It was found that long and lateral foot placements are preferred because they result in more stable behavior. The third study was designed to validate the alternate foot placement model in a more complex terrain. Participants were required to avoid stepping on two virtual planar obstacles placed in sequence. It was found that participants planned the avoidance movement globally and that additional determinants were used. One of the additional determinants was implementation feasibility. In the third study, gaze behavior was also monitored, and two behaviors emerged from these data. One sub-group of participants fixated on the area stepped on during the adaptive step, whereas another sub-group anchored their gaze on a spot ahead of the area to be avoided and used peripheral vision to control foot landing. In summary, this thesis validates the three determinants of the alternate foot placement planning model and extends the previous model to more complex terrains.
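The decision algorithm discussed above ranks candidate foot placements according to determinants such as displacement, stability, and forward progression. The sketch below shows one hypothetical way such a ranking could be expressed as a weighted cost; the linear cost function, the weights, and the candidate values are invented for illustration and are not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A candidate alternate foot placement, in metres relative to the planned step."""
    displacement: float        # how far the foot must deviate from its planned landing spot
    stability_margin: float    # e.g., distance of the extrapolated CoM from the support edge
    forward_progress: float    # how much forward travel the step preserves

def rank_placements(candidates, w_disp=1.0, w_stab=1.0, w_prog=1.0):
    """Rank candidates by a weighted sum of the three determinants.

    Lower displacement and higher stability/forward progress score better.
    The weights and the linear form are purely illustrative; the thesis
    identifies the determinants, not this specific cost function.
    """
    def cost(c: Candidate) -> float:
        return w_disp * c.displacement - w_stab * c.stability_margin - w_prog * c.forward_progress
    return sorted(candidates, key=cost)

# Hypothetical candidates: long step, lateral step, short step
options = [
    Candidate(displacement=0.25, stability_margin=0.10, forward_progress=0.30),   # long
    Candidate(displacement=0.15, stability_margin=0.08, forward_progress=0.05),   # lateral
    Candidate(displacement=0.10, stability_margin=0.02, forward_progress=-0.10),  # short
]
print(rank_placements(options)[0])  # the long step wins under these illustrative numbers
```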
217

Upper and lower visual field differences: an investigation of the gaze cascade effect

Burkitt Hiebert, Jennifer Ann 08 April 2010 (has links)
The purpose of the current thesis was to investigate the role of gaze direction when making preference decisions. Previous research has reported a progressive gaze bias towards the preferred stimulus as participants near a decision, termed the gaze cascade effect (Shimojo, Simion, Shimojo & Scheier, 2003). The gaze cascade effect is strongest during the final 1500 msec prior to the decision (Shimojo et al.). Previous eye-tracking research has revealed natural viewing biases towards the upper visual field; however, previous investigations have not examined the impact of image placement on the gaze cascade effect. Study 1 investigated the impact of presenting stimuli vertically on the gaze cascade effect. Results indicated that natural scanning biases towards the upper visual field influenced the gaze cascade effect: it was reliably seen only when the preferred image was presented in the upper visual field. Using vertically paired stimuli, study 2 investigated the impact of choice difficulty on the gaze cascade effect. As in study 1, the gaze cascade effect was only reliably seen when the preferred image was presented in the upper visual field. Additionally, choice difficulty affected the gaze cascade effect: easy decisions displayed a larger gaze cascade effect than hard decisions. Study 3 investigated whether the gaze cascade effect is unique to preference decisions or present during all visual decisions. Judgments of concavity using perceptually ambiguous spheres were used, and no gaze cascade effect was observed, indicating that the gaze cascade effect is unique to preference decisions. Results of the current experiments indicate that the gaze cascade effect is qualified by the spatial layout of the stimuli and by choice difficulty. These results are consistent with previous eye-tracking research demonstrating biases towards the upper visual field and offer support for Previc's theory of how we interact in visual space.
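The gaze cascade effect itself is usually quantified as the likelihood of fixating the eventually chosen image in successive time bins leading up to the decision. The sketch below computes such a curve from decision-aligned gaze samples; the sampling rate, bin width, and synthetic data are assumptions for illustration, not the thesis's analysis code.

```python
import numpy as np

def gaze_cascade_curve(gaze_on_chosen: np.ndarray, bin_ms: int = 100, window_ms: int = 1500):
    """Likelihood of fixating the to-be-chosen image in bins leading up to the decision.

    gaze_on_chosen: boolean array (trials x samples), one sample per ms, aligned so that
    the last column is the moment of the decision. Returns one proportion per time bin,
    ordered from -window_ms up to the decision.
    """
    window = gaze_on_chosen[:, -window_ms:]
    n_bins = window_ms // bin_ms
    bins = window.reshape(window.shape[0], n_bins, bin_ms)
    return bins.mean(axis=(0, 2))  # proportion of samples on the chosen image per bin

# Synthetic demonstration: the bias toward the chosen image grows near the decision
rng = np.random.default_rng(0)
ramp = np.linspace(0.5, 0.85, 1500)              # hypothetical rising fixation probability
samples = rng.random((40, 1500)) < ramp          # 40 trials, 1500 ms before each choice
print(np.round(gaze_cascade_curve(samples), 2))  # values rise toward ~0.85 near the decision
```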
218

Blickbewegungen in der computermediierten Kooperation / Eye movements in computer-mediated cooperation

Müller, Romy 31 July 2012 (has links) (PDF)
With the growing need for people at different locations to work together, the effective design of technically mediated communication is gaining importance. A central problem is transmitting nonverbal communication content in such a way that clear links can be established between the partner's attention and the shared work objects. Since eye movements provide spatially and temporally high-resolution access to attentional processes, transmitting them as a cursor on the partner's screen can improve mutual understanding and thus cooperative performance. However, a detailed investigation of how gaze feedback works, and in particular a critical comparison with conventional forms of cursor transfer, is still lacking. In three studies comprising a total of six experiments, this dissertation examined how gaze transfer can affect the process of technically mediated communication. In the first study, participants used their gaze to communicate image content. It was examined how gaze parameters during such an intentional-communicative use differ from eye movements that serve merely to take in information. This comparison was carried out during free picture viewing as well as in a more restrictively defined task in which the regions to be attended were specified in advance. The second study contrasted, in the context of puzzle tasks, the transfer of the partner's gaze with purely verbal interaction and with feedback of the partner's mouse movements. While the interactivity between the partners was varied, both task performance and the communicative process itself were the focus of the investigations. For this purpose, the partners' verbal utterances, individual actions on the way to the solution, and eye movement parameters were examined. In the third study, the transferred gaze was used to make visible, by means of a movable window, those image regions that the partner needed for the solution. Here too, gaze transfer was compared with mouse pointing. The visibility of task-relevant objects for the window-moving assistant was varied, and it was examined how this affected the coordination of joint actions with both cursor types. Overall, the results show that a communicative use of eye movements in visuo-spatial tasks can lead to performance improvements compared with purely verbal communication. Compared with mouse transfer, gaze transfer is accompanied by less certainty about the cursor's intention and the associated action relevance of the gaze. This problem arises above all in interactive, less structured tasks and in situations in which the partner's gaze cannot be related to the objects it refers to. Based on the results, the potentials and difficulties of transferring eye movements are discussed. Suggestions are made as to the contexts in which their use can be worthwhile for improving technically mediated communication and what should be considered when designing such applications.
219

What Do We Know About Joint Attention in Shared Book Reading? An Eye-tracking Intervention Study

Guo, Jia January 2011 (has links)
Joint attention is critical for social learning activities such as parent-child shared book reading. However, there is a potential dissociation of attention when the adult reads the text while the child looks at the pictures. I hypothesize that the lack of joint attention limits children's opportunity to learn print-related skills. The current study tests the hypothesis with interventions that enhance real-time joint attention. Eye movements of parents and children were simultaneously tracked when they read books together on computer screens. I also provided real-time feedback to the parent regarding where the child was looking, and vice versa. Changes in dyads' reading behaviors before and after the joint attention intervention were measured from both eye movements and video records. Baseline data showed little joint attention in parent-child shared book reading. The real-time attention feedback significantly increased joint attention and children's print-related learning. These findings supported my hypothesis that engaging in effective joint attention is critical for children to acquire knowledge and skills during shared reading and other collaborative learning activities.
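A simple way to quantify the joint attention measured in this study is the fraction of time-aligned samples in which the parent's and child's gaze fall on the same page region. The sketch below illustrates that computation; the AOI labels ('text', 'picture') and the sample sequences are assumptions for illustration, not the study's actual coding scheme.

```python
def joint_attention_proportion(parent_aoi, child_aoi, regions=("text", "picture")):
    """Fraction of time-aligned gaze samples where parent and child look at the same region.

    parent_aoi / child_aoi: equal-length sequences of AOI labels (or None when off-page),
    sampled at the same rate. Only samples where both gazes fall in a tracked region
    count toward the denominator.
    """
    both_valid = [(p, c) for p, c in zip(parent_aoi, child_aoi)
                  if p in regions and c in regions]
    if not both_valid:
        return 0.0
    shared = sum(1 for p, c in both_valid if p == c)
    return shared / len(both_valid)

# Hypothetical samples: parent mostly on the text, child mostly on the picture
parent = ["text", "text", "text", "picture", "text", None]
child  = ["picture", "picture", "text", "picture", "picture", "picture"]
print(joint_attention_proportion(parent, child))  # 0.4: shared attention on 2 of 5 co-valid samples
```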
220

Computer Simulation And Implementation Of A Visual 3-D Eye Gaze Tracker For Autostereoscopic Displays

Ince, Kutalmis Gokalp 01 January 2010 (has links) (PDF)
In this thesis, a visual 3-D eye gaze tracker is designed, implemented, and tested via computer simulations and on an experimental setup. The proposed tracker is designed to examine human perception on autostereoscopic displays when the viewer is 3 m away from such displays. Two different methods are proposed for calibrating personal parameters and estimating gaze, namely line-of-gaze (LoG) and line-of-sight (LoS) solutions. The 2-D and 3-D estimation performances of the proposed system are observed both in computer simulations and on the experimental setup. In terms of 2-D and 3-D performance criteria, the LoS solution generates slightly better results than the LoG solution on the experimental setup, and their performances are comparable in simulations. The 2-D estimation inaccuracy of the system is smaller than 0.5° during simulations and approximately 1° for the experimental setup. The 3-D estimation inaccuracy along the x- and y-axes is smaller than 2° during both the simulations and the experiments. However, estimation accuracy along the z-direction is significantly sensitive to pupil detection and head pose estimation errors. For typical error levels, 20 cm of inaccuracy along the z-direction is observed during simulations, whereas this inaccuracy reaches 80 cm in the experimental setup.
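For context on the line-of-sight (LoS) approach, a 3-D point of regard can be estimated by finding where the two eyes' gaze rays come closest to each other. The sketch below shows that generic geometric step; it is an illustrative calculation, not the calibration or estimation method implemented in the thesis, and the eye positions and target point are made up.

```python
import numpy as np

def point_of_regard(origin_l, dir_l, origin_r, dir_r):
    """Estimate the 3-D point of regard as the midpoint of the shortest segment
    connecting the left and right gaze rays (they rarely intersect exactly).

    Origins are 3-D eye positions; dirs are (not necessarily unit) gaze directions.
    """
    o_l, d_l = np.asarray(origin_l, float), np.asarray(dir_l, float)
    o_r, d_r = np.asarray(origin_r, float), np.asarray(dir_r, float)
    # Solve for ray parameters t, s minimizing |(o_l + t*d_l) - (o_r + s*d_r)|^2
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    w = o_l - o_r
    denom = a * c - b * b
    if abs(denom) < 1e-12:               # near-parallel rays: depth is unreliable
        return None
    t = (b * (d_r @ w) - c * (d_l @ w)) / denom
    s = (a * (d_r @ w) - b * (d_l @ w)) / denom
    return ((o_l + t * d_l) + (o_r + s * d_r)) / 2.0

# Two eyes 6.5 cm apart, both aimed at a point 3 m straight ahead
target = np.array([0.0, 0.0, 3.0])
left, right = np.array([-0.0325, 0.0, 0.0]), np.array([0.0325, 0.0, 0.0])
print(point_of_regard(left, target - left, right, target - right))  # ~ [0, 0, 3]
```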
