1

Measuring gaze angle changes to maintain fixation upon a small target during motion: 3D motion tracking versus wearable eye-tracker

Rubio Barañano, Alejandro; Barrett, Brendan T.; Buckley, John. 18 September 2024
Recently we demonstrated how changes in gaze angle can be determined without an eye-tracker. The approach uses 3D motion capture to track the viewed target in the head's reference frame and assumes that head or target movement causes a gaze-angle change. This study determined the validity of this "assumed-gaze" method. Participants read information presented on a smartphone whilst walking. Changes in gaze angle were simultaneously assessed with an eye-tracker and with our assumed-gaze method. The spatial and temporal agreement of the assumed-gaze approach with the eye-tracker were ~1 deg and ~0.02 s, respectively, and spatial congruence indicated that the direction of changes in the assumed-gaze angle was in accordance with that determined with the eye-tracker ~81% of the time. Findings indicate that when the head is moving and gaze is continually directed to a small target, our assumed-gaze approach can determine changes in gaze angle with precision comparable to a wearable eye-tracker. Alejandro Rubio Barañano was funded by a UK College of Optometrists PhD studentship.
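
The idea described in the abstract can be illustrated with a minimal sketch, not the authors' implementation: if gaze stays locked on a small target, gaze angles can be inferred from the target's position expressed in the head's reference frame, so any head or target movement appears as a change in those angles. The coordinate convention, variable names, and simple arctangent geometry below are assumptions for illustration only.

```python
import numpy as np

def assumed_gaze_angles(target_in_head: np.ndarray) -> np.ndarray:
    """Horizontal and vertical gaze angles (degrees) towards a fixated target.

    target_in_head: (N, 3) array of target positions in the head's reference
    frame, assuming x = rightward, y = upward, z = forward.
    """
    x, y, z = target_in_head[:, 0], target_in_head[:, 1], target_in_head[:, 2]
    azimuth = np.degrees(np.arctan2(x, z))    # left/right gaze angle
    elevation = np.degrees(np.arctan2(y, z))  # up/down gaze angle
    return np.column_stack([azimuth, elevation])

# Changes in assumed gaze angle between successive motion-capture frames
# (hypothetical input array from a 3D motion-capture system):
# angles = assumed_gaze_angles(target_positions_head_frame)
# gaze_change = np.diff(angles, axis=0)
```

Under this sketch, comparing such frame-by-frame angle changes against those reported by a wearable eye-tracker would yield the kind of spatial and temporal agreement measures the abstract reports.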
2

Crossmodal Interference During Selective Attention to Spatial Stimuli: Evidence for a Stimulus-Driven Mechanism Underlying the Modality-Congruence Visual Dominance Effect

Tomko, Linda. 25 July 2024
Many tasks require processing, filtering, and responding to information from multiple sensory modalities. Crossmodal interactions are common, and visual dominance often arises with incongruent sensory information. Past studies have shown that visual dominance tends to be strong in spatial tasks. Experiments in a crossmodal attention-switching paradigm with physical-spatial stimuli (e.g., stimuli in left and right locations) have demonstrated a robust visual-dominance congruence pattern, with conflicting visual-spatial information impairing responses to auditory-spatial stimuli but conflicting auditory-spatial information having less impact on visual-spatial processing. Strikingly, this pattern does not occur with verbal-spatial stimuli (e.g., the words LEFT and RIGHT as stimuli). In the present study, experiments were conducted to systematically examine the occurrence and underlying basis of this distinction. Participants were presented with either verbal-spatial or physical-spatial stimuli, simultaneously in the visual and auditory modalities, and were to selectively attend and respond to the location conveyed in the cued modality. An initial experiment replicated previously reported effects, with similar patterns of crossmodal congruence effects for visual and auditory verbal-spatial stimuli. Three further experiments directly compared crossmodal congruence patterns for physical-spatial and verbal-spatial stimuli across varying attentional conditions. Intermixing the verbal-spatial and physical-spatial stimulus sets did not meaningfully alter the distinct congruence patterns compared to when the sets were blocked, and biasing attention towards verbal-spatial processing amplified the modality-congruence interaction for physical-spatial stimuli. Together, the consistent finding of a modality-congruence interaction showing visual dominance for physical-spatial stimuli but not for verbal-spatial stimuli suggests that the effect is driven by the sensory properties of the particular spatial stimulus sets rather than by endogenous attentional mechanisms.
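
As a point of reference for how the crossmodal congruence effects mentioned above are typically quantified in this kind of selective-attention task, the following is a minimal sketch, assumed rather than taken from the dissertation: the effect is the mean response time on incongruent trials minus the mean on congruent trials, computed separately for each attended modality. The column names and input data frame are illustrative assumptions.

```python
import pandas as pd

def congruence_effects(trials: pd.DataFrame) -> pd.Series:
    """Mean RT(incongruent) minus mean RT(congruent), per attended modality.

    trials: one row per trial, with columns
      'modality'  -- attended modality ('visual' or 'auditory')
      'congruent' -- bool, whether the unattended stimulus matched
      'rt'        -- response time in seconds
    """
    means = (trials.groupby(["modality", "congruent"])["rt"]
                   .mean()
                   .unstack("congruent"))
    # Congruence effect: incongruent (False) minus congruent (True)
    return means[False] - means[True]
```

In such an analysis, a larger congruence effect for the auditory modality than for the visual modality would reflect the visual-dominance pattern the abstract describes, whereby visual distractors disrupt auditory-spatial responses more than the reverse.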
