1 |
Data acquisition and real-time signal processing in Positron Emission Tomography / Lamwertz, Leonid, 23 August 2013 (has links)
OpenPET was developed to be a scalable and flexible design for data acquisition and signal processing in Positron Emission Tomography (PET) systems. The OpenPET hardware design is mature, but the control software and firmware need further development. In this thesis we developed a software application to connect a host PC with an OpenPET system. We also developed data acquisition firmware that allows data transfer to the host PC. A novel design for an OpenPET coincidence detection processor was proposed, with its basic functionality implemented and validated. A novel method to process PET events in real time was also introduced and validated using simulated data. The feasibility of implementation of this method using Field Programmable Gate Arrays (FPGAs) was demonstrated for our OpenPET system.
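The abstract mentions a coincidence detection processor but does not describe its design. As a minimal sketch of the core operation such a processor performs, the following pairs single events whose timestamps fall within a coincidence window; the event format, window width, and same-detector rejection rule are illustrative assumptions, not the OpenPET implementation.

```python
def find_coincidences(events, window_ps=4000):
    """Pair single events whose timestamps fall within a coincidence window.

    events: list of (timestamp_ps, detector_id) tuples, assumed time-sorted.
    Returns a list of ((t1, d1), (t2, d2)) pairs from different detectors.
    """
    pairs = []
    for i, (t1, d1) in enumerate(events):
        for t2, d2 in events[i + 1:]:
            if t2 - t1 > window_ps:
                break  # input is time-sorted: no later event can match t1
            if d2 != d1:  # reject same-detector pairs
                pairs.append(((t1, d1), (t2, d2)))
    return pairs

# Two singles 1500 ps apart pair up; singles far apart in time do not.
pairs = find_coincidences([(0, 0), (1500, 1), (90000, 2), (92000, 0)])
# -> [((0, 0), (1500, 1)), ((90000, 2), (92000, 0))]
```

An FPGA version would replace the nested scan with a streaming comparison against a short FIFO of recent singles, but the windowing logic is the same.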
|
3 |
Ergonomic Magnification Method for Reading With and Without Display Size Constraint / Wong, Natalie, January 2021 (has links)
No description available.
|
4 |
Spatial perception on perspective displays as a function of field-of-view and virtual environment enhancements based on visual momentum techniques / Neale, Dennis Clay, 31 January 2009 (has links)
This study investigated perceptual and cognitive issues relating to manipulations of geometric field-of-view (GFOV) in perspective displays and the effects of incorporating virtual environment enhancements based on visual momentum (VM) techniques into the interface. Geometric field-of-view determines the field-of-view (FOV) for perspective displays. Systematic errors in size and distance judgments have been shown to occur in perspective displays as a result of changes in GFOV. Furthermore, as humans' normal FOV becomes restricted, their ability to acquire spatial information is reduced, resulting in an incomplete formulation and representation of the visual world. The magnitude of the resulting biases increases as task difficulty increases. It was predicted that as VM in the interface increases, so does the ability to overcome problems associated with restricted FOVs.
Sixty participants who were pre-tested for spatial ability were required to navigate through a virtual office building while estimating space dimensions and performing spatial orientation and representation tasks. A 3 x 2 x 2 mixed-subjects design compared three levels of GFOV, two levels of VM, and two levels of Difficulty.
The results support the hypothesis that 60° is the optimum GFOV for perspective displays. VM increased accuracy for space dimension estimates, reduced direction judgment errors, improved distance estimates when task difficulty was increased, improved participants' cognitive maps, and reduced the error for reconstructing the spatial layout of objects in a virtual space. The results also support the hypothesis that wider FOVs are needed to accurately perform spatial orientation and representation tasks in virtual environments. Spatial ability was also shown to influence performance on some of the tasks in this experiment.
This study effectively demonstrates that the spatial characteristics of architectural representations in perspective displays are not always accurately perceived. There is a clear tradeoff in setting GFOV for perspective displays: a 60° GFOV is necessary for perceiving the basic characteristics of space accurately; however, if spatial orientation and representation are important, a 90° FOV or larger is required. This tradeoff is eased if symbolic enhancements, such as VM techniques, are included in the virtual environment, which makes larger FOVs less of a concern. / Master of Science
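The size and distance errors above arise from how GFOV sets the projection. Under a standard pinhole perspective model (a sketch of the geometry, not the study's simulation parameters), widening the GFOV shortens the effective focal length and minifies everything on screen:

```python
import math

def projected_size(object_size, distance, gfov_deg, screen_width=1.0):
    """On-screen size of an object under a pinhole perspective projection.

    The image plane spans `screen_width` across the geometric field of
    view (GFOV), so the focal length is f = (screen_width / 2) / tan(GFOV / 2).
    Widening the GFOV shrinks f, minifying the scene, one geometric
    source of the size and distance misperceptions described above.
    """
    f = (screen_width / 2) / math.tan(math.radians(gfov_deg) / 2)
    return object_size * f / distance

# The same 1 m object at 10 m appears about 42% smaller at a 90° GFOV
# than at the 60° GFOV the study found optimal.
s60 = projected_size(1.0, 10.0, 60.0)
s90 = projected_size(1.0, 10.0, 90.0)
```

This is why a single GFOV cannot serve both goals: accurate size perception favors a narrower projection, while spatial orientation benefits from the wider angular coverage.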
|
5 |
Low Field-Of-View CT in the Evaluation of Acute Appendicitis in the Pediatric Population / Feller, Fionna, 26 February 2018 (has links)
A Thesis submitted to The University of Arizona College of Medicine - Phoenix in partial fulfillment of the requirements for the Degree of Doctor of Medicine.
|
6 |
Low Field-Of-View CT in the Evaluation of Acute Appendicitis in the Pediatric Population / Feller, Fionna, 30 March 2018 (has links)
A Thesis submitted to The University of Arizona College of Medicine - Phoenix in partial fulfillment of the requirements for the Degree of Doctor of Medicine. / CT of the abdomen and pelvis is a widely used imaging modality in the evaluation of appendicitis, but it carries radiation risk. A recent retrospective review localized all appendices (both normal and abnormal) below the level of the L1 vertebral body, obviating the need to scan superior to that level.
This study is a retrospective review of prospectively collected data from 171 consecutive pediatric patients presenting with clinical suspicion of acute appendicitis and undergoing “low FOV CT.” The low FOV CT uses the L1 vertebral body as the superior boundary of the exam instead of the dome of the diaphragm, as in standard CT.
|
7 |
The effect of apparent distance on visual spatial attention in simulated driving / Apparent Distance and Attention in Simulated Driving / Song, Jiali, January 2021 (has links)
Much about visual spatial attention has been learned from studying how observers respond to two-dimensional stimuli. Less is known about how attention varies along the depth axis. Most work on the effect of depth on spatial attention has manipulated depth defined by binocular disparity, and it is less clear how monocular depth cues affect spatial attention. This thesis investigates the effect of target distance on peripheral detection in a virtual three-dimensional environment that simulated distance using pictorial and motion cues. Participants followed a lead car at a constant distance, actively or passively, while travelling along a straight trajectory. The horizontal distribution of attention was measured using a peripheral target detection task. Both car-following and peripheral detection were tested alone under focussed attention, and simultaneously under divided attention. Chapter 2 evaluated the effect of target distance and eccentricity on peripheral detection. Experiment 1 found an overall near advantage that increased at larger eccentricities. Experiment 2 examined the effect of anticipation on target detection and found that equating anticipation across distances drastically reduced the effect of distance on reaction time but did not affect accuracy. Experiments 3 and 4 examined the relative contributions of pictorial cues to the effect of target distance and found that the background texture surrounding the targets could explain the main effect of distance but could not fully account for the interaction between distance and eccentricity. Chapter 3 extended the findings of Chapter 2 and found that the effect of distance on peripheral detection in our conditions was non-monotonic and did not depend on fixation distance. Across chapters, dividing attention between the central car-following and peripheral target detection tasks consistently resulted in costs for car-following, but not for peripheral detection. This work has implications for understanding spatial attention and the design of advanced driver assistance systems. / Dissertation / Doctor of Science (PhD) / Our visual world is complex and dynamic, and spatial attention enables us to focus on relevant locations within it. However, much of what we know about spatial attention has been studied in the context of a two-dimensional plane, and less is known about how it varies in the third dimension: depth. This thesis aims to better understand how spatial attention is affected by depth in a virtual three-dimensional environment, particularly in a driving context. Driving was simulated using a car-following task, and spatial attention was measured with a task that required detecting targets appearing at different depths indicated by cues perceivable with one eye. The results of this work add to the literature suggesting that spatial attention is affected by depth and contribute to our understanding of how attention may be allocated in space. Additionally, this thesis may have implications for the design of in-car warning systems.
|
8 |
The impact of an auditory task on visual processing: implications for cellular phone usage while driving / Cross, Ginger Wigington, 03 May 2008 (has links)
Previous research suggests that cellular phone conversations or similar auditory/conversational tasks lead to degradations in visual processing. Three contemporary theories make different claims about the nature of the degradation that occurs when we talk on a cellular phone. We are either: (a) disproportionately more likely to miss objects located in the most peripheral areas of the visual environment due to a reduction in the size of the attentional window or functional field of view (Atchley & Dressel, 2004); (b) more likely to miss objects from all areas of the visual environment (even at the center of fixation) because attention is withdrawn from the roadway, leading to inattention blindness or general interference (Strayer & Drews, 2006; Crundall, Underwood, & Chapman, 1999; 2002), or (c) more likely to miss objects that are located on the side of the visual environment contralateral to the cellular phone message due to crossmodal links in spatial attention (Driver & Spence, 2004). These three theories were compared by asking participants to complete central and peripheral visual tasks (i.e., a measure of the functional field of view) in isolation and in combination with an auditory task. During the combined visual/auditory task, peripheral visual targets could appear on the same side as auditory targets or on the opposite side. When the congruency between auditory and visual target locations was not considered (as is typical in previous research), the results were consistent with the general interference/inattention blindness theory, but not the reduced functional field of view theory. Yet, when congruency effects were considered, the results support the theory that crossmodal links affect the spatial allocation of attention: Participants were better at detecting and localizing visual peripheral targets and at generating words for the auditory task if attention was directed to the same location in both modalities.
|
9 |
A Geometric Framework For Vision Modeling In Digital Human Models Using 3D Tessellated Head Scans / Vinayak, * 01 1900 (has links) (PDF)
The present work deals with the development of a computational geometric framework for vision modeling, for performing visibility and legibility analyses in Digital Human Modeling (DHM) using the field-of-view (FoV) estimated geometrically from 3D tessellated head scans. DHM is an inter-disciplinary area of research with the prime objective of evaluating a product, job or environment for intended users through computer-based simulations. Vision modeling in existing DHMs has been primarily addressed through FoV modeling using right circular cones (RCC). The perimetry literature establishes that the human FoV is asymmetric, owing to unrestricted zygomatic vision and restrictions on the nasal side of the face. This observation is neither captured by the simplistic RCC models in DHM nor rigorously studied in the vision literature. Thus, RCC models of FoV are inadequate for rigorous simulations, and accurate modeling of FoV is required in DHM. The computational framework developed in this work considers three broad components: the geometric estimation and representation of FoV; visibility and statistical visibility; and the legibility of objects in a given environment.
A computational geometric method for estimating FoV from 3D laser-scanned models of the human head is presented in this work. The strong one-to-one similarity between computed and clinically measured perimetry maps establishes that the FoV can be geometrically computed using tessellated head models, without necessarily going through the conventional interaction-based clinical procedures. The algorithm for FoV computation is extended to model the effect of gaze direction on the FoV, resulting in binocular FoV. A novel unit-cube scheme is presented for robust, efficient and accurate modeling of FoV. This scheme is subsequently used to determine the visibility of 3D tessellated objects for a given FoV. To enable population-based visibility studies, statistical modeling of FoV and the generation of percentile-based FoV curves are introduced for a given population of FoV curves. The percentile data thus generated were not previously available in the ergonomics or perimetry literature. Advanced vision analysis involving character legibility is demonstrated using the unit cube, with an improved measure to incorporate the effect of character thickness on legibility.
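The abstract does not give the algorithm, but a plausible core primitive for estimating FoV from a tessellated head scan is a ray-occlusion test: a viewing direction lies inside the monocular FoV if no facial triangle (nose, brow, cheek) blocks the ray cast from the eye point. The sketch below uses the standard Möller–Trumbore ray/triangle intersection; the mesh, eye position, and direction sampling are assumptions, not the thesis's method.

```python
def ray_hits_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection test.

    Returns True if the ray orig + t*d (t > eps) passes through the
    triangle (v0, v1, v2); all points are (x, y, z) tuples.
    """
    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(d, e2)
    a = dot(e1, h)
    if abs(a) < eps:                 # ray parallel to triangle plane
        return False
    f = 1.0 / a
    s = sub(orig, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:           # outside barycentric range
        return False
    q = cross(s, e1)
    v = f * dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return False
    return f * dot(e2, q) > eps      # intersection in front of the eye

def is_visible(eye, direction, head_triangles):
    """A viewing direction is in the FoV if no head triangle occludes it."""
    return not any(ray_hits_triangle(eye, direction, *tri)
                   for tri in head_triangles)
```

Sampling directions over the sphere and marking each one visible or occluded yields the kind of asymmetric FoV boundary the thesis compares against perimetry maps.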
|
10 |
The spatiotemporal dynamics of visual attention during real-world event perception / Ringer, Ryan, January 1900 (has links)
Doctor of Philosophy / Department of Psychological Sciences / Lester Loschky / Everyday event perception requires us to perceive a nearly constant stream of dynamic information. Although we perceive these events as continuous, there is ample evidence that we “chunk” our experiences into manageable bits (Zacks & Swallow, 2007). These chunks can occur at fine and coarse grains, with fine event segments nested within coarse-grained segments. Individual differences in boundary detection are important predictors of subsequent memory encoding and retrieval and are relevant to both normative and pathological spectra of cognition. However, the nature of attention in relation to event structure is not yet well understood. Attention is the process that suppresses irrelevant information while facilitating the extraction of relevant information. Though attentional changes are known to occur around event boundaries, it is still not well understood when and where these changes occur. A newly developed method for measuring attention, the Gaze-Contingent Useful Field of View Task (GC-UFOV; Gaspar et al., 2016; Ringer, Throneburg, Johnson, Kramer, & Loschky, 2016; Ward et al., 2018), provides a means of measuring attention across the visual field (a) in simulated real-world environments and (b) independent of eccentricity-dependent visual constraints. To measure attention, participants performed the GC-UFOV task while watching pre-segmented videos of everyday activities (Eisenberg & Zacks, 2016; Sargent et al., 2013). Attention was probed from 4 seconds before to 6 seconds after coarse, fine, and non-event boundaries. Afterward, participants' memories for objects and event order were tested, followed by event segmentation. Attention was predicted either to become impaired at event boundaries (the attentional impairment hypothesis) or to be broadly distributed at event boundaries and narrowed at event middles (the ambient-to-focal shift hypothesis).
The results showed marginal evidence for both the attentional impairment and ambient-to-focal shift hypotheses; however, model fit was equal for the two models. The results of this study were then used to develop a proposed program of research to further explore the nature of attention during event perception, as well as the ability of these two hypotheses to explain the relationship between attention and memory during real-world event perception.
|