1. Visual Flow Display for Pilot Spatial Orientation

Eriksson, Lars. January 2009.
Pilot spatial disorientation (SD) is a significant cause of incidents and fatal accidents in aviation. The pilot is especially susceptible to SD in low visibility, when the visual system is deprived of information from outside the cockpit. This thesis presents the notion of visual flow displays as an enhancement of the symbology on flight displays, primarily in low visibility, for improved support of the pilot's spatial orientation (SO) and control actions. In Studies I and II, synthetic visual flow of forward ego-motion was presented on displays, and postural responses were used as measures of display effectiveness in determining SO. The visual flow significantly affected SO, and although increasing the stimulation of the visual periphery from a width of 45° to about 105° increased the effects, there was no further increase at a width of about 150° (Studies I and II). Studies I and II also showed that omitting 20°- or 30°-wide central fields of view from the visual flow either reduced or did not reduce the effects. Further, although inconclusive, Study II suggests that horizon symbology in the central visual field may enhance the effects of peripheral visual flow. An appropriate integration of peripheral visual flow with the head-up display symbology of the Gripen aircraft is also presented. Acceleration in a human centrifuge was used in Study III to investigate the effects of synthetic visual flow on the primarily vestibular-dependent somatogravic illusion of pitch-up. Two experiments revealed a reduced illusion with the visual flow. The results of Experiment 2 showed that the visual flow scene reduced the illusion not only compared with a darkness condition but also compared with a visual scene without visual flow. Thus, in line with the main findings of Studies I and II, synthetic visual flow can significantly affect SO and supports the visually dependent SO system in an essential manner.
2. THE EFFECTS OF ALTERNATE-LINE SHADING ON VISUAL SEARCH IN GRID-BASED GRAPHIC DESIGNS

Lee, Michael P. 01 January 2014.
Objective: The goal of this research was to determine whether alternate-line shading (zebra-striping) of grid-based displays affects the strategy (i.e., "visual flow") and efficiency of serial search. Background: Grids, matrices, and tables are commonly used to organize information. A number of design techniques and psychological principles are relevant to how viewers' eyes can be guided through such visual works. One common technique for grids, "zebra-striping," is intended to guide the eyes through the design, or "create visual flow," by alternating shaded and unshaded rows or columns. Method: Thirteen participants completed a visual serial search task. The target was embedded in a grid that had (1) no shading, (2) shading of alternating rows, or (3) shading of alternating columns. Response times and error rates were analyzed to determine search strategy and efficiency. Results: Our analysis found evidence supporting a weak effect of shading on search strategy. The direction of shading had an impact on which parts of the grid were responded to most rapidly. However, a left-to-right reading bias and a middle-to-outside edge effect were also found. Overall performance was reliably better when the grid had no shading. Exploratory analyses suggest individual differences may be a factor. Conclusion: Shading seems to create a visual flow that is relatively weak compared with search strategies related to the edge effect or left-to-right reading biases. In general, however, the presence of any type of shading reduced search performance. Application: Designers creating a grid-based display should not automatically assume that shading will change viewers' search strategies. Furthermore, although strategic shading may be useful for tasks other than the one studied here, our current data indicate that shading can actually be detrimental to visual search for complex (i.e., conjunctive) targets.
3. Visual Flow Analysis and Saliency Prediction

Srinivas, Kruthiventi S S. January 2016.
Nowadays, millions of cameras in public places such as traffic junctions and railway stations capture video data around the clock. This enormous volume of data has resulted in an increased need for automation of visual surveillance. Analysis of crowd and traffic flows is an important step towards achieving this goal. In this work, we present our algorithms for identifying and segmenting dominant flows in surveillance scenarios. In the second part, we present our work on predicting visual saliency. The ability of humans to discriminate and selectively pay attention to a few regions in the scene over others is a key attentional mechanism. Here, we present our algorithms for predicting human eye fixations and segmenting salient objects in the scene. (i) Flow Analysis in Surveillance Videos: We propose algorithms for segmenting flows of static and dynamic nature in surveillance videos in an unsupervised manner. In static flow scenarios, we assume the motion patterns to be consistent over the entire duration of the video and analyze them in the compressed domain using H.264 motion vectors. Our approach is based on modeling the motion vector field as a Conditional Random Field (CRF) and obtaining oriented motion segments, which are merged to obtain the final flow segments. This approach in the compressed domain is shown to be both accurate and computationally efficient. In the case of dynamic flow videos (e.g., flows at a traffic junction), we propose a method for segmenting the individual object flows over long durations. This long-term flow segmentation is achieved in a CRF framework using local color and motion features. We propose a Dynamic Time Warping (DTW) based distance measure between flow segments for clustering them and generating representative dominant flow models. Using these dominant flow models, we perform path prediction for the vehicles entering the camera's field of view and detect anomalous motions.
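The DTW-based distance between flow segments mentioned above can be sketched as follows. This is a minimal illustration, not the thesis's exact formulation: each segment is assumed to be a sequence of (x, y) trajectory points, and the local cost is the Euclidean distance between points.

```python
import numpy as np

def dtw_distance(seg_a, seg_b):
    """Dynamic Time Warping distance between two flow segments,
    each an (n, 2) array of (x, y) trajectory points."""
    n, m = len(seg_a), len(seg_b)
    # D[i, j] = cost of best alignment of seg_a[:i] with seg_b[:j]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seg_a[i - 1] - seg_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Two trajectories along the same path, sampled differently
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(dtw_distance(a, b))  # 0.5
```

Because DTW aligns sequences non-linearly in time, two vehicles following the same route at different speeds yield a small distance, which is what makes it suitable for clustering flow segments into dominant flow models.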
(ii) Visual Saliency Prediction using Deep Convolutional Neural Networks: We propose a deep fully convolutional neural network (CNN), DeepFix, for accurately predicting eye fixations in the form of saliency maps. Unlike classical works, which characterize the saliency map using various hand-crafted features, our model automatically learns features in a hierarchical fashion and predicts the saliency map in an end-to-end manner. DeepFix is designed to capture visual semantics at multiple scales while taking global context into account. Generally, fully convolutional nets are spatially invariant, which prevents them from modeling location-dependent patterns (e.g., centre bias). Our network overcomes this limitation by incorporating a novel Location Biased Convolutional layer. We experimentally show that our network outperforms other recent approaches by a significant margin. In general, human eye fixations correlate with the locations of salient objects in the scene. However, only a handful of approaches have attempted to simultaneously address these related aspects of eye fixations and object saliency. In our work, we also propose a deep convolutional network capable of simultaneously predicting eye fixations and segmenting salient objects in a unified framework. We design the initial network layers, shared between both tasks, such that they capture the global contextual aspects of saliency, while the deeper layers of the network address task-specific aspects. Our network shows a significant improvement over the current state of the art for both eye fixation prediction and salient object segmentation across a number of challenging datasets.
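One common way to let a spatially invariant convolution model location-dependent patterns such as centre bias is to append fixed location maps as extra input channels before the convolution. The sketch below illustrates that idea with centre-peaked Gaussian maps; the number of maps and their parameterisation are illustrative assumptions, not the exact design of DeepFix's Location Biased Convolutional layer.

```python
import numpy as np

def location_biased_conv_input(features, num_bias_maps=4):
    """Append fixed location maps to a (C, H, W) feature tensor so that
    a subsequent convolution can learn position-dependent responses
    (e.g. the centre bias of human fixations)."""
    c, h, w = features.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist2 = (ys - cy) ** 2 + (xs - cx) ** 2
    # A few centre-peaked Gaussians of increasing spread
    sigmas = np.linspace(min(h, w) / 8.0, min(h, w) / 2.0, num_bias_maps)
    bias = np.stack([np.exp(-dist2 / (2.0 * s ** 2)) for s in sigmas])
    # Original channels followed by the location maps
    return np.concatenate([features, bias], axis=0)

x = np.random.rand(16, 32, 32)
print(location_biased_conv_input(x).shape)  # (20, 32, 32)
```

Since the appended maps are identical for every input, the convolution kernels that read those channels effectively learn a spatial prior, recovering location sensitivity without giving up weight sharing elsewhere.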
