81

An anamorphic imaging model to correct geometric distortion in planar holographic stereograms /

Rainsdon, Michael Darwin. 1989 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 1989. / Spine title: Correcting geometric distortion in holographic stereograms. Includes bibliographical references (leaves 37-39).
82

Aircraft position estimation using lenticular sheet generated optical patterns

Barbieri, Nicholas P. January 2008 (has links)
Thesis (M. S.)--Aerospace Engineering, Georgia Institute of Technology, 2008. / Committee Chair: Eric Feron; Committee Member: Eric Johnson; Committee Member: Jerry Seitzman.
83

Contour integration and interpolation: geometry, phenomenology, and multiple inputs /

Hilger, James Daniel, January 2009 (has links)
Thesis (Ph. D.)--UCLA, 2009. / Vita. Description based on print version record. Includes bibliographical references (leaves 300-309).
84

Lucas processor array design and applications.

Svensson, Bertil. January 1983 (has links)
Thesis (Ph. D.)--University of Lund. / Includes bibliographical references (p. 234-238).
85

Scheduling optical packet switches with reconfiguration delay /

Li, Xin. January 2005 (has links)
Thesis (Ph.D.)--Hong Kong University of Science and Technology, 2005. / Includes bibliographical references (leaves 111-120). Also available in electronic version.
86

Visual attention: saliency detection and gaze estimation

Peng, Qinmu 28 August 2015 (has links)
Visual attention is an important characteristic of the human vision system: it allocates cognitive resources to selected information. Many researchers study this mechanism, and it has found a wide range of successful applications. Visual attention research generally involves two tasks, visual saliency detection and gaze estimation. The former is normally described as the distinctiveness or prominence produced by a visual stimulus; given images or videos as input, saliency detection methods try to simulate the human vision system by predicting and locating the salient parts. The latter uses a physical device to track eye movement and estimate gaze points.

Saliency detection is an effective technique for studying and mimicking the mechanism of the human vision system. Most saliency models can predict visual saliency with the boundary or rough location of the true salient object, but they miss appearance and shape information, and they pay little attention to image quality problems such as low resolution or noise. To handle these problems, this thesis models visual saliency from both local and global perspectives. Combining local and global saliency schemes that employ different visual cues makes full use of their respective advantages. Compared with existing models, the proposed method provides better saliency maps with more appearance and shape information, and works well even on low-resolution or noisy images. The experimental results demonstrate the superiority of the proposed algorithm.

Video saliency detection is a further issue in visual saliency computation. Numerous works extract video saliency for object detection tasks, but they may fail to produce saliency suitable for inferring the foreground region when the video has low contrast or a complicated background. This thesis therefore develops a salient object detection approach with less demanding assumptions that gives higher detection performance. The method computes the visual saliency in each frame using a weighted multiple-manifold ranking algorithm, then computes motion cues to estimate motion saliency and a localization prior. A new energy function is adopted in which the data term depends on the visual saliency and the localization prior, and the smoothness term enforces constraints in time and space. Compared to existing methods, the approach automatically segments the persistent foreground object while preserving its shape. Applied to challenging benchmark videos, it shows competitive or better results than existing counterparts.

Finally, to address gaze estimation, we present a low-cost and efficient approach for obtaining the gaze point. Unlike eye gaze estimation techniques that require specific hardware, such as infrared high-resolution cameras and infrared light sources, together with a cumbersome calibration process, we concentrate on visible-light imaging and present an approach for gaze estimation using a web camera in a desktop environment. We combine intensity energy and edge strength to locate the iris center, and use a piecewise eye corner detector to find the eye corners. To compensate for gaze error caused by head movement, we adopt a sinusoidal head model (SHM) to simulate the 3D head shape, and propose adaptive weighted facial features embedded in the pose from orthography and scaling with iterations algorithm (AWPOSIT), from which the head pose can be estimated. The gaze estimate is then obtained by integrating the eye vector with the head movement information. The proposed method is not sensitive to lighting conditions, and the experimental results show the efficacy of the proposed approach.
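As a rough illustration of the local-plus-global saliency idea sketched in this abstract (not the thesis's actual algorithm), the following Python snippet blends a local center-surround contrast cue with a global intensity-rarity cue; the window size, bin count, and blending weight are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_global_saliency(gray, alpha=0.5, win=15):
    """Toy saliency map: local center-surround contrast + global rarity.

    gray  : 2-D float array with values in [0, 1]
    alpha : blend weight between the local and global cues (assumed value)
    win   : local window size for the surround mean (assumed value)
    """
    # Local cue: how much each pixel differs from its neighborhood mean.
    surround = uniform_filter(gray, size=win)
    local = np.abs(gray - surround)

    # Global cue: rarity of each pixel's intensity over the whole image.
    hist, edges = np.histogram(gray, bins=64, range=(0.0, 1.0), density=True)
    bins = np.clip(np.digitize(gray, edges[1:-1]), 0, 63)
    global_rarity = 1.0 / (1.0 + hist[bins])

    # Normalize each cue to [0, 1] before blending so neither dominates.
    def norm(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    return alpha * norm(local) + (1.0 - alpha) * norm(global_rarity)
```

Richer appearance, shape, or motion cues would slot into the same structure; the essential step the abstract points to is normalizing each cue before combining local and global evidence.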
87

Calibrating the photographic reproduction of colour digital images

Heiss, Detlef Guntram January 1985 (has links)
Colour images can be formed by the combination of stimuli in three primary colours. As a result, digital colour images are typically represented as a triplet of values, each value corresponding to the stimulus of a primary colour. The precise stimulus that the eye receives as a result of any particular triplet of values depends on the display device or medium used. Photographic film is one such medium for the display of colour images. This work implements a software system to calibrate the response given to a triplet of values by an arbitrary combination of film recorder and film, in terms of a measurable film property. The implemented system determines the inverse of the film process numerically. It is applied to calibrate the Optronics C-4500 colour film writer of the UBC Laboratory for Computational Vision. Experimental results are described and compared in order to estimate the expected accuracy that can be obtained with this device using commercially available film processing. / Faculty of Science / Department of Computer Science / Graduate
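The numerical inversion of the film process described above can be pictured, under strong simplifying assumptions, as measuring a monotonic per-channel response from test patches and inverting it by interpolation. The sketch below is illustrative only; the patch values are made up, and it does not reproduce the thesis's procedure or the behaviour of the Optronics C-4500.

```python
import numpy as np

def build_inverse_channel(commanded, measured):
    """Return a function mapping a desired measured density back to the
    commanded digital value, assuming a monotonic per-channel response.

    commanded : 1-D array of digital values sent to the film recorder
    measured  : 1-D array of densities read back from the processed film
    """
    order = np.argsort(measured)              # np.interp needs ascending x
    m_sorted = np.asarray(measured)[order]
    c_sorted = np.asarray(commanded)[order]

    def inverse(target_density):
        # Interpolate the commanded value that yields the target density.
        return np.interp(target_density, m_sorted, c_sorted)

    return inverse

# Hypothetical example: a handful of measured red-channel test patches.
cmd = np.array([0, 64, 128, 192, 255], dtype=float)
dens = np.array([0.05, 0.30, 0.80, 1.40, 1.90])   # made-up densities
inv_r = build_inverse_channel(cmd, dens)
print(inv_r(1.0))   # commanded value expected to produce density 1.0
```

A real calibration would repeat this per channel, account for inter-channel crosstalk, and validate against patches not used in the fit; the sketch only conveys the idea of inverting a measured response numerically.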
88

A relational picture editor /

Düchting, Bernhard. January 1983 (has links)
No description available.
89

A rule-based expert system for image segmentation /

Nazif, Ahmed M. January 1983 (has links)
No description available.
90

Techniques for the generation of three dimensional data for use in complex image synthesis /

Carlson, Wayne Earl January 1982 (has links)
No description available.
