1 |
Feature-based stereo vision on a mobile platform. Huynh, Du Quan. January 1994
It is commonly known that stereopsis is the primary way for humans to perceive depth. Although, with one eye, we can still interact very well with our environment and perform highly skilled tasks by using other visual cues such as occlusion and motion, the resultant effect of the absence of stereopsis is that the relative depth information between objects is essentially lost (Frisby, 1979). While humans fuse the images seen by the left and right eyes in a seemingly effortless way, the major problem that must be solved in every binocular machine stereo system, the correspondence of features, is far from trivial. In this thesis, line segments and corners are chosen as the features to be matched because they typically occur at object boundaries, surface discontinuities, and across surface markings. Polygonal regions are also selected since they are well configured and are very often associated with salient structures in the image. The use of these high-level features, although it helps to diminish matching ambiguities, does not completely resolve the matching problem when the scene contains repetitive structures. The spatial relationships between candidate feature matches, enforced in the stereo matching process as proposed in this thesis, are found to provide even stronger support for correct matches and, as a result, incorrect matches can be largely eliminated.
Recovering global and salient 3D structures is an important prerequisite for environmental modelling and understanding. While research on postprocessing the 3D information obtained from stereo has been attempted (Ayache and Faugeras, 1991), the strategy presented in this thesis for retrieving salient 3D descriptions is to propagate the prominent information extracted from the 2D images to the 3D scene. Thus, the matching of two prominent 2D polygonal regions yields a prominent 3D region, and the inter-relation between two 2D region matching pairs is passed on and taken as a relationship between two 3D regions.
Humans, when observing and interacting with the environment, do not confine themselves to the observation and analysis of a single image. Similarly, machine stereopsis can be vastly improved by the introduction of additional stereo image pairs. Eye, head, and body movements provide essential mobility for an active change of viewpoint, the disocclusion of occluded objects, the avoidance of obstacles, and the performance of the tasks at hand. This thesis presents a mobile stereo vision system whose eye movements are provided by a binocular head support and stepper motors, and whose body movements are provided by a mobile platform, the Labmate. With the viewer-centred coordinate system proposed in this thesis, the computation of the 3D information observed at each viewpoint, the merging of the 3D information from consecutive viewpoints for environmental reconstruction, and strategies for movement control are discussed in detail.
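To make the use of disparity and neighbourhood consistency concrete, the sketch below (not taken from the thesis) scores candidate corner matches in a rectified image pair by how many nearby candidates carry a similar disparity, a crude stand-in for the spatial-relationship support described above, and then back-projects the surviving matches to 3D. The focal length, baseline and thresholds are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def match_support(cands, radius=40.0, disp_tol=3.0):
    """Score each candidate corner match (xl, yl, xr, yr) by how many nearby
    candidates have a similar disparity; a crude stand-in for the relational
    constraints between matching pairs described in the abstract."""
    cands = np.asarray(cands, dtype=float)           # shape (N, 4)
    disp = cands[:, 0] - cands[:, 2]                 # horizontal disparity xl - xr
    support = np.zeros(len(cands))
    for i, (xl, yl, _, _) in enumerate(cands):
        near = np.hypot(cands[:, 0] - xl, cands[:, 1] - yl) < radius
        consistent = np.abs(disp - disp[i]) < disp_tol
        support[i] = np.count_nonzero(near & consistent) - 1   # exclude self
    return support

def triangulate(cands, f=700.0, baseline=0.12, cx=320.0, cy=240.0):
    """Back-project matches assuming a rectified pair with focal length f
    (pixels) and baseline (metres); both values are illustrative."""
    cands = np.asarray(cands, dtype=float)
    disp = np.maximum(cands[:, 0] - cands[:, 2], 1e-6)   # avoid division by zero
    Z = f * baseline / disp
    X = (cands[:, 0] - cx) * Z / f
    Y = (cands[:, 1] - cy) * Z / f
    return np.column_stack([X, Y, Z])

# Toy usage: keep only matches supported by their neighbourhood.
cands = [(100, 50, 90, 50), (105, 55, 95, 55), (300, 200, 250, 200)]
keep = match_support(cands) >= 1
points_3d = triangulate(np.asarray(cands)[keep])
```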
|
2 |
Integrating depth and intensity information for vision-based head tracking. Katta, Pradeep. January 2008
Thesis (M.S.)--University of Nevada, Reno, 2008. "August, 2008." Includes bibliographical references (leaves 47-51). Online version available on the World Wide Web.
|
3 |
Using neural networks for three-dimensional measurement in stereo vision systems. Tien, Fang-Chih. January 1996
Thesis (Ph. D.)--University of Missouri-Columbia, 1996. Typescript. Vita. Includes bibliographical references (leaves 187-202). Also available on the Internet.
|
4 |
Stereo camera calibration. O'Kennedy, Brian James. 2002
Thesis (MScEng)--Stellenbosch University, 2002.
ENGLISH ABSTRACT: We present all the components needed for a fully-fledged stereo vision system, ranging
from object detection through camera calibration to depth perception. We propose an
efficient, automatic and practical method to calibrate cameras for use in 3D machine
vision metrology. We develop an automated stereo calibration system that only requires
a series of views of a manufactured calibration object in unknown positions. The system is
tested against real and synthetic data, and we investigate the robustness of the proposed
method compared to standard calibration practice.
All aspects of 3D stereo reconstruction are dealt with, and we present the necessary
algorithms to perform epipolar rectification on images, as well as to solve the correspondence
and triangulation problems.
It was found that the system performs well even in the presence of noise, and that calibration
is easy and requires no specialist knowledge.
AFRIKAANSE OPSOMMING: We describe all the components of a comprehensive stereo vision system.
The core of the system is an effective, automated and practical method for calibrating cameras
for use in 3D computer vision.
We develop an automatic stereo camera calibration system that requires only a series of images
of a calibration object in unknown positions. The system is tested with real and synthetic data,
and we compare the robustness of the method with the standard algorithms.
All aspects of the 3D stereo reconstruction are treated, and we describe the necessary
algorithms for performing epipolar rectification on images, as well as methods for solving the
correspondence and depth problems.
We show that the system delivers good results in the presence of noise and that camera
calibration can be carried out automatically without any specialist knowledge.
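As a rough illustration of the calibrate-rectify-triangulate pipeline summarised in this abstract, the sketch below relies on OpenCV rather than the author's own implementation, and it assumes a planar checkerboard as the manufactured calibration object (the thesis does not specify the target); the function and parameter names are illustrative.

```python
import cv2
import numpy as np

def calibrate_stereo(image_pairs, pattern=(9, 6), square=0.025):
    """Calibrate a stereo rig from views of a checkerboard in unknown poses.
    image_pairs: list of (left_gray, right_gray) uint8 images."""
    # 3D corner positions of the board in its own frame (Z = 0 plane).
    obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, left_pts, right_pts = [], [], []
    for left, right in image_pairs:
        ok_l, c_l = cv2.findChessboardCorners(left, pattern)
        ok_r, c_r = cv2.findChessboardCorners(right, pattern)
        if ok_l and ok_r:
            obj_pts.append(obj); left_pts.append(c_l); right_pts.append(c_r)
    size = image_pairs[0][0].shape[::-1]
    # Intrinsics per camera, then the rigid transform (R, T) between the two.
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
    _, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    # Rectification aligns epipolar lines with image rows.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    return K1, d1, K2, d2, R, T, P1, P2

def triangulate(P1, P2, pts_left, pts_right):
    """Triangulate matched rectified image points (2xN float arrays) into 3D."""
    X = cv2.triangulatePoints(P1, P2, pts_left, pts_right)
    return (X[:3] / X[3]).T   # homogeneous to Euclidean, shape (N, 3)
```

The CALIB_FIX_INTRINSIC flag reflects the common practice of estimating each camera's intrinsics first and then solving only for the rigid transform between the two cameras; whether the thesis follows this split is not stated in the abstract.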
|
5 |
The Omnidirectional Acquisition of Stereoscopic Images of Dynamic Scenes. Gurrieri, Luis E. 16 April 2014
This thesis analyzes the problem of acquiring stereoscopic images in all gazing directions
around a reference viewpoint in space with the purpose of creating stereoscopic panoramas
of non-static scenes. The generation of immersive stereoscopic imagery suitable to stimulate
human stereopsis requires images from two distinct viewpoints with horizontal parallax in
all gazing directions, or the ability to simulate this situation in the generated imagery. The
available techniques to produce omnistereoscopic imagery for human viewing are not suitable
to capture dynamic scenes stereoscopically. This is not a trivial problem when the entire scene
must be acquired at once while avoiding self-occlusion between multiple cameras.
In this thesis, the term omnidirectional refers to all possible gazing directions in azimuth
and a limited set of directions in elevation. The acquisition of dynamic scenes restricts the
problem to those techniques suitable for collecting in one simultaneous exposure all the necessary visual information to recreate stereoscopic imagery in arbitrary gazing directions.
The analysis of the problem starts by defining an omnistereoscopic viewing model for
the physical magnitude to be measured by a panoramic image sensor intended to produce
stereoscopic imagery for human viewing. Based on this model, a novel acquisition model is
proposed, which is suitable to describe the omnistereoscopic techniques based on horizontal stereo. From this acquisition model, an acquisition method based on multiple cameras
combined with the rendering by mosaicking of partially overlapped stereoscopic images is
identified as a good candidate to produce omnistereoscopic imagery of dynamic scenes.
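One simple way to make such a viewing model concrete, although not necessarily the exact formulation adopted in the thesis, is to attach to every azimuthal gazing direction a pair of viewpoints offset perpendicular to that direction by half the interocular distance, which guarantees horizontal parallax all around the reference point. The short sketch below computes those viewpoint pairs; the 65 mm interocular distance is an assumed value.

```python
import numpy as np

def omnistereo_viewpoints(azimuth_rad, interocular=0.065, centre=(0.0, 0.0)):
    """For a gazing direction given by its azimuth (radians), return the
    left-eye and right-eye viewpoints and the shared viewing direction.
    The viewpoints sit at +/- interocular/2 perpendicular to the gazing
    direction, so every azimuth is observed with horizontal parallax."""
    c = np.asarray(centre, dtype=float)
    gaze = np.array([np.cos(azimuth_rad), np.sin(azimuth_rad)])   # unit vector
    perp = np.array([-gaze[1], gaze[0]])                          # 90 deg CCW
    left = c + 0.5 * interocular * perp
    right = c - 0.5 * interocular * perp
    return left, right, gaze

# Sampling a full turn gives the ring of viewpoint pairs that a single
# simultaneous omnistereoscopic exposure would have to cover.
pairs = [omnistereo_viewpoints(a)
         for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
```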
Experimental acquisition and rendering tests were performed for different multiple-camera
configurations. Furthermore, two main contributions of this thesis are a mosaicking criterion
for partially overlapped stereoscopic images based on the continuity of the perceived depth, and
the prediction of the location and magnitude of unwanted vertical disparities in the final
stereoscopic panorama. In addition, two novel omnistereoscopic acquisition and rendering
techniques were introduced.
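As a toy version of what such a seam criterion can inspect, not the thesis's actual criterion, the sketch below takes matched left/right keypoints around a mosaic seam and reports the vertical-disparity statistics together with the jump in mean horizontal disparity across the seam, the latter serving as a crude proxy for perceived-depth continuity; the seam coordinate and band width are assumed parameters.

```python
import numpy as np

def seam_stereo_check(left_pts, right_pts, seam_x, band=50.0):
    """left_pts, right_pts: (N, 2) arrays of matched (x, y) keypoints in the
    left and right panoramas.  Returns vertical-disparity statistics in a band
    around the seam and the change in mean horizontal disparity (a stand-in
    for perceived-depth continuity) from one side of the seam to the other.
    Assumes matches are available on both sides of the seam."""
    left_pts = np.asarray(left_pts, dtype=float)
    right_pts = np.asarray(right_pts, dtype=float)
    v_disp = left_pts[:, 1] - right_pts[:, 1]          # unwanted vertical disparity
    h_disp = left_pts[:, 0] - right_pts[:, 0]          # drives perceived depth
    near = np.abs(left_pts[:, 0] - seam_x) < band
    before = near & (left_pts[:, 0] < seam_x)
    after = near & (left_pts[:, 0] >= seam_x)
    return {
        "vertical_disparity_mean": float(np.mean(v_disp[near])),
        "vertical_disparity_max": float(np.max(np.abs(v_disp[near]))),
        "depth_jump": float(np.mean(h_disp[after]) - np.mean(h_disp[before])),
    }
```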
The main contributions to this field are to propose a general model for the acquisition of
omnistereoscopic imagery, to devise novel methods to produce omnistereoscopic imagery, and
more importantly, to contribute to the awareness of the problem of acquiring dynamic scenes
within the scope of omnistereoscopic research.
|