31

Wavelet transforms for stereo imaging

Shi, Fangmin January 2002 (has links)
Stereo vision is a means of obtaining three-dimensional information by viewing the same scene from two different positions. Stereo correspondence has long been, and will continue to be, an active research topic in computer vision. The demand for dense disparity map output is driven by modern applications of stereo such as high-resolution three-dimensional object reconstruction and view synthesis, which require disparity estimates in all image regions. Stereo correspondence algorithms usually require significant computation; the challenges are computational economy, accuracy and robustness. Although a large number of stereo matching algorithms have been developed, there is still room for improvement, especially as a new mathematical tool such as wavelet analysis matures. The aim of the thesis is to investigate stereo matching using the wavelet transform with a view to producing efficient and dense disparity map outputs. After the shift-invariance property of various wavelet transforms is identified, the main contributions of the thesis are made in developing and evaluating two wavelet approaches (the dyadic wavelet transform and the complex wavelet transform) for solving the standard correspondence problem. This comprises an analysis of the applicability of the dyadic wavelet transform to disparity map computation, the definition of a wavelet-based similarity measure for matching, the combination of matching results from different scales based on the minimum detectable disparity at each scale, and the application of the complex wavelet transform to stereo matching. The matching method using the dyadic wavelet transform, based on SSD correlation comparison, is described in particular detail, and a new similarity measure defined over wavelet coefficients is introduced. The approach applying the dual-tree complex wavelet transform to stereo matching is formulated through phase information. A multiscale matching scheme is applied to both matching methods. Tests have been carried out with various synthesised and real image pairs. Experimental results for a variety of stereo image pairs show good agreement with ground truth data, where available, and are qualitatively similar to published results for other stereo matching approaches. Comparative results show that the dyadic wavelet transform-based matching method is superior in most cases to the other approaches considered.
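The SSD correlation comparison mentioned above is the classical block-matching baseline on which the wavelet-domain measure is built. As a rough illustration of that baseline only (not the thesis's wavelet-based similarity measure), the following sketch computes a dense disparity map on rectified grayscale images; the function name, window size and disparity range are illustrative assumptions.

```python
import numpy as np

def ssd_disparity(left, right, max_disp=64, half_win=3):
    """Brute-force SSD block matching on a rectified grayscale pair.

    A minimal sketch of the standard SSD correlation step; the
    wavelet-coefficient similarity measure described in the abstract
    is not reproduced here.
    """
    h, w = left.shape
    disparity = np.zeros((h, w), dtype=np.float32)
    for y in range(half_win, h - half_win):
        for x in range(half_win, w - half_win):
            ref = left[y - half_win:y + half_win + 1,
                       x - half_win:x + half_win + 1].astype(np.float32)
            best_d, best_cost = 0, np.inf
            # search candidate disparities along the same scanline
            for d in range(min(max_disp, x - half_win) + 1):
                cand = right[y - half_win:y + half_win + 1,
                             x - d - half_win:x - d + half_win + 1].astype(np.float32)
                cost = np.sum((ref - cand) ** 2)   # sum of squared differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disparity[y, x] = best_d
    return disparity
```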
32

An evaluation of local two-frame dense stereo matching algorithms

Van der Merwe, Juliaan Werner 06 June 2012 (has links)
M. Ing. / The process of extracting depth information from multiple two-dimensional images taken of the same scene is known as stereo vision. It is of central importance to the field of machine vision, as it is a low-level task required for many higher-level applications. The past few decades have witnessed the development of hundreds of different stereo vision algorithms, which has made it difficult to classify and compare the various approaches to the problem. In this research we provide an overview of the types of approaches that exist to solve the problem of stereo vision, focusing on a specific subset known as local stereo algorithms. Our goal is to critically analyse and compare a representative sample of local stereo algorithms in terms of both speed and accuracy. We also divide the algorithms into discrete interchangeable components and experiment to determine the effect that each of the alternative components has on an algorithm’s speed and accuracy, and go further to quantify and analyse the effect of various design choices within specific algorithm components. Finally, we use the knowledge gained through the experimentation to compose and optimise a novel algorithm. The experimentation highlighted that by far the most important component of a local stereo algorithm is the manner in which it aggregates matching costs. All of the top-performing local stereo algorithms dynamically define the shape of the windows over which the matching costs are aggregated, in a manner that aims to include only pixels that are likely to be at the same depth as the centre pixel of the window. Since the depth is unknown, the cost aggregation techniques use colour and proximity information as the best available guess of whether pixels are at the same depth when defining the shape of the aggregation windows. Local stereo algorithms are usually less accurate than global methods, but they are supposed to be faster and more parallelisable. These cost aggregation techniques result in very accurate depth estimates, but unfortunately they are also computationally very expensive. We believe the focus of local stereo algorithm development should be speed. Using the experimental results, we developed an algorithm that achieves accuracies in the same order of magnitude as the state-of-the-art algorithms while reducing the computation time by over 50%.
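The adaptive cost aggregation described above (windows shaped by colour similarity and spatial proximity to the centre pixel) can be sketched as follows. This is a hedged illustration in the spirit of adaptive support-weight aggregation, not the exact algorithms evaluated in the thesis; the parameter values and helper names are assumptions.

```python
import numpy as np

def support_weights(patch, sigma_c=10.0, sigma_s=7.0):
    """Weights for one square colour patch: pixels that are similar in
    colour to the centre pixel and spatially close to it receive larger
    weights, so the effective aggregation window adapts its shape."""
    half = patch.shape[0] // 2
    centre = patch[half, half].astype(np.float32)
    colour_dist = np.linalg.norm(patch.astype(np.float32) - centre, axis=-1)
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    spatial_dist = np.sqrt(yy ** 2 + xx ** 2)
    return np.exp(-colour_dist / sigma_c - spatial_dist / sigma_s)

def aggregated_cost(left, right, y, x, d, half=8):
    """Weighted aggregation of absolute colour differences for pixel
    (y, x) at candidate disparity d; assumes the window and disparity
    stay inside both images."""
    lp = left[y - half:y + half + 1, x - half:x + half + 1]
    rp = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
    w = support_weights(lp) * support_weights(rp)
    raw = np.abs(lp.astype(np.float32) - rp.astype(np.float32)).sum(axis=-1)
    return np.sum(w * raw) / np.sum(w)
```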
33

Road Pothole Detection System Based on Stereo Vision

Li, Yaqi 31 August 2018 (has links)
No description available.
34

Road Distress Analysis using 2D and 3D Information

Bao, Guanqun January 2010 (has links)
No description available.
35

Stereovision Correction Using Modal Analysis

Lanier, Prather Jonathan 23 April 2010 (has links)
Presently, aerial photography remains a popular method for surveillance of landscapes, and its uses continue to grow as it is applied to monitor trends in areas such as plant distribution and urban construction. The use of computer vision, or more specifically stereo vision, is one common method of gathering this information. By mounting a stereo vision system on the wings of an unmanned aircraft it becomes a very useful tool. This technique, however, becomes less accurate as stereo vision baselines become longer, aircraft wing spans increase, and aircraft wings become increasingly flexible. Typically, ideal stereo vision systems involve stationary cameras with parallel fields of view. For an operational aircraft with a stereo vision system installed, stationary cameras cannot be expected, because the aircraft will experience random atmospheric turbulence in the form of gusts that excite the dominant frequencies of the aircraft. A method of stereo image rectification has been developed for cases where the cameras are allowed to deflect on the wings of a fixed-wing aircraft subjected to random excitation. The process begins by developing a dynamic model that estimates the behavior of a flexible stereo vision system and corrects images collected at maximum deflection. Testing of this method was performed on a flexible stereo vision system subjected to resonance excitation, where a reduction in stereo vision distance error is shown. Successful demonstration of this ability is then repeated on a flying wing aircraft by using a modal survey to understand its behavior. Finally, the flying wing aircraft is subjected to random excitation, and a least-squares fit of the random excitation signal is used to determine points of maximum deflection suitable for stereo image rectification. Using the same image rectification techniques as in the resonance excitation case, significant reductions in stereo distance errors are shown. / Master of Science
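For the geometric part of the correction described above, an image from a camera that has purely rotated due to wing deflection can be warped back to its nominal orientation with a homography. The sketch below covers only that geometric step under an assumed known rotation; the thesis's modal-analysis estimation of the deflection is not reproduced, and the sign convention of the rotation depends on how the deflection is measured.

```python
import numpy as np
import cv2

def derotate_image(img, K, rvec):
    """Warp an image from a camera rotated by `rvec` (axis-angle,
    radians, relative to its nominal pose) back to the nominal view.

    For a pure rotation the image-to-image map is H = K R K^-1;
    camera translation is ignored, so this is only a sketch of the
    rectification correction, not the full pipeline.
    """
    R, _ = cv2.Rodrigues(np.asarray(rvec, dtype=np.float64))
    H = K @ R @ np.linalg.inv(K)   # use R.T instead if the opposite rotation convention applies
    h, w = img.shape[:2]
    return cv2.warpPerspective(img, H, (w, h))
```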
36

Stereo Vision Based Aerial Mapping Using GPS and Inertial Sensors

Sharkasi, Adam Tawfik 03 June 2008 (has links)
The robotics field has grown in recent years to a point where unmanned systems are no longer limited by their capabilities. As such, the mission profiles for unmanned systems are becoming more and more complicated, and a demand has risen for the deployment of unmanned systems into the most complex of environments. Additionally, the objectives for unmanned systems are further complicated by the necessity for beyond-line-of-sight teleoperation and, in some cases, complete vehicle autonomy. Such systems require adequate sensory devices for appropriate situational awareness. A large majority of what is currently being done with unmanned systems requires visual data acquisition, and a stereo vision system is ideal for such missions as it doubles as both an image acquisition device and a range-finding device. The 2D images captured with a stereo vision system can be mapped to three-dimensional point clouds with reference to the optic center of one of the stereo cameras. While stand-alone commercial stereo vision systems are capable of doing just that, the GPS/INS-aided stereo vision system also has integrated 3-axis accelerometers, 3-axis gyros, a 3-axis magnetometer, and a GPS receiver, allowing for the measurement of the system's position and orientation in global coordinates. This capability provides the potential to geo-reference the 3D data captured with the stereo camera. The GPS/INS-aided stereo vision system integrates a combination of commercial and in-house developed devices. The total system includes a Point Grey Research Bumblebee stereo vision camera, a Versalogic PC104 computer, a PCB designed for sensor acquisition and power considerations, and a self-contained battery. The entire system is contained within a 9.5" x 5" x 6.5" aluminum enclosure and weighs approximately 6 lbs. The system is also accompanied by a graphical user interface which displays the geo-referenced data within a 3D virtual environment, providing adequate sensor feedback for a teleoperated unmanned vehicle. This thesis details the design and implementation of the hardware and software included within this system as well as the results of its operation. / Master of Science
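The geo-referencing idea described above amounts to chaining rigid transforms: camera frame to vehicle body frame (fixed mounting), then body frame to a local world frame using the INS attitude and GPS-derived position. A minimal sketch follows; the frame names, NED convention and argument layout are assumptions for illustration, not the system's actual software interface.

```python
import numpy as np

def georeference_points(points_cam, R_cam_to_body, t_cam_in_body,
                        R_body_to_ned, origin_ned):
    """Map stereo-derived 3-D points (N x 3, camera frame) into a local
    north-east-down frame anchored at a GPS-derived origin."""
    points_cam = np.asarray(points_cam, dtype=np.float64)
    # camera frame -> vehicle body frame (fixed lever arm and mounting rotation)
    points_body = points_cam @ R_cam_to_body.T + t_cam_in_body
    # body frame -> local NED frame using INS attitude and GPS position
    return points_body @ R_body_to_ned.T + np.asarray(origin_ned, dtype=np.float64)
```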
37

3-D Point Cloud Generation from Rigid and Flexible Stereo Vision Systems

Short, Nathaniel Jackson 07 January 2010 (has links)
When considering the operation of an Unmanned Aerial Vehicle (UAV) or an Unmanned Ground Vehicle (UGV), problems such as landing site estimation or robot path planning become a concern. Deciding whether an area of terrain has a level enough slope and a wide enough area to land a Vertical Take Off and Landing (VTOL) UAV, or whether an area of terrain is traversable by a ground robot, relies on data gathered from sensors such as cameras. 3-D models, which can be built from data extracted from digital cameras, can help facilitate decision making for such tasks by providing a virtual model of the environment surrounding the system. A stereo vision system utilizes two or more cameras, which capture images of a scene from two or more viewpoints, to create 3-D point clouds. A point cloud is a set of un-gridded 3-D points corresponding to a 2-D image, and is used to build gridded surface models. Designing a stereo system for distant terrain modeling requires an extended baseline, or distance between the two cameras, in order to obtain a reasonable depth resolution. As the width of the baseline increases, so does the flexibility of the system, causing the orientation of the cameras to deviate from their original state. A set of tools has been developed to generate 3-D point clouds from rigid and flexible stereo systems, along with a method for applying corrections to a flexible system to regain distance accuracy. / Master of Science
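The dependence of depth resolution on baseline mentioned above follows directly from the ideal rectified-stereo relation Z = fB/d: a one-pixel disparity error corresponds to a depth error of roughly Z²/(fB). The helper functions below are an illustrative restatement of that relation, not code from the thesis toolset.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Ideal rectified stereo: Z = f * B / d (metres)."""
    return focal_px * baseline_m / disparity_px

def depth_resolution(depth_m, focal_px, baseline_m, disparity_step_px=1.0):
    """Depth change caused by a disparity step: dZ ~ Z**2 / (f * B) * dd.
    The quadratic growth with distance is why distant-terrain modeling
    needs an extended baseline."""
    return (depth_m ** 2) / (focal_px * baseline_m) * disparity_step_px
```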
38

Using Texture Features To Perform Depth Estimation

Kotha, Bhavi Bharat 22 January 2018 (has links)
There is a great need in real-world applications for estimating depth through electronic means without human intervention. There are many methods that help in autonomously obtaining depth measurements, such as LiDAR and radar. One of the most researched topics in the field of depth measurement is computer vision, which applies techniques to 2D images to achieve the desired result. Of the many 3D vision techniques in use, stereo vision is a field where a great deal of research is being done to solve this kind of problem; human vision is an important inspiration for the research performed in this field. Stereo vision gives depth estimates at very high spatial resolution, which are used for obstacle avoidance, path planning, object recognition, etc. Stereo vision makes use of the two images in an image pair, taken with two cameras from different views, which are processed to obtain depth information. Processing stereo images has been one of the most intensively pursued research topics in computer vision. Many factors affect the performance of this approach, such as computational efficiency, depth discontinuities, lighting changes, correspondence and correlation, and electronic noise. An algorithm is proposed which uses texture features obtained using Laws' energy masks and a multi-block approach to perform correspondence matching between the images of a wide-baseline stereo pair. This is followed by forming disparity maps to obtain the relative depth of pixels in the image. An analysis is also made comparing this approach to current state-of-the-art algorithms. A robust method to score and rank stereo algorithms is also proposed; this approach provides a simple way for researchers to rank algorithms according to their application needs. / Master of Science
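Laws' texture energy features, which the abstract uses as the matching descriptor, are conventionally built from small separable 1-D kernels whose outer products are convolved with the image and whose absolute responses are locally averaged. The sketch below shows that standard construction; the kernel subset and energy-window size are illustrative choices, not the thesis's exact configuration.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# Laws' 1-D kernels; the 2-D masks are their outer products.
L5 = np.array([1, 4, 6, 4, 1], dtype=np.float32)     # level
E5 = np.array([-1, -2, 0, 2, 1], dtype=np.float32)   # edge
S5 = np.array([-1, 0, 2, 0, -1], dtype=np.float32)   # spot
R5 = np.array([1, -4, 6, -4, 1], dtype=np.float32)   # ripple

def laws_energy_features(gray, energy_win=15):
    """Per-pixel stack of Laws' texture-energy maps for a grayscale
    image; such descriptors can then be compared between the left and
    right images during correspondence matching."""
    gray = gray.astype(np.float32)
    kernels = [L5, E5, S5, R5]
    features = []
    for a in kernels:
        for b in kernels:
            response = convolve(gray, np.outer(a, b))   # filter with one 5x5 mask
            features.append(uniform_filter(np.abs(response), size=energy_win))  # local energy
    return np.stack(features, axis=-1)   # shape (H, W, 16)
```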
39

Erdvinio vaizdo algoritmų palyginimas / Comparison of stereo vision algorithms

Abramovich, Alexander 17 July 2014 (has links)
Computer stereo vision is the extraction of spatial information from digital images. This field of science is rather new and its popularity is growing rapidly. Computer stereo vision is applied in robotics, manufacturing, everyday life and other spheres. The aim of the thesis is to analyse and compare stereo vision algorithms. In order to achieve this aim, the following tasks are set: to classify the stereo vision algorithms, to review the methods by which they are constructed, to create a method for assessing stereo vision algorithms, and to assess the stereo vision algorithms using this method. Based on the academic works of other authors, the stereo vision algorithms are classified by their principle of operation into two groups: local and global. Several algorithms were selected from each group for the study: a basic local algorithm with different mathematical solutions from the local group, and dynamic programming from the global group. To compare these algorithms, an assessment methodology was created, the main criteria of which are the correlation coefficient and the running time of the algorithm. All of the listed algorithms were tested according to the assessment methodology, and the best results were selected on the basis of the methodology and the test results. The results of the experimental research show that the tested algorithms are not perfect, but they are nevertheless suitable for use, albeit with certain limitations. The imperfection of the algorithms also shows that... [to full text]
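The assessment method described above scores each algorithm on two criteria: the correlation coefficient between its disparity map and the ground truth, and its running time. A minimal sketch of such an evaluation harness is given below; the function names and the handling of invalid pixels are assumptions, not the thesis's actual code.

```python
import time
import numpy as np

def evaluate_algorithm(stereo_fn, left, right, ground_truth):
    """Return (correlation coefficient, running time in seconds) for a
    stereo algorithm, where `stereo_fn` maps an image pair to a
    disparity map of the same shape as `ground_truth`."""
    start = time.perf_counter()
    disparity = stereo_fn(left, right)
    elapsed = time.perf_counter() - start
    valid = ground_truth > 0   # skip pixels with no ground-truth disparity
    corr = np.corrcoef(disparity[valid], ground_truth[valid])[0, 1]
    return corr, elapsed
```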
40

A Novel Approach for Spherical Stereo Vision / Ein Neuer Ansatz für Sphärisches Stereo Vision

Findeisen, Michel 27 April 2015 (has links) (PDF)
The Professorship of Digital Signal Processing and Circuit Technology of Chemnitz University of Technology conducts research in the field of three-dimensional space measurement with optical sensors. In recent years this field has made major progress. For example, innovative active techniques such as the “structured light” principle are able to measure even homogeneous surfaces and are currently finding their way into the consumer electronics market, for instance in Microsoft’s Kinect®. Furthermore, high-resolution optical sensors enable powerful, passive stereo vision systems in the field of indoor surveillance, and thereby open up new application domains such as security and assistance systems for domestic environments. However, the constrained field of view remains an essential limitation of all these technologies. For instance, in order to measure a volume the size of a living room, two to three 3D sensors currently have to be deployed. This is due to the fact that the commonly utilized perspective projection principle constrains the visible area to a field of view of approximately 120°. In contrast, novel fish-eye lenses allow the realization of omnidirectional projection models, with which the visible field of view can be enlarged to more than 180°. In combination with a 3D measurement approach, the number of sensors required for entire room coverage can thus be reduced considerably. Motivated by the requirements of indoor surveillance, the present work focuses on the combination of the established stereo vision principle and omnidirectional projection methods. The complete 3D measurement of a living space by means of one single sensor is the major objective. As a starting point, Chapter 1 discusses the underlying requirements, referring to various relevant fields of application, and on this basis states the specific purpose of the present work. Chapter 2 then reviews the necessary mathematical foundations of computer vision. Based on the geometry of the optical imaging process, the projection characteristics of the relevant principles are discussed and a generic method for modeling fish-eye cameras is selected. Chapter 3 deals with the extraction of depth information using classical (perspectively imaging) binocular stereo vision configurations. In addition to a complete recap of the processing chain, the measurement uncertainties that occur are investigated in particular. Chapter 4 then addresses methods for converting between different projection models. The example of mapping an omnidirectional to a perspective projection is used in order to develop a method for accelerating this process and thereby reducing the associated computational load. The errors that occur, as well as the necessary adjustment of image resolution, are an integral part of the investigation. As a practical example, a person-tracking application is used to demonstrate to what extent the use of “virtual views” can increase the recognition rate of people detectors in the context of omnidirectional monitoring. Subsequently, an extensive survey of omnidirectional stereo vision techniques is conducted in Chapter 5. It turns out that the complete 3D capture of a room is achievable by generating a hemispherical depth map; to this end, three cameras have to be combined to form a trinocular stereo vision system.
As a basis for further research, a known trinocular stereo vision method is selected. Furthermore, it is hypothesized that the performance can be increased considerably by applying a modified geometric constellation of cameras, more precisely in the form of an equilateral triangle, and by using an alternative method to determine the depth map. A novel method is presented which requires fewer operations to calculate the distance information and which avoids the computationally costly depth-map fusion step needed in the comparative method. In order to evaluate the presented approach as well as the hypotheses, a hemispherical depth map is generated in Chapter 6 by means of the new method. Simulation results, based on artificially generated 3D space information and realistic system parameters, are presented and subjected to a subsequent error estimate. A demonstrator for generating real measurement information is introduced in Chapter 7, together with the methods applied for calibrating the system both intrinsically and extrinsically. It turns out that the calibration procedure utilized cannot estimate the extrinsic parameters sufficiently. Initial measurements produce a hemispherical depth map and thus confirm the operativeness of the concept, but also identify the drawbacks of the calibration used. The current implementation of the algorithm shows almost real-time behaviour. Finally, Chapter 8 summarizes the results obtained in the course of the studies and discusses them in the context of comparable binocular and trinocular stereo vision approaches. For example, the simulations carried out show a saving of up to 30% in stereo correspondence operations in comparison with the reference trinocular method. Furthermore, the concept introduced avoids a weighted averaging step for depth map fusion based on precision values that would have to be calculated at considerable cost, while the achievable accuracy remains comparable for both trinocular approaches. In summary, it can be stated that, in the context of the present thesis, a measurement system has been developed which has great potential for future application fields in industry, security in public spaces, as well as home environments.
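The omnidirectional projection models discussed above replace the perspective pinhole mapping with a radially symmetric function of the angle from the optical axis; the equidistant model r = f·θ is one common generic choice for lenses whose field of view reaches or exceeds 180°. The sketch below projects camera-frame points with that generic model purely for illustration; it is not the specific fish-eye calibration used in the thesis.

```python
import numpy as np

def equidistant_fisheye_project(points_cam, f, cx, cy):
    """Project camera-frame 3-D points (N x 3) to pixels with the
    equidistant fish-eye model r = f * theta."""
    X, Y, Z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    theta = np.arctan2(np.hypot(X, Y), Z)   # angle from the optical axis (valid beyond 90 deg)
    phi = np.arctan2(Y, X)                  # azimuth around the axis
    r = f * theta
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=-1)
```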
