  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Analysis of independent motion detection in 3D scenes

Floren, Andrew William 30 October 2012 (has links)
In this thesis, we develop an algorithm for detecting independent motion in real time from 2D image sequences of arbitrarily complex 3D scenes. We cover the necessary background in image formation, optical flow, multiple-view geometry, robust estimation, and real-time camera and scene pose estimation needed to construct and understand the operation of our algorithm. We also provide an overview of existing independent motion detection techniques and compare them to our proposed solution. Because the existing techniques were not evaluated quantitatively and their source code was not made publicly available, direct comparisons are not possible; instead, we constructed several comparison algorithms whose performance should be comparable to these previous approaches. We developed methods for quantitatively comparing independent motion detection algorithms and found that our solution performed best. By establishing a method for quantitatively evaluating these algorithms and publishing our results, we hope to foster better research in this area and help future investigators advance the state of the art more quickly.
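The abstract does not spell out the detection criterion, but a common way to flag independent motion from 2D correspondences, consistent with the optical-flow and multiple-view-geometry background it cites, is to test each tracked point against the epipolar geometry induced by the camera's ego-motion. A minimal numpy sketch under that assumption (the Sampson error metric and the threshold are illustrative, not the thesis's algorithm):

```python
import numpy as np

def sampson_distance(F, x1, x2):
    """Sampson approximation of the epipolar error for each correspondence.

    F  : 3x3 fundamental matrix of the camera's ego-motion
    x1 : Nx2 points in frame t, x2 : Nx2 matched points in frame t+1
    """
    h1 = np.hstack([x1, np.ones((len(x1), 1))])  # homogeneous coordinates
    h2 = np.hstack([x2, np.ones((len(x2), 1))])
    Fx1 = h1 @ F.T            # epipolar lines in image 2
    Ftx2 = h2 @ F             # epipolar lines in image 1
    num = np.sum(h2 * Fx1, axis=1) ** 2          # (x2^T F x1)^2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den

def flag_independent_motion(F, x1, x2, thresh=2.0):
    """Points whose image motion violates the ego-motion epipolar constraint."""
    return sampson_distance(F, x1, x2) > thresh
```

Static scene points satisfy the constraint up to measurement noise; independently moving points produce large residuals and are flagged.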
42

Robust Self-Calibration and Fundamental Matrix Estimation in 3D Computer Vision

Rastgar, Houman 30 September 2013 (has links)
Recent advances in the field of computer vision have brought many laboratory algorithms into the realm of industry. However, one problem that remains open in the field of 3D vision is the problem of noise. The challenging problem of recovering 3D structure from images is highly sensitive to input data contaminated by errors that do not conform to ideal assumptions. Tackling the problem of extreme data, or outliers, has led to many robust methods in the field that can handle moderate outlier levels and still provide accurate outputs. The problem remains open, however, especially at higher noise levels, and so the goal of this thesis is to address the issue of robustness with respect to two central problems in 3D computer vision. The two problems are highly related and are presented together within a Structure from Motion (SfM) context. The first is robustly estimating the fundamental matrix from images whose correspondences contain high outlier levels. Although this area has been extensively studied, two algorithms are proposed that significantly speed up the computation of the fundamental matrix and achieve accurate results in scenarios containing more than 50% outliers. The presented algorithms draw on ideas from robust statistics to develop guided sampling techniques informed by residual analysis. The second problem addressed in this thesis is the robust estimation of camera intrinsic parameters from fundamental matrices, or self-calibration. Self-calibration algorithms are notoriously unreliable in general settings, and it is shown that existing methods are highly sensitive to noise. In spite of this, robustness in self-calibration has received little attention in the literature. Experimental results show that a real-world self-calibration algorithm must be robust.
To introduce robustness to the existing methods, three robust algorithms are proposed that utilize existing constraints for self-calibration from the fundamental matrix; the resulting algorithms are less affected by noise than existing algorithms based on the same constraints. This is an important milestone, since self-calibration offers many possibilities by providing estimates of camera parameters without requiring access to the image acquisition device. The proposed algorithms rely on perturbation theory, guided sampling methods, and a robust root-finding method for systems of higher-order polynomials. By adding robustness to self-calibration, it is hoped that the idea moves one step closer to being a practical method of camera calibration rather than merely a theoretical possibility.
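The robust fundamental-matrix estimation the thesis describes builds on sampling-based consensus. The sketch below is a plain RANSAC baseline around the normalized 8-point algorithm; the thesis's guided sampling would replace the uniform `rng.choice` with residual-informed weights. All names and thresholds here are illustrative assumptions, not the author's code:

```python
import numpy as np

def normalize_points(pts):
    """Hartley normalization: zero centroid, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    h = np.hstack([pts, np.ones((len(pts), 1))])
    return (h @ T.T)[:, :2], T

def eight_point(x1, x2):
    """Normalized 8-point estimate of F from >= 8 correspondences."""
    n1, T1 = normalize_points(x1)
    n2, T2 = normalize_points(x2)
    A = np.column_stack([
        n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
        n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
        n1[:, 0], n1[:, 1], np.ones(len(n1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, s, Vt = np.linalg.svd(F)                 # enforce rank 2
    F = U @ np.diag([s[0], s[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                           # undo normalization
    return F / np.linalg.norm(F)                # fix the scale for thresholding

def ransac_fundamental(x1, x2, iters=500, thresh=1e-3, rng=None):
    """Uniform-sampling RANSAC; guided sampling would bias the draw below."""
    rng = rng or np.random.default_rng(0)
    best_F, best_inliers = None, np.zeros(len(x1), bool)
    h1 = np.hstack([x1, np.ones((len(x1), 1))])
    h2 = np.hstack([x2, np.ones((len(x2), 1))])
    for _ in range(iters):
        idx = rng.choice(len(x1), 8, replace=False)
        F = eight_point(x1[idx], x2[idx])
        resid = np.abs(np.sum(h2 * (h1 @ F.T), axis=1))  # algebraic error
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_F, best_inliers = F, inliers
    return best_F, best_inliers
```

On noiseless synthetic data with a third of the correspondences replaced by gross outliers, the consensus set recovers the true inlier/outlier split.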
43

Comparing Photogrammetric and Spectral Depth Techniques in Extracting Bathymetric Data from a Gravel-Bed River

Shintani, Christina 27 October 2016 (has links)
Recent advances in through-water photogrammetry and optical imagery indicate that accurate, continuous bathymetric mapping may be possible in shallow, clear streams. This research directly compares the ability of through-water photogrammetry and spectral depth approaches to extract water depth for monitoring fish habitat. Imagery and cross sections were collected on a 140-meter reach of the Salmon River, Oregon, using an unmanned aerial vehicle (UAV) and RTK-GPS. Structure-from-Motion (SfM) software produced a digital elevation model (DEM) at 1.5 cm resolution and an orthophoto at 0.37 cm resolution. The photogrammetric approach of applying a site-specific refractive index provided the most accurate (mean error 0.009 m) and most precise (standard deviation of error 0.17 m) bathymetric data (R² = 0.67), outperforming both the spectral depth approach and the fixed 1.34 refractive index approach. This research provides a quantitative comparison between and within bathymetric mapping methods, and suggests that a site-specific refractive index may be appropriate for similar gravel-bed, relatively shallow, clear streams.
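The refractive-index correction compared above follows from Snell's law: under a small-angle, near-nadir assumption, refraction at the air-water interface makes the bed appear shallower by a factor of roughly 1/n, so true depth is recovered by scaling the apparent depth. A one-function sketch under that assumption (the function name and the tuned site-specific value are hypothetical, not the study's code):

```python
def correct_depth(apparent_depth_m, refractive_index=1.34):
    """Scale apparent (through-water) depth to true depth.

    Under a small-angle approximation for a near-nadir camera, refraction
    makes the bed appear shallower by ~1/n, so true depth = n * apparent.
    refractive_index: 1.34 for clear water; the study above tunes a
    site-specific value instead of using this fixed constant.
    """
    return refractive_index * apparent_depth_m

# apparent depths from an SfM DEM (water-surface elevation minus bed elevation)
apparent = [0.10, 0.25, 0.40]
true_depths = [correct_depth(d) for d in apparent]  # roughly [0.134, 0.335, 0.536]
```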
44

Modelo para reconstrução 3D de cenas baseado em imagens / An image-based model for 3D scene reconstruction

Marro, Alessandro Assi 22 December 2014 (has links)
3D reconstruction is the process of obtaining a detailed three-dimensional graphical model of a target scene. The process uses sequences of images of the scene, from which depth information for characteristic points (features) can be extracted automatically; these points are detected by applying a computational technique to the images in the dataset. Using SURF (Speeded-Up Robust Features) feature points, this work proposes a model for obtaining 3D information about the keypoints detected by the system. Applied to an image sequence, the proposed system yields three important outputs: the 3D positions of the feature points; the relative rotation and translation matrices between images; and a study relating the baseline between adjacent images to the accuracy error of the recovered 3D points. Implementation results are shown and are consistent. The proposed system also follows free-software restrictions, which is a significant contribution to this application area.
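The pipeline this abstract describes, matched feature points plus relative rotation and translation between views, recovers 3D positions by triangulation. A minimal linear (DLT) triangulation sketch in numpy, as a hedged illustration rather than the dissertation's code; the baseline/accuracy relationship it studies shows up here as wider baselines giving better-conditioned systems:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence.

    P1, P2 : 3x4 projection matrices of the two views
    x1, x2 : matched image coordinates (2-vectors)
    Solves A X = 0, where each image contributes two rows.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]      # dehomogenize
```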
45

MARRT Pipeline: Pipeline for Markerless Augmented Reality Systems Based on Real-Time Structure from Motion

Paulo Gomes Neto, Severino 31 January 2009 (has links)
Today, with increasing computational power and advances in usability, real-time systems, and photorealism, the requirements of any computer system are more complex and sophisticated. Augmented reality systems are no exception in their attempt to solve the user's real-life problems with a reduced level of risk, time spent, or learning complexity. Such systems can be classified as marker-based or markerless. The essential role of markerless augmented reality is to avoid the unnecessary and undesirable use of markers in applications. To meet the demand for robust, non-intrusive augmented reality technologies, this dissertation proposes an execution pipeline for the development of markerless augmented reality applications, especially those based on real-time structure-from-motion recovery.
46

Models and methods for geometric computer vision

Kannala, J. (Juho) 27 April 2010 (has links)
Abstract Automatic three-dimensional scene reconstruction from multiple images is a central problem in geometric computer vision. This thesis considers topics that are related to this problem area. New models and methods are presented for various tasks in such specific domains as camera calibration, image-based modeling and image matching. In particular, the main themes of the thesis are geometric camera calibration and quasi-dense image matching. In addition, a topic related to the estimation of two-view geometric relations is studied, namely, the computation of a planar homography from corresponding conics. Further, as an example of a reconstruction system, a structure-from-motion approach is presented for modeling sewer pipes from video sequences. In geometric camera calibration, the thesis concentrates on central cameras. A generic camera model and a plane-based camera calibration method are presented. The experiments with various real cameras show that the proposed calibration approach is applicable for conventional perspective cameras as well as for many omnidirectional cameras, such as fish-eye lens cameras. In addition, a method is presented for the self-calibration of radially symmetric central cameras from two-view point correspondences. In image matching, the thesis proposes a method for obtaining quasi-dense pixel matches between two wide baseline images. The method extends the match propagation algorithm to the wide baseline setting by using an affine model for the local geometric transformations between the images. Further, two adaptive propagation strategies are presented, where local texture properties are used for adjusting the local transformation estimates during the propagation. These extensions make the quasi-dense approach applicable for both rigid and non-rigid wide baseline matching. 
In this thesis, quasi-dense matching is additionally applied for piecewise image registration problems which are encountered in specific object recognition and motion segmentation. The proposed object recognition approach is based on grouping the quasi-dense matches between the model and test images into geometrically consistent groups, which are supposed to represent individual objects, whereafter the number and quality of grouped matches are used as recognition criteria. Finally, the proposed approach for dense two-view motion segmentation is built on a layer-based segmentation framework which utilizes grouped quasi-dense matches for initializing the motion layers, and is applicable under wide baseline conditions.
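The plane-based calibration mentioned in this abstract typically begins by estimating homographies between a planar calibration target and its images; in Zhang-style calibration each such homography then contributes constraints on the intrinsic parameters. A minimal numpy sketch of the homography-estimation step (a standard DLT, offered as an illustration and not the thesis's implementation):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H src from >= 4 point pairs.

    src, dst : sequences of (x, y) pairs in the two images/planes.
    Each correspondence contributes two rows to the homogeneous system.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]       # fix the arbitrary scale
```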
47

Modeling of structured 3-D environments from monocular image sequences

Repo, T. (Tapio) 08 November 2002 (has links)
Abstract The purpose of this research has been to show, with applications, that polyhedral scenes can be modeled in real time with a single video camera, sometimes very efficiently without any special image-processing hardware. The developed vision sensor estimates its three-dimensional position with respect to the environment and models the environment simultaneously. Estimates become recursively more accurate as objects are approached and observed from different viewpoints. The modeling process starts by extracting interesting tokens, such as lines and corners, from the first image. Those features are then tracked in subsequent image frames; previously taught patterns can also be used in tracking. Only a few features are extracted per image, so processing can run at video frame rate. Newly appearing features can also be added to the environment structure. Kalman filtering is used for estimation. The motion-estimation parameters are location and orientation and their first derivatives. The environment is treated as rigid with respect to the camera, and the environment structure consists of the 3D coordinates of the tracked features. The initial model lacks depth information. Relative depth is obtained by exploiting the fact that, under translational motion, closer points move faster on the image plane than more distant ones; additional information is needed to obtain absolute coordinates. Special attention has been paid to modeling uncertainties: measurements with high uncertainty receive less weight when updating the motion and environment model, and the rigidity assumption is exploited by using thin, pencil-shaped regions for the initial structure uncertainties. By continuously observing motion uncertainties, the performance of the modeler can be monitored. In contrast to the usual solution, the estimates are kept in separate state vectors, which allows motion and 3D structure to be estimated asynchronously.
In addition to yielding a more distributed solution, this technique provides an efficient failure-detection mechanism: several trackers can estimate motion simultaneously, and only those with the most confident estimates are allowed to update the common environment model. Tests showed that motion with six degrees of freedom can be estimated in an unknown environment while the 3D structure of the environment is estimated simultaneously. The achieved accuracy was at the millimeter level at distances of 1-2 meters in tests with simple toy scenes and more demanding industrial pallet scenes. This is sufficient for manipulating objects when the modeler is used to provide visual feedback.
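The recursive estimation scheme described above can be illustrated with the simplest possible Kalman filter: a constant-velocity model observing one tracked coordinate, whose state covariance shrinks as measurements accumulate. A toy numpy sketch (the noise parameters are illustrative, not the thesis's values):

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-4, r=0.04):
    """One predict/update cycle of a constant-velocity Kalman filter.

    x : state [position, velocity], P : 2x2 state covariance,
    z : noisy position measurement (e.g. a tracked feature coordinate).
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
    Q = q * np.eye(2)                       # process noise
    H = np.array([[1.0, 0.0]])              # we observe position only
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + r                     # innovation covariance
    K = P @ H.T / S                         # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Feeding the filter a stream of noisy positions drives the covariance trace down and recovers the unobserved velocity, mirroring the "recursively more accurate" behavior described above.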
48

Local and global methods for registering 2D image sets and 3D point clouds / Méthodes d'optimisation locales et globales pour le recalage d'images 2D et de nuages de points 3D

Paudel, Danda Pani 10 December 2015 (has links)
In this thesis, we study the problem of registering 2D image sets and 3D point clouds under three different acquisition set-ups. The first set-up assumes that the image sets are captured using 2D cameras that are fully calibrated and coupled, or rigidly attached, to a 3D sensor. In this context, the point cloud from the 3D sensor is registered directly to the asynchronously acquired 2D images. In the second set-up, the 2D cameras are internally calibrated but uncoupled from the 3D sensor, allowing them to move independently with respect to each other. Registration in this set-up is performed using a Structure-from-Motion reconstruction emanating from the images and planar patches representing the point cloud. The proposed registration method is globally optimal and robust to outliers; it is based on the theory of Sum-of-Squares polynomials and a Branch-and-Bound algorithm. The third set-up consists of uncoupled and uncalibrated 2D cameras. The image sets from these cameras are registered to the point cloud in a globally optimal manner using a Branch-and-Prune algorithm. Our method is based on a Linear Matrix Inequality framework that establishes direct relationships between 2D image measurements and 3D scene voxels.
49

3D Cave and Ice Block Morphology from Integrated Geophysical Methods: A Case Study at Scărişoara Ice Cave, Romania

Hubbard, Jackson Durain 24 March 2017 (has links)
Scărişoara Ice Cave has been a catalyst of scientific intrigue and effort for over 150 years. These efforts have revealed and described countless natural phenomena, and in the process have made it one of the most studied caves in the world. Of special interest is the massive ice block located within its Great Hall and scientific reservations. The ice block, which is the oldest and largest known to exist in a cave, has been the focus of multiple surveying and mapping efforts, typically using traditional equipment. In this study, the goals were to reconstruct the ice block/cave floor interface and to estimate the volume of the ice block; once the models were constructed, we aimed to study the relationships between the cave and ice block morphologies. To accomplish this goal, three main datasets were collected, processed, and amalgamated. Ground-penetrating radar data were used to discern the floor morphology below the ice block. Over 1,500 photographs were collected in the cave and used with Structure from Motion photogrammetry software to construct a texturized 3D model of the cave and ice surfaces. A total station survey was performed to scale, georeference, and validate each model. Once georeferenced, the data were imported into an ArcGIS geodatabase for further analysis. The methodology described within this study provides a powerful set of instructions for producing highly valuable scientific data, especially related to caves. Here, we describe in detail the novel tools and software used to validate, inspect, manipulate, and measure morphological information while immersed in a fully 3D experience. With this methodology, it is possible to easily and inexpensively create digital elevation models of underground rooms and galleries, to measure the differences between surfaces, to create 3D models from the combination of surfaces, and to intimately inspect a subject area without actually being there.
At the culmination of these efforts, the partial ice block volume was estimated to be 118,000 m³ with an uncertainty of ±9.5%. The volume computed herein is significantly larger than previously thought, and the total volume is likely larger still, since certain portions were not modeled during this study. In addition, the morphology of ceiling enlargement was linked to areas of high elevation at the base of the ice block; a counterintuitive depression was recognized at the base of the Entrance Shaft; and the thickest areas of the ice were identified for future coring projects. Combining all of this new information allowed us to propose a new theory on the formation of the ice block and to decipher particular speleogenetic aspects.
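The volume estimate described above combines a GPR-derived floor surface with an SfM-derived ice surface. With both surfaces gridded on a common raster, the computation reduces to summing cell-wise thickness; a small sketch under that assumption (not the study's actual GIS workflow, and the NaN handling for unmapped cells is an assumption):

```python
import numpy as np

def volume_between_surfaces(top, bottom, cell_area_m2):
    """Volume enclosed between two gridded surfaces on the same raster.

    top, bottom : 2D arrays of elevations (m), e.g. an SfM ice-surface DEM
    and a GPR-derived cave-floor DEM; NaN marks unmapped cells, which are
    excluded (one source of the 'partial volume' caveat above).
    """
    thickness = top - bottom
    # drop unmapped cells and clamp any negative (crossing) cells to zero
    thickness = np.where(np.isnan(thickness), 0.0,
                         np.clip(thickness, 0.0, None))
    return float(thickness.sum() * cell_area_m2)
```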
50

Asservissement visuel coordonné de deux bras manipulateurs / Coordinated visual servoing of two manipulator arms

Fleurmond, Renliw 17 December 2015 (has links)
We address the problem of coordinating a dual-arm robot using one or several cameras. After surveying the control techniques dedicated to this problem, our first contribution is a formalism based on 2D (image-based) visual servoing that exploits the images provided by one or several cameras, on-board or off-board, to coordinate the motions of a multi-arm robotic system. The formalism further exploits the natural redundancy of this type of system to take into account additional constraints, and we develop a control strategy that performs a coordinated manipulation task while avoiding joint limits and the loss of visual features. To go further and tolerate occlusions, we propose approaches for reconstructing the structure of the manipulated objects and hence the visual features that characterize them. Finally, we validate the work in simulation and experimentally by making the dual-arm PR2 robot recap a pen.
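The image-based visual servoing formalism this abstract builds on drives the feature error to zero with the classical law v = -λ L⁺ (s - s*). A minimal numpy sketch of that law (the null-space projection used for secondary tasks such as joint-limit avoidance, as in the thesis, is omitted for brevity):

```python
import numpy as np

def ibvs_velocity(L, s, s_star, lam=0.5):
    """Classical image-based visual servoing law: v = -lambda * L^+ (s - s*).

    L      : interaction matrix (image Jacobian), n_features x 6
    s      : current visual features, s_star : desired visual features
    Returns a 6-dof velocity twist for the camera/arm. Secondary tasks
    would be added through the null-space projector I - L^+ L.
    """
    e = s - s_star                      # feature error
    return -lam * np.linalg.pinv(L) @ e
```

Applied in closed loop, this makes the feature error decay exponentially when the interaction matrix is well estimated.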
