41 |
SAR remote sensing of soil moisture. Snapir, Boris, 12 1900.
Synthetic Aperture Radar (SAR) has been identified as a good candidate to
provide high-resolution soil moisture information over extended areas. SAR data
could be used as observations within a global Data Assimilation (DA) approach
to benefit applications such as hydrology and agriculture. Prior to developing an
operational DA system, one must tackle the following challenges of soil moisture
estimation with SAR: (1) the dependency of the measured radar signal on both soil
moisture and soil surface roughness, which leads to an ill-conditioned inverse problem, and (2) the difficulty of characterizing, in space and time, the surface roughness of natural soils and its scattering contribution.
The objectives of this project are (1) to develop a roughness measurement method
to improve the spatial/temporal characterization of soil surface roughness, and (2)
to investigate to what extent the inverse problem can be solved by combining multi-polarization, multi-incidence, and/or multi-frequency radar measurements.
The first objective is achieved with a measurement method based on Structure
from Motion (SfM). It is tailored to monitoring changes in natural surface roughness, which have often been assumed negligible, although without evidence.
The measurement method is flexible, affordable, and straightforward, and it generates Digital Elevation Models (DEMs) for a SAR-pixel-size plot with mm accuracy. A
new processing method based on band-filtering of the DEM and its 2D Power Spectral
Density (PSD) is proposed to compute the classical roughness parameters. Time
series of DEMs show that non-negligible changes in surface roughness can happen
within two months at scales relevant for microwave scattering.
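The classical roughness parameters referred to above are typically the RMS height and the correlation length. As a hedged illustration only (not the thesis's exact band-filtering/PSD workflow), the sketch below computes both from a gridded DEM with NumPy: the surface is detrended with a least-squares plane, the RMS height is the standard deviation of the residuals, and the correlation length is read from the normalized autocorrelation (obtained from the 2D PSD via the Wiener-Khinchin relation) at the 1/e threshold. Grid spacing and array names are assumptions.

```python
import numpy as np

def roughness_parameters(dem, dx):
    """RMS height and correlation length of a gridded DEM patch.

    dem : 2D array of heights (m), regularly gridded
    dx  : grid spacing (m)
    Returns (rms_height_m, correlation_length_m).
    """
    ny, nx = dem.shape
    # Detrend: remove the best-fit plane so large-scale topography
    # does not inflate the roughness estimate.
    y, x = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
    coeffs, *_ = np.linalg.lstsq(A, dem.ravel(), rcond=None)
    residual = dem - (A @ coeffs).reshape(dem.shape)

    s = residual.std()  # RMS height

    # 2D PSD via the FFT; the inverse FFT of the PSD gives the
    # autocorrelation (Wiener-Khinchin theorem).
    psd = np.abs(np.fft.fft2(residual)) ** 2
    acf = np.real(np.fft.ifft2(psd))
    acf /= acf[0, 0]                    # normalize so that zero lag = 1

    # 1D profile of the ACF along x; correlation length is the first lag
    # where the ACF drops below 1/e.
    profile = acf[0, : nx // 2]
    below = np.where(profile < 1.0 / np.e)[0]
    l = below[0] * dx if below.size else np.nan
    return s, l

# Example with a synthetic rough surface (assumed 1 cm grid spacing).
rng = np.random.default_rng(0)
dem = rng.normal(scale=0.01, size=(256, 256))
print(roughness_parameters(dem, dx=0.01))
```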
The second objective is achieved using maximum likelihood fitting of the Oh
backscattering model to (1) full-polarimetric Radarsat-2 data and (2) simulated
multi-polarization / multi-incidence / multi-frequency radar data.
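As an illustration of the fitting machinery only (the forward model below is a deliberately simplified, hypothetical stand-in, not the Oh model equations), the sketch assumes Gaussian noise on the backscatter coefficients in dB, so maximum likelihood reduces to weighted least squares over soil moisture and roughness, solved with scipy.optimize.minimize.

```python
import numpy as np
from scipy.optimize import minimize

def forward_model(mv, ks, theta):
    """Hypothetical stand-in for a backscattering model such as Oh (1992).

    Returns simulated (sigma0_vv, sigma0_hh, sigma0_hv) in dB as smooth,
    monotonic functions of soil moisture mv and roughness ks. The real
    model equations would be substituted here.
    """
    svv = -20.0 + 25.0 * mv + 4.0 * (1 - np.exp(-ks)) - 5.0 * np.sin(theta)
    shh = svv - 1.5 * (1 - np.exp(-ks))
    shv = svv - 10.0 + 3.0 * (1 - np.exp(-ks))
    return np.array([svv, shh, shv])

def neg_log_likelihood(params, obs_db, theta, sigma_db):
    """Gaussian negative log-likelihood in dB space (up to a constant)."""
    mv, ks = params
    residual = obs_db - forward_model(mv, ks, theta)
    return 0.5 * np.sum((residual / sigma_db) ** 2)

def retrieve(obs_db, theta, sigma_db=1.0):
    """Maximum-likelihood estimate of (soil moisture, roughness)."""
    result = minimize(
        neg_log_likelihood,
        x0=np.array([0.2, 1.0]),               # initial guess
        args=(obs_db, theta, sigma_db),
        bounds=[(0.02, 0.45), (0.1, 3.0)],      # plausible physical ranges
        method="L-BFGS-B",
    )
    return result.x

# Simulated observation: truth mv=0.30, ks=1.2, incidence 35 degrees,
# plus 1 dB of speckle-induced noise after multilooking.
theta = np.deg2rad(35.0)
obs = forward_model(0.30, 1.2, theta) + np.random.default_rng(1).normal(scale=1.0, size=3)
print(retrieve(obs, theta))
```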
Model fitting with the Radarsat-2 images leads to poor soil moisture retrieval
which is related to inaccuracy of the Oh model. Model fitting with the simulated data quantifies the amount of multilooking needed, for different combinations of measurements, to mitigate the critical effect of speckle on soil moisture uncertainty.
Results also suggest that dual-polarization measurements at L- and C-bands are a
promising combination to achieve the observation requirements of soil moisture.
In conclusion, the SfM method, along with the recommended processing techniques, is a good candidate to improve the characterization of surface roughness. A
combination of multi-polarization and multi-frequency radar measurements appears
to be a robust basis for a future Data Assimilation system for global soil moisture
monitoring.
|
42 |
Analysis of independent motion detection in 3D scenes. Floren, Andrew William, 30 October 2012.
In this thesis, we develop an algorithm for detecting independent motion in real time from 2D image sequences of arbitrarily complex 3D scenes. We discuss the necessary background in image formation, optical flow, multiple view geometry, robust estimation, and real-time camera and scene pose estimation for constructing and understanding the operation of our algorithm. Furthermore, we provide an overview of existing independent motion detection techniques and compare them to our proposed solution. Unfortunately, the existing techniques were not evaluated quantitatively, nor was their source code made publicly available, so direct comparisons are not possible. Instead, we constructed several comparison algorithms that should have performance comparable to these previous approaches. We developed methods for quantitatively comparing independent motion detection algorithms and found that our solution performed best. By establishing a method for quantitatively evaluating these algorithms and publishing our results, we hope to foster better research in this area and help future investigators more quickly advance the state of the art.
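For context, one standard geometric cue for independent motion (a common baseline, not necessarily the thesis's algorithm) is the epipolar constraint: after robustly estimating the fundamental matrix between consecutive frames from tracked points, static-scene points should satisfy it, while independently moving points show large residuals. A minimal OpenCV/NumPy sketch, with hypothetical inputs pts_prev and pts_curr:

```python
import numpy as np
import cv2

def detect_independent_motion(pts_prev, pts_curr, threshold_px=2.0):
    """Flag correspondences that violate the two-view epipolar geometry.

    pts_prev, pts_curr : (N, 2) float32 arrays of matched pixel coordinates
    Returns a boolean mask, True where the motion is inconsistent with a
    static scene viewed by a moving camera (candidate independent motion).
    """
    # Robust F estimate; RANSAC treats moving points as outliers as long
    # as the static background dominates the correspondences.
    F, _ = cv2.findFundamentalMat(pts_prev, pts_curr, cv2.FM_RANSAC, 1.0, 0.999)

    # Sampson distance of each correspondence to the epipolar geometry.
    ones = np.ones((len(pts_prev), 1))
    x1 = np.hstack([pts_prev, ones])          # homogeneous coords, frame t-1
    x2 = np.hstack([pts_curr, ones])          # homogeneous coords, frame t
    Fx1 = x1 @ F.T                            # epipolar lines in frame t
    Ftx2 = x2 @ F                             # epipolar lines in frame t-1
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2
    sampson = num / den

    return sampson > threshold_px ** 2

# Hypothetical usage with points tracked between two frames, e.g. by
# cv2.calcOpticalFlowPyrLK:
# moving_mask = detect_independent_motion(pts_prev, pts_curr)
```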
|
43 |
Robust Self-Calibration and Fundamental Matrix Estimation in 3D Computer Vision. Rastgar, Houman, 30 September 2013.
The recent advances in the field of computer vision have brought many laboratory algorithms into the realm of industry. However, one problem that remains open in 3D vision is the problem of noise: the challenging task of recovering 3D structure from images is highly sensitive to input data contaminated by errors that do not conform to ideal assumptions. Tackling the problem of extreme data, or outliers, has led to many robust methods that can handle moderate outlier levels and still provide accurate outputs. The problem nevertheless remains open, especially at higher noise levels, and so the goal of this thesis is to address robustness with respect to two central, closely related problems in 3D computer vision, presented within a Structure from Motion (SfM) context.
The first is the robust estimation of the fundamental matrix from images whose correspondences contain high outlier levels. Even though this area has been extensively studied, two algorithms are proposed that significantly speed up the computation of the fundamental matrix and achieve accurate results in scenarios containing more than 50% outliers. The presented algorithms rely on ideas from robust statistics to develop guided sampling techniques driven by information inferred from residual analysis.
The second problem addressed in this thesis is the robust estimation of camera intrinsic parameters from fundamental matrices, or self-calibration. Self-calibration algorithms are notoriously unreliable in general settings, and it is shown that existing methods are highly sensitive to noise. In spite of this, robustness in self-calibration has received little attention in the literature. Experimental results show that robustness is essential for a real-world self-calibration algorithm. To introduce robustness into the existing methods, three robust algorithms are proposed that utilize existing constraints for self-calibration from the fundamental matrix, yet are less affected by noise than existing algorithms based on these constraints. This is an important milestone, since self-calibration offers many possibilities by providing estimates of camera parameters without requiring access to the image acquisition device. The proposed algorithms rely on perturbation theory, guided sampling methods, and a robust root-finding method for systems of higher-order polynomials. By adding robustness to self-calibration, it is hoped that the technique is one step closer to being a practical method of camera calibration rather than merely a theoretical possibility.
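As a rough illustration of guided sampling (a simplification; the thesis derives its guidance from residual analysis rather than from the match-quality prior used here), the sketch below samples 8-point subsets with probabilities weighted by a per-correspondence quality score, estimates F with the normalized 8-point algorithm via OpenCV, and keeps the hypothesis with the most Sampson-distance inliers.

```python
import numpy as np
import cv2

def sampson_distance(F, pts1, pts2):
    """Per-correspondence Sampson distance to the epipolar geometry."""
    ones = np.ones((len(pts1), 1))
    x1, x2 = np.hstack([pts1, ones]), np.hstack([pts2, ones])
    Fx1, Ftx2 = x1 @ F.T, x2 @ F
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2
    return num / den

def guided_ransac_fundamental(pts1, pts2, scores, iters=500, thresh_px=1.0):
    """RANSAC for F with sampling guided by per-match quality scores.

    pts1, pts2 : (N, 2) float32 matched pixel coordinates
    scores     : (N,) nonnegative match-quality prior (higher = more trusted)
    """
    rng = np.random.default_rng(0)
    prob = scores / scores.sum()
    best_F, best_inliers = None, 0

    for _ in range(iters):
        # Sample a minimal 8-point subset, biased towards trusted matches.
        idx = rng.choice(len(pts1), size=8, replace=False, p=prob)
        F, _ = cv2.findFundamentalMat(pts1[idx], pts2[idx], cv2.FM_8POINT)
        if F is None:
            continue
        inliers = np.count_nonzero(sampson_distance(F, pts1, pts2) < thresh_px ** 2)
        if inliers > best_inliers:
            best_F, best_inliers = F, inliers

    return best_F, best_inliers
```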
|
44 |
Comparing Photogrammetric and Spectral Depth Techniques in Extracting Bathymetric Data from a Gravel-Bed River. Shintani, Christina, 27 October 2016.
Recent advances in through-water photogrammetry and optical imagery indicate that accurate, continuous bathymetric mapping may be possible in shallow, clear streams. This research directly compares the ability of through-water photogrammetry and spectral depth approaches to extract water depth for monitoring fish habitat. Imagery and cross sections were collected on a 140-meter reach of the Salmon River, Oregon, using an unmanned aerial vehicle (UAV) and RTK GPS. Structure-from-Motion (SfM) software produced a digital elevation model (DEM, 1.5 cm) and an orthophoto (0.37 cm). The photogrammetric approach of applying a site-specific refractive index provided the most accurate (mean error 0.009 m) and most precise (standard deviation of error 0.17 m) bathymetric data (R² = 0.67), outperforming the spectral depth and fixed 1.34 refractive index approaches. This research provides a quantitative comparison between and within bathymetric mapping methods, and suggests that a site-specific refractive index may be appropriate for similar gravel-bed, relatively shallow, clear streams.
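The refraction correction underlying both refractive-index approaches is simple: the apparent (SfM-derived) depth underestimates the true depth roughly by the water's refractive index, so h_true ≈ n · h_apparent, with n ≈ 1.34 for clear water or, as here, calibrated against check points. A brief sketch under those assumptions (variable names and numbers are illustrative, not the Salmon River data):

```python
import numpy as np

def site_specific_index(apparent_depths, surveyed_depths):
    """Least-squares estimate of n in h_true = n * h_apparent, calibrated
    from surveyed check points (a simplified stand-in for a site-specific
    refractive-index calibration)."""
    h_app = np.asarray(apparent_depths, dtype=float)
    h_true = np.asarray(surveyed_depths, dtype=float)
    return float(np.sum(h_app * h_true) / np.sum(h_app ** 2))

def correct_depths(apparent_depths, n):
    """Apply the small-angle refraction correction h_true = n * h_apparent."""
    return n * np.asarray(apparent_depths, dtype=float)

apparent = [0.20, 0.45, 0.80]     # SfM water depths (m), illustrative
surveyed = [0.28, 0.62, 1.09]     # RTK GPS depths at the same points (m)
n_site = site_specific_index(apparent, surveyed)
print(n_site, correct_depths(apparent, n_site))
```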
|
45 |
Modelo para reconstrução 3D de cenas baseado em imagens [Model for image-based 3D reconstruction of scenes]. Marro, Alessandro Assi, 22 December 2014.
3D reconstruction is the process by which a detailed three-dimensional graphical model of a target scene is obtained. The model is built from sequences of images of the scene, from which depth information about characteristic points, commonly called features, can be acquired automatically. These points are detected by applying a computational technique to the images that compose the dataset. Using SURF (Speeded-Up Robust Features) feature points, this work proposes a model for obtaining 3D information about the key points detected by the system. Applying the proposed system to image sequences yields three important pieces of information: the 3D position of the feature points; the relative rotation and translation matrices between the images; and a study relating the baseline between adjacent images to the accuracy error of the recovered 3D points. Implementation results are shown and indicate consistent behaviour. The proposed system also complies with free software requirements, which is a significant contribution to this application area.
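A condensed sketch of such a two-view pipeline is given below; it is not the author's implementation, uses ORB as a stand-in where the patented SURF detector in opencv-contrib is unavailable, and assumes a known intrinsic matrix K.

```python
import numpy as np
import cv2

def two_view_reconstruction(img1, img2, K):
    """Feature matching, relative pose, and sparse 3D points from two images."""
    # Detect and describe features (ORB here; SURF requires opencv-contrib).
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match descriptors with cross-checking to reduce false matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Relative rotation and translation (up to scale) via the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the inlier correspondences to obtain 3D feature positions.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inl = mask.ravel().astype(bool)
    X_h = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    X = (X_h[:3] / X_h[3]).T            # Euclidean 3D points

    return R, t, X
```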
|
46 |
MARRT Pipeline: Pipeline for Markerless Augmented Reality Systems Based on Real-Time Structure from Motion. Paulo Gomes Neto, Severino, 31 January 2009.
Nowadays, with the increase in computational power and advances in usability, real-time systems, and photorealism, the requirements placed on any computer system are more complex and sophisticated.
Augmented Reality systems are no exception in their attempt to solve real-life problems for the user with a reduced level of risk, time spent, or learning complexity. Such systems can be classified as marker-based or markerless.
The essential role of markerless augmented reality is to avoid the unnecessary and undesirable use of markers in applications.
To meet the demand for robust, non-intrusive augmented reality technologies, this dissertation proposes a pipeline for the development of markerless augmented reality applications, especially those based on real-time Structure from Motion.
|
47 |
Models and methods for geometric computer vision. Kannala, J. (Juho), 27 April 2010.
Abstract
Automatic three-dimensional scene reconstruction from multiple images is a central problem in geometric computer vision. This thesis considers topics that are related to this problem area. New models and methods are presented for various tasks in such specific domains as camera calibration, image-based modeling and image matching. In particular, the main themes of the thesis are geometric camera calibration and quasi-dense image matching. In addition, a topic related to the estimation of two-view geometric relations is studied, namely, the computation of a planar homography from corresponding conics. Further, as an example of a reconstruction system, a structure-from-motion approach is presented for modeling sewer pipes from video sequences.
In geometric camera calibration, the thesis concentrates on central cameras. A generic camera model and a plane-based camera calibration method are presented. The experiments with various real cameras show that the proposed calibration approach is applicable for conventional perspective cameras as well as for many omnidirectional cameras, such as fish-eye lens cameras. In addition, a method is presented for the self-calibration of radially symmetric central cameras from two-view point correspondences.
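For reference, OpenCV implements a generic camera model of this kind in its fisheye module, and a plane-based calibration of a conventional perspective camera from checkerboard images can be sketched as follows (a hedged illustration, not the thesis's method; paths and board size are assumptions):

```python
import glob
import numpy as np
import cv2

def calibrate_from_checkerboard(image_glob, board_size=(9, 6), square_m=0.025):
    """Plane-based (Zhang-style) calibration of a perspective camera.

    image_glob : e.g. "calib/*.png" (hypothetical path)
    board_size : inner corners per row and column of the checkerboard
    square_m   : physical size of one square in metres
    """
    # 3D coordinates of the corners on the calibration plane (Z = 0).
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    objp *= square_m

    obj_points, img_points, image_size = [], [], None
    for path in glob.glob(image_glob):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if not found:
            continue
        # Refine corner locations to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3),
        )
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None
    )
    return rms, K, dist
```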
In image matching, the thesis proposes a method for obtaining quasi-dense pixel matches between two wide baseline images. The method extends the match propagation algorithm to the wide baseline setting by using an affine model for the local geometric transformations between the images. Further, two adaptive propagation strategies are presented, where local texture properties are used for adjusting the local transformation estimates during the propagation. These extensions make the quasi-dense approach applicable for both rigid and non-rigid wide baseline matching.
In this thesis, quasi-dense matching is additionally applied for piecewise image registration problems which are encountered in specific object recognition and motion segmentation. The proposed object recognition approach is based on grouping the quasi-dense matches between the model and test images into geometrically consistent groups, which are supposed to represent individual objects, whereafter the number and quality of grouped matches are used as recognition criteria. Finally, the proposed approach for dense two-view motion segmentation is built on a layer-based segmentation framework which utilizes grouped quasi-dense matches for initializing the motion layers, and is applicable under wide baseline conditions.
|
48 |
Modeling of structured 3-D environments from monocular image sequences. Repo, T. (Tapio), 08 November 2002.
Abstract
The purpose of this research has been to show with applications that polyhedral scenes can be modeled in real time with a single video camera. Sometimes this can be done very efficiently without any special image processing hardware. The developed vision sensor estimates its three-dimensional position with respect to the environment and models it simultaneously. Estimates become recursively more accurate when objects are approached and observed from different viewpoints.
The modeling process starts by extracting interesting tokens, such as lines and corners, from the first image. Those features are then tracked in subsequent image frames; some previously taught patterns can also be used in tracking. Only a few features are extracted in each image, so the processing can be done at video frame rate. New features that appear can also be added to the environment structure.
Kalman filtering is used in estimation. The parameters in motion estimation are location and orientation and their first derivatives. The environment is considered a rigid object with respect to the camera. The environment structure consists of the 3-D coordinates of the tracked features. The initial model lacks depth information. Relative depth is obtained by exploiting the fact that closer points move faster on the image plane than more distant ones during translational motion. Additional information is needed to obtain absolute coordinates.
Special attention has been paid to modeling uncertainties. Measurements with high uncertainty get less weight when updating the motion and environment model. The rigidity assumption is exploited by using thin, pencil-shaped distributions for the initial structure uncertainties. By continuously observing the motion uncertainties, the performance of the modeler can be monitored.
In contrast to the usual solution, the estimations are done in separate state vectors, which allows motion and 3-D structure to be estimated asynchronously. In addition to having a more distributed solution, this technique provides an efficient failure detection mechanism. Several trackers can estimate motion simultaneously, and only those with the most confident estimates are allowed to update the common environment model.
Tests showed that motion with six degrees of freedom can be estimated in an unknown environment while the 3-D structure of the environment is estimated simultaneously. The achieved accuracies were at the millimetre level at distances of 1-2 meters in tests with simple toy scenes and more demanding industrial pallet scenes. This is sufficient for manipulating objects when the modeler is used to provide visual feedback.
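As a schematic of the Kalman-filtering step described above (ignoring orientation for brevity and using assumed noise levels), a constant-velocity filter over camera position could look like this; it is a generic sketch, not the thesis's estimator.

```python
import numpy as np

class ConstantVelocityKF:
    """Kalman filter with state [x, y, z, vx, vy, vz]
    (position and its first derivatives)."""

    def __init__(self, dt, process_noise=1e-3, measurement_noise=1e-2):
        self.x = np.zeros(6)                       # state estimate
        self.P = np.eye(6)                         # state covariance
        self.F = np.eye(6)                         # constant-velocity transition
        self.F[:3, 3:] = dt * np.eye(3)
        self.Q = process_noise * np.eye(6)         # process noise
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])   # observe position
        self.R = measurement_noise * np.eye(3)     # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        """z: measured 3D position. A larger R (more uncertain measurement)
        gives the observation less weight in the update."""
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R    # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

# Example: track a target moving at constant velocity along x (25 fps assumed).
kf = ConstantVelocityKF(dt=1 / 25)
for k in range(50):
    kf.predict()
    kf.update(np.array([0.1 * k / 25, 0.0, 0.0]))
print(kf.x[:3], kf.x[3:])
```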
|
49 |
Local and global methods for registering 2D image sets and 3D point clouds. Paudel, Danda Pani, 10 December 2015.
In this thesis, we study the problem of registering 2D image sets and 3D point clouds under three different acquisition set-ups. The first set-up assumes that the image sets are captured using 2D cameras that are fully calibrated and coupled, or rigidly attached, with a 3D sensor. In this context, the point cloud from the 3D sensor is registered directly to the asynchronously acquired 2D images. In the second set-up, the 2D cameras are internally calibrated but uncoupled from the 3D sensor, allowing them to move independently with respect to each other. The registration for this set-up is performed using a Structure-from-Motion reconstruction emanating from the images and planar patches representing the point cloud. The proposed registration method is globally optimal and robust to outliers; it is based on the theory of Sum-of-Squares polynomials and a Branch-and-Bound algorithm. The third set-up consists of uncoupled and uncalibrated 2D cameras. The image sets from these cameras are registered to the point cloud in a globally optimal manner using a Branch-and-Prune algorithm. Our method is based on a Linear Matrix Inequality framework that establishes direct relationships between 2D image measurements and 3D scene voxels.
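For orientation, the classical local alternative to these globally optimal formulations is closed-form rigid alignment of corresponding 3D points (Kabsch/Procrustes), sketched below; it requires good correspondences and initialization, which is precisely the limitation the Branch-and-Bound and Branch-and-Prune methods above are designed to avoid.

```python
import numpy as np

def rigid_align(source, target):
    """Closed-form least-squares rigid transform (Kabsch) mapping source to
    target, given known point-to-point correspondences.

    source, target : (N, 3) arrays of corresponding 3D points
    Returns (R, t) with target ≈ source @ R.T + t.
    """
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # proper rotation (det = +1)
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Self-check with a random point set under a known rigid motion.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
R_est, t_est = rigid_align(P, Q)
print(np.allclose(R_est, R_true), t_est)
```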
|
50 |
3D Cave and Ice Block Morphology from Integrated Geophysical Methods: A Case Study at Scărişoara Ice Cave, Romania. Hubbard, Jackson Durain, 24 March 2017.
Scărişoara Ice Cave has been a catalyst of scientific intrigue and effort for over 150 years. These efforts have revealed and described countless natural phenomena and, in the process, have made it one of the most studied caves in the world.
Of special interest is the massive ice block located within its Great Hall and scientific reservations. The ice block, which is the oldest and largest known to exist in a cave, has been the focus of multiple surveying and mapping efforts, typically using traditional equipment. In this study, the goals were to reconstruct the ice block/cave floor interface and to estimate the volume of the ice block. Once the models were constructed, we aimed to study the relationships between the cave and ice block morphologies.
To accomplish this goal, three main datasets were collected, processed, and amalgamated. Ground-penetrating radar data were used to discern the floor morphology below the ice block. Over 1,500 photographs were collected in the cave and used with Structure from Motion photogrammetry software to construct a texturized 3D model of the cave and ice surfaces. A total station survey was performed to scale, georeference, and validate each model. Once georeferenced, the data were imported into an ArcGIS geodatabase for further analysis.
The methodology described within this study provides a powerful set of instructions for producing highly valuable scientific data, especially related to caves. Here, we describe in detail the novel tools and software used to validate, inspect, manipulate, and measure morphological information while immersed in a fully 3D experience.
With this methodology, it is possible to easily and inexpensively create digital elevation models of underground rooms and galleries, to measure the differences between surfaces, to create 3D models from the combination of surfaces, and to intimately inspect a subject area without actually being there.
At the culmination of these efforts, the partial ice block volume was estimated to be 118,000 m³ with an uncertainty of ± 9.5%. The volume computed here is significantly larger than previously thought, and the total volume is likely larger still, since certain portions were not modeled during this study. In addition, the morphology of ceiling enlargement was linked to areas of high elevation at the base of the ice block, a counterintuitive depression was recognized at the base of the Entrance Shaft, and the thickest areas of the ice were identified for future coring projects. Combining all of this new information allowed us to propose a new theory on the formation of the ice block and to decipher particular speleogenetic aspects.
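Conceptually, the volume estimate reduces to integrating the thickness between the SfM-derived ice surface and the GPR-derived floor over the raster grid. A simplified sketch (assuming the two rasters are co-registered on the same grid, with NaN outside the modeled area; the numbers are illustrative, not the Scărişoara survey data):

```python
import numpy as np

def ice_volume(top_dem, floor_dem, cell_size, thickness_sigma=0.0):
    """Volume between two co-registered raster surfaces.

    top_dem, floor_dem : 2D arrays of elevations (m), NaN where not modeled
    cell_size          : raster resolution (m)
    thickness_sigma    : assumed 1-sigma thickness uncertainty per cell (m)
    Returns (volume_m3, volume_sigma_m3).
    """
    thickness = top_dem - floor_dem
    valid = np.isfinite(thickness) & (thickness > 0)

    cell_area = cell_size ** 2
    volume = np.sum(thickness[valid]) * cell_area

    # Treats per-cell thickness errors as independent; spatially correlated
    # GPR or SfM errors would make the true uncertainty larger.
    volume_sigma = np.sqrt(np.count_nonzero(valid)) * thickness_sigma * cell_area
    return volume, volume_sigma

rng = np.random.default_rng(0)
top = 10.0 + rng.normal(scale=0.05, size=(200, 200))     # ice surface (m)
floor = rng.normal(scale=0.5, size=(200, 200))           # cave floor (m)
print(ice_volume(top, floor, cell_size=0.5, thickness_sigma=0.3))
```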
|