31. Exploring the multiple techniques available for developing an understanding of soil erosion in the UK. Benaud, Pia Emma, January 2017.
Accelerated soil erosion and the subsequent decline in soil depth have negative environmental, and consequently financial, impacts across all land cover classifications and scales of land management. Ironically, although attempts to quantify soil erosion nationally have shown that soil erosion can occur in the UK, whether the UK has a soil erosion problem remains an open question. Accurately quantifying rates of soil erosion requires capturing both the volumetric nature of the visible, fluvial pathways and the subtle nature of the less-visible, diffuse pathways, across varying spatial and temporal scales. Accordingly, as we move towards a national-scale understanding of soil erosion in the UK, this thesis explores some of the multiple techniques available for developing that understanding. The thesis first examined the information content of existing UK-based soil erosion studies, ascertaining the extent to which the existing data and methodological approaches can be used to develop an empirically derived understanding of soil erosion in the UK. The second research chapter assessed which of two proximal sensing technologies, Terrestrial Laser Scanning and Structure-from-Motion Multi-View Stereo (SfM-MVS), is better suited to a cost-effective, replicable and robust assessment of soil erosion within a laboratory environment. The final research chapter built on these findings, using Rare Earth Oxide tracers together with SfM-MVS to elucidate retrospective information about sediment sources under changing soil erosion conditions, also within a laboratory environment. Given the biased nature of the soil erosion story presented in the existing UK research, it is impossible to ascertain whether the frequency and magnitude of soil erosion events in the UK are problematic. This study has also identified that without 'true' observations of soil loss (i.e. collection of the sediment leaving known plot areas), proxies such as the novel techniques presented in the experimental work herein, and the methods used in the existing landscape-scale assessments included in the database chapter, are not capable of providing a complete assessment of soil erosion rates. Despite this limitation, however, each technique can provide valuable information on the complex and spatially variable nature of soil erosion and associated processes, across different observational environments and scales.
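As a point of reference for how the volumetric, visible pathways mentioned above are usually quantified from repeat surface surveys (whether TLS- or SfM-MVS-derived), the sketch below differences two co-registered DEMs and sums the cell-wise lowering into an eroded volume. This DEM-of-Difference calculation is a generic proxy rather than the workflow of the thesis; the grid resolution, level of detection and synthetic surfaces are assumptions.

```python
import numpy as np

def erosion_volume(dem_before, dem_after, cell_size, lod=0.002):
    """Estimate eroded volume (m^3) from two co-registered DEMs (metres).

    lod: level of detection; elevation changes smaller than this are treated
    as noise and ignored (the 2 mm value here is an assumption).
    """
    dz = dem_after - dem_before                # negative where the surface lowered
    dz = np.where(np.abs(dz) < lod, 0.0, dz)   # suppress sub-noise change
    lowering = np.clip(dz, None, 0.0)          # keep only erosion (surface loss)
    return float(-lowering.sum() * cell_size ** 2)

# Hypothetical 1 mm-resolution laboratory plot surfaces (metres)
rng = np.random.default_rng(0)
before = rng.normal(0.0, 0.002, (500, 500))
after = before - np.abs(rng.normal(0.005, 0.002, (500, 500)))
print(f"Eroded volume: {erosion_volume(before, after, cell_size=0.001):.6f} m^3")
```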
32. Mapping Wild Leek with UAV and Satellite Remote Sensing. Miglhorance, Edmar, 05 March 2019.
Wild leek (Allium tricoccum) is a spring ephemeral of northeastern North America. In the Canadian province of Quebec, it is listed as threatened due to human harvesting, and in Gatineau Park its presence is used as an indicator of human impact. Wild leek grows in patches on the forest floor, and before the tree canopy develops its green leaves are clearly visible through the bare branches of deciduous forests, allowing it to be observed with optical remote sensing. This study developed and tested a new method for monitoring wild leek across large geographic areas by integrating field observations, UAV video, and satellite imagery. Three-cm-resolution orthomosaics were generated for five <0.1 km2 sites from the UAV video using Structure-from-Motion, segmented, and classified into wild leek (WL) or other (OT) surface types using a simple greenness threshold. The resulting maps, validated using the field observations, had high overall accuracy (F1-scores between 0.64 and 0.94). These maps were then used to calibrate a linear model predicting the per-pixel percentage cover of wild leek (%WL) from NDVI in the satellite imagery. The linear model calibrated for a Sentinel-2 image from 2018, covering all of Gatineau Park (~361 km2), allowed %WL to be predicted with an RMSE of 10.32. A similar model calibrated for a WorldView-2 image from 2018 was noisy (RMSE = 37.64), though it improved considerably (RMSE = 13.06) when the image was resampled to match the spatial resolution of Sentinel-2, owing to the scale effect of the modifiable areal unit problem (MAUP). Testing the potential for satellite-based monitoring of wild leek, the %WL prediction errors were similar whether a new linear model was developed using the Sentinel-2 image from 2017 (RMSE = 12.84) or the model calibrated with the 2018 Sentinel-2 image was applied to the 2017 satellite data (RMSE = 16.97). The linear models developed for the Sentinel-2 and WorldView-2 images from 2018 were used to map wild leek cover for Gatineau Park. Both images produced similar wild leek maps that, based on field experience and visual inspection of the imagery, describe the actual distribution of wild leek at Gatineau Park well.
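A minimal sketch of the two quantitative steps described above: a simple greenness threshold for classifying orthomosaic pixels, and a linear calibration of per-pixel %WL against NDVI with an RMSE check. The greenness index, the 0.10 threshold, and the synthetic data are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def classify_wild_leek(red, green, blue, threshold=0.10):
    """Label orthomosaic pixels as wild leek (True) using an excess-green style
    greenness index; the index and threshold are assumptions, not the study's exact rule."""
    greenness = (2 * green - red - blue) / (red + green + blue + 1e-9)
    return greenness > threshold

def calibrate_percent_cover(ndvi, wl_percent):
    """Fit %WL = a * NDVI + b on co-located satellite pixels and report RMSE."""
    a, b = np.polyfit(ndvi.ravel(), wl_percent.ravel(), deg=1)
    predicted = a * ndvi + b
    rmse = float(np.sqrt(np.mean((predicted - wl_percent) ** 2)))
    return (a, b), rmse

# Hypothetical 3-band orthomosaic tile (reflectance-like values in [0, 1])
tile = rng.uniform(0, 1, (3, 200, 200))
wl_mask = classify_wild_leek(*tile)
print(f"WL fraction in tile: {wl_mask.mean():.2%}")

# Hypothetical data standing in for Sentinel-2 NDVI and UAV-derived %WL
ndvi = rng.uniform(0.1, 0.8, 1000)
wl_percent = np.clip(60 * ndvi - 5 + rng.normal(0, 8, 1000), 0, 100)
coefs, rmse = calibrate_percent_cover(ndvi, wl_percent)
print(f"slope={coefs[0]:.2f}, intercept={coefs[1]:.2f}, RMSE={rmse:.2f}")
```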
33. Camera View Planning for Structure from Motion: Achieving Targeted Inspection Through More Intelligent View Planning Methods. Okeson, Trent James, 01 June 2018.
Remote sensors and unmanned aerial vehicles (UAVs) have the potential to dramatically improve infrastructure health monitoring in terms of both the accuracy of the information and the frequency of data collection. UAV automation has made significant progress, but that automation is also creating vast amounts of data that need to be processed into actionable information. A key aspect of this work is the optimization (not just automation) of data collection from UAVs for targeted planning of mission objectives. This work investigates the use of camera planning for Structure from Motion for 3D modeling of infrastructure. Included in this thesis is a novel multi-scale view-planning algorithm for autonomous targeted inspection. The method presented reduced the number of photos needed, and therefore the processing time, while maintaining the desired accuracies across the test site. A second focus of this work is a comparison of set covering problem algorithms for selecting the optimal camera set. The trade-offs between solve time and quality of results are explored. The Carousel Greedy algorithm is found to be the best method for solving the problem because of its relatively fast solve times and the high quality of the solutions found. Finally, physical flight tests are used to demonstrate the quality of the method for determining coverage. Each of the set covering problem algorithms is used to create a camera set that achieves 95% coverage. The models from the different camera sets are comparable despite a large amount of variability in the camera sets chosen. While this study focuses on multi-scale view planning for optical sensors, the methods could be extended to other remote sensors, such as aerial LiDAR.
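To make the set covering step concrete, here is a plain greedy selection of candidate camera views until 95% of the target points are covered. Carousel Greedy, the algorithm the thesis favours, extends this greedy loop by revisiting and replacing early picks, which is not implemented here; the visibility sets and camera labels are hypothetical.

```python
def greedy_camera_selection(coverage, targets, required_fraction=0.95):
    """Pick candidate views until the required fraction of target points is covered.

    coverage: dict mapping camera id -> set of target point ids it sees.
    A plain greedy heuristic; Carousel Greedy would iterate over and swap out
    early choices to escape poor greedy decisions.
    """
    needed = int(required_fraction * len(targets))
    covered, chosen = set(), []
    while len(covered) < needed:
        best = max(coverage, key=lambda c: len(coverage[c] - covered))
        gain = coverage[best] - covered
        if not gain:                      # remaining points are not visible to any camera
            break
        chosen.append(best)
        covered |= gain
    return chosen, len(covered) / len(targets)

# Hypothetical visibility sets for five candidate camera poses and ten surface points
targets = set(range(10))
coverage = {"A": {0, 1, 2, 3}, "B": {3, 4, 5}, "C": {5, 6, 7, 8}, "D": {8, 9}, "E": {1, 4, 9}}
cams, frac = greedy_camera_selection(coverage, targets)
print(cams, f"coverage={frac:.0%}")
```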
35. Quantifying Computer Vision Model Quality Using Various Processing Techniques. Ruggles, Samantha Anna, 01 June 2016.
Recently, the use of unmanned aerial vehicles (UAVs) has increased in popularity across several industries. Most notable, however, is the impact this technology has had on research at academic institutions worldwide. As UAV technology has improved, the equipment has become easier to operate and more accessible. UAVs have been used in many types of applications and are quickly becoming a preferred method of studying and analyzing a site. Currently, the most common use of a UAV is to monitor a location of interest that is otherwise difficult for a researcher to access. The UAV can be adapted to meet the needs of any given project, and this versatility has contributed to its popularity. Often, UAVs are equipped with a remote sensor that gathers information in the form of images, sound, heat, or light. Once data have been gathered from a site, they are processed and modified so that they can be studied and analyzed. A process known as Structure from Motion (SfM) creates a 3D digital terrain model from camera images captured by a UAV. SfM is a common method of processing the large number of images taken at a site, and the 3D model it creates is a helpful resource for analysis. These digital models, while useful, are often created at an unknown accuracy. This research presents a comparative study of the accuracies obtained when different parameters are applied during the SfM process. The results compare the time required to process a particular model with the accuracy that the model achieved, so that, depending on the application and type of project, a desired level of accuracy can be obtained in a known amount of time. This particular study used a landslide as the site of interest and captured the imagery using a helicopter UAV.
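Since the abstract does not state how model accuracy was measured, the following sketch illustrates one common choice: the RMSE of SfM-derived check points against independently surveyed coordinates. The coordinates and units are hypothetical.

```python
import numpy as np

def checkpoint_rmse(model_xyz, survey_xyz):
    """RMSE of 3D model check points against surveyed coordinates (same units).

    A generic accuracy metric; the thesis's exact evaluation procedure is not
    described in the abstract, so treat this as an illustrative stand-in."""
    residuals = np.asarray(model_xyz) - np.asarray(survey_xyz)
    return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))

# Hypothetical check points (metres): SfM-derived vs total-station coordinates
model = [[10.02, 5.01, 99.98], [20.05, 7.48, 101.52], [30.01, 9.97, 103.03]]
survey = [[10.00, 5.00, 100.00], [20.00, 7.50, 101.50], [30.00, 10.00, 103.00]]
print(f"Check-point RMSE: {checkpoint_rmse(model, survey):.3f} m")
```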
36. Robust estimation of structure from motion in the uncalibrated case. van den Hengel, Anton, January 2000.
A picture of a scene is a 2-dimensional representation of a 3-dimensional world. In the process of projecting the scene onto the 2-dimensional image plane, some of the information about the 3-dimensional scene is inevitably lost. Given a series of images of a scene, typically taken by a video camera, it is sometimes possible to recover some of this lost 3-dimensional information. Within the computer vision literature this process is described as recovering structure from motion. If some of the information about the internal geometry of the camera is unknown, then the problem is described as recovering structure from motion in the uncalibrated case. It is this uncalibrated version of the problem that is the concern of this thesis. Optical flow represents the movement of points across the image plane over time. Previous work in the area of structure from motion has given rise to a so-called differential epipolar equation, which describes the relationship between optical flow and the motion and internal parameters of the camera. This equation allows the calibration of a camera undergoing unknown motion and having an unknown, and possibly varying, focal length. Obtaining accurate estimates of the camera motion and internal parameters in the presence of noisy optical flow data is critical to the structure recovery process. We present and compare a variety of methods for estimating the coefficients of the differential epipolar equation. The goal of this process is to derive a tractable total least squares estimator of structure from motion that is robust to inaccuracies in the data. Methods are also presented for rectifying optical flow to a particular motion estimate, eliminating outliers from the data, and calculating the relative motion of a camera over an image sequence. This thesis thus explores the application of numerical and statistical techniques to the estimation of structure from motion in the uncalibrated case. Thesis (Ph.D.)--Mathematical and Computer Sciences (Department of Computer Science), 2000.
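As a rough illustration of the coefficient-estimation step, the sketch below solves a homogeneous system A*theta ~ 0 by SVD, the standard total least squares recipe when every optical-flow observation contributes one row of A. The row-building monomials in build_row are placeholders, since the exact parameterization of the differential epipolar equation is not given in the abstract; the flow field is synthetic.

```python
import numpy as np

def total_least_squares(A):
    """Return the unit vector theta minimizing ||A @ theta|| subject to ||theta|| = 1.

    The classic SVD solution used when each noisy observation (here, one
    optical-flow measurement) supplies one row of the design matrix A."""
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    return vt[-1]            # right singular vector of the smallest singular value

def build_row(x, y, u, v):
    """Map one flow observation (position (x, y), flow (u, v)) to a constraint row.

    Placeholder bilinear/quadratic monomials; the true differential epipolar
    parameterization would be substituted here."""
    return np.array([u * x, u * y, u, v * x, v * y, v, x * x, x * y, y * y, 1.0])

# Hypothetical noisy flow field
rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, (200, 2))
flow = 0.05 * pts[:, ::-1] * [1, -1] + rng.normal(0, 1e-3, (200, 2))
A = np.vstack([build_row(x, y, u, v) for (x, y), (u, v) in zip(pts, flow)])
theta = total_least_squares(A)
print("estimated coefficient vector:", np.round(theta, 3))
```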
37. The Incremental Rigidity Scheme for Recovering Structure from Motion: Position vs. Velocity Based Formulations. Grzywacz, Norberto M., and Hildreth, Ellen C., 01 October 1985.
Perceptual studies suggest that the visual system uses the "rigidity" assumption to recover three-dimensional structure from motion. Ullman (1984) recently proposed a computational scheme, the incremental rigidity scheme, which uses the rigidity assumption to recover the structure of rigid and non-rigid objects in motion. The scheme assumes the input to be discrete positions of elements in motion, under orthographic projection. We present formulations of Ullman's method that use velocity information and perspective projection in the recovery of structure. Theoretical and computer analyses show that the velocity-based formulations provide a rough estimate of structure quickly, but are not robust over an extended time period. The stable long-term recovery of structure requires disparate views of moving objects. Our analysis raises interesting questions regarding the recovery of structure from motion in the human visual system.
38. Maximizing Rigidity: The Incremental Recovery of 3-D Structure from Rigid and Rubbery Motion. Ullman, Shimon, 01 June 1983.
The human visual system can extract 3-D shape information about unfamiliar moving objects from their projected transformations. Computational studies of this capacity have established that 3-D shape can be extracted correctly from a brief presentation, provided that the moving objects are rigid. The human visual system requires a longer temporal extension, but it can cope with considerable deviations from rigidity. It is shown how the 3-D structure of rigid and non-rigid objects can be recovered by maintaining an internal model of the viewed object and modifying it at each instant by the minimal non-rigid change that is sufficient to account for the observed transformation. The results of applying this incremental rigidity scheme to rigid and non-rigid objects in motion are described and compared with human perceptions.
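A simplified sketch of the incremental update described above: the internal model is a set of depths for tracked points under orthographic projection, and each new frame's depths are chosen to minimize a measure of non-rigid change relative to the current model. The cube-normalized deviation measure, the flat initial model and the synthetic rotating configuration are assumptions rather than details taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def pairwise_dists(xy, z):
    """3-D inter-point distances for image positions xy (N, 2) and depths z (N,)."""
    pts = np.column_stack([xy, z])
    diff = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def incremental_rigidity_update(xy_new, z_model, xy_model):
    """Choose new depths that minimally deform the current internal 3-D model.

    The deviation measure (squared change in each inter-point distance,
    normalized by the cube of the model distance) is assumed here, not quoted."""
    d_model = pairwise_dists(xy_model, z_model)
    iu = np.triu_indices(len(z_model), k=1)

    def deviation(z_new):
        d_new = pairwise_dists(xy_new, z_new)
        return np.sum((d_new[iu] - d_model[iu]) ** 2 / d_model[iu] ** 3)

    return minimize(deviation, z_model, method="BFGS").x

# Hypothetical tracked points: a rigidly rotating configuration seen orthographically
rng = np.random.default_rng(3)
pts3d = rng.uniform(-1, 1, (6, 3))
theta = 0.1
rot = np.array([[np.cos(theta), 0, np.sin(theta)], [0, 1, 0], [-np.sin(theta), 0, np.cos(theta)]])
xy_t0, xy_t1 = pts3d[:, :2], (pts3d @ rot.T)[:, :2]
z_est = incremental_rigidity_update(xy_t1, z_model=np.zeros(6), xy_model=xy_t0)
print("updated depth estimates:", np.round(z_est, 3))
```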
39. Robust Extraction of Sparse 3D Points from Image Sequences. Vural, Elif, 01 September 2008.
In this thesis, the extraction of sparse 3D points from calibrated image sequences is studied. The presented method for sparse 3D reconstruction is examined in two steps: the first part addresses the problem of two-view reconstruction, and the second part extends the two-view reconstruction algorithm to multiple views. The examined two-view reconstruction method consists of some basic building blocks, such as feature detection and matching, epipolar geometry estimation, and the reconstruction of cameras and scene structure. Feature detection and matching are achieved with the Scale Invariant Feature Transform (SIFT) method. For the estimation of epipolar geometry, the 7-point and 8-point algorithms are examined for Fundamental matrix (F-matrix) computation, while RANSAC and PROSAC are utilized for robust and accurate model estimation. In the final stage of two-view reconstruction, the camera projection matrices are computed from the F-matrix, and the locations of 3D scene points are estimated by triangulation; hence, the scene structure and cameras are determined up to a projective transformation. The extension of the two-view reconstruction to multiple views is achieved by estimating the camera projection matrix of each additional view from the already reconstructed matches, and then adding new points to the scene structure by triangulating the unreconstructed matches. Finally, the reconstruction is upgraded from projective to metric by a rectifying homography computed from the camera calibration information. In order to obtain a refined reconstruction, two different methods are suggested for the removal of erroneous points from the scene structure. In addition to examining the solution of the reconstruction problem, experiments have been conducted that compare the performances of competing algorithms used at various stages of reconstruction. In connection with sparse reconstruction, a rate-distortion-efficient piecewise-planar scene representation algorithm that generates mesh models of scenes from reconstructed point clouds is examined, and its performance is evaluated through experiments.
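A compact sketch of the two-view stage described above, using OpenCV implementations of the named building blocks (SIFT matching, RANSAC F-matrix estimation, canonical projective cameras, triangulation). It is illustrative rather than the thesis's code; the ratio-test and RANSAC thresholds and the image file names are assumptions.

```python
import cv2
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def two_view_projective(img1, img2):
    """SIFT matching, RANSAC F-matrix, canonical cameras, and triangulation
    (a projective reconstruction only, before any metric upgrade)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Lowe ratio test on brute-force matches (the 0.75 ratio is an assumption)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Robust F-matrix estimation (RANSAC over point correspondences)
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inl1, inl2 = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]

    # Canonical projective cameras: P1 = [I | 0], P2 = [[e']_x F | e']
    e2 = np.linalg.svd(F)[0][:, -1]            # left epipole: F^T e' = 0
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([skew(e2) @ F, e2.reshape(3, 1)])

    X_h = cv2.triangulatePoints(P1, P2, inl1.T, inl2.T)
    return (X_h[:3] / X_h[3]).T                # 3D points up to a projective transform

# Usage (assumed file names):
# pts = two_view_projective(cv2.imread("view1.png", 0), cv2.imread("view2.png", 0))
```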
40. Tectonic smoothing and mapping. Ni, Kai, 16 May 2011.
Large-scale mapping has become key to numerous applications, e.g. simultaneous localization and mapping (SLAM) for autonomous robots. Despite the success of many SLAM projects, there are still challenging scenarios in which most current algorithms cannot deliver an exact solution fast enough. One of these challenges is the size of SLAM problems, which has grown by several orders of magnitude over the last decade. Another challenge is the large amount of noise baked into the measurements, which often yields poor initializations and slows the optimization or even causes it to fail.
Urban 3D reconstruction is another popular application of large-scale mapping and has recently received considerable attention from the computer vision community. High-quality 3D models are useful in various successful cartographic and architectural applications, such as Google Earth or Microsoft Live Local. At the heart of urban reconstruction problems is structure from motion (SfM). Due to the wide availability of cameras, especially on handheld devices, SfM is becoming an increasingly crucial technique for handling large numbers of images.
In this thesis, I present a novel batch algorithm, Tectonic Smoothing and Mapping (TSAM). I will show that the original SLAM graph can be recursively partitioned into multi-level submaps using the nested dissection algorithm, which leads to the cluster tree, a powerful graph representation. By employing nested dissection, the algorithm greatly reduces the dependencies between the two subtrees, and the optimization of the original graph can be done using bottom-up inference along the corresponding cluster tree. To speed up the computation, a base node is introduced for each submap and is used to represent the rigid transformation of the submap in the global coordinate frame. As a result, the optimization moves the base nodes rather than the actual submap variables. I will also show that TSAM can be successfully applied to the SfM problem, in which a hypergraph representation is employed to capture the pairwise constraints between cameras. The hierarchical partitioning based on the hypergraph not only yields a cluster tree, as in the SLAM problem, but also forces the resulting submaps to be nonsingular. I will demonstrate the TSAM algorithm on various simulated and real-world data sets.
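As a toy illustration of the recursive partitioning idea, the sketch below bisects a small pose graph into a tree of submaps and records the separator edges between them. It uses Kernighan-Lin bisection as a stand-in for nested dissection; the example graph, submap size limit and tree layout are assumptions rather than details of TSAM.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def build_cluster_tree(graph, max_submap_size=4):
    """Recursively bisect a pose graph into a tree of submaps.

    Kernighan-Lin bisection stands in for nested dissection here: both try to
    cut few edges so the two subtrees stay weakly coupled. Illustrative only."""
    nodes = list(graph.nodes)
    if len(nodes) <= max_submap_size:
        return {"submap": nodes, "children": []}
    left, right = kernighan_lin_bisection(graph)
    separator_edges = [(u, v) for u, v in graph.edges if (u in left) != (v in left)]
    return {
        "separator": separator_edges,     # constraints linking the two submaps
        "children": [build_cluster_tree(graph.subgraph(left).copy(), max_submap_size),
                     build_cluster_tree(graph.subgraph(right).copy(), max_submap_size)],
    }

# Hypothetical pose graph: a chain of 12 poses with a few loop closures
G = nx.path_graph(12)
G.add_edges_from([(0, 5), (3, 9), (6, 11)])
tree = build_cluster_tree(G)
print(tree["separator"])
```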