31

Mapping Wild Leek with UAV and Satellite Remote Sensing

Miglhorance, Edmar 05 March 2019 (has links)
Wild leek (Allium tricoccum) is a spring ephemeral of northeastern North America. In the Canadian province of Quebec, it is listed as threatened due to human harvesting, and in Gatineau Park its presence is used as an indicator of human impact. Wild leek grows in patches on the forest floor, and before the tree canopy develops its green leaves are clearly visible through the bare branches of deciduous forests, allowing it to be observed with optical remote sensing. This study developed and tested a new method for monitoring wild leek across large geographic areas by integrating field observations, UAV video, and satellite imagery. Three-cm resolution orthomosaics were generated for five <0.1 km2 sites from the UAV video using Structure-from-Motion, segmented, and classified into wild leek (WL) or other (OT) surface types using a simple greenness threshold. The resulting maps, validated using the field observations, had high overall accuracy (F1-scores between 0.64 and 0.94). These maps were then used to calibrate a linear model predicting the per-pixel percentage cover of wild leek (%WL) from NDVI in the satellite imagery. The linear model calibrated for a Sentinel-2 image from 2018, covering all of Gatineau Park (~361 km2), allowed %WL to be predicted with an RMSE of 10.32. A similar model calibrated for a WorldView-2 image from 2018 was noisy (RMSE = 37.64), though it improved markedly when the image was resampled to match the spatial resolution of Sentinel-2 (RMSE = 13.06), consistent with the scale effect of the modifiable areal unit problem (MAUP). Testing the potential for satellite-based monitoring of wild leek, the %WL prediction errors were similar when a new linear model was developed using the Sentinel-2 image from 2017 (RMSE = 12.84) and when the model calibrated with the 2018 Sentinel-2 image was applied to the 2017 satellite data (RMSE = 16.97). The linear models developed for the Sentinel-2 and WorldView-2 images from 2018 were used to map wild leek cover for Gatineau Park. Both images produced similar wild leek maps that, based on field experience and visual inspection of the imagery, provide good descriptions of the actual distribution of wild leek in Gatineau Park.
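
The calibration step described in this abstract amounts to a simple linear regression of per-pixel wild leek cover against NDVI. The sketch below is a minimal illustration of that idea, not the author's code; the array contents and the use of NumPy's least-squares fit are assumptions.

```python
import numpy as np

# Hypothetical inputs: per-pixel NDVI from the satellite image and the
# corresponding %WL cover aggregated from the UAV classification maps.
ndvi = np.array([0.21, 0.35, 0.42, 0.55, 0.61, 0.70])   # placeholder values
pct_wl = np.array([5.0, 18.0, 25.0, 41.0, 52.0, 63.0])  # placeholder values

# Fit %WL = a * NDVI + b by ordinary least squares.
a, b = np.polyfit(ndvi, pct_wl, deg=1)

# Evaluate the calibrated model and report the root-mean-square error.
predicted = a * ndvi + b
rmse = np.sqrt(np.mean((predicted - pct_wl) ** 2))
print(f"%WL = {a:.2f} * NDVI + {b:.2f}, RMSE = {rmse:.2f}")
```
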
32

Camera View Planning for Structure from Motion: Achieving Targeted Inspection Through More Intelligent View Planning Methods

Okeson, Trent James 01 June 2018 (has links)
Remote sensors and unmanned aerial vehicles (UAVs) have the potential to dramatically improve infrastructure health monitoring, both in the accuracy of the information and in the frequency of data collection. UAV automation has made significant progress, but that automation is also creating vast amounts of data that need to be processed into actionable information. A key aspect of this work is the optimization (not just automation) of data collection from UAVs for targeted planning of mission objectives. This work investigates camera planning for Structure from Motion for 3D modeling of infrastructure. Included in this thesis is a novel multi-scale view-planning algorithm for autonomous targeted inspection. The method presented reduced the number of photos needed, and therefore the processing time, while maintaining desired accuracies across the test site. A second focus of this work investigates various set covering algorithms for selecting the optimal camera set. The trade-offs between solve time and quality of results are explored. The Carousel Greedy algorithm is found to be the best method for solving the problem due to its relatively fast solve times and the high quality of the solutions found. Finally, physical flight tests are used to demonstrate the quality of the method for determining coverage. Each of the set covering algorithms is used to create a camera set that achieves 95% coverage. The models from the different camera sets are comparable despite considerable variability in the camera sets chosen. While this study focuses on multi-scale view planning for optical sensors, the methods could be extended to other remote sensors, such as aerial LiDAR.
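
The camera-selection step is a set covering problem: choose a small set of camera views whose combined coverage reaches the target (95% in the flight tests). The sketch below shows only a plain greedy baseline under assumed data structures (a dict mapping each candidate camera to the set of surface points it sees); it is an illustration, not the thesis implementation.

```python
def greedy_camera_set(coverage, all_points, target=0.95):
    """Pick cameras until the covered fraction of points reaches `target`.

    coverage: dict mapping camera id -> set of point ids visible from it.
    all_points: set of all point ids that should be covered.
    """
    chosen, covered = [], set()
    while len(covered) / len(all_points) < target:
        # Greedy rule: take the camera that adds the most uncovered points.
        best = max(coverage, key=lambda cam: len(coverage[cam] - covered))
        gain = coverage[best] - covered
        if not gain:  # no camera adds anything new; target unreachable
            break
        chosen.append(best)
        covered |= gain
    return chosen, len(covered) / len(all_points)
```

Carousel Greedy, as described in the abstract, extends this baseline by discarding the earliest greedy choices and re-selecting for a fixed number of rounds, trading extra solve time for better camera sets.
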
34

Quantifying Computer Vision Model Quality Using Various Processing Techniques

Ruggles, Samantha Anna 01 June 2016 (has links)
In recent years, the use of unmanned aerial vehicles (UAVs) has increased in popularity across several industries. Most notable is the impact the technology has had on research at academic institutions worldwide. As UAV technology has improved, the equipment has become easier to operate and more accessible. UAVs have been used in many types of applications and are quickly becoming a preferred method of studying and analyzing a site. Currently, the most common use of a UAV is to monitor a location of interest that would otherwise be difficult for a researcher to access. A UAV can be adapted to meet the needs of any given project, and this versatility has contributed to its popularity. Often, UAVs are equipped with a remote sensor that gathers information in the form of images, sound, heat, or light. Once data have been gathered from a site, they are processed so the site can be studied and analyzed. A process known as Structure from Motion (SfM) creates a 3D digital terrain model from camera images captured by a UAV. SfM is a common method for processing the large number of images taken at a site, and the 3D model it creates is a helpful resource for analysis. These digital models, while useful, are often created with unknown accuracy. This research presents a comparative study of the accuracies obtained when different parameters are applied during the SfM process. The results compare the time required to process a particular model against the accuracy the model achieved, so that, depending on the application and type of project, a desired level of accuracy can be matched to the processing time required. This particular study used a landslide as the site of interest and captured the imagery using a helicopter UAV.
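
Model quality in a study like this is typically reported as the error between surveyed checkpoints and the corresponding points in the reconstructed model. The sketch below is a minimal, assumed way to compute that 3D RMSE; it is not the procedure used in the thesis.

```python
import numpy as np

def checkpoint_rmse(surveyed_xyz, model_xyz):
    """3D RMSE between surveyed checkpoints and the matching model points.

    Both arguments are (N, 3) arrays of X, Y, Z coordinates in the same
    coordinate system and units (e.g. metres).
    """
    surveyed_xyz = np.asarray(surveyed_xyz, dtype=float)
    model_xyz = np.asarray(model_xyz, dtype=float)
    residuals = np.linalg.norm(model_xyz - surveyed_xyz, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))
```
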
35

Robust estimation of structure from motion in the uncalibrated case

van den Hengel, Anton January 2000 (has links)
A picture of a scene is a 2-dimensional representation of a 3-dimensional world. In the process of projecting the scene onto the 2-dimensional image plane, some of the information about the 3-dimensional scene is inevitably lost. Given a series of images of a scene, typically taken by a video camera, it is sometimes possible to recover some of this lost 3-dimensional information. Within the computer vision literature this process is described as recovering structure from motion. If some of the information about the internal geometry of the camera is unknown, then the problem is described as recovering structure from motion in the uncalibrated case. It is this uncalibrated version of the problem that is the concern of this thesis. Optical flow represents the movement of points across the image plane over time. Previous work in the area of structure from motion has given rise to a so-called differential epipolar equation which describes the relationship between optical flow and the motion and internal parameters of the camera. This equation allows the calibration of a camera undergoing unknown motion and having an unknown, and possibly varying, focal length. Obtaining accurate estimates of the camera motion and internal parameters in the presence of noisy optical flow data is critical to the structure recovery process. We present and compare a variety of methods for estimating the coefficients of the differential epipolar equation. The goal of this process is to derive a tractable total least squares estimator of structure from motion that is robust to inaccuracies in the data. Methods are also presented for rectifying optical flow to a particular motion estimate, eliminating outliers from the data, and calculating the relative motion of a camera over an image sequence. This thesis thus explores the application of numerical and statistical techniques to the estimation of structure from motion in the uncalibrated case. / Thesis (Ph.D.)--Mathematical and Computer Sciences (Department of Computer Science), 2000.
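
For reference, a common form of the differential epipolar equation for uncalibrated optical flow in the literature is the bilinear constraint below; the notation here is assumed (following related optical-flow papers) rather than taken from the thesis. For an image point $\mathbf{m}$ in homogeneous coordinates with optical flow $\dot{\mathbf{m}}$:

```latex
% Differential epipolar constraint relating optical flow to camera motion and
% internal parameters; the entries of W and V are the coefficients estimated
% from noisy flow data.
\dot{\mathbf{m}}^{\top} W \,\mathbf{m} \;+\; \mathbf{m}^{\top} V \,\mathbf{m} \;=\; 0,
\qquad W^{\top} = -W, \quad V^{\top} = V.
```
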
36

The Incremental Rigidity Scheme for Recovering Structure from Motion: Position vs. Velocity Based Formulations

Grzywacz, Norberto M., Hildreth, Ellen C. 01 October 1985 (has links)
Perceptual studies suggest that the visual system uses the "rigidity" assumption to recover three-dimensional structure from motion. Ullman (1984) recently proposed a computational scheme, the incremental rigidity scheme, which uses the rigidity assumption to recover the structure of rigid and non-rigid objects in motion. The scheme assumes the input to be discrete positions of elements in motion, under orthographic projection. We present formulations of Ullman's method that use velocity information and perspective projection in the recovery of structure. Theoretical and computer analyses show that the velocity-based formulations provide a rough estimate of structure quickly, but are not robust over an extended time period. The stable long-term recovery of structure requires disparate views of moving objects. Our analysis raises interesting questions regarding the recovery of structure from motion in the human visual system.
37

Maximizing Rigidity: The Incremental Recovery of 3-D Structure from Rigid and Rubbery Motion

Ullman, Shimon 01 June 1983 (has links)
The human visual system can extract 3-D shape information about unfamiliar moving objects from their projected transformations. Computational studies of this capacity have established that 3-D shape can be extracted correctly from a brief presentation, provided that the moving objects are rigid. The human visual system requires a longer temporal extension, but it can cope with considerable deviations from rigidity. It is shown how the 3-D structure of rigid and non-rigid objects can be recovered by maintaining an internal model of the viewed object and modifying it at each instant by the minimal non-rigid change that is sufficient to account for the observed transformation. The results of applying this incremental rigidity scheme to rigid and non-rigid objects in motion are described and compared with human perception.
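
The incremental update described above can be pictured as a small optimization at each frame: given the current internal 3-D model and the newly observed image positions, choose depths that account for the new view while changing the model's inter-point distances as little as possible. The sketch below is an assumed, simplified rendering of that idea under orthographic projection (point depths as free variables, a plain pairwise deviation-from-rigidity cost); it is not Ullman's original formulation, whose deviation measure differs in detail.

```python
import numpy as np
from scipy.optimize import minimize

def update_model(xy_new, z_model, xy_model):
    """One incremental-rigidity step (orthographic projection, illustrative).

    xy_new   : (N, 2) observed image coordinates in the new frame.
    z_model  : (N,)   depths of the current internal 3-D model.
    xy_model : (N, 2) image coordinates associated with the current model.
    Returns new depths that minimize the change in inter-point 3-D distances.
    """
    pts_old = np.hstack([xy_model, z_model[:, None]])
    d_old = np.linalg.norm(pts_old[:, None] - pts_old[None, :], axis=2)

    def deviation(z_new):
        pts_new = np.hstack([xy_new, z_new[:, None]])
        d_new = np.linalg.norm(pts_new[:, None] - pts_new[None, :], axis=2)
        # Penalize changes in pairwise 3-D distances (deviation from rigidity).
        return np.sum((d_new - d_old) ** 2)

    result = minimize(deviation, z_model, method="BFGS")
    return result.x
```
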
38

Robust Extraction Of Sparse 3d Points From Image Sequences

Vural, Elif 01 September 2008 (has links) (PDF)
In this thesis, the extraction of sparse 3D points from calibrated image sequences is studied. The presented method for sparse 3D reconstruction is examined in two steps: the first part addresses the problem of two-view reconstruction, and the second part extends the two-view reconstruction algorithm to multiple views. The examined two-view reconstruction method consists of basic building blocks such as feature detection and matching, epipolar geometry estimation, and the reconstruction of cameras and scene structure. Feature detection and matching are achieved with the Scale Invariant Feature Transform (SIFT) method. For the estimation of epipolar geometry, the 7-point and 8-point algorithms are examined for Fundamental matrix (F-matrix) computation, while RANSAC and PROSAC are utilized for robust and accurate model estimation. In the final stage of two-view reconstruction, the camera projection matrices are computed from the F-matrix, and the locations of 3D scene points are estimated by triangulation; hence, the scene structure and cameras are determined up to a projective transformation. The extension of the two-view reconstruction to multiple views is achieved by estimating the camera projection matrix of each additional view from the already reconstructed matches, and then adding new points to the scene structure by triangulating the unreconstructed matches. Finally, the reconstruction is upgraded from projective to metric by a rectifying homography computed from the camera calibration information. In order to obtain a refined reconstruction, two different methods are suggested for the removal of erroneous points from the scene structure. In addition to the examination of the solution to the reconstruction problem, experiments have been conducted to compare the performance of competing algorithms used at various stages of reconstruction. In connection with sparse reconstruction, a rate-distortion-efficient piecewise planar scene representation algorithm that generates mesh models of scenes from reconstructed point clouds is examined, and its performance is evaluated through experiments.
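
The two-view stage described above maps closely onto standard library calls. The sketch below is a minimal, assumed pipeline using OpenCV (SIFT matching, RANSAC-based fundamental-matrix estimation, canonical projective cameras from F, and triangulation); the thesis' own implementation and parameter choices will differ.

```python
import cv2
import numpy as np

def two_view_reconstruction(img1, img2):
    """Sparse projective reconstruction from two grayscale images (sketch)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match SIFT descriptors and keep the putative correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate the fundamental matrix robustly with RANSAC.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    pts1, pts2 = pts1[inliers.ravel() == 1], pts2[inliers.ravel() == 1]

    # Canonical projective cameras: P = [I | 0], P' = [[e']_x F | e'].
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    e2 = np.linalg.svd(F.T)[2][-1]          # left epipole (null vector of F^T)
    e2_x = np.array([[0, -e2[2], e2[1]],
                     [e2[2], 0, -e2[0]],
                     [-e2[1], e2[0], 0]])
    P2 = np.hstack([e2_x @ F, e2.reshape(3, 1)])

    # Triangulate; the result is defined only up to a projective transform.
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (X_h[:3] / X_h[3]).T, P1, P2
```
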
39

Tectonic smoothing and mapping

Ni, Kai 16 May 2011 (has links)
Large-scale mapping has become the key to numerous applications, e.g. simultaneous localization and mapping (SLAM) for autonomous robots. Despite the success of many SLAM projects, there are still some challenging scenarios in which most current algorithms are not able to deliver an exact solution fast enough. One of these challenges is the size of SLAM problems, which has grown by several orders of magnitude over the last decade. Another challenge for SLAM problems is the large amount of noise in the measurements, which often yields poor initializations and slows, or even derails, the optimization. Urban 3D reconstruction is another popular application for large-scale mapping and has received considerable attention recently from the computer vision community. High-quality 3D models are useful in various successful cartographic and architectural applications, such as Google Earth or Microsoft Live Local. At the heart of urban reconstruction problems is structure from motion (SfM). Due to the wide availability of cameras, especially on handheld devices, SfM is becoming an increasingly crucial technique for handling large amounts of images. In this thesis, I present a novel batch algorithm, namely Tectonic Smoothing and Mapping (TSAM). I will show that the original SLAM graph can be recursively partitioned into multi-level submaps using the nested dissection algorithm, which leads to the cluster tree, a powerful graph representation. By employing the nested dissection algorithm, TSAM greatly reduces the dependencies between two subtrees, and the optimization of the original graph can be done using bottom-up inference along the corresponding cluster tree. To speed up the computation, a base node is introduced for each submap and is used to represent the rigid transformation of the submap in the global coordinate frame. As a result, the optimization moves the base nodes rather than the actual submap variables. I will also show that TSAM can be successfully applied to the SfM problem as well, in which a hypergraph representation is employed to capture the pairwise constraints between cameras. The hierarchical partitioning based on the hypergraph not only yields a cluster tree as in the SLAM problem but also forces the resulting submaps to be nonsingular. I will demonstrate the TSAM algorithm using various simulated and real-world data sets.
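
The core of TSAM as summarized above is a recursive partitioning of the SLAM/SfM graph into a cluster tree of submaps. The sketch below is only an assumed illustration of that hierarchical partitioning step, substituting NetworkX's Kernighan-Lin bisection for nested dissection; TSAM itself uses nested dissection and adds base nodes and bottom-up inference, which are not shown here.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def build_cluster_tree(graph, max_submap_size=50):
    """Recursively bisect a pose/factor graph into a tree of submaps.

    Returns a nested structure: a leaf is a frozenset of node ids (one submap),
    and an internal node is a pair of subtrees.
    """
    if graph.number_of_nodes() <= max_submap_size:
        return frozenset(graph.nodes)
    # Split the graph into two balanced halves with a small edge cut,
    # standing in for the separator found by nested dissection.
    left_nodes, right_nodes = kernighan_lin_bisection(graph)
    left = build_cluster_tree(graph.subgraph(left_nodes).copy(), max_submap_size)
    right = build_cluster_tree(graph.subgraph(right_nodes).copy(), max_submap_size)
    return (left, right)
```
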
40

SAR remote sensing of soil Moisture

Snapir, Boris 12 1900 (has links)
Synthetic Aperture Radar (SAR) has been identified as a good candidate to provide high-resolution soil moisture information over extended areas. SAR data could be used as observations within a global Data Assimilation (DA) approach to benefit applications such as hydrology and agriculture. Prior to developing an operational DA system, one must tackle the following challenges of soil moisture estimation with SAR: (1) the dependency of the measured radar signal on both soil moisture and soil surface roughness, which leads to an ill-conditioned inverse problem, and (2) the difficulty of characterizing, spatially and temporally, the surface roughness of natural soils and its scattering contribution. The objectives of this project are (1) to develop a roughness measurement method to improve the spatial/temporal characterization of soil surface roughness, and (2) to investigate to what extent the inverse problem can be solved by combining multi-polarization, multi-incidence, and/or multi-frequency radar measurements. The first objective is achieved with a measurement method based on Structure from Motion (SfM). It is tailored to monitor natural surface roughness changes, which have often been assumed negligible although without evidence. The measurement method is flexible, affordable, and straightforward, and generates Digital Elevation Models (DEMs) of a SAR-pixel-size plot with mm accuracy. A new processing method based on band-filtering of the DEM and its 2D Power Spectral Density (PSD) is proposed to compute the classical roughness parameters. Time series of DEMs show that non-negligible changes in surface roughness can happen within two months at scales relevant for microwave scattering. The second objective is achieved using maximum likelihood fitting of the Oh backscattering model to (1) full-polarimetric Radarsat-2 data and (2) simulated multi-polarization / multi-incidence / multi-frequency radar data. Model fitting with the Radarsat-2 images leads to poor soil moisture retrieval, which is related to inaccuracy of the Oh model. Model fitting with the simulated data quantifies the amount of multilooking needed, for different combinations of measurements, to mitigate the critical effect of speckle on soil moisture uncertainty. Results also suggest that dual-polarization measurements at L- and C-bands are a promising combination to achieve the observation requirements of soil moisture. In conclusion, the SfM method, along with the recommended processing techniques, is a good candidate to improve the characterization of surface roughness. A combination of multi-polarization and multi-frequency radar measurements appears to be a robust basis for a future Data Assimilation system for global soil moisture monitoring.
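
The roughness parameters mentioned above (typically the RMS height and the 2D power spectral density of the detrended DEM) can be computed directly from the gridded elevation data. The sketch below is an assumed minimal version of that computation with NumPy; the thesis' band-filtering and parameter definitions are more involved.

```python
import numpy as np

def roughness_from_dem(dem, cell_size):
    """RMS height and 2D power spectral density of a detrended DEM.

    dem       : 2D array of elevations (metres) on a regular grid.
    cell_size : grid spacing in metres.
    """
    # Remove the best-fit plane so only roughness (not slope) remains.
    ny, nx = dem.shape
    y, x = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(dem.size)])
    coeffs, *_ = np.linalg.lstsq(A, dem.ravel(), rcond=None)
    detrended = dem - (A @ coeffs).reshape(dem.shape)

    # RMS height: standard deviation of the detrended surface.
    rms_height = float(np.std(detrended))

    # 2D power spectral density (periodogram estimate) via the FFT.
    spectrum = np.fft.fftshift(np.fft.fft2(detrended))
    psd = (np.abs(spectrum) ** 2) * (cell_size ** 2) / (nx * ny)
    return rms_height, psd
```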
