81 |
The Integration of Iterative Convergent Photogrammetric Models and UAV View and Path Planning Algorithms into the Aerial Inspection Practices in Areas with Aerial Hazards
Freeman, Michael James, 01 December 2020
Small unmanned aerial vehicles (sUAV) can produce valuable data for inspections, topography, mapping, and 3D modeling of structures. Used across multiple industries, sUAV can help inspect and study geographic and structural sites. Typically, the sUAV and camera specifications require optimal conditions with known geography, and the aircraft fly pre-determined flight paths. However, if the environment changes, new undetectable aerial hazards may intersect new flight paths. This makes it difficult to construct autonomous flight missions that are safe in post-hazard areas where the flight paths are based on previously built models or previously known terrain details. The goal of this research is to make it possible for an unskilled pilot to obtain high-quality images at key angles, facilitating the inspection of dangerous environments affected by natural disasters through the construction of accurate 3D models. An iterative process with converging variables can circumvent the current limitations of autonomous UAV flight and make it possible for an unskilled pilot to gather high-quality data for the construction of photogrammetric models. This is achieved by acquiring preliminary photogrammetric data, then creating new flight paths that account for new developments contained in the generated dense clouds. Initial flight paths are used to develop a coarse representation of the target area by aligning key tie points of the initial set of images. With each iteration, a 3D mesh is used to compute a new optimized view and flight path for data collection over a now better-known location. These data are collected, the model is updated, and a new flight path is computed until the model resolution meets the required flight heights or ground sample distances (GSD). This research uses basic UAVs and camera sensors to lower costs and reduce the need for specialized sensors or data analysis. The four basic stages followed in the study are: determination of the required height reductions for comparison and convergence limits; construction of real-time reconnaissance models; computation of optimized views and flight paths with vertical and horizontal buffers derived from previous models; and development of an autonomous process that combines the previous stages iteratively. This study advances autonomous sUAV inspection by developing an iterative process of flying a sUAV that can potentially detect and avoid buildings, trees, wires, and other hazards with minimal pilot experience or human intervention, while optimally collecting the images required to generate geometric models of predetermined quality.
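The abstract does not include the planning loop itself; as a minimal sketch of the GSD-driven convergence it describes, the following fragment iterates flight height until a target ground sample distance is met. The camera constants and the height-reduction factor are illustrative assumptions, not values from the thesis.

```python
def ground_sample_distance(height_m, focal_mm=8.8, sensor_w_mm=13.2, image_w_px=5472):
    """GSD in metres/pixel for a nadir frame camera (standard photogrammetric
    formula). Defaults approximate a small consumer UAV camera (assumption)."""
    return (sensor_w_mm * height_m) / (focal_mm * image_w_px)

# Iteratively reduce flight height until model resolution (GSD) meets the
# requirement; each pass would refly, densify the cloud, and replan flight
# paths around obstacles found in the previous mesh.
height, target_gsd = 120.0, 0.01            # start at 120 m, want 1 cm/px
while ground_sample_distance(height) > target_gsd:
    print(f"fly at {height:5.1f} m -> GSD {100 * ground_sample_distance(height):.2f} cm/px")
    height *= 0.7                            # height-reduction factor (illustrative)
```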
|
82 |
Evaluating the performance of multi-rotor UAV-SfM imagery in assessing simple and complex forest structures: comparison to advanced remote sensing sensors
Onwudinjo, Kenechukwu Chukwudubem, 08 March 2022
The implementation of Unmanned Aerial Vehicles (UAVs) and Structure-from-Motion (SfM) photogrammetry in assessing forest structures for forest inventory and biomass estimation has shown great promise in reducing costs and labour intensity while providing reasonable accuracy. Tree Height (TH) and Diameter at Breast Height (DBH) are two major variables in biomass assessment. UAV-based TH estimation depends on reliable Digital Terrain Models (DTMs), while UAV-based DBH estimation depends on reliable dense photogrammetric point clouds. The main aim of this study was to evaluate the performance of multi-rotor UAV photogrammetric point clouds in estimating homogeneous and heterogeneous forest structures, compared to more accurate LiDAR data obtained from Aerial Laser Scanners (ALS) and Terrestrial Laser Scanners (TLS), as well as to conventional manual field measurements. TH was assessed using DTMs derived from UAV-SfM and LiDAR point clouds, while DBH was assessed by comparing the UAV-SfM photogrammetric point cloud to LiDAR point clouds, as well as to manual measurements. The results indicated a high correlation between UAV-SfM TH and ALS-LiDAR TH (R2 = 0.9258) for homogeneous forest structures, while lower correlations between UAV-SfM TH and TLS-LiDAR TH (R2 = 0.8614) and between UAV-SfM TH and ALS-LiDAR TH (R2 = 0.8850) were achieved for heterogeneous forest structures. A moderate correlation was obtained between UAV-SfM DBH and field measurements (R2 = 0.5955) for homogeneous forest structures, as well as between UAV-SfM DBH and TLS-LiDAR DBH (R2 = 0.5237), but a low correlation between UAV-SfM DBH and UAV-LiDAR DBH (R2 = 0.1114). This research demonstrates that, depending on accuracy requirements, UAV-SfM can serve as a cheaper alternative in forestry management to higher-cost, more accurate LiDAR and traditional technologies.
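The reported R2 values are coefficients of determination between paired height or diameter estimates; a short sketch of that comparison, using made-up numbers rather than the study's data, could look like this:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)

# Illustrative values only -- not the study's measurements.
als_th = np.array([12.1, 15.4, 9.8, 18.2, 14.0])   # ALS-LiDAR tree heights (m)
sfm_th = np.array([11.8, 15.9, 9.2, 17.6, 13.7])   # UAV-SfM tree heights (m)
print(f"R^2 = {r_squared(als_th, sfm_th):.4f}")
```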
|
83 |
Forensic Validation of 3D Models
Lindberg, Mimmi, January 2020
3D reconstruction can be used in forensic science to reconstruct crime scenes and objects so that measurements and further information can be acquired off-site. It is desirable to use image-based reconstruction methods, but there is currently no procedure available for determining the uncertainty of such reconstructions. In this thesis the uncertainty of Structure from Motion is investigated. This is done by exploring the available literature on the subject and compiling the relevant information in a literature summary. In addition, Monte Carlo simulations are conducted to study how feature-position uncertainty affects the uncertainty of the parameters estimated by bundle adjustment. The experimental results show that the poses of cameras that contain few image correspondences are estimated with higher uncertainty. The poses of such cameras are estimated with lower uncertainty if they share feature correspondences with cameras that contain a higher number of projections.
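As a simplified illustration of the Monte Carlo procedure described, the following toy example perturbs 2D feature observations with Gaussian noise and re-solves a small least-squares triangulation many times, then reads the parameter uncertainty off the spread of the estimates. The full thesis setup runs bundle adjustment over camera poses as well, which this sketch omits.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Toy setup: one 3D point seen by three calibrated cameras (normalised
# coordinates, camera centres on the x-axis, projection is x/z, y/z).
cams = [np.array([t, 0.0, 0.0]) for t in (0.0, 1.0, 2.0)]
X_true = np.array([0.5, 0.3, 5.0])

def project(X, c):
    d = X - c
    return d[:2] / d[2]

def residuals(X, obs):
    return np.concatenate([project(X, c) - z for c, z in zip(cams, obs)])

clean = [project(X_true, c) for c in cams]
sigma = 1e-3                                  # feature-position std (assumed)
estimates = []
for _ in range(1000):                         # Monte Carlo trials
    noisy = [z + rng.normal(0.0, sigma, 2) for z in clean]
    sol = least_squares(residuals, x0=np.array([0.0, 0.0, 4.0]), args=(noisy,))
    estimates.append(sol.x)
print("std of recovered point:", np.asarray(estimates).std(axis=0))
```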
|
84 |
Nástroj pro 3D rekonstrukci z dat z více typů senzorů / Scalable Multisensor 3D Reconstruction Framework
Šolony, Marek, January 2017
Realistic 3D models of environments are useful in many fields, from the inspection of natural structures or buildings, robot navigation, and map making, to the film industry for scene surveying or the integration of special effects. When capturing such a scene it is common to use several types of sensors, such as monocular, stereoscopic, or spherical cameras, or 360° laser scanners, to achieve large coverage of the scene. The advantage of laser scanners and spherical cameras lies precisely in capturing the whole surroundings as one coherent image. Conventional monocular cameras, on the other hand, can easily cover occluded parts of the scene or capture details. The 3D reconstruction process consists of three steps: data acquisition, data processing and registration, and reconstruction refinement. The contribution of this dissertation is a detailed analysis of image registration methods for spherical and planar cameras, and the implementation of a unified system of sensors and measurements for 3D reconstruction that enables reconstruction from all available data. The main advantage of the proposed unified representation is that it allows all sensor poses and scene points to be optimised jointly by applying non-linear optimisation methods, thereby achieving better reconstruction accuracy without significantly increasing the computational cost.
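A minimal sketch of such a unified measurement model, assuming deliberately simplified pinhole and spherical projections (the dissertation's actual formulation is more complete), might look like this:

```python
import numpy as np

def residual(point_w, R, t, obs, model):
    """Unified measurement residual: one function serves several sensor
    types, so all sensor poses and scene points can enter a single joint
    nonlinear least-squares problem. Both branches are simplified."""
    p = R @ point_w + t                      # world -> sensor frame
    if model == "pinhole":                   # planar camera: perspective divide
        pred = p[:2] / p[2]
    else:                                    # spherical camera / scanner: unit bearing
        pred = p / np.linalg.norm(p)
    return pred - obs

# Example: the same scene point observed by two different sensor types.
X = np.array([1.0, 2.0, 10.0])
print(residual(X, np.eye(3), np.zeros(3), np.array([0.10, 0.20]), "pinhole"))
print(residual(X, np.eye(3), np.zeros(3), X / np.linalg.norm(X), "spherical"))
```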
|
85 |
Interrogating Data-integrity from Archaeological Surface Surveys Using Spatial Statistics and Geospatial Analysis: A Case Study from Stelida, Naxos
Pitt, Yorgan, January 2020
The implementation and application of Geographic Information Systems (GIS) and spatial analyses have become standard practice in many archaeological projects. In this study, we demonstrate how GIS can play a crucial role in the study of taphonomy, i.e., understanding the processes that underpinned the creation of archaeological deposits, in this case the distribution of artifacts across an archaeological site. The Stelida Naxos Archeological Project (SNAP) is focused on the exploration of a Paleolithic-Mesolithic stone tool quarry site located on the island of Naxos, Greece. An extensive pedestrian survey was conducted during the 2013 and 2014 archaeological field seasons. An abundance of lithic material was collected across the surface, with some diagnostic pieces dating to more than 250 kya. Spatial statistical analysis (Empirical Bayesian Kriging) was conducted on the survey data to generate predictive distribution maps for the site. The study then determined the contextual integrity of the surface artifact distributions through a study of geomorphic processes. A digital surface model (DSM) of the site was produced using Unmanned Aerial Vehicle (UAV) aerial photography and Structure from Motion (SfM) terrain modeling. The DSM was employed to develop a Revised Universal Soil Loss Equation (RUSLE) model and hydrological flow models. The model results provide important insights into the site's geomorphological processes and allow categorization of the diagnostic surface material locations based on their contextual integrity. The GIS analysis demonstrates that the surface artifact distribution has been significantly altered by post-depositional geomorphic processes, resulting in an overall low contextual integrity of surface artifacts. Conversely, the study identified a few areas with high contextual integrity, loci that represent prime locations for excavation. The results from this study will not only be used to inform and guide further development of the archaeological project (as well as representing significant new data in their own right), but will also contribute to current debates in survey archaeology, and in mapping and prospection more generally. This project demonstrates the benefit of using spatial analysis as a tool for planning pedestrian surveys and for predictive mapping of artifact distributions prior to archaeological excavations. / Thesis / Master of Science (MSc)
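The RUSLE estimate itself is a simple product of empirical factors, A = R × K × LS × C × P; a minimal illustration with placeholder factor values (not SNAP's calibrated inputs) is:

```python
# RUSLE: A = R * K * LS * C * P
# A  - predicted annual soil loss (t/ha/yr)
# R  - rainfall erosivity, K - soil erodibility, LS - slope length/steepness,
# C  - cover management, P - support practice. Values below are placeholders.
R, K, LS, C, P = 700.0, 0.03, 1.8, 0.25, 1.0
A = R * K * LS * C * P
print(f"predicted annual soil loss: {A:.2f} t/ha/yr")
```

In a study like this, the LS factor would be derived cell-by-cell from the UAV-SfM DSM, producing a raster of predicted soil loss rather than a single value.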
|
86 |
Dissertation_Meghdad_revised_2.pdf
Seyyed Meghdad Hasheminasab (14030547), 30 November 2022
Modern remote sensing platforms such as unmanned aerial vehicles (UAVs), which can carry a variety of sensors including RGB frame cameras, hyperspectral (HS) line cameras, and LiDAR sensors, are commonly used in several application domains. In order to derive accurate products such as point clouds and orthophotos, the sensors' interior and exterior orientation parameters (IOPs and EOPs) must be established. These parameters are derived/refined in a triangulation framework by minimizing the discrepancy between conjugate features extracted from the involved datasets. Existing triangulation approaches are not general enough to deal with the varying nature of data from different sensors/platforms acquired in diverse environmental conditions. This research develops a generic triangulation framework that can handle different types of primitives (e.g., point, linear, and/or planar features) and sensing modalities (e.g., RGB cameras, HS cameras, and/or LiDAR sensors) for delivering accurate products under challenging conditions, with a primary focus on the digital agriculture and stockpile monitoring application domains.
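A minimal sketch of a triangulation cost that mixes primitive types, with hypothetical residual functions rather than the author's actual formulation:

```python
import numpy as np

def point_residual(p_est, p_obs):
    """Point-to-point discrepancy between estimated and observed features."""
    return p_est - p_obs

def plane_residual(p_est, n, d):
    """Signed point-to-plane distance for a planar feature n.x = d (unit n)."""
    return np.array([n @ p_est - d])

def total_cost(points, point_obs, plane_obs):
    """Stacked residual vector over mixed primitives, as would be fed to a
    nonlinear least-squares solver refining poses and scene geometry."""
    r = [point_residual(p, q) for p, q in zip(points, point_obs)]
    r += [plane_residual(p, n, d) for p, (n, d) in zip(points, plane_obs)]
    return np.concatenate(r)
```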
|
87 |
A Comparison of Monocular Camera Calibration Techniques
Van Hook, Richard L., 23 May 2014
No description available.
|
88 |
Fast and Scalable Structure-from-Motion for High-precision Mobile Augmented Reality Systems
Bae, Hyojoon, 24 April 2014
A key problem in mobile computing is providing people access to the cyber-information associated with their surrounding physical objects. Mobile augmented reality is one of the emerging techniques that address this problem by allowing users to see the cyber-information associated with real-world physical objects, overlaid on the physical objects' imagery. Consequently, many mobile augmented reality approaches have been proposed to identify and visualize relevant cyber-information on users' mobile devices by intelligently interpreting users' 3D positions and orientations and their associated surroundings. However, existing approaches for mobile augmented reality primarily rely on Radio Frequency (RF) based location tracking technologies (e.g., Global Positioning Systems or Wireless Local Area Networks), which typically do not provide sufficient precision in RF-denied areas or require additional hardware and custom mobile devices.
To remove the dependency on external location tracking technologies, this dissertation presents a new vision-based context-aware approach for mobile augmented reality that allows users to query and access semantically-rich 3D cyber-information related to real-world physical objects and see it precisely overlaid on top of imagery of the associated physical objects. The approach does not require any RF-based location tracking modules, external hardware attachments on the mobile devices, and/or optical/fiducial markers for localizing a user's position. Rather, the user's 3D location and orientation are automatically and purely derived by comparing images from the user's mobile device to a 3D point cloud model generated from a set of pre-collected photographs.
A further challenge of mobile augmented reality is creating 3D cyber-information and associating it with real-world physical objects, especially using the limited 2D user interfaces in standard mobile devices. To address this challenge, this research provides a new image-based 3D cyber-physical content authoring method designed specifically for the limited screen sizes and capabilities of commodity mobile devices. This new approach does not only provide a method for creating 3D cyber-information with standard mobile devices, but also provides an automatic association of user-driven cyber-information with real-world physical objects in 3D.
Finally, a key challenge of scalability for mobile augmented reality is addressed in this dissertation. In general, mobile augmented reality is required to work regardless of users' location and environment, in terms of physical scale, such as the size of objects, and in terms of cyber-information scale, such as the total number of cyber-information entities associated with physical objects. However, many existing approaches for mobile augmented reality have mainly been tested on limited real-world use cases and face challenges in scaling. By designing fast direct 2D-to-3D matching algorithms for localization, and by applying a caching scheme, the proposed research consistently supports near real-time localization and information association regardless of users' location, the size of physical objects, and the number of cyber-physical information items.
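A sketch of the cached direct 2D-to-3D matching idea, using a SciPy k-d tree over synthetic descriptors; the abstract does not give HD4AR's actual indexing or descriptor details, so names and parameters here are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Descriptors of the 3D model points are indexed once in a k-d tree (the
# "cached" structure); each query image then only pays for NN lookups.
model_desc = rng.random((5000, 32)).astype(np.float32)   # per-3D-point descriptors
model_xyz = rng.random((5000, 3))                        # their 3D coordinates
tree = cKDTree(model_desc)                               # built offline, reused

query_desc = rng.random((300, 32)).astype(np.float32)    # features in a new image
dist, idx = tree.query(query_desc, k=2)
good = dist[:, 0] < 0.8 * dist[:, 1]     # Lowe-style ratio test on the two NNs
matches_3d = model_xyz[idx[good, 0]]     # 3D points usable for pose estimation
print(f"{good.sum()} putative 2D-to-3D matches")
```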
To realize all of these research objectives, five research methods are developed and validated: 1) Hybrid 4-Dimensional Augmented Reality (HD4AR), 2) plane-transformation-based 3D cyber-physical content authoring from a single 2D image, 3) cached k-d tree generation for fast direct 2D-to-3D matching, 4) a double-stage matching algorithm with a single indexed k-d tree, and 5) k-means clustering of 3D physical models with geo-information. After discussing each solution in technical detail, the perceived benefits and limitations of the research are discussed together with validation results. / Ph. D.
|
89 |
Simultaneous recognition, localization and mapping for wearable visual robots
Castle, Robert Oliver, January 2009
With the advent of ever smaller and more powerful portable computing devices, and ever smaller cameras, wearable computing is becoming more feasible. A growing number of augmented reality applications allow users to view additional data about their surroundings overlaid on their view of the world using portable computing devices. The main aim of this research is to enable a user of a wearable robot to explore large environments, automatically viewing augmented reality at locations and on objects of interest. To implement this research, a wearable visual robotic assistant is designed and constructed. Evaluation of the different technologies results in a final design that combines a shoulder-mounted self-stabilizing active camera and a hand-held magic lens into a single portable system. To enable the wearable assistant to locate known objects, a system is designed that combines an established method for appearance-based recognition with one for simultaneous localization and mapping (SLAM) using a single camera. As well as identifying planar objects, the objects are located relative to the camera in 3D by computing the image-to-database homography. The 3D positions of the objects are then used as additional measurements in the SLAM process, which routinely uses other point features to acquire and maintain a map of the surroundings, irrespective of whether objects are present. The monocular SLAM system is then replaced with a new method for building maps and tracking. Instead of tracking and mapping in a linear, frame-rate-driven manner, this method separates the mapping from the tracking. This allows higher-density maps to be constructed and provides more robust tracking. The flexible framework provided by this method is extended to support multiple independent cameras and multiple independent maps, allowing the user of the wearable two-camera robot to escape the confines of the desktop and explore arbitrarily sized environments. The final part of the work brings together the parallel tracking and multiple mapping system with the recognition and localization of planar objects from a database. The method is able to build multiple feature-rich maps of the world and simultaneously recognize, reconstruct, and localize objects within these maps. The object reconstruction process uses the spatially separated keyframes from the tracking and mapping processes to recognize and localize known objects in the world, which are then used for augmented reality overlays related to those objects.
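The image-to-database homography step can be illustrated with OpenCV's robust estimator; the correspondences below are synthetic, standing in for real feature matches between a database view of a planar object and the live image:

```python
import numpy as np
import cv2

# Synthetic ground-truth homography mapping database points into the image.
H_true = np.array([[0.9, 0.05, 30.0],
                   [-0.04, 1.1, 12.0],
                   [1e-4, 2e-4, 1.0]])
db_pts = (np.random.rand(50, 1, 2) * 400).astype(np.float32)
img_pts = cv2.perspectiveTransform(db_pts, H_true)
img_pts += np.random.normal(0, 0.5, img_pts.shape).astype(np.float32)  # pixel noise

# RANSAC-robust estimate of the image-to-database homography; its inverse
# localises the planar object within the camera frame.
H, inliers = cv2.findHomography(db_pts, img_pts, cv2.RANSAC, 3.0)
print("recovered H:\n", H / H[2, 2], "\ninliers:", int(inliers.sum()))
```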
|
90 |
Construction de modèles 3D à partir de données vidéo fisheye : application à la localisation en milieu urbain / Construction of 3D models from fisheye video data: application to localisation in urban areas
Moreau, Julien, 07 June 2016
This research deals with 3D modelling from an embedded fisheye vision system, used for a GNSS application as part of the CAPLOC project. Satellite signal propagation in urban areas involves reflections on structures, impairing localisation accuracy and availability. The purpose of the project is (1) to define an omnidirectional vision system able to provide information on the urban 3D structure and (2) to demonstrate that this information can improve localisation. This thesis addresses (1) self-calibration, (2) matching between images, and (3) 3D reconstruction; each algorithm is assessed on computer-generated and real images. Moreover, it describes a way to correct GNSS signal reflections from a 3D point cloud to improve positioning. Adapting the best of the state of the art, two systems are proposed and evaluated. The first is a stereoscopic system made of two sky-facing fisheye cameras; the second is its adaptation to a single camera. Calibration is handled by a two-step process: the 9-point algorithm fitted to the "equisolid" projection model, coupled with RANSAC, followed by refinement through Levenberg-Marquardt optimisation. We focused on applying the method for optimal and repeatable performance; this is crucial for a single-camera system because the pose must be estimated for every new image. Stereo matches are obtained for every pixel by dynamic programming over a 3D graph. Matching is performed along conjugate epipolar curves projected in a suitable manner onto each image. A distinctive feature is that distortions are not rectified, so as neither to degrade the visual content nor to decrease accuracy. In the binocular case it is possible to estimate full-scale coordinates; in the monocular case this is achieved by adding odometer information. Successive local clouds can be registered to form a global cloud through structure-from-motion. The end application is the use of the 3D cloud to improve GNSS localisation. It is possible to estimate the pseudorange error of a signal after multiple reflections and to account for it to obtain a more accurate position. Reflecting surfaces are modelled through plane extraction and building-footprint fitting. The method is evaluated on fixed image pairs georeferenced by a low-cost receiver and a GPS RTK receiver (ground truth). The results show improved localisation in urban environments.
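The reflection correction can be illustrated with the classical mirror-image construction: reflecting the receiver across a facade plane gives the single-bounce path length, and its excess over the direct path approximates the pseudorange error. This sketch is a geometric stand-in under that assumption, not the thesis's full algorithm; the plane would come from the facades extracted from the 3D point cloud.

```python
import numpy as np

def reflected_path_excess(sat, rx, n, d):
    """Extra path length (m) of a single-bounce reflection off the plane
    n.x = d, versus the direct satellite-receiver path. Mirroring the
    receiver across the facade gives the reflected path length directly.
    n must be a unit normal."""
    rx_mirror = rx - 2.0 * (n @ rx - d) * n
    return np.linalg.norm(sat - rx_mirror) - np.linalg.norm(sat - rx)

# Facade plane x = 10 m, receiver 4 m in front of it, satellite far away
# on the opposite side (illustrative geometry, metres).
sat = np.array([-2.0e7, 1.0e7, 1.5e7])
rx = np.array([6.0, 0.0, 0.0])
n, d = np.array([1.0, 0.0, 0.0]), 10.0
print(f"pseudorange excess: {reflected_path_excess(sat, rx, n, d):.2f} m")
```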
|