101

Using Structure-from-Motion Technology to Compare Coral Coverage on Restored vs. Unrestored Reefs

Rosing, Trina 17 June 2021 (has links)
No description available.
102

Mobilní aplikace pro 3D rekonstrukci / Mobile application for 3D reconstruction

Krátký, Martin January 2021 (has links)
The aim of this master's thesis is to create a mobile application for spatial reconstruction. The thesis describes image processing methods suitable for solving this problem and surveys the available platforms for building mobile applications. The parameters of the measured scenes are defined, a mobile application implementing the described methods is created, and the application is tested by reconstructing objects under different conditions.
103

Approches 2D/2D pour le SFM à partir d'un réseau de caméras asynchrones / 2D/2D approaches for SFM using an asynchronous multi-camera network

Mhiri, Rawia 14 December 2015 (has links)
Driver assistance systems and autonomous vehicles have reached a certain maturity in recent years through the use of advanced technologies. A fundamental step for these systems is estimating the motion and the structure of the environment (Structure from Motion) in order to accomplish several tasks, including obstacle and road-marking detection, localisation, and mapping. To estimate their motion, such systems use relatively expensive sensors; to be marketed on a large scale, applications must instead be developed with low-cost devices, and in this context vision systems are a good alternative. A new method based on 2D/2D approaches from an asynchronous multi-camera network is presented to obtain the motion and the 3D structure at the absolute scale, with particular care taken in estimating the scale factors. The proposed method, called the Triangle Method, is based on three images forming a triangle: two images from the same camera and one image from a neighbouring camera. The algorithm makes three assumptions: the cameras share common fields of view (two by two), the path between two consecutive images from a single camera is approximated by a line segment, and the cameras are calibrated. 
The extrinsic calibration between two cameras, combined with the assumption of rectilinear motion of the system, allows the absolute scale factors to be estimated. The proposed method is accurate and robust for straight trajectories and gives satisfactory results in curves. To refine the initial estimate, errors due to inaccuracies in the scale estimation are reduced by an optimisation method: a local bundle adjustment applied only to the absolute scale factors and the 3D points. The approach is validated on sequences of real road scenes and evaluated against ground truth obtained from a differential GPS. Finally, another fundamental application in driver assistance and automated driving is road and obstacle detection; a first approach for an asynchronous system is presented, based on sparse disparity maps.
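The scale-recovery idea at the heart of this abstract can be illustrated in a few lines: a translation known only up to scale from 2D/2D epipolar geometry is promoted to metric scale using a displacement that is metrically known from the extrinsic calibration. This is a simplified sketch under the rectilinear-motion assumption, not the thesis's actual three-view Triangle Method; the function name is illustrative.

```python
import numpy as np

def absolute_scale(t_rel, baseline_metric):
    """Promote an up-to-scale translation to metric scale.

    t_rel           : translation direction from essential-matrix
                      decomposition (defined only up to scale)
    baseline_metric : the same displacement in metres, known from
                      the extrinsic calibration between cameras
                      (valid here only under rectilinear motion)
    """
    s = np.linalg.norm(baseline_metric) / np.linalg.norm(t_rel)
    return s * t_rel  # metrically scaled translation

# A unit direction scaled by a 2 m known baseline:
t = np.array([0.6, 0.8, 0.0])
t_metric = absolute_scale(t, np.array([1.2, 1.6, 0.0]))
```

In the full method, this scale then propagates to the triangulated 3D points, which is why the local bundle adjustment described above optimises only the scale factors and the points.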
104

Comparing Structure from Motion Photogrammetry and Computer Vision for Low-Cost 3D Cave Mapping: Tipton-Haynes Cave, Tennessee

Elmore, Clinton 01 August 2019 (has links)
Natural caves represent one of the most difficult environments to map with modern 3D technologies. In this study I tested two relatively new methods for 3D mapping in Tipton-Haynes Cave near Johnson City, Tennessee: Structure from Motion Photogrammetry and Computer Vision using Tango, an RGB-D (Red Green Blue and Depth) technology. Many different aspects of these two methods were analyzed with respect to the needs of average cave explorers. Major considerations were cost, time, accuracy, durability, simplicity, lighting setup, and drift. The 3D maps were compared to a conventional cave map drafted with measurements from a modern digital survey instrument called the DistoX2, a clinometer, and a measuring tape. Both 3D mapping methods worked, but photogrammetry proved to be too time consuming and laborious for capturing more than a few meters of passage. RGB-D was faster, more accurate, and showed promise for the future of low-cost 3D cave mapping.
105

Potentialities of Unmanned Aerial Vehicles in Hydraulic Modelling : Drone remote sensing through photogrammetry for 1D flow numerical modelling

Reali, Andrea January 2018 (has links)
In civil and environmental engineering, numerous applications require prior collection of data on the ground. For hydraulic modelling, topographic and morphological features of the region are among the most useful of these, yet they are often unavailable, expensive, or difficult to obtain. In the last few years, UAVs have entered the scene of remote sensing tools used to deliver such information, and their applications, combined with various photo-analysis techniques, have been tested in specific engineering fields with promising results. This thesis aims to contribute to the growing literature on the topic by assessing the potential of UAV and SfM photogrammetry analysis for developing terrain elevation models to be used as input data for numerical flood modelling. It covers all phases of the engineering process, from the survey to the implementation of a 1D hydraulic model based on the photogrammetry-derived topography. The area chosen for the study was the Limpopo river. The challenging environment of the Mozambican inland showed the great advantages of this technology, which allowed a precise and fast survey that easily overcame risks and difficulties. The field test was also useful in exposing the current limits of the drone tool: its high susceptibility to weather conditions, wind, and temperature, and its restricted battery capacity, which did not allow flights longer than 20 minutes. The subsequent photogrammetry analysis showed a high degree of dependency on the number of ground control points and the need for laborious post-processing in order to obtain a reliable DEM and avoid the emergence of doming effects. It thus revealed the importance of treating the drone and the photogrammetry software as a single instrument for delivering a quality DEM, and consequently the importance of planning a photogrammetry-oriented survey with specific precautions. 
Nevertheless, the DEM produced showed a spatial resolution comparable to that of high-precision topography sources. Finally, considering four different topography sources (SRTM DEM 30 m, lidar DEM 1 m, drone DEM 0.6 m, and total station & RTK bathymetric cross sections 0.5 m), the relationship between spatial accuracy and water depth estimation was tested through 1D steady-flow models in HEC-RAS. The performance of each model was expressed as the mean absolute error (MAE) in water depth estimation relative to the model based on the bathymetric cross sections. The results confirmed the potential of the drone for hydraulic engineering applications, with MAE differences between the lidar-, bathymetry-, and drone-based models within 1 m. Calibrating the SRTM-, lidar-, and drone-based models against the bathymetric one demonstrated the relationship between cross-section geometry detail and roughness, with a global improvement in the MAE that was most pronounced for the coarse SRTM geometry.
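The MAE criterion used above to compare the DEM-based models is straightforward; a minimal sketch follows, with hypothetical water depths rather than the thesis data.

```python
import numpy as np

def mae(depth_model, depth_reference):
    """Mean absolute error between two sets of simulated water
    depths, evaluated at matching cross-sections."""
    d1 = np.asarray(depth_model, dtype=float)
    d2 = np.asarray(depth_reference, dtype=float)
    return float(np.mean(np.abs(d1 - d2)))

# Hypothetical depths (m) at three matching cross-sections:
drone_depths = [2.1, 3.4, 1.8]
bathy_depths = [2.0, 3.0, 2.0]
print(mae(drone_depths, bathy_depths))  # ≈ 0.233 m
```

Each topography source yields its own depth series from the 1D model, and its MAE is computed against the series from the bathymetric cross-section model, taken as the reference.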
106

Registration and Localization of Unknown Moving Objects in Markerless Monocular SLAM

Troutman, Blake 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Simultaneous localization and mapping (SLAM) is a general device localization technique that uses realtime sensor measurements to develop a virtualization of the sensor's environment while also using this growing virtualization to determine the position and orientation of the sensor. This is useful for augmented reality (AR), in which a user looks through a head-mounted display (HMD) or viewfinder to see virtual components integrated into the real world. Visual SLAM (i.e., SLAM in which the sensor is an optical camera) is used in AR to determine the exact device/headset movement so that the virtual components can be accurately redrawn to the screen, matching the perceived motion of the world around the user as the user moves the device/headset. However, many potential AR applications may need access to more than device localization data in order to be useful; they may need to leverage environment data as well. Additionally, most SLAM solutions make the naive assumption that the environment surrounding the system is completely static (non-moving). Given these circumstances, it is clear that AR may benefit substantially from utilizing a SLAM solution that detects objects that move in the scene and ultimately provides localization data for each of these objects. This problem is known as the dynamic SLAM problem. Current attempts to address the dynamic SLAM problem often use machine learning to develop models that identify the parts of the camera image that belong to one of many classes of potentially-moving objects. The limitation with these approaches is that it is impractical to train models to identify every possible object that moves; additionally, some potentially-moving objects may be static in the scene, which these approaches often do not account for. 
Some other attempts to address the dynamic SLAM problem also localize the moving objects they detect, but these systems almost always rely on depth sensors or stereo camera configurations, which have significant limitations in real-world use cases. This dissertation presents a novel approach for registering and localizing unknown moving objects in the context of markerless, monocular, keyframe-based SLAM with no required prior information about object structure, appearance, or existence. This work also details a novel deep learning solution for determining SLAM map initialization suitability in structure-from-motion-based initialization approaches. This dissertation goes on to validate these approaches by implementing them in a markerless, monocular SLAM system called LUMO-SLAM, which is built from the ground up to demonstrate this approach to unknown moving object registration and localization. Results are collected for the LUMO-SLAM system, which address the accuracy of its camera localization estimates, the accuracy of its moving object localization estimates, and the consistency with which it registers moving objects in the scene. These results show that this solution to the dynamic SLAM problem, though it does not act as a practical solution for all use cases, has an ability to accurately register and localize unknown moving objects in such a way that makes it useful for some applications of AR without thwarting the system's ability to also perform accurate camera localization.
107

Structure from Motion with Unstructured RGBD Data

Svensson, Niclas January 2021 (has links)
This thesis covers the topic of depth-assisted Structure from Motion (SfM). In classic SfM, the goal is to reconstruct a 3D scene using only a set of unstructured RGB images. This thesis adds the depth dimension to the problem formulation and consequently builds a system that can receive a set of RGBD images. The problem has been addressed by modifying an existing SfM pipeline, in particular its Bundle Adjustment (BA) stage. Comparisons between the modified framework and the baseline framework led to conclusions about the impact of the modifications. The results show two main things. First, the accuracy of the framework increases in most situations; the difference is most significant when the captured scene is covered from only a small sector, although noisy data can cause the modified pipeline to perform worse. Second, the run time of the framework is significantly reduced. The conclusion of the report discusses how other parts of the pipeline could be modified.
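One common way to fold RGBD depth measurements into bundle adjustment — a plausible reading of the modification described above, though the thesis's actual formulation may differ — is to extend each observation's 2D reprojection residual with a weighted depth term. All names and the weighting scheme here are illustrative.

```python
import numpy as np

def rgbd_residual(point_w, R, t, obs_uv, obs_depth, K, w_depth=1.0):
    """Joint residual for one observation in depth-assisted BA:
    pixel reprojection error plus a weighted depth error.

    point_w   : 3D point in world coordinates
    R, t      : camera pose (world -> camera)
    obs_uv    : observed pixel (u, v)
    obs_depth : observed depth (m) from the RGBD sensor
    K         : 3x3 intrinsic matrix
    """
    p_cam = R @ point_w + t                 # transform to camera frame
    uv_hom = K @ p_cam
    uv = uv_hom[:2] / uv_hom[2]             # pinhole projection
    r_reproj = uv - obs_uv                  # 2D reprojection error
    r_depth = w_depth * (p_cam[2] - obs_depth)  # predicted z vs measured depth
    return np.concatenate([r_reproj, [r_depth]])

# A perfectly consistent observation yields a zero residual:
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
r = rgbd_residual(np.array([0.0, 0.0, 2.0]), np.eye(3), np.zeros(3),
                  np.array([320.0, 240.0]), 2.0, K)
```

Stacking such residuals over all observations and minimising with a nonlinear least-squares solver gives a BA that is constrained by depth as well as by image geometry, which is consistent with the accuracy gains reported for weakly covered scenes.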
108

Modeling Smooth Time-Trajectories for Camera and Deformable Shape in Structure from Motion with Occlusion

Gotardo, Paulo Fabiano Urnau 28 September 2010 (has links)
No description available.
109

Pose Estimation and Structure Analysis of Image Sequences

Hedborg, Johan January 2009 (has links)
Autonomous navigation for ground vehicles has many challenges. Autonomous systems must be able to self-localise, avoid obstacles, and determine navigable surfaces. This thesis studies several aspects of autonomous navigation, with a particular emphasis on vision, motivated by vision being a primary component of navigation in many high-level biological organisms. The key problem of self-localisation, or pose estimation, can be solved through analysis of the changes in appearance of rigid objects observed from different viewpoints. We therefore describe a system for structure and motion estimation for real-time navigation and obstacle avoidance. Under the explicit assumption of a calibrated camera, we have studied several schemes for increasing the accuracy and speed of the estimation. The basis of most structure-and-motion pose estimation algorithms is a good point tracker. However, point tracking is computationally expensive and can occupy a large portion of the CPU resources. In this thesis we show how a point tracker can be implemented efficiently on the graphics processor, which results in faster tracking of points and leaves the CPU available for additional processing tasks. In addition, we propose a novel view-interpolation approach that can be used effectively for pose estimation given previously seen views; in this way, a vehicle can estimate its location by interpolating previously seen data. Navigation and obstacle avoidance may be carried out efficiently using structure and motion, but only within a limited range from the camera. To increase this effective range, additional information needs to be incorporated, specifically the location of objects in the image. For this, we propose a real-time object recognition method based on P-channel matching, which may be used to improve navigation accuracy at distances where structure estimation is unreliable. / Diplecs
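The point tracker discussed in this abstract is, at its core, a Lucas-Kanade style update. A toy single-patch, CPU-only version is sketched below purely for illustration — the thesis implements a full pyramidal tracker on the GPU, and none of these names come from it.

```python
import numpy as np

def lk_translation(img0, img1, center, win=7):
    """One Lucas-Kanade step estimating the 2D translation of a
    small patch between two grayscale frames."""
    y, x = center
    h = win // 2
    p0 = img0[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    p1 = img1[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    gy, gx = np.gradient(p0)                  # spatial gradients of the patch
    it = (p1 - p0).ravel()                    # temporal difference
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, -it, rcond=None)
    return d                                  # estimated (dx, dy) in pixels

# On a horizontal intensity ramp shifted right by one pixel,
# the estimated x-displacement is 1:
img0 = np.tile(np.arange(32.0), (32, 1))
img1 = img0 - 1.0           # content of img0 shifted +1 px in x
d = lk_translation(img0, img1, (16, 16))
```

The real tracker iterates this step over image pyramids for hundreds of points per frame, which is exactly the data-parallel workload that maps well onto a GPU.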
110

Use of consumer grade small unmanned aerial systems (sUAS) for mapping storm damage in forested environments

Cox, James Dewey 13 May 2022 (has links) (PDF)
Storm damage to forested environments poses significant challenges to landowners, land managers, and conservationists alike. Assessing the scope and scale of damage can be difficult, costly, and time-consuming with conventional pedestrian survey techniques. Consumer-grade sUAS technology offers an efficient, cost-effective way to accurately assess storm damage in small to moderate-sized survey areas (less than 10 km²). Data were collected over a 0.195 km² area of damaged timber within the Kisatchie National Forest in central Louisiana using a DJI Mavic 2 Pro drone. The collected imagery was processed into an orthomosaic using Agisoft Metashape Professional, with a resulting ground sampling distance of 2.58 cm per pixel. The combined X and Y ground distance accuracy r was calculated as 1.39230 m, and the combined horizontal error as 0.810455526 m. From the generated orthomosaic, the total storm damage area was estimated as 2.68 ha (6.63 ac) based on digitized polygon area calculations.
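The area estimate above comes from digitized polygons on the orthomosaic. The arithmetic linking pixel count, ground sampling distance, and area can be sketched as follows; the function name and pixel counts are illustrative, not from the thesis.

```python
def ground_area_ha(pixel_count, gsd_cm):
    """Ground area of a digitized region on an orthomosaic, from
    its pixel count and the ground sampling distance (GSD)."""
    gsd_m = gsd_cm / 100.0              # cm per pixel -> m per pixel
    area_m2 = pixel_count * gsd_m ** 2  # each pixel covers gsd_m^2 of ground
    return area_m2 / 10_000.0           # m^2 -> hectares

# At the reported 2.58 cm GSD, the 2.68 ha damage area corresponds
# to roughly 2.68 * 10_000 / (0.0258 ** 2) ≈ 40 million pixels.
```

The same relation explains why GSD matters for accuracy: halving the GSD quadruples the pixel density over a given damage polygon.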
