About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world.
41

Event-Based Visual SLAM : An Explorative Approach

Rideg, Johan January 2023 (has links)
Simultaneous Localization And Mapping (SLAM) is an important topic within the field of robotics, aiming to localize an agent in an unknown or partially known environment while simultaneously mapping the environment. The ability to perform robust SLAM is especially important in hazardous environments such as natural disasters, firefighting and space exploration, where human exploration may be too dangerous or impractical. In recent years, neuromorphic cameras have become commercially available. This new type of sensor does not output conventional frames but instead an asynchronous stream of events at microsecond resolution, and is capable of capturing details in complex lighting scenarios where a standard camera would be either under- or overexposed, making neuromorphic cameras a promising solution in situations where standard cameras struggle. This thesis explores a set of different approaches to virtual frames, a frame-based representation of events, in the context of SLAM. UltimateSLAM, a project fusing events, gray scale and IMU data, is investigated using virtual frames of fixed and varying frame rate, both with and without motion compensation. The resulting trajectories are compared to the trajectories produced when using gray scale frames, and the numbers of detected and tracked features are compared. We also use a traditional visual SLAM project, ORB-SLAM, to investigate Gaussian-weighted virtual frames and gray scale frames reconstructed from the event stream using a recurrent network model. While virtual frames can be used for SLAM, the event camera is not a plug-and-play sensor and requires a good choice of parameters when constructing virtual frames, relying on pre-existing knowledge of the scene.
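The fixed-frame-rate virtual frames the abstract describes amount to accumulating events over a time window into an image. A minimal sketch, assuming an (N, 4) event array of (t, x, y, polarity) rows — a layout chosen here for illustration, not the format used in the thesis:

```python
import numpy as np

def virtual_frame(events, t_start, t_end, height, width):
    """Accumulate events in [t_start, t_end) into a frame-like image.

    `events` is assumed to be an (N, 4) array of (t, x, y, polarity)
    rows with polarity in {-1, +1}.
    """
    frame = np.zeros((height, width), dtype=np.float64)
    mask = (events[:, 0] >= t_start) & (events[:, 0] < t_end)
    for t, x, y, p in events[mask]:
        frame[int(y), int(x)] += p  # signed event count per pixel
    # Normalize to [0, 1] so the frame can feed a standard feature detector.
    span = frame.max() - frame.min()
    return (frame - frame.min()) / span if span > 0 else frame
```

Sliding a fixed-length window gives a fixed frame rate; triggering a frame after a fixed event count instead would give the varying frame rate mentioned above.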
42

ESA ExoMars Rover PanCam System Geometric Modeling and Evaluation

Li, Ding 14 May 2015 (has links)
No description available.
43

Simultaneous Three-Dimensional Mapping and Geolocation of Road Surface

Li, Diya 23 October 2018 (has links)
This thesis presents a simultaneous 3D mapping and geolocation of road surface technique that combines local road surface mapping and global camera localization. The local road surface is generated by structure from motion (SFM) with multiple views and optimized by bundle adjustment (BA). A system is developed for the global reconstruction of the 3D road surface. Using this system, the proposed technique globally reconstructs the 3D road surface by estimating the global camera pose with an Adaptive Extended Kalman Filter (AEKF) and integrating it with the local road surface reconstruction. The proposed AEKF-based technique uses image shift as a prior, and the camera pose is corrected with sparse low-accuracy Global Positioning System (GPS) data and a digital elevation map (DEM). The AEKF adaptively updates the covariance of uncertainties so that the estimation works well in environments with varying uncertainties. In the image capturing system, the camera frame rate is dynamically controlled by the vehicle speed read from on-board diagnostics (OBD), both to capture continuous data and to help remove the effects of the moving vehicle's shadow from the images with a Random Sample Consensus (RANSAC) algorithm. The proposed technique is tested in both simulation and field experiments and compared with similar previous work. The results show that it achieves better accuracy than the conventional Extended Kalman Filter (EKF) method and a smaller translation error than other similar works. / Master of Science / This thesis presents a simultaneous three-dimensional (3D) mapping and geolocation of road surface technique that combines local road surface mapping and global camera localization. The local road surface is reconstructed by image processing with optimization.
The designed system globally reconstructs the 3D road surface by estimating the global camera poses using the proposed Adaptive Extended Kalman Filter (AEKF)-based method and integrating them with the local road surface reconstruction. The camera pose estimate uses image shift as a prior and is corrected with sparse low-accuracy Global Positioning System (GPS) data and a digital elevation map (DEM). The final 3D road surface map with geolocation is generated by combining both the local road surface mapping and the global localization results. The proposed technique is tested in both simulation and field experiments and compared with similar previous work. The results show that it achieves better accuracy than the conventional Extended Kalman Filter (EKF) method and a smaller translation error than other similar works.
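The adaptive covariance update that distinguishes the AEKF from a plain EKF can be sketched as innovation-based adaptation: when recent innovations are larger than the filter predicts, the measurement covariance is inflated. The windowed rule below is an illustrative assumption, not the exact adaptation law from the thesis:

```python
import numpy as np

def aekf_update(x, P, z, H, R, innov_window):
    """One measurement update of an innovation-adaptive EKF (sketch).

    `innov_window` is a mutable list of recent squared innovation norms;
    the moving-average adaptation rule is illustrative only.
    """
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # predicted innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    # Adapt R: if the measured innovation spread exceeds the predicted
    # trace of S, inflate R proportionally; never deflate below R.
    innov_window.append(float(y @ y))
    avg = sum(innov_window) / len(innov_window)
    R_new = R * max(1.0, avg / max(np.trace(S), 1e-12))
    return x_new, P_new, R_new
```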
44

Visual odometry: comparing a stereo and a multi-camera approach / Odometria visual: comparando métodos estéreo e multi-câmera

Pereira, Ana Rita 25 July 2017 (has links)
The purpose of this project is to implement, analyze and compare visual odometry approaches to help the localization task in autonomous vehicles. The stereo visual odometry algorithm Libviso2 is compared with a proposed omnidirectional multi-camera approach. The proposed method consists of performing monocular visual odometry on all cameras individually and selecting the best estimate through a voting scheme involving all cameras. The omnidirectionality of the vision system allows the part of the surroundings richest in features to be used in the relative pose estimation. Experiments are carried out using Bumblebee XB3 and Ladybug 2 cameras, fixed on the roof of a vehicle. The voting process of the proposed omnidirectional multi-camera method leads to some improvements relative to the individual monocular estimates. However, stereo visual odometry provides considerably more accurate results.
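A voting scheme over per-camera estimates can be sketched as a consensus rule: each camera proposes a motion, and the estimate closest to the group consensus wins. The median-distance criterion below is a stand-in; the abstract does not specify the thesis's exact voting criterion:

```python
import numpy as np

def vote_best_estimate(motions):
    """Pick the per-camera motion estimate most consistent with the others.

    `motions` is a list of (tx, ty, yaw) tuples, one per camera; the
    consensus-by-median rule is illustrative only.
    """
    m = np.asarray(motions, dtype=float)
    median = np.median(m, axis=0)            # robust consensus motion
    errors = np.linalg.norm(m - median, axis=1)
    winner = int(np.argmin(errors))          # camera closest to consensus
    return winner, m[winner]
```

A median-based consensus is robust to a minority of cameras facing featureless regions, which matches the motivation of using the feature-richest part of the surroundings.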
45

Visual odometry and mapping in natural environments for arbitrary camera motion models

Terzakis, George January 2016 (has links)
This is a thesis on outdoor monocular visual SLAM in natural environments. The techniques proposed herein aim at estimating the camera pose and the 3D geometric structure of the surrounding environment. The problem statement was motivated by the GPS-denied scenario for a sea-surface vehicle developed at Plymouth University named Springer. The algorithms proposed in this thesis are mainly adapted to Springer's environmental conditions, so that the vehicle can navigate using a vision-based localization system when GPS is not available; such environments include estuarine areas, forests and occasional semi-urban territories. The research objectives are constrained versions of the ever-abiding problems in the fields of multiple view geometry and mobile robotics. The research proposes new techniques, or improves existing ones, for problems such as scene reconstruction, relative camera pose recovery and filtering, always in the context of the aforementioned landscapes (i.e., rivers, forests, etc.). Although visual tracking is paramount for the generation of data point correspondences, this thesis focuses primarily on the geometric aspect of the problem, as well as on the probabilistic framework in which the optimization of pose and structure estimates takes place. Besides algorithms, the deliverables of this research include the respective implementations and test data in the form of a software library and a dataset containing footage of estuarine regions taken from a boat, along with synchronized sensor logs. This thesis is not the final analysis of vision-based navigation. It merely proposes various solutions to the localization problem of a vehicle navigating in natural environments, either on land or on the surface of the water. Although these solutions can provide position and orientation estimates when GPS is not available, they have limitations and there is still a vast new world of ideas to be explored.
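As an illustration of the scene-reconstruction step such work builds on, two-view triangulation by the standard linear DLT method recovers a 3D point from its projections in two calibrated views. This is the textbook construction, shown only to make the kind of geometry involved concrete:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 3D point from two views by the linear DLT method.

    P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) observations
    in normalized image coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```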
47

A Robust Synthetic Basis Feature Descriptor Implementation and Applications Pertaining to Visual Odometry, Object Detection, and Image Stitching

Raven, Lindsey Ann 05 December 2017 (has links)
Feature detection and matching is an important step in many object tracking and detection algorithms. This paper discusses methods to improve upon previous work on the SYnthetic BAsis feature descriptor (SYBA) algorithm, which describes and compares image features in an efficient and discrete manner. SYBA overlays synthetic basis images on a feature region of interest (FRI) to generate binary numbers that uniquely describe the feature contained within the FRI. These binary numbers are then compared against feature values in subsequent images for matching. However, in a non-ideal environment the accuracy of the feature matching suffers due to variations in image scale and rotation. This paper introduces a new version of SYBA which processes FRIs so that the descriptions developed by SYBA are rotation and scale invariant. To demonstrate the improvements of this robust implementation of SYBA, called rSYBA, the paper includes applications that must cope with high amounts of image variation. The first detects objects along an oil pipeline by transforming and comparing, frame by frame, two surveillance videos recorded at two different times. The second shows camera pose plotting for a ground-based vehicle using monocular visual odometry. The third generates panoramic images through image stitching and image transforms. All applications contain large amounts of image variation between frames and therefore require a significant number of correct feature matches to generate acceptable results.
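The overlay-and-binarize idea can be sketched in a few lines: each synthetic basis selects a subset of patch pixels, one bit records a comparison under that basis, and descriptors are matched by Hamming distance. The thresholding rule here is a simplified stand-in for SYBA's similarity counts, for illustration only:

```python
import numpy as np

def syba_like_descriptor(patch, bases):
    """Binary-describe a feature patch by overlaying synthetic basis masks.

    Each basis is a boolean mask over the patch; one bit records whether
    the mean intensity under the mask exceeds the overall patch mean.
    """
    patch = patch.astype(float)
    bits = [patch[b].mean() > patch.mean() for b in bases]
    return np.array(bits, dtype=np.uint8)

def hamming_match(desc, candidates):
    """Return the index of the candidate with the smallest Hamming distance."""
    dists = [int(np.count_nonzero(desc != c)) for c in candidates]
    return int(np.argmin(dists))
```

Binary descriptors of this kind are cheap to compare, which is why the matching step reduces to counting differing bits.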
48

Filtering Techniques for Pose Estimation with Applications to Unmanned Air Vehicles

Ready, Bryce Benson 29 November 2012 (has links) (PDF)
This work presents two novel methods of estimating the state of a dynamic system in a Kalman Filtering framework. The first is an application specific method for use with systems performing Visual Odometry in a mostly planar scene. Because a Visual Odometry method inherently provides relative information about the pose of a platform, we use this system as part of the time update in a Kalman Filtering framework, and develop a novel way to propagate the uncertainty of the pose through this time update method. Our initial results show that this method is able to reduce localization error significantly with respect to pure INS time update, limiting drift in our test system to around 30 meters for tens of seconds. The second key contribution of this work is the Manifold EKF, a generalized version of the Extended Kalman Filter which is explicitly designed to estimate manifold-valued states. This filter works for a large number of commonly useful manifolds, and may have applications to other manifolds as well. In our tests, the Manifold EKF demonstrated significant advantages in terms of consistency when compared to other filtering methods. We feel that these promising initial results merit further study of the Manifold EKF, related filters, and their properties.
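The manifold-valued-state idea behind the Manifold EKF can be illustrated on the simplest useful case, a rotation in SO(2): corrections live in the tangent space and are applied through the exponential map, so the state never leaves the manifold. This single-angle example is an illustrative reduction; the thesis treats more general manifolds and the full filter:

```python
import numpy as np

def so2_exp(theta):
    """Exponential map of SO(2): angle -> rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def so2_log(R):
    """Logarithm map of SO(2): rotation matrix -> angle."""
    return float(np.arctan2(R[1, 0], R[0, 0]))

def manifold_update(R, delta):
    """Apply a tangent-space correction via the exponential map, instead
    of adding it to a parameterization, so the result stays in SO(2)."""
    return R @ so2_exp(delta)
```

Composing on the manifold avoids the normalization and singularity issues that plague additive updates on angles or quaternion components.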
49

Localization of Combat Aircraft at High Altitude using Visual Odometry

Nilsson Boij, Jenny January 2022 (has links)
Most of the navigation systems used in today's aircraft rely on Global Navigation Satellite Systems (GNSS). However, GNSS is not fully reliable: it can be jammed by attacks on the space or ground segments of the system, or denied in inaccessible areas. Hence, to ensure successful navigation it is of great importance to be able to continuously establish the aircraft's location without relying on external reference systems. Localization is one of many sub-problems in navigation and is the focus of this thesis. This brings us to the field of visual odometry (VO), which involves determining position and orientation with the help of images from one or more camera sensors. To date, most VO systems have been established primarily on ground vehicles and low-flying multi-rotor systems. This thesis seeks to extend VO to new applications by exploring it in a fairly new context: a fixed-wing piloted combat aircraft, for vision-only pose estimation in applications with extremely large scene depth. A major part of this research work is the data gathering, where the data is collected using the flight simulator X-Plane 11. Three different flight routes are flown, a straight line, a curve and a loop, under two visual conditions: clear weather with daylight, and sunset. The method used in this work is ORB-SLAM3, an open-source library for visual simultaneous localization and mapping (SLAM). It has shown excellent results in previous works and has become a benchmark method in the field of visual pose estimation. ORB-SLAM3 tracks the straight line of 78 km very well at an altitude over 2700 m. The absolute trajectory error (ATE) is 0.072% of the total distance traveled in daylight and 0.11% during sunset. These results are of the same magnitude as ORB-SLAM3 achieves on the EuRoC MAV dataset. For the curved trajectory of 79 km, the ATE is 2.0% and 1.2% of the total distance traveled in daylight and sunset respectively.
The longest flight route of 258 km shows the challenges of visual pose estimation. Although the system manages to close loops in daylight, its ATE is 3.6% of the total distance traveled. During sunset the features do not possess enough invariant characteristics to close loops, resulting in an even larger ATE of 14% of the total distance traveled. Hence, to be able to use and properly rely on vision for localization, more sensor information is needed. Since all aircraft already possess an inertial measurement unit (IMU), future work naturally includes IMU data in the system. Nevertheless, the results from this research show that vision is useful, even at the high altitudes and speeds used by a combat aircraft.
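Figures like "0.072% of total distance traveled" come from the standard absolute-trajectory-error metric: the RMSE of aligned position errors, divided by the path length. A minimal sketch, assuming the estimated and ground-truth trajectories have already been aligned (e.g. by a similarity transform):

```python
import numpy as np

def ate_percent(est, gt):
    """Absolute trajectory error as a percentage of distance traveled.

    `est` and `gt` are (N, 3) arrays of time-aligned, frame-aligned
    positions; alignment is assumed to have been done beforehand.
    """
    errors = np.linalg.norm(est - gt, axis=1)          # per-pose error
    rmse = float(np.sqrt(np.mean(errors ** 2)))        # ATE (RMSE form)
    # Path length: sum of consecutive ground-truth segment lengths.
    distance = float(np.sum(np.linalg.norm(np.diff(gt, axis=0), axis=1)))
    return 100.0 * rmse / distance
```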
50

Odométrie visuelle directe et cartographie dense de grands environnements à base d'images panoramiques RGB-D / Direct visual odometry and dense large-scale environment mapping from panoramic RGB-D images

Martins, Renato 27 October 2017 (has links)
This thesis is in the context of self-localization and 3D mapping from RGB-D cameras for mobile robots and autonomous systems. We present image alignment and mapping techniques to perform camera localization (tracking), notably for large camera motions or low frame rates. Possible domains of application are localization of autonomous vehicles, 3D reconstruction of environments, security, and virtual and augmented reality. We propose a consistent localization and dense 3D mapping framework that takes as input a sequence of RGB-D images acquired from a mobile platform. The core of this framework explores and extends the domain of applicability of direct/dense appearance-based image registration methods.
Compared with feature-based techniques, direct/dense image registration (or image alignment) techniques are more accurate and allow a more consistent dense representation of the scene. However, these techniques have a smaller domain of convergence and rely on the assumption that the camera motion is small. In the first part of the thesis, we propose two formulations to relax this assumption. Firstly, we describe a fast pose estimation strategy to compute a rough estimate of large motions, based on the normal vectors of the scene surfaces and on the geometric properties between the RGB-D images. This rough estimate can be used to initialize direct registration methods for refinement. Secondly, we propose a direct RGB-D camera tracking method that adaptively exploits the photometric and geometric error properties to improve the convergence of the image alignment. In the second part of the thesis, we propose regularization and fusion techniques to create compact and accurate representations of large-scale environments. The regularization is performed from a segmentation of spherical frames into piecewise patches, using the photometric and geometric information simultaneously to improve the accuracy and consistency of the 3D scene reconstruction. This segmentation is also adapted to tackle the non-uniform resolution of panoramic images. Finally, the regularized frames are combined to build a compact keyframe-based map composed of spherical RGB-D panoramas optimally distributed in the environment. These representations are helpful for autonomous navigation and guiding tasks, as they allow constant-time access with limited storage that does not depend on the size of the environment.
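The joint photometric and geometric error minimized in direct RGB-D alignment can be sketched as a weighted sum of a brightness-constancy residual and a depth residual. The warping of the current frame into the reference frame is omitted here, and the fixed weights stand in for the adaptive weighting the thesis proposes:

```python
import numpy as np

def combined_cost(I_ref, I_cur_warped, D_ref, D_cur_warped,
                  w_photo=1.0, w_geo=1.0):
    """Joint photometric + geometric cost for direct RGB-D alignment.

    `I_*` are intensity images and `D_*` depth maps, with the current
    frame assumed already warped into the reference frame by a candidate
    pose. The fixed weights are illustrative only.
    """
    r_photo = (I_ref - I_cur_warped).ravel()   # brightness-constancy error
    r_geo = (D_ref - D_cur_warped).ravel()     # depth error (geometric term)
    return (w_photo * float(r_photo @ r_photo)
            + w_geo * float(r_geo @ r_geo))
```

A tracker would evaluate this cost inside an iterative optimization over the candidate pose, decreasing it until convergence; the adaptive variant reweights the two terms according to their respective convergence properties.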
