11

ORB-SLAM PERFORMANCE FOR INDOOR ENVIRONMENT USING JACKAL MOBILE ROBOT

Tianshu Ruan (8632812) 16 April 2020 (has links)
This thesis explains how Oriented FAST and Rotated BRIEF SLAM (ORB-SLAM), one of the best visual SLAM solutions, works indoors and evaluates the technique's performance for three different cameras: a monocular camera, a stereo camera and an RGB-D camera. Three experiments are designed to find the limitations of the algorithm. In the experiments, RGB-D SLAM gives the most accurate results for the indoor environment. Monocular SLAM performs better than stereo SLAM on our platform due to its limited computational power; stereo SLAM is expected to provide better results once the experimental platform's computational power is increased. The ORB-SLAM results demonstrate the applicability of the approach for autonomous navigation and future autonomous cars.
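Evaluations of this kind typically score each camera configuration by the absolute trajectory error (ATE) between the estimated and ground-truth trajectories. The following is a minimal numpy sketch of the RMSE form of the metric, assuming already-aligned (N, 3) position arrays (the rigid-alignment step of the full metric is omitted, and all names are illustrative):

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error between two
    aligned trajectories given as (N, 3) position arrays."""
    diffs = estimated - ground_truth
    return float(np.sqrt(np.mean(np.sum(diffs ** 2, axis=1))))

# Toy trajectories: the estimate drifts 0.1 m along x at every pose.
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
est = gt + np.array([0.1, 0.0, 0.0])
print(round(ate_rmse(est, gt), 3))  # constant 0.1 m drift -> RMSE 0.1
```

The same function would be applied to the monocular, stereo and RGB-D runs to rank their accuracy against ground truth.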
12

[pt] SLAM VISUAL EM AMBIENTES DINÂMICOS UTILIZANDO SEGMENTAÇÃO PANÓPTICA / [en] VISUAL SLAM IN DYNAMIC ENVIRONMENTS USING PANOPTIC SEGMENTATION

GABRIEL FISCHER ABATI 10 August 2023 (has links)
Mobile robots have become popular in recent years due to their ability to operate autonomously and accomplish tasks that would otherwise be too dangerous, repetitive, or tedious for humans. To achieve autonomous navigation, the robot must have a map of its surroundings and an estimate of its location within that map. The Simultaneous Localization and Mapping (SLAM) problem is concerned with determining both the map and the localization concurrently from sensor measurements. Visual SLAM estimates the location and map of a mobile robot using only visual information captured by cameras. Using cameras for sensing provides a significant advantage, as it enables solving computer vision tasks that offer high-level information about the scene, including object detection, segmentation, and recognition. Several visual SLAM systems in the literature achieve high accuracy and performance, but most are not robust in dynamic scenarios. Those that deal with dynamic content usually rely on deep learning methods to detect and filter dynamic objects; however, these methods cannot handle unknown objects. This work presents a new visual SLAM system robust to dynamic environments, even in the presence of unknown moving objects. It uses panoptic segmentation to filter dynamic objects from the scene during the state estimation process. The proposed methodology is based on ORB-SLAM3, a state-of-the-art SLAM system for static environments. The implementation was tested on real-world datasets and compared with several systems from the literature, including DynaSLAM, DS-SLAM and SaD-SLAM. The proposed system also surpasses the ORB-SLAM3 results on a custom dataset composed of dynamic environments with unknown moving objects.
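The filtering step this abstract describes can be pictured as discarding feature points that land on pixels a segmentation network has labelled as moving objects. This is an illustrative numpy sketch, not the thesis's actual implementation; `filter_dynamic_keypoints` and its inputs are hypothetical names:

```python
import numpy as np

def filter_dynamic_keypoints(keypoints, dynamic_mask):
    """Discard keypoints that fall on pixels flagged as dynamic.

    keypoints    -- (N, 2) array of (row, col) pixel coordinates
    dynamic_mask -- boolean (H, W) array, True where a segmentation
                    network labelled the pixel as a moving object
    """
    rows = keypoints[:, 0].astype(int)
    cols = keypoints[:, 1].astype(int)
    keep = ~dynamic_mask[rows, cols]
    return keypoints[keep]

# 4x4 image whose right half is covered by a moving object.
mask = np.zeros((4, 4), dtype=bool)
mask[:, 2:] = True
kps = np.array([[0, 0], [1, 3], [2, 1], [3, 2]])
print(filter_dynamic_keypoints(kps, mask).tolist())  # only static points remain
```

In a real pipeline the surviving keypoints would then feed the tracking and bundle-adjustment stages as usual.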
13

MonoDepth-vSLAM: A Visual EKF-SLAM using Optical Flow and Monocular Depth Estimation

Dey, Rohit 04 October 2021 (has links)
No description available.
14

Ego-Motion Estimation of Drones / Positionsestimering för drönare

Ay, Emre January 2017 (has links)
To remove the dependency on external structure for drone positioning in GPS-denied environments, it is desirable to estimate the ego-motion of drones on board. Visual positioning systems have been studied for quite some time and the literature on the area is abundant. The aim of this project is to investigate the currently available methods and implement a visual odometry system for drones that is capable of giving continuous estimates with a lightweight solution. To that end, the state-of-the-art systems are investigated and a visual odometry system is implemented based on the resulting design decisions. The resulting system is shown to give acceptable estimates.
15

An Implementation Of Mono And Stereo Slam System Utilizing Efficient Map Management Strategy

Kalay, Adnan 01 September 2008 (has links) (PDF)
For an autonomous mobile robot, localization and map building are vital capabilities. The localization ability provides the robot with location information, so the robot can navigate in the environment. On the other hand, the robot can interact with its environment using a model of the environment (map information) provided by the map building mechanism. These two capabilities depend on each other, and their simultaneous operation is called SLAM (Simultaneous Localization and Map Building). While various sensors are used for this algorithm, vision-based approaches are relatively new and have attracted more interest in recent years. In this thesis work, a versatile visual SLAM system is constructed and presented. At the core of this work is a vision-based simultaneous localization and map building algorithm which uses point features in the environment as visual landmarks and an Extended Kalman Filter for state estimation. A detailed analysis of this algorithm is made, including the state estimation, feature extraction and data association steps. The algorithm is extended to be used for both stereo and single camera systems. The core of both algorithms is the same, and we discuss their differences, which originate from the dissimilarity of the measurements. The algorithm also runs in different motion modes, namely predefined, manual and autonomous. Secondly, a map management strategy is developed, especially for extended environments. When the robot runs the SLAM algorithm in large environments, the constructed map obviously contains a great number of landmarks. The efficiency algorithm comes into play when the total number of features exceeds a critical value for the system. In this case, the current map is rarefied without losing the geometrical distribution of the landmarks.
Furthermore, a well-organized graphical user interface is implemented which enables the operator to select operational modes, change various parameters of the main SLAM algorithm and see the results of the SLAM operation both textually and graphically. Finally, a basic mission concept is defined in our system in order to illustrate what the robot can do using the outputs of the SLAM algorithm. All of the ideas mentioned are implemented in this thesis, experiments are conducted using a real robot, and the results are analyzed by comparing the algorithm outputs with ground-truth measurements.
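The map-rarefaction idea above, thinning the landmark set once it exceeds a critical size while preserving its geometric distribution, can be sketched as a grid-based subsampling. This is an assumption-laden illustration, not the thesis's actual strategy:

```python
import numpy as np

def rarefy_landmarks(landmarks, cell_size):
    """Keep at most one landmark per spatial grid cell, shrinking the
    map while roughly preserving its geometric distribution."""
    cells = np.floor(landmarks / cell_size).astype(int)
    # Index of the first landmark seen in each occupied cell.
    _, first_idx = np.unique(cells, axis=0, return_index=True)
    return landmarks[np.sort(first_idx)]

# A dense cluster near the origin plus one distant landmark.
lms = np.array([[0.1, 0.1], [0.2, 0.15], [0.3, 0.05], [5.0, 5.0]])
sparse = rarefy_landmarks(lms, cell_size=1.0)
print(len(sparse))  # the cluster collapses to one representative -> 2 left
```

In an EKF-SLAM context, dropping a landmark would also mean marginalizing its rows and columns out of the state covariance, which this sketch does not show.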
16

Visual odometry and mapping in natural environments for arbitrary camera motion models

Terzakis, George January 2016 (has links)
This is a thesis on outdoor monocular visual SLAM in natural environments. The techniques proposed herein aim at estimating the camera pose and the 3D geometrical structure of the surrounding environment. This problem statement was motivated by the GPS-denied scenario for a sea-surface vehicle developed at Plymouth University named Springer. The algorithms proposed in this thesis are mainly adapted to Springer's environmental conditions, so that the vehicle can navigate on a vision-based localization system when GPS is not available; such environments include estuarine areas, forests and the occasional semi-urban territories. The research objectives are constrained versions of the ever-abiding problems in the fields of multiple view geometry and mobile robotics. The research proposes new techniques or improves existing ones for problems such as scene reconstruction, relative camera pose recovery and filtering, always in the context of the aforementioned landscapes (i.e., rivers, forests, etc.). Although visual tracking is paramount for the generation of data point correspondences, this thesis focuses primarily on the geometric aspect of the problem as well as on the probabilistic framework in which the optimization of pose and structure estimates takes place. Besides algorithms, the deliverables of this research include the respective implementations and test data in the form of a software library and a dataset containing footage of estuarine regions taken from a boat, along with synchronized sensor logs. This thesis is not the final analysis on vision-based navigation. It merely proposes various solutions for the localization problem of a vehicle navigating in natural environments, either on land or on the surface of the water. Although these solutions can provide position and orientation estimates when GPS is not available, they have limitations and there is still a vast new world of ideas to be explored.
17

Odométrie visuelle directe et cartographie dense de grands environnements à base d'images panoramiques RGB-D / Direct visual odometry and dense large-scale environment mapping from panoramic RGB-D images

Martins, Renato 27 October 2017 (has links)
This thesis is in the context of self-localization and 3D mapping from RGB-D cameras for mobile robots and autonomous systems. We present image alignment and mapping techniques to perform camera localization (tracking), notably for large camera motions or low frame rates. Possible domains of application are the localization of autonomous vehicles, 3D reconstruction of environments, security, and virtual and augmented reality. We propose a consistent localization and dense 3D mapping framework that takes as input a sequence of RGB-D images acquired from a mobile platform. The core of this framework explores and extends the domain of applicability of direct/dense appearance-based image registration methods. Compared with feature-based techniques, direct/dense image registration (or image alignment) techniques are more accurate and allow a more consistent dense representation of the scene. However, these techniques have a smaller domain of convergence and rely on the assumption that the camera motion is small. In the first part of the thesis, we propose two formulations to relax this assumption. Firstly, we describe a fast pose estimation strategy to compute a rough estimate of large motions, based on the normal vectors of the scene surfaces and on the geometric properties between the RGB-D images. This rough estimate can be used to initialize direct registration methods for refinement. Secondly, we propose a direct RGB-D camera tracking method that adaptively exploits the photometric and geometric error properties to improve the convergence of the image alignment. In the second part of the thesis, we propose regularization and fusion techniques to create compact and accurate representations of large-scale environments. The regularization is performed from a segmentation of spherical frames into piecewise patches, using the photometric and geometric information simultaneously to improve the accuracy and consistency of the 3D scene reconstruction. This segmentation is also adapted to tackle the non-uniform resolution of panoramic images. Finally, the regularized frames are combined to build a compact keyframe-based map composed of spherical RGB-D panoramas optimally distributed in the environment. These representations are helpful for autonomous navigation and guiding tasks, as they allow access in constant time with limited storage that does not depend on the size of the environment.
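One simple way to combine a photometric and a geometric error term in a joint cost, in the spirit of (but not identical to) the adaptive weighting described above, is to normalize each residual set by its own robust scale so that neither dominates purely because of its physical units. A hedged numpy sketch with illustrative names and data:

```python
import numpy as np

def combined_residuals(photo_res, geom_res):
    """Stack photometric and geometric residuals, each scaled by its
    own median absolute deviation (a robust scale estimate), so that
    neither term dominates the joint cost because of its units."""
    def mad(r):
        return np.median(np.abs(r - np.median(r))) + 1e-9
    return np.concatenate([photo_res / mad(photo_res),
                           geom_res / mad(geom_res)])

# Intensity errors are orders of magnitude larger than depth errors,
# yet after scaling the two halves have comparable magnitudes.
photo = np.array([1.0, -1.0, 2.0, -2.0])     # grey-level residuals
geom = np.array([0.01, -0.01, 0.02, -0.02])  # depth residuals (metres)
res = combined_residuals(photo, geom)
```

A least-squares solver would then minimize the sum of squares of `res`; the thesis's actual scheme additionally adapts the weights to the convergence properties of each term, which this sketch does not attempt.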
18

Localisation et cartographie simultanées par ajustement de faisceaux local : propagation d'erreurs et réduction de la dérive à l'aide d'un odomètre / Simultaneous localization and mapping by local bundle adjustment: error propagation and drift reduction using an odometer

Eudes, Alexandre 14 March 2011 (has links)
The present work concerns the localisation of vehicles using computer vision methods. In this context, the camera trajectory and the 3D structure of the filmed scene are estimated by a monocular visual odometry method based on local bundle adjustment. The contributions of this thesis are several improvements to this method. The uncertainty of the estimated position is not provided by the local bundle adjustment method, yet this uncertainty is essential for using the position optimally, notably in a multi-sensor fusion system. A study of the uncertainty propagation in this visual odometry method was therefore carried out, yielding a real-time uncertainty computation that expresses the error in absolute terms (in the frame of the beginning of the trajectory). Moreover, monocular localisation methods are known to exhibit significant drift on long sequences (several kilometres), mainly due to drift of the (unobservable) scale factor. To reduce this drift and improve the quality of the estimated position, two data fusion methods between an odometer and the visual method were developed. These two improvements make the monocular method usable in an automotive setting over long distances while preserving the real-time constraints required by this type of application. In addition, our approach shows the benefit of having uncertainties available, which makes it possible to exploit the information provided by other sensors.
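The uncertainty propagation referred to above is, at first order, the standard linearized covariance transform Σ' = J Σ Jᵀ. A toy numpy sketch (the 1-D pose composition and all names are illustrative, not the thesis's formulation):

```python
import numpy as np

def propagate_covariance(jacobian, cov):
    """First-order (linearized) propagation of a covariance matrix
    through a function with the given Jacobian: Sigma' = J Sigma J^T."""
    return jacobian @ cov @ jacobian.T

# Pose composition in 1D as a toy stand-in: x' = x + d, so the
# Jacobian w.r.t. (x, d) is [1 1] and independent variances add up.
J = np.array([[1.0, 1.0]])
cov = np.diag([0.04, 0.01])  # variances of the pose x and increment d
print(propagate_covariance(J, cov))  # [[0.05]]
```

In the full 6-DoF case, J would contain the derivatives of the pose composition with respect to both poses, and the growth of the propagated covariance is exactly what a fusion filter uses to weigh the visual estimate against the odometer.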
19

Systém pro autonomní mapování závodní dráhy / System for autonomous racetrack mapping

Soboňa, Tomáš January 2021 (has links)
The focus of this thesis is to theoretically design, describe, implement and verify the functionality of the selected concept for race track mapping. The theoretical part of the thesis describes the ORB-SLAM2 algorithm for vehicle localization. It then further describes the format of the map, an occupancy grid, and the method of its creation. Such a map should be in a suitable format for use by other trajectory planning systems. Several cameras, as well as computer units, are described in this part, and based on parameters and tests, the most suitable ones are selected. The thesis also proposes the architecture of the mapping system; it describes the individual units that make up the system, what is exchanged between the units, and in what format the system output is sent. The individual parts of the system are first tested separately and subsequently the system is tested as a whole. Finally, the achieved results are evaluated, as well as the possibilities for further expansion.
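An occupancy grid of the kind described is commonly maintained in log-odds form so that repeated observations of a cell simply accumulate additively. A minimal sketch under that standard formulation (the class name and parameter values are illustrative, not taken from the thesis):

```python
import numpy as np

class OccupancyGrid:
    """Minimal log-odds occupancy grid: each cell accumulates evidence
    from 'hit' (occupied) and 'miss' (free) observations."""

    def __init__(self, shape, l_hit=0.85, l_miss=-0.4):
        self.logodds = np.zeros(shape)  # log-odds 0 <=> probability 0.5
        self.l_hit, self.l_miss = l_hit, l_miss

    def update(self, cell, hit):
        """Fold one observation of a cell into its log-odds."""
        self.logodds[cell] += self.l_hit if hit else self.l_miss

    def probability(self, cell):
        """Convert log-odds back to an occupancy probability."""
        return 1.0 / (1.0 + np.exp(-self.logodds[cell]))

grid = OccupancyGrid((10, 10))
for _ in range(3):            # three consistent 'occupied' observations
    grid.update((4, 7), hit=True)
print(grid.probability((4, 7)) > 0.9)  # confidence grows toward 1 -> True
```

A trajectory planner can then threshold these probabilities to classify cells as free, occupied, or unknown.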
20

Evaluation of Monocular Visual SLAM Methods on UAV Imagery to Reconstruct 3D Terrain

Johansson, Fredrik, Svensson, Samuel January 2021 (has links)
When reconstructing the Earth in 3D, the imagery can come from various mediums, including satellites, planes, and drones. One significant benefit of utilizing drones in combination with a Visual Simultaneous Localization and Mapping (V-SLAM) system is that specific areas of the world can be accurately mapped in real-time at a low cost. Drones can essentially be equipped with any camera sensor, but most commercially available drones use a monocular rolling shutter camera sensor. Therefore, on behalf of Maxar Technologies, multiple monocular V-SLAM systems were studied during this thesis, and ORB-SLAM3 and LDSO were selected for further evaluation. In order to provide accurate and reproducible results, the methods were benchmarked on the public datasets EuRoC MAV and TUM monoVO, which include drone imagery and outdoor sequences, respectively. A third dataset was collected with a DJI Mavic 2 Enterprise Dual drone to evaluate how the methods would perform with a consumer-friendly drone. The datasets were used to evaluate the two V-SLAM systems with regard to the generated 3D map (point cloud) and the estimated camera trajectory. The results showed that ORB-SLAM3 is less impacted by the artifacts caused by a rolling shutter camera sensor than LDSO. However, ORB-SLAM3 generates a sparse point cloud in which depth perception can be challenging, since it abstracts the images using feature descriptors. In comparison, LDSO produces a semi-dense 3D map where each point includes the pixel intensity, which improves depth perception. Furthermore, LDSO is more suitable for dark environments and low-texture surfaces. Depending on the use case, either method can be used as long as the required prerequisites are provided. In conclusion, monocular V-SLAM systems are highly dependent on the type of sensor being used.
The differences in the accuracy and robustness of the systems using a global shutter and a rolling shutter are significant, as the geometric artifacts caused by a rolling shutter are devastating for a pure visual pipeline. (The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.)
