  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
401

Faktorgraph-basierte Sensordatenfusion zur Anwendung auf einem Quadrocopter / Factor-graph-based sensor data fusion applied to a quadrocopter

Lange, Sven 12 December 2013 (has links)
Sensor data fusion is a ubiquitous task in mobile robotics and beyond. This thesis questions the approach typically used for sensor data fusion in robotics, solves the problem with novel algorithms based on a factor graph, and compares them against a corresponding Extended Kalman Filter implementation. The focus is the technical and algorithmic sensor concept for the indoor navigation of a flying robot. Extensive experiments show the quality improvement obtained with the new sensor fusion variant, but also its limitations, as well as cases in which both variants yield nearly identical results. In addition to experiments based on a hardware-oriented simulation, the approach is also evaluated on real hardware data.
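The factor-graph formulation contrasted with filtering in the abstract can be illustrated with a toy example: unlike a filter, all measurements stay in the estimation problem and the whole trajectory is re-optimized at once. Below is a minimal 1D sketch with made-up numbers, not the thesis's implementation:

```python
import numpy as np

# Hypothetical 1D example: fuse noisy odometry increments (binary factors)
# with occasional absolute position fixes (unary factors) by solving the
# whole factor graph as one linear least-squares problem, instead of
# filtering the states step by step.
odom = [1.0, 1.1, 0.9, 1.0]          # measured increments x_{k+1} - x_k
fixes = {0: 0.0, 4: 4.2}             # absolute measurements at poses 0 and 4
sigma_odom, sigma_fix = 0.1, 0.5     # assumed noise levels

n = len(odom) + 1
rows, rhs = [], []
for k, d in enumerate(odom):         # one factor per odometry increment
    r = np.zeros(n)
    r[k], r[k + 1] = -1.0, 1.0
    rows.append(r / sigma_odom)
    rhs.append(d / sigma_odom)
for k, z in fixes.items():           # one unary factor per absolute fix
    r = np.zeros(n)
    r[k] = 1.0
    rows.append(r / sigma_fix)
    rhs.append(z / sigma_fix)

x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(np.round(x, 3))                # smoothed trajectory over all poses
```

Because the whole graph is solved jointly, the 0.2 m disagreement between the summed odometry and the final fix is spread over all factors according to their noise weights, rather than being absorbed at the last step as a filter would.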
402

Visual Place Recognition in Changing Environments using Additional Data-Inherent Knowledge

Schubert, Stefan 15 November 2023 (has links)
Visual place recognition is the task of finding the same places in a set of database images for a given set of query images. This becomes particularly challenging for long-term applications when the environmental condition changes between or within the database and query set, e.g., from day to night. Visual place recognition in changing environments can be used if global position data like GPS is not available or very inaccurate, or for redundancy. It is required for tasks like loop closure detection in SLAM, candidate selection for global localization, or multi-robot/multi-session mapping and map merging. In contrast to pure image retrieval, visual place recognition can often build upon additional information and data for improvements in performance, runtime, or memory usage. This includes data-inherent knowledge: information that is contained in the image sets themselves because of the way they were recorded. Using data-inherent knowledge avoids the dependency on other sensors, which increases the generality of methods for an integration into many existing place recognition pipelines. This thesis focuses on the usage of additional data-inherent knowledge. After a discussion of the basics of visual place recognition, the thesis gives a systematic overview of existing data-inherent knowledge and corresponding methods. Subsequently, the thesis concentrates on a deeper consideration and exploitation of four different types of additional data-inherent knowledge. These include 1) sequences, i.e., the database and query set are recorded as spatio-temporal sequences so that consecutive images are also adjacent in the world, 2) knowledge of whether the environmental conditions within the database and query set are constant or continuously changing, 3) intra-database similarities between the database images, and 4) intra-query similarities between the query images. Except for sequences, all of these types have received little attention in the literature so far.
For the exploitation of knowledge about constant conditions within the database and query set (e.g., database: summer, query: winter), the thesis evaluates different descriptor standardization techniques. For the alternative scenario of continuous condition changes (e.g., database: sunny to rainy, query: sunny to cloudy), the thesis first investigates the qualitative and quantitative impact on the performance of image descriptors. It then proposes and evaluates four unsupervised learning methods, including our novel clustering-based descriptor standardization method K-STD and three PCA-based methods from the literature. To address the high computational effort of descriptor comparisons during place recognition, our novel method EPR for efficient place recognition is proposed. Given a query descriptor, EPR uses sequence information and intra-database similarities to identify nearly all matching descriptors in the database. For a structured combination of several sources of additional knowledge in a single graph, the thesis presents our novel graphical framework for place recognition. After the minimization of the graph's error with our proposed ICM-based optimization, the place recognition performance can be significantly improved. For an extensive experimental evaluation of all methods in this thesis and beyond, a benchmark for visual place recognition in changing environments is presented, which is composed of six datasets with thirty sequence combinations.
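The descriptor standardization step evaluated in the thesis can be sketched generically. The snippet below uses plain zero-mean/unit-variance standardization with made-up random data, not the proposed clustering-based K-STD method:

```python
import numpy as np

# Minimal sketch of descriptor standardization for place recognition:
# statistics estimated from each image set are used to normalize its own
# descriptors, reducing condition-induced offsets before the sets are
# compared with cosine similarity. Data and dimensions are illustrative.
rng = np.random.default_rng(0)
db = rng.normal(2.0, 1.5, (100, 64))   # hypothetical database descriptors
q = rng.normal(-1.0, 0.5, (20, 64))    # query set under another condition

def standardize(X):
    # zero mean, unit variance per descriptor dimension
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)

def cosine_sim(A, B):
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

S = cosine_sim(standardize(q), standardize(db))  # (20, 100) similarity matrix
matches = S.argmax(axis=1)                       # best database image per query
print(S.shape, matches.shape)
```

The full pairwise similarity matrix S computed here is exactly the kind of exhaustive descriptor comparison whose cost motivates efficiency methods such as the EPR approach described above.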
403

Asynchronous Event-Feature Detection and Tracking for SLAM Initialization

Ta, Tai January 2024 (has links)
Traditional cameras are most commonly used in visual SLAM to provide visual information about the scene and positional information about the camera motion. However, in the presence of varying illumination and rapid camera movement, the visual quality captured by traditional cameras diminishes. This limits the applicability of visual SLAM in challenging environments such as search and rescue situations. The emerging event camera has been shown to overcome these limitations thanks to its superior temporal resolution and wider dynamic range, opening up new areas of application and research for event-based SLAM. In this thesis, several asynchronous feature detectors and trackers are used to initialize SLAM from event camera data. To compare the pose estimation accuracy of the different feature detectors and trackers, initialization performance was evaluated on datasets captured in various environments. Furthermore, two different methods for aligning corner events were evaluated on the datasets. Results show that, apart from slight variation in the number of accepted initializations, the alignment methods show no overall difference in any metric. Among the event-based trackers, HASTE achieves the highest overall initialization performance, with mostly high pose accuracy and a high number of accepted initializations; its performance degrades, however, in featureless scenes. CET, on the other hand, mostly shows lower performance than HASTE.
404

Hybrid marker-less camera pose tracking with integrated sensor fusion

Moemeni, Armaghan January 2014 (has links)
This thesis presents a framework for a hybrid model-free marker-less inertial-visual camera pose tracking with an integrated sensor fusion mechanism. The proposed solution addresses the fundamental problem of pose recovery in computer vision and robotics and provides an improved solution for wide-area pose tracking that can be used on mobile platforms and in real-time applications. In order to arrive at a suitable pose tracking algorithm, an in-depth investigation was conducted into current methods and sensors used for pose tracking. Preliminary experiments were then carried out on hybrid GPS-Visual as well as wireless micro-location tracking in order to evaluate their suitability for camera tracking in wide-area or GPS-denied environments. As a result of this investigation a combination of an inertial measurement unit and a camera was chosen as the primary sensory inputs for a hybrid camera tracking system. After following a thorough modelling and mathematical formulation process, a novel and improved hybrid tracking framework was designed, developed and evaluated. The resulting system incorporates an inertial system, a vision-based system and a recursive particle filtering-based stochastic data fusion and state estimation algorithm. The core of the algorithm is a state-space model for motion kinematics which, combined with the principles of multi-view camera geometry and the properties of optical flow and focus of expansion, form the main components of the proposed framework. The proposed solution incorporates a monitoring system, which decides on the best method of tracking at any given time based on the reliability of the fresh vision data provided by the vision-based system, and automatically switches between visual and inertial tracking as and when necessary. The system also includes a novel and effective self-adjusting mechanism, which detects when the newly captured sensory data can be reliably used to correct the past pose estimates. 
The corrected state is then propagated through to the current time in order to prevent sudden pose estimation errors manifesting as a permanent drift in the tracking output. Following the design stage, the complete system was fully developed and then evaluated using both synthetic and real data. The outcome shows improved performance compared to existing techniques such as PTAM and SLAM. The low computational cost of the algorithm enables its application on mobile devices, while the integrated self-monitoring, self-adjusting mechanisms allow for its potential use in wide-area tracking applications.
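The recursive particle-filter fusion described above can be sketched in a 1D toy model. This is illustrative only: the thesis fuses full camera poses with multi-view geometry, not scalar positions, and all numbers below are made up:

```python
import numpy as np

# Hedged sketch of the inertial-visual particle-filter idea: inertial data
# drives the prediction step, a visual position measurement reweights the
# particles, and resampling avoids weight degeneracy (1D toy model).
rng = np.random.default_rng(1)
N = 1000
particles = rng.normal(0.0, 1.0, N)        # initial pose hypotheses
weights = np.full(N, 1.0 / N)

def predict(particles, imu_velocity, dt=0.1, noise=0.05):
    # propagate each hypothesis with the inertial motion model
    return particles + imu_velocity * dt + rng.normal(0, noise, len(particles))

def update(particles, weights, visual_z, sigma=0.2):
    # reweight hypotheses by the Gaussian likelihood of the visual fix
    w = weights * np.exp(-0.5 * ((particles - visual_z) / sigma) ** 2)
    return w / w.sum()

true_pos = 0.0
for _ in range(20):                        # constant velocity of 1 m/s
    true_pos += 0.1
    particles = predict(particles, imu_velocity=1.0)
    weights = update(particles, weights, visual_z=true_pos)
    idx = rng.choice(N, N, p=weights)      # resample to avoid degeneracy
    particles, weights = particles[idx], np.full(N, 1.0 / N)

print(round(float(np.average(particles)), 2))  # estimate near true_pos
```

The monitoring idea from the abstract would correspond to skipping the `update` step whenever the visual measurement is judged unreliable, leaving the inertial prediction to carry the state forward.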
405

L'ajustement de faisceaux contraint comme cadre d'unification des méthodes de localisation : application à la réalité augmentée sur des objets 3D / Constrained bundle adjustment as a unifying framework for localization methods: application to augmented reality on 3D objects

Tamaazousti, Mohamed 13 March 2013 (has links) (PDF)
The work carried out in this thesis addresses the problem of real-time camera localization from monocular vision. The methods in the literature fall into three categories. The first considers a camera moving in a completely unknown environment (SLAM): it reconstructs online the primitives observed in the images of a video sequence and uses this reconstruction to localize the camera. The other two localize the camera relative to a 3D object in the scene, relying on a priori knowledge of a model of that object (model-based tracking). One uses only the 3D model of the object to localize the camera; the other can be seen as an intermediate between SLAM and model-based tracking. This last method localizes the camera relative to an object by using both the object's model and an online reconstruction of the object's primitives; this reconstruction can be viewed as an update of the initial model (model-based tracking with model update). Each of these methods has advantages and drawbacks. In this thesis, we propose a solution that unifies all of these localization methods in a single framework, referred to as constrained SLAM, taking advantage of their strengths while limiting their respective weaknesses. In particular, we consider a camera moving in a partially known environment, i.e., one for which a 3D (geometric or photometric) model of a static object in the scene is available. The goal is then to accurately estimate the pose of the camera relative to this 3D object.
The absolute information provided by the 3D model of the object of interest is used to improve SLAM-type localization by including this additional information directly in the bundle adjustment process. To handle a wide range of 3D objects and scenes, several types of constraints are proposed in this thesis. They are grouped into two approaches. The first unifies SLAM and model-based tracking by constraining the camera motion through the projection into the images of existing primitives extracted from the 3D model. The second unifies SLAM and model-based tracking with model update by constraining the primitives reconstructed by SLAM to lie on the surface of the model (unification of SLAM and model update). The benefits of these constrained bundle adjustments, in terms of accuracy, registration stability, and robustness to occlusions, are demonstrated on a large number of synthetic and real datasets. Real-time augmented reality applications are also presented on different types of 3D objects. This work has led to 4 international publications, 2 national publications, and a patent application.
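How a model constraint can enter a bundle-adjustment-style cost may be sketched as follows. This is a deliberately simplified penalty formulation with a planar stand-in for the "model surface" and made-up numbers, not the thesis's constrained SLAM:

```python
import numpy as np

# Illustrative sketch: the cost adds a penalty pulling a reconstructed 3D
# point onto a known model surface (here the plane z = 0) on top of a data
# term anchoring it near its triangulated position. Real constrained bundle
# adjustment would jointly optimize poses and many points; this is one point.
observed = np.array([1.0, 2.0, 0.3])   # hypothetical triangulated point
lam = 10.0                             # weight of the surface constraint

def cost_grad(p):
    g_data = 2.0 * (p - observed)                    # d/dp ||p - observed||^2
    g_surf = np.array([0.0, 0.0, 2.0 * lam * p[2]])  # d/dp lam * z^2
    return g_data + g_surf

p = observed.copy()
for _ in range(200):                   # plain gradient descent on the cost
    p -= 0.04 * cost_grad(p)

print(np.round(p, 3))                  # z is pulled toward the plane z = 0
```

Raising `lam` moves the compromise toward the hard constraint "the primitive lies on the model surface", which is the effect the second unification approach above exploits.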
406

Českoslovenští a čeští tenisté a tenistky na Grandslamových turnajích / Czechoslovak and Czech tennis players at the Grand Slam tournaments

Mládek, Ladislav January 2015 (has links)
Title: Czechoslovak and Czech tennis players at the Grand Slam tournaments Objectives: The diploma thesis is divided into two chapters. The aim of the first chapter is to provide a comprehensive overview of the historical development of the greatest tennis tournaments, collectively called the Grand Slam, from their very beginning until the present. The second part of the thesis focuses on the Czechoslovak and Czech men's and women's tennis players who achieved the greatest success in these tournaments. Methods: I used the historiographical method in my diploma thesis; the other methods used are based on it. In the first chapter, which focuses on the Grand Slam tournaments, I applied mostly the progressive method, which traces events from the past to the present. In the second chapter, which is aimed at Czechoslovak and Czech players, I used biographical and chronological methods. Results: Czechoslovak and Czech tennis players have been most successful at Wimbledon, with fourteen titles won there, followed by the French Open with ten victories. Our players have achieved the fewest victories at the Australian Open and the US Open. Among men, Ivan Lendl was the most successful in men's singles, with eight victories at Grand Slam tournaments. Among women, Martina Navrátilová dominated...
407

[en] PROBABILISTIC SIMULTANEOUS LOCALIZATION AND MAPPING OF MOBILE ROBOTS IN INDOOR ENVIRONMENTS WITH A LASER RANGE FINDER / [pt] LOCALIZAÇÃO E MAPEAMENTO PROBABILÍSTICO SIMULTÂNEOS DE ROBÔS MÓVEIS EM AMBIENTES INTERNOS COM UM SENSOR DE VARREDURA A LASER

Arauco Canchumuni, Smith Washington 19 August 2014 (has links)
[pt] Os robôs móveis são cada vez mais inteligentes; para que tenham a capacidade de se mover livremente no interior de um ambiente, evitando obstáculos e sem assistência de um ser humano, precisam possuir um conhecimento prévio do ambiente e de sua localização. Nessa situação, o robô precisa construir um mapa local de seu ambiente durante a execução de sua missão e, simultaneamente, determinar sua localização. Este problema é conhecido como Mapeamento e Localização Simultâneos (SLAM). As soluções típicas para o problema de SLAM utilizam principalmente dois tipos de sensores: (i) odômetros, que fornecem informações de movimento do robô móvel e (ii) sensores de distância, que proporcionam informação da percepção do ambiente. Neste trabalho, apresenta-se uma solução probabilística para o problema SLAM usando o algoritmo DP-SLAM puramente baseado em medidas de um LRF (Laser Range Finder), com foco em ambientes internos estruturados. Considera-se que o robô móvel está equipado com um único sensor 2D LRF, sem nenhuma informação de odometria, a qual é substituída pela informação obtida da máxima sobreposição de duas leituras consecutivas do sensor LRF, mediante algoritmos de Correspondência de Varreduras (Scan Matching). O algoritmo de Correspondência de Varreduras usado realiza uma Transformada de Distribuições Normais (NDT) para aproximar uma função de sobreposição. Para melhorar o desempenho deste algoritmo e lidar com o LRF de baixo custo, uma reamostragem dos pontos das leituras fornecidas pelo LRF é utilizada, a qual preserva uma maior densidade de pontos da varredura nos locais onde haja características importantes do ambiente. A sobreposição entre duas leituras é otimizada fazendo o uso do algoritmo de Evolução Diferencial (ED). Durante o desenvolvimento deste trabalho, o robô móvel iRobot Create, equipado com o sensor LRF Hokuyo URG-04LX, foi utilizado para coletar dados reais de ambientes internos, e diversos mapas 2D gerados são apresentados como resultados.
/ [en] For the robot to have the ability to move within an environment without the assistance of a human being, it is required to have knowledge of the environment and of its own location within it. In many robotic applications, it is not possible to have an a priori map of the environment. In that situation, the robot needs to build a local map of its environment while executing its mission and, simultaneously, determine its location. A typical solution for the Simultaneous Localization and Mapping (SLAM) problem primarily uses two types of sensors: i) an odometer that provides information about the robot's movement and ii) a range sensor that provides perception of the environment. In this work, a solution for the SLAM problem is presented using a DP-SLAM algorithm purely based on laser readings, focused on structured indoor environments. It considers that the mobile robot uses only a single 2D Laser Range Finder (LRF), with the odometry sensor replaced by the information obtained from the overlapping of two consecutive laser scans. The Normal Distributions Transform (NDT) scan-matching algorithm is used to approximate the map-overlap function. To improve the performance of this algorithm and deal with low-quality range data from a compact LRF, scan point resampling is used to preserve a higher point density at high-information features of the scan. A differential evolution algorithm is used to optimize the overlapping of two scans. During the development of this work, the mobile robot iRobot Create, equipped with a Hokuyo URG-04LX LRF, is used to collect real data in several indoor environments, generating 2D maps presented as results.
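The NDT representation at the heart of the scan matching described above can be sketched roughly as follows. Cell size, data, and function names are illustrative; the thesis additionally resamples scan points and optimizes the overlap with differential evolution:

```python
import numpy as np

# Rough sketch of the Normal Distributions Transform (NDT) idea: scan
# points are binned into grid cells, each cell is summarized by a mean and
# covariance, and a candidate point is scored against the Gaussian of the
# cell it falls into. A scan-matching loop would maximize the summed score
# over candidate transforms of the second scan.
def build_ndt(points, cell=1.0):
    cells = {}
    for p in points:
        cells.setdefault((int(p[0] // cell), int(p[1] // cell)), []).append(p)
    ndt = {}
    for key, pts in cells.items():
        pts = np.array(pts)
        if len(pts) >= 3:                      # need a few points for a covariance
            cov = np.cov(pts.T) + 1e-3 * np.eye(2)   # regularize thin cells
            ndt[key] = (pts.mean(axis=0), np.linalg.inv(cov))
    return ndt

def score(ndt, p, cell=1.0):
    g = ndt.get((int(p[0] // cell), int(p[1] // cell)))
    if g is None:
        return 0.0
    mu, icov = g
    d = p - mu
    return float(np.exp(-0.5 * d @ icov @ d))

rng = np.random.default_rng(2)
scan = rng.normal([5.5, 5.5], 0.1, (50, 2))    # points clustered in one cell
ndt = build_ndt(scan)
print(score(ndt, np.array([5.5, 5.5])), score(ndt, np.array([5.9, 5.9])))
```

Because each cell collapses to one Gaussian, the millions of raw points become a compact, differentiable map representation, which is what makes the subsequent optimization tractable.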
408

Development of algorithms and architectures for driving assistance in adverse weather conditions using FPGAs / Développement d'algorithmes et d'architectures pour l'aide à la conduite dans des conditions météorologiques défavorables en utilisant les FPGA

Botero Galeano, Diego Andres 05 December 2012 (has links)
En raison de l'augmentation du volume et de la complexité des systèmes de transport, de nouveaux systèmes avancés d'assistance à la conduite (ADAS) sont étudiés dans de nombreuses entreprises, laboratoires et universités. Ces systèmes comprennent des algorithmes avec des techniques qui ont été étudiés au cours des dernières décennies, comme la localisation et cartographie simultanées (SLAM), détection d'obstacles, la vision stéréoscopique, etc. Grâce aux progrès de l'électronique, de la robotique et de plusieurs autres domaines, de nouveaux systèmes embarqués sont développés pour garantir la sécurité des utilisateurs de ces systèmes critiques. Pour la plupart de ces systèmes, une faible consommation d'énergie ainsi qu'une taille réduite sont nécessaires. Cela crée la contrainte d'exécuter les algorithmes sur les systèmes embarqués avec des ressources limitées. Dans la plupart des algorithmes, en particulier pour la vision par ordinateur, une grande quantité de données doivent être traitées à des fréquences élevées, ce qui exige des ressources informatiques importantes. Un FPGA satisfait cette exigence, son architecture parallèle combinée à sa faible consommation d'énergie et la souplesse pour les programmer permet de développer et d'exécuter des algorithmes plus efficacement que sur d'autres plateformes de traitement. Les composants virtuels développés dans cette thèse ont été utilisés dans trois différents projets: PICASSO (vision stéréoscopique), COMMROB (détection d'obstacles à partir d'un système multi-caméras) et SART (Système d'Aide au Roulage tous Temps). / Due to the increase of traffic volume and the complexity of new transport systems, new Advanced Driver Assistance Systems (ADAS) are a subject of research at many companies, laboratories and universities. These systems include algorithms and techniques that have been studied during the last decades, like Simultaneous Localization and Mapping (SLAM), obstacle detection, stereo vision, etc.
Thanks to the advances in electronics, robotics and other domains, new embedded systems are being developed to guarantee the safety of the users of these critical systems. For most of these systems, low power consumption as well as reduced size is required. This creates the constraint of executing the algorithms on embedded devices with limited resources. In most algorithms, especially computer vision ones, a large amount of data must be processed at high frequencies, which demands substantial computing resources. FPGAs satisfy this requirement: their parallel architecture, combined with their low power consumption and flexibility, allows developing and executing some algorithms more efficiently than on other processing platforms. In this thesis, different embedded computer vision architectures intended to be used in ADAS using FPGAs are presented. We present the implementation of a distortion correction architecture operating at 100 Hz on two cameras simultaneously; the correction module also allows rectifying two images for stereo vision. Obstacle detection algorithms based on Inverse Perspective Mapping (IPM) and classification based on color/texture attributes are presented. The IPM transform is based on the perspective effect of a scene perceived from two different points of view. Moreover, results of the detection algorithms from color/texture attributes, applied on a multi-camera system, are fused in an occupancy grid. An accelerator for applying homographies to images is presented; it can be used for different applications, such as generating bird's-eye or side views. Multispectral vision is studied using both infrared and color images. Synthetic images are generated from information acquired from visible and infrared sources to provide a visual aid to the driver.
Image enhancement specific to infrared images is also implemented and evaluated, based on Contrast Limited Adaptive Histogram Equalization (CLAHE). An embedded SLAM algorithm is presented with different hardware accelerators (point detection, landmark tracking, active search, correlation, matrix operations). All the algorithms were simulated, implemented and verified targeting FPGAs; validation was done using development kits. A custom board integrating all the presented algorithms is presented. Virtual components developed in this thesis were used in three different projects: PICASSO (stereo vision), COMMROB (obstacle detection from a multi-camera system) and SART (multispectral vision).
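The homography accelerator mentioned in the abstract implements in hardware the projective warp that can be sketched in a few lines of software. The matrix H below is a made-up example, not one from the thesis:

```python
import numpy as np

# Small sketch of the math behind a bird's-eye-view homography: a 3x3
# matrix H maps image pixels (in homogeneous coordinates) to ground-plane
# coordinates, followed by a perspective divide. H here is illustrative.
H = np.array([[1.0, 0.2, -30.0],
              [0.0, 1.5, -40.0],
              [0.0, 0.002, 1.0]])

def warp_points(H, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    out = (H @ pts_h.T).T
    return out[:, :2] / out[:, 2:3]                   # perspective divide

pixels = np.array([[100.0, 200.0], [320.0, 240.0]])
print(np.round(warp_points(H, pixels), 1))
```

An FPGA version applies this same per-pixel computation to a full image stream, which is why the operation parallelizes so well in hardware.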
409

Reconstruction 3D de l'environnement dynamique d'un véhicule à l'aide d'un système multi-caméras hétérogène en stéréo wide-baseline / 3D reconstruction of the dynamic environment surrounding a vehicle using a heterogeneous multi-camera system in wide-baseline stereo

Mennillo, Laurent 05 June 2019 (has links)
Cette thèse a été réalisée dans le secteur de l'industrie automobile, en collaboration avec le Groupe Renault et concerne en particulier le développement de systèmes d'aide à la conduite avancés et de véhicules autonomes. Les progrès réalisés par la communauté scientifique durant les dernières décennies, dans les domaines de l'informatique et de la robotique notamment, ont été si importants qu'ils permettent aujourd'hui la mise en application de systèmes complexes au sein des véhicules. Ces systèmes visent dans un premier temps à réduire les risques inhérents à la conduite en assistant les conducteurs, puis dans un second temps à offrir des moyens de transport entièrement autonomes. Les méthodes de SLAM multi-objets actuellement intégrées au sein de ces véhicules reposent pour majeure partie sur l'utilisation de capteurs embarqués très performants tels que des télémètres laser, au coût relativement élevé. Les caméras numériques en revanche, de par leur coût largement inférieur, commencent à se démocratiser sur certains véhicules de grande série et assurent généralement des fonctions d'assistance à la conduite, pour l'aide au parking ou le freinage d'urgence, par exemple. En outre, cette implantation plus courante permet également d'envisager leur utilisation afin de reconstruire l'environnement dynamique proche des véhicules en trois dimensions. D'un point de vue scientifique, les techniques de SLAM visuel multi-objets existantes peuvent être regroupées en deux catégories de méthodes. La première catégorie et plus ancienne historiquement concerne les méthodes stéréo, faisant usage de plusieurs caméras à champs recouvrants afin de reconstruire la scène dynamique observée. 
La plupart reposent en général sur l'utilisation de paires stéréo identiques et placées à faible distance l'une de l'autre, ce qui permet un appariement dense des points d'intérêt dans les images et l'estimation de cartes de disparités utilisées lors de la segmentation du mouvement des points reconstruits. L'autre catégorie de méthodes, dites monoculaires, ne font usage que d'une unique caméra lors du processus de reconstruction. Cela implique la compensation du mouvement propre du système d'acquisition lors de l'estimation du mouvement des autres objets mobiles de la scène de manière indépendante. Plus difficiles, ces méthodes posent plusieurs problèmes, notamment le partitionnement de l'espace de départ en plusieurs sous-espaces représentant les mouvements individuels de chaque objet mobile, mais aussi le problème d'estimation de l'échelle relative de reconstruction de ces objets lors de leur agrégation au sein de la scène statique. La problématique industrielle de cette thèse, consistant en la réutilisation des systèmes multi-caméras déjà implantés au sein des véhicules, majoritairement composés d'un caméra frontale et de caméras surround équipées d'objectifs très grand angle, a donné lieu au développement d'une méthode de reconstruction multi-objets adaptée aux systèmes multi-caméras hétérogènes en stéréo wide-baseline. Cette méthode est incrémentale et permet la reconstruction de points mobiles éparses, grâce notamment à plusieurs contraintes géométriques de segmentation des points reconstruits ainsi que de leur trajectoire. Enfin, une évaluation quantitative et qualitative des performances de la méthode a été menée sur deux jeux de données distincts, dont un a été développé durant ces travaux afin de présenter des caractéristiques similaires aux systèmes hétérogènes existants. / This Ph.D. 
thesis, which has been carried out in the automotive industry in association with Renault Group, mainly focuses on the development of advanced driver-assistance systems and autonomous vehicles. The progress made by the scientific community during the last decades in the fields of computer science and robotics has been so important that it now enables the implementation of complex embedded systems in vehicles. These systems, primarily designed to provide assistance in simple driving scenarios and emergencies, now aim to offer fully autonomous transport. Multibody SLAM methods currently used in autonomous vehicles often rely on high-performance and expensive onboard sensors such as LIDAR systems. On the other hand, digital video cameras are much cheaper, which has led to their increased use in newer vehicles to provide driving assistance functions, such as parking assistance or emergency braking. Furthermore, this relatively common implementation now allows to consider their use in order to reconstruct the dynamic environment surrounding a vehicle in three dimensions. From a scientific point of view, existing multibody visual SLAM techniques can be divided into two categories of methods. The first and oldest category concerns stereo methods, which use several cameras with overlapping fields of view in order to reconstruct the observed dynamic scene. Most of these methods use identical stereo pairs in short baseline, which allows for the dense matching of feature points to estimate disparity maps that are then used to compute the motions of the scene. The other category concerns monocular methods, which only use one camera during the reconstruction process, meaning that they have to compensate for the ego-motion of the acquisition system in order to estimate the motion of other objects. 
These methods are more difficult in that they must address several additional problems, such as motion segmentation, which consists in clustering the initial data into separate subspaces representing the individual movement of each object, but also the estimation of the relative scale of these objects before their aggregation within the static scene. The industrial motive for this work lies in the reuse of multi-camera systems already present in actual vehicles to perform dynamic scene reconstruction. These systems, mostly composed of a front camera accompanied by several surround fisheye cameras in wide-baseline stereo, have led to the development of a multibody reconstruction method dedicated to such heterogeneous systems. The proposed method is incremental and allows for the reconstruction of sparse mobile points as well as their trajectories using several geometric constraints. Finally, a quantitative and qualitative evaluation conducted on two separate datasets, one of which was developed during this thesis in order to present characteristics similar to existing multi-camera systems, is provided.
410

Handheld LiDAR Odometry Estimation and Mapping System

Holmqvist, Niclas January 2018 (has links)
Ego-motion sensors are commonly used for pose estimation in Simultaneous Localization And Mapping (SLAM) algorithms. Inertial Measurement Units (IMUs) are popular sensors but suffer from integration drift over longer time scales. To remedy the drift, they are often used in combination with additional sensors, such as a LiDAR. Pose estimation is used when scans produced by these additional sensors are being matched. The matching of scans can be computationally heavy, as one scan can contain millions of data points. Methods exist to simplify the problem of finding the relative pose between sensor data, such as the Normal Distributions Transform (NDT) SLAM algorithm. The algorithm separates the point cloud data into a voxel grid and represents each voxel as a normal distribution, effectively decreasing the amount of data points. Registration is based on minimizing a score function; sub-optimal conditions can cause it to converge to a local minimum. To remedy this problem, this thesis explores the benefits of using IMU sensor data to estimate the initial pose for the NDT SLAM algorithm.
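The integration drift that motivates pairing the IMU with a LiDAR can be illustrated with a toy dead-reckoning computation. The bias value and sample rate below are illustrative, not taken from the thesis:

```python
# Toy illustration of IMU integration drift: a constant accelerometer
# bias, integrated twice, produces a position error that grows roughly
# quadratically with time. This is why IMU-only pose estimates degrade
# and are combined with scan matching for longer trajectories.
dt, bias = 0.01, 0.05              # 100 Hz IMU with a 0.05 m/s^2 bias
v, p = 0.0, 0.0
errors = []
for _ in range(1000):              # 10 seconds of dead reckoning
    v += bias * dt                 # true acceleration is zero; only bias remains
    p += v * dt
    errors.append(p)

print(round(errors[99], 5), round(errors[999], 4))  # drift at 1 s and 10 s
```

Over ten times the duration the error grows by roughly a factor of one hundred, so even a small bias makes an absolute correction source necessary.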
