1

Técnicas visuais de localização e mapeamento simultâneos sem extração de primitivas geométricas da imagem

Araújo, Vitor Menegheti Ugulino de 29 July 2011 (has links)
Made available in DSpace on 2014-12-17 (GMT). Previous issue date: 2011-07-29 / In Simultaneous Localization and Mapping (SLAM), a robot placed at an unknown location in an arbitrary environment must be able to build a representation of that environment (a map) and localize itself within it simultaneously, using only information captured by the robot's sensors and known control signals. Recently, driven by advances in computing power, work in this area has proposed using a video camera as the sensor, giving rise to Visual SLAM. There are several approaches to it, and the vast majority work by extracting features from the environment, computing the necessary correspondences, and estimating the required parameters from these, with the help of statistical filters. This work presents a monocular Visual SLAM system that uses direct image registration to compute the reprojection error between images, together with optimization methods that minimize this error, thereby obtaining the parameters of the robot pose and the map of the environment directly from the pixels of the images. The feature extraction and matching steps are therefore not needed, enabling the system to work well in environments where traditional approaches have difficulty. Moreover, by addressing the SLAM problem in the way proposed here, a problem common to traditional approaches, known as error accumulation, is avoided. To address the high computational cost of this approach, several types of optimization methods were tested in order to find a good balance between estimate quality and processing time. The results presented in this work demonstrate the success of the system in different environments.
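The core idea of this abstract — recovering motion directly from pixel intensities, with no feature extraction or matching — can be sketched in a toy form. The snippet below is a hypothetical, heavily simplified illustration (integer translations only, brute-force search, wrap-around shifts), not the author's implementation, which optimizes a full pose over the reprojection error:

```python
import numpy as np

def photometric_error(ref, cur, dx, dy):
    # Sum of squared intensity differences after undoing a shift of
    # (dx, dy) pixels; np.roll wraps around, which keeps the sketch short.
    return float(np.mean((np.roll(cur, (-dy, -dx), axis=(0, 1)) - ref) ** 2))

def register_direct(ref, cur, search=5):
    # Exhaustive direct registration over integer translations: no
    # feature extraction or matching, only raw pixel intensities.
    best, best_err = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            e = photometric_error(ref, cur, dx, dy)
            if e < best_err:
                best_err, best = e, (dx, dy)
    return best
```

A real system would replace the brute-force loop with the gradient-based optimization methods the abstract mentions, typically over an image pyramid.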
2

Motion Conflict Detection and Resolution in Visual-Inertial Localization Algorithm

Wisely Babu, Benzun 30 July 2018 (has links)
In this dissertation, we have focused on conflicts that occur due to disagreeing motions in multi-modal localization algorithms. In spite of the recent achievements in robust localization by means of multi-sensor fusion, these algorithms are not applicable to all environments. This is primarily attributed to the following fundamental assumptions: (i) the environment is predominantly stationary, (ii) only ego-motion of the sensor platform exists, and (iii) multiple sensors are always in agreement with each other regarding the observed motion. Recently, studies have shown how to relax the static environment assumption using outlier rejection techniques and dynamic object segmentation. Additionally, to handle non-ego-motion, approaches that extend the localization algorithm to multi-body tracking have been studied. However, no attention has been given to the conditions where multiple sensors contradict each other with regard to the motions observed. Vision-based localization has become an attractive approach for both indoor and outdoor applications due to the large information bandwidth provided by images and the reduced cost of the cameras used. In order to improve robustness and overcome the limitations of vision, an Inertial Measurement Unit (IMU) may be used. Even though visual-inertial localization has better accuracy and improved robustness due to the complementary nature of the camera and IMU sensors, it is affected by disagreements in motion observations. We term such dynamic situations environments with motion conflict, because they arise when multiple different but self-consistent motions are observed by different sensors. Tightly coupled visual-inertial fusion approaches that disregard such challenging situations exhibit drift that can lead to catastrophic errors. We have provided a probabilistic model for motion conflict. Additionally, a novel algorithm to detect and resolve motion conflicts is also presented.
Our method to detect motion conflicts is based on per-frame positional estimate discrepancy and per-landmark reprojection errors. Motion conflicts were resolved by eliminating inconsistent IMU and landmark measurements. Finally, a Motion Conflict aware Visual Inertial Odometry (MC-VIO) algorithm that combined both detection and resolution of motion conflict was implemented. Both quantitative and qualitative evaluations of MC-VIO on visually and inertially challenging datasets were performed. Experimental results indicated that the MC-VIO algorithm reduced the absolute trajectory error by 70% and the relative pose error by 34% in scenes with motion conflict, in comparison to the reference VIO algorithm. Motion conflict detection and resolution enables the application of visual-inertial localization algorithms to real dynamic environments. This paves the way for articulated object tracking in robotics. It may also find numerous applications in active long-term augmented reality.
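The two detection cues named in the abstract — per-frame positional discrepancy and per-landmark reprojection error — can be sketched as a simple gating check. This is a hypothetical illustration with made-up fixed thresholds; the dissertation uses a probabilistic model rather than fixed cutoffs:

```python
import numpy as np

# Illustrative thresholds only (not from the dissertation).
POS_TOL = 0.5      # metres of disagreement tolerated between estimators
REPROJ_TOL = 2.0   # pixels of reprojection error tolerated per landmark

def detect_motion_conflict(visual_pos, inertial_pos, reproj_errors):
    # Per-frame check: do the camera-only and IMU-only position estimates
    # disagree?  Per-landmark check: which landmarks reproject badly and
    # should be eliminated from the fusion?
    discrepancy = float(np.linalg.norm(np.asarray(visual_pos, float)
                                       - np.asarray(inertial_pos, float)))
    conflict = discrepancy > POS_TOL
    bad_landmarks = [i for i, e in enumerate(reproj_errors) if e > REPROJ_TOL]
    return conflict, bad_landmarks
```

Resolution then amounts to excluding the flagged measurements from the tightly coupled optimization, as the abstract describes.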
3

Localização e mapeamento simultâneos (SLAM) visual usando sensor RGB-D para ambientes internos e representação de características

Guapacha, Jovanny Bedoya January 2017 (has links)
Advisor: Suely Cunha Amaro Mantovani / The creation of robots that can operate autonomously in controlled and uncontrolled environments has been one of the main goals of mobile robotics. For a robot to navigate an unknown indoor environment, it must localize itself and at the same time build a map of its surroundings; this problem is called Simultaneous Localization and Mapping (SLAM). This work proposes to solve the SLAM problem using an RGB-D sensor, with 6 degrees of freedom, to perceive the environment, embedded on a robot. The SLAM problem can be solved by accurately estimating the pose (position and orientation) and the trajectory of the sensor in the environment, supporting the construction of a three-dimensional (3D) map. This estimation involves the consecutive capture of frames of the environment provided by the RGB-D sensor, from which the most salient points of the images are determined using visual features given by the ORB algorithm. Next, the comparison between consecutive frames and the computation of the geometric transformations are performed by means of bPROSAC, an algorithm for eliminating atypical correspondences. Finally, a correction of inconsistencies is carried out to reconstruct the 3D map and to estimate the robot's trajectory more accurately, using nonlinear optimization techniques. Experiments are carried out to show the construction of the map and the performance of the proposal. / Doctorate
4

Localização e mapeamento simultâneos (SLAM) visual usando sensor RGB-D para ambientes internos e representação de características / Simultaneous location and mapping (SLAM) visual using RGB-D sensor for indoor environments and characteristics representation

Guapacha, Jovanny Bedoya [UNESP] 04 September 2017 (has links)
Submitted by Jovanny Bedoya Guapacha on 2017-11-02; approved and made available in DSpace on 2017-11-13 (GMT). Previous issue date: 2017-09-04 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / The creation of robots that can operate autonomously in controlled and uncontrolled environments has been one of the main objectives of mobile robotics. In order for a robot to navigate in an unknown indoor environment, it must localize itself and at the same time construct a map of the surrounding environment; this problem is called Simultaneous Localization and Mapping (SLAM). The purpose of this work is to solve the SLAM problem using an RGB-D sensor, with 6 degrees of freedom, to perceive the environment, embedded onto a robot. The SLAM problem can be solved by accurately estimating the pose (position and orientation) and the trajectory of the sensor/robot in the environment, supporting the construction of a 3D map. This estimation involves the consecutive capture of frames of the environment provided by the RGB-D sensor, from which the most salient points of the images are determined using visual features given by the ORB algorithm. Then, the comparison between consecutive frames and the computation of the geometric transformations are performed using bPROSAC, an algorithm for eliminating atypical correspondences. Finally, a correction of inconsistencies is made to reconstruct the 3D map and to estimate the robot trajectory more accurately, using nonlinear optimization techniques. Experiments are carried out to show the construction of the map and the performance of the proposal.
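The correspondence-elimination step in this record can be illustrated with plain RANSAC; bPROSAC, the thesis's variant, is not publicly documented here, so the sketch below is a generic stand-in that rejects atypical matches when estimating a 2-D translation between frames:

```python
import random
import numpy as np

def ransac_translation(src, dst, iters=100, tol=0.1, seed=0):
    # Plain RANSAC over a minimal model: one correspondence fixes a 2-D
    # translation.  Repeatedly hypothesize, count inliers, keep the best
    # hypothesis, then refit the translation on its inliers only.
    rng = random.Random(seed)
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    best_inliers = np.array([], dtype=int)
    for _ in range(iters):
        i = rng.randrange(len(src))
        t = dst[i] - src[i]
        resid = np.linalg.norm(dst - (src + t), axis=1)
        inliers = np.nonzero(resid < tol)[0]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers.tolist()
```

The real pipeline estimates a full rigid transformation between RGB-D frames; the translation-only model here just keeps the minimal sample at one match.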
5

Localização de robôs móveis autônomos utilizando fusão sensorial de odometria e visão monocular

Santos, Guilherme Leal 07 May 2010 (has links)
Made available in DSpace on 2014-12-17 (GMT). Previous issue date: 2010-05-07 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The development and refinement of techniques that perform simultaneous localization and mapping (SLAM) for an autonomous mobile robot, and build local 3-D maps from a sequence of images, are widely studied in the scientific community. This work presents a monocular visual SLAM technique based on the extended Kalman filter, which finds features in a sequence of images using the SURF (Speeded Up Robust Features) descriptor and determines which features can be used as landmarks through a delayed-initialization technique based on 3-D straight lines. For this, only the coordinates of the features found in the image and the intrinsic and extrinsic camera parameters are available; the position of a landmark can be determined only once depth information becomes available. Tests showed that, along the route, the mobile robot detects the presence of features in the images and, through the proposed technique for delayed initialization of landmarks, adds new landmarks to the state vector of the extended Kalman filter (EKF) after estimating the depth of the features. With the estimated positions of the landmarks, it was possible to estimate the updated position of the robot at each step, obtaining good results that demonstrate the effectiveness of the monocular visual SLAM system proposed in this work.
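The depth information that delayed initialization waits for comes from parallax: once the camera has moved enough, two bearing observations of the same feature intersect at a well-conditioned point. A minimal 2-D sketch of that triangulation (a simplified illustration, not the thesis's 3-D straight-line formulation):

```python
import math

def triangulate_landmark(p1, b1, p2, b2, min_sin=1e-6):
    # Intersect the ray from p1 along absolute bearing b1 with the ray
    # from p2 along bearing b2.  Returns None while the rays are nearly
    # parallel (baseline too short) -- exactly the situation a
    # delayed-initialization scheme waits out before adding the landmark
    # to the EKF state vector.
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # sin of the angle between rays
    if abs(denom) < min_sin:
        return None
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom   # range along the first ray
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

Once a landmark position is recovered this way, it can be appended to the EKF state with an initial covariance reflecting the triangulation geometry.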
6

[pt] SLAM VISUAL EM AMBIENTES DINÂMICOS UTILIZANDO SEGMENTAÇÃO PANÓPTICA / [en] VISUAL SLAM IN DYNAMIC ENVIRONMENTS USING PANOPTIC SEGMENTATION

GABRIEL FISCHER ABATI 10 August 2023 (has links)
[en] Mobile robots have become popular in recent years due to their ability to operate autonomously and accomplish tasks that would otherwise be too dangerous, repetitive, or tedious for humans. To achieve fully autonomous navigation, the robot must have a map of its surroundings and an estimate of its location within this map. The Simultaneous Localization and Mapping (SLAM) problem is concerned with determining both the map and the localization concurrently, using sensor measurements. Visual SLAM estimates the location and map of a mobile robot using only visual information captured by cameras. Using cameras for sensing provides a significant advantage, as they enable solving computer vision tasks that offer high-level information about the scene, including object detection, segmentation, and recognition. There are several visual SLAM systems in the literature with high accuracy and performance, but the majority of them are not robust in dynamic scenarios. The ones that do deal with dynamic content usually rely on deep learning-based methods to detect and filter dynamic objects; however, these methods cannot deal with unknown objects. This work presents a new visual SLAM system that is robust to dynamic environments, even in the presence of unknown moving objects. It uses panoptic segmentation to filter dynamic objects from the scene during the state estimation process. The proposed methodology is based on ORB-SLAM3, a state-of-the-art SLAM system for static environments. The implementation was tested on real-world datasets and compared with several systems from the literature, including DynaSLAM, DS-SLAM, and SaD-SLAM. The proposed system also surpasses the results of ORB-SLAM3 on a custom dataset composed of dynamic environments with unknown moving objects.
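The filtering step this record describes — using a panoptic segmentation mask to keep dynamic objects out of state estimation — reduces, at its core, to discarding feature points that land on dynamic segments. A hypothetical sketch (the class set and the mask/label encoding are illustrative assumptions, not the thesis's actual interface):

```python
import numpy as np

# Which panoptic classes count as dynamic is a design choice;
# this set is purely illustrative.
DYNAMIC_CLASSES = {"person", "car", "bicycle"}

def filter_dynamic_features(keypoints, panoptic_mask, segment_class):
    # panoptic_mask[y, x] holds a segment id; segment_class maps segment
    # id -> class label.  Keypoints landing on dynamic segments are
    # discarded before they can reach the SLAM back-end.
    static = []
    for (x, y) in keypoints:
        label = segment_class.get(int(panoptic_mask[y, x]), "unknown")
        if label not in DYNAMIC_CLASSES:
            static.append((x, y))
    return static
```

Because panoptic segmentation also produces instance segments for "stuff" and unknown "thing" regions, the same gate can be extended to unrecognized moving objects, which is the gap this work targets.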
7

An Observability-Driven System Concept for Monocular-Inertial Egomotion and Landmark Position Determination

Markgraf, Marcel 25 February 2019 (has links)
In this dissertation a novel alternative system concept for monocular-inertial egomotion and landmark position determination is introduced. It is mainly motivated by an in-depth analysis of the observability and consistency of the classic simultaneous localization and mapping (SLAM) approach, which is based on a world-centric model of an agent and its environment. Within the novel system concept, a body-centric agent and environment model, a pseudo-world-centric motion propagation, and closed-form initialization procedures are introduced. This approach allows for combining the advantageous observability properties of body-centric modeling with the advantageous motion propagation properties of world-centric modeling. A consistency-focused, simulation-based evaluation demonstrates the capabilities as well as the limitations of the proposed concept.
8

Localization of autonomous ground vehicles in dense urban environments

Himstedt, Marian 25 January 2011 (has links)
The localization of autonomous ground vehicles in dense urban environments poses a challenge. Applications in classical outdoor robotics rely on the availability of GPS in order to estimate position. However, the presence of complex building structures in dense urban environments hampers reliable GPS-based localization, so alternative approaches have to be applied to tackle this problem. This thesis proposes an approach that combines the observations of a single perspective camera with odometry in a probabilistic framework. In particular, localization in the space of appearance is addressed. First, a topological map of reference places in the environment is built. Each reference place is associated with a set of visual features, and a feature selection is carried out in order to obtain distinctive reference places. The topological map is extended to a hybrid representation through the use of metric information from Geographic Information Systems (GIS) and satellite images. Localization is solved in terms of the recognition of reference places. A particle filter implementation incorporating this and the vehicle's odometry is presented. The proposed system is evaluated in multiple experiments in exemplary urban environments characterized by tall building structures and a multitude of dynamic objects.
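The fusion described here — odometry for prediction, recognized reference places for correction — is the classic particle filter cycle. A minimal 1-D sketch under simplifying assumptions (scalar positions along a route, Gaussian observation likelihood around a recognized place's metric position; all parameter values are illustrative):

```python
import math
import random

def particle_filter_step(particles, odom, observed_place, place_positions,
                         motion_noise=0.1, obs_sigma=1.0, rng=None):
    # One predict-weight-resample cycle fusing odometry with an
    # appearance-based observation (a recognized reference place).
    rng = rng or random.Random(0)
    # Predict: propagate each particle by the odometry reading plus noise.
    pred = [p + odom + rng.gauss(0.0, motion_noise) for p in particles]
    # Weight: likelihood of being near the recognized place's position.
    z = place_positions[observed_place]
    weights = [math.exp(-0.5 * ((p - z) / obs_sigma) ** 2) for p in pred]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample proportionally to the weights.
    return rng.choices(pred, weights=weights, k=len(pred))
```

In the thesis's hybrid representation the observation model scores place recognition over the topological map, with metric positions supplied by GIS and satellite imagery rather than a lookup table as here.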
9

[en] REAL-TIME METRIC-SEMANTIC VISUAL SLAM FOR DYNAMIC AND CHANGING ENVIRONMENTS / [pt] SLAM VISUAL MÉTRICO-SEMÂNTICO EM TEMPO REAL PARA AMBIENTES EM MUDANÇA E DINÂMICOS

JOAO CARLOS VIRGOLINO SOARES 05 July 2022 (has links)
[en] Mobile robots have become increasingly important in modern society, as they can perform tasks that are tedious or too repetitive for humans, such as cleaning and patrolling. Most of these tasks require a certain level of autonomy of the robot. To be fully autonomous and perform navigation, the robot needs a map of the environment and its pose within this map. The Simultaneous Localization and Mapping (SLAM) problem is the task of estimating both map and localization simultaneously, using only sensor measurements. The visual SLAM problem is the task of performing SLAM using only cameras for sensing. The main advantage of using cameras is the possibility of solving computer vision problems that provide high-level information about the scene, such as object detection. However, most visual SLAM systems assume a static environment, which imposes a limitation on their applicability in real-world scenarios. This thesis presents solutions to the visual SLAM problem in dynamic and changing environments. A custom deep learning-based people detector allows the solution to deal with crowded environments. A combination of a robust object tracker and a filtering algorithm enables the visual SLAM system to perform well in highly dynamic environments containing moving objects. Furthermore, the thesis proposes a visual SLAM method for changing environments, i.e., scenes where objects are moved after the robot has already mapped them. All proposed methods are tested on public datasets and in experiments, and compared with several state-of-the-art methods, achieving high accuracy in real time.
10

Bearing-only SLAM : a vision-based navigation system for autonomous robots

Huang, Henry January 2008 (has links)
To navigate successfully in a previously unexplored environment, a mobile robot must be able to estimate the spatial relationships of the objects of interest accurately. A Simultaneous Localization and Mapping (SLAM) system employs its sensors to incrementally build a map of its surroundings and to localize itself in the map simultaneously. The aim of this research project is to develop a SLAM system suitable for self-propelled household lawnmowers. The proposed bearing-only SLAM system requires only an omnidirectional camera and some inexpensive landmarks. The main advantage of an omnidirectional camera is the panoramic view of all the landmarks in the scene. Placing landmarks in a lawn field to define the working domain is much easier and more flexible than installing the perimeter wire required by existing autonomous lawnmowers. The common approach of existing bearing-only SLAM methods relies on a motion model for predicting the robot's pose and a sensor model for updating the pose. In the motion model, the error on the estimates of object positions accumulates, due mainly to wheel slippage; quantifying the uncertainty of object positions accurately is a fundamental requirement. In bearing-only SLAM, the Probability Density Function (PDF) of landmark position should be uniform along the observed bearing. Existing methods that approximate the PDF with a Gaussian estimation do not satisfy this uniformity requirement. This thesis introduces both geometric and probabilistic methods to address the above problems. The main novel contributions of this thesis are:
1. A bearing-only SLAM method not requiring odometry. The proposed method relies solely on the sensor model (landmark bearings only) without relying on the motion model (odometry). The uncertainty of the estimated landmark positions depends on the vision error only, instead of the combination of both odometry and vision errors.
2. The transformation of the spatial uncertainty of objects. This thesis introduces a novel method for translating the spatial uncertainty of objects estimated from a moving frame attached to the robot into the global frame attached to the static landmarks in the environment.
3. The characterization of an improved PDF for representing landmark position in bearing-only SLAM. The proposed PDF is expressed in polar coordinates, and the marginal probability on range is constrained to be uniform. Compared to the PDF estimated from a mixture of Gaussians, the PDF developed here has far fewer parameters and can easily be adopted in a probabilistic framework, such as a particle filtering system.
The main advantages of the proposed bearing-only SLAM system are its lower production cost and flexibility of use. The proposed system can be adopted in other domestic robots as well, such as vacuum cleaners or robotic toys, when the terrain is essentially 2D.
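The PDF property named in contribution 3 — polar coordinates with a uniform marginal over range — is easy to realize as a sampler for a particle filter. The sketch below is a hypothetical illustration of that property only (the range bounds and bearing noise are assumed parameters, not values from the thesis):

```python
import math
import random

def sample_landmark_polar(bearing, r_min, r_max, bearing_sigma, n, seed=0):
    # Landmark hypotheses along an observed bearing: the range marginal
    # is uniform on [r_min, r_max] (no depth information yet), while the
    # bearing is Gaussian around the measurement.
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        r = rng.uniform(r_min, r_max)
        th = rng.gauss(bearing, bearing_sigma)
        out.append((r * math.cos(th), r * math.sin(th)))
    return out
```

By construction, no range is preferred over another until later bearings constrain the depth, which is exactly the uniformity requirement a single Gaussian in Cartesian coordinates cannot satisfy.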