1

Fermeture de boucle pour la cartographie topologique et la navigation avec des images omnidirectionnelles / Loop closure for topological mapping and navigation with omnidirectional images

Korrapati, Hemanth 03 July 2013 (has links)
Over the last three decades, research in mobile robot mapping and localization has made significant progress. Most of this research, however, casts the problem in the metric SLAM framework. Since metric mapping techniques are vulnerable to drift-induced errors, their ability to produce consistent maps is limited to small-scale environments. Topological mapping approaches, which are independent of metric information, therefore stand as an alternative in large-scale environments. This thesis mainly deals with the loop-closure problem, which is the crux of any topological mapping algorithm. Our main aim is to solve the loop-closure problem efficiently and accurately using an omnidirectional imaging sensor. Sparse topological maps can be built by representing groups of visually similar images of a sequence as nodes of a topological graph. We propose a sparse/hierarchical topological mapping framework that uses Image Sequence Partitioning (ISP) to group visually similar images of a sequence into nodes, which are then connected upon loop-closure detection to form a topological graph. A hierarchical loop-closure algorithm first retrieves the similar nodes and then performs an image similarity analysis on the retrieved nodes. An indexing data structure called the Hierarchical Inverted File (HIF) is proposed to store the sparse maps and enable efficient hierarchical loop closure. TFIDF weighting is combined with spatial and frequency constraints on the detected features for improved loop-closure robustness. The sparsity, efficiency and accuracy of the resulting maps are evaluated and compared against two existing techniques on publicly available outdoor omnidirectional image sequences. Modest loop-closure recall rates were observed without the epipolar-geometry verification step common in other approaches. Although efficient, the HIF-based approach has certain disadvantages, such as low map sparsity and a low loop-closure recall rate. 
To address these shortcomings, another loop-closure technique using a spatial-constraint-based similarity measure on omnidirectional images was proposed. The low map sparsity caused by over-partitioning of the input sequence is overcome by using Vectors of Locally Aggregated Descriptors (VLAD) for ISP. The poor resolution of omnidirectional images yields few feature matches between image pairs, reducing recall rates; a spatial constraint exploiting the omnidirectional image structure is therefore used for feature matching, which gives accurate results even with few matches. Recall rates better than the contemporary FABMAP 2.0 approach were observed without additional geometric verification. The second contribution of this thesis is a visual memory-management approach suitable for the long-term operation of mobile robots, applicable to both topological and metric visual maps; initial results demonstrating its capabilities are provided. Finally, a detailed description of the acquisition and construction of our multi-sensor dataset is given. The dataset is intended to serve researchers in the mobile robotics and vision communities in evaluating applications such as visual SLAM, mapping and visual odometry. It is the first dataset with omnidirectional images acquired from a car-like vehicle driven along a trajectory with multiple loops. It consists of 6 sequences with data from 11 sensors, including 7 cameras, covering 18 kilometres in a semi-urban environment with complete and precise ground truth.
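The TFIDF weighting over an inverted file mentioned in the abstract can be sketched as a generic bag-of-visual-words scorer. This is an illustrative sketch, not the thesis's actual HIF implementation; all names and the toy data are made up:

```python
import math
from collections import Counter, defaultdict

def build_inverted_file(images):
    """images: dict image_id -> list of visual-word ids.
    Returns (inverted_index, idf) for TF-IDF scoring."""
    inverted = defaultdict(set)
    for img_id, words in images.items():
        for w in words:
            inverted[w].add(img_id)
    n = len(images)
    idf = {w: math.log(n / len(ids)) for w, ids in inverted.items()}
    return inverted, idf

def tfidf_scores(query_words, images, inverted, idf):
    """Score database images against the query by accumulating
    TF-IDF weights over shared visual words; only images sharing
    at least one word with the query are ever touched."""
    scores = defaultdict(float)
    q_tf = Counter(query_words)
    for w, qn in q_tf.items():
        for img_id in inverted.get(w, ()):
            tf = images[img_id].count(w) / len(images[img_id])
            scores[img_id] += (qn / len(query_words)) * tf * idf.get(w, 0.0)
    return dict(scores)
```

The inverted file is what makes the lookup efficient: a query only visits images that share visual words with it, which is the property the hierarchical (node-then-image) retrieval described above exploits.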
2

Localização e mapeamento simultâneos com auxílio visual omnidirecional. / Simultaneous localization and mapping with omnidirectional vision.

Guizilini, Vitor Campanholo 12 August 2008 (has links)
The problem of simultaneous localization and mapping, known as the SLAM problem, is one of the greatest challenges that autonomous mobile robotics faces today. It concerns a robot's ability to navigate through an unknown environment, constructing a map of the regions it has already visited while simultaneously localizing itself on that map. The imprecision inherent in the sensors used to collect information generates errors that accumulate over time, preventing precise estimates of localization and mapping when the sensor data are used directly. SLAM algorithms try to eliminate these errors by exploiting the mutual dependence of the two problems and solving them simultaneously, using the results of one step to refine the estimates of the other. One way to achieve this is to establish landmarks in the environment that the robot can use as reference points to localize itself as it navigates. This work presents a solution to the SLAM problem that uses an omnidirectional vision sensor to detect such landmarks. Visual sensors allow the extraction of landmarks natural to the environment that can be matched robustly under different points of view as the robot moves, and omnidirectional vision widens the robot's field of view, increasing the number of landmarks observed at each instant. Detected landmarks are added to the robot's map of the environment; when they are later recognized, the robot uses this information to refine its localization and mapping estimates, eliminating accumulated errors and keeping the estimates precise even after long periods of navigation. The solution was tested in real navigation scenarios, and the results show a substantial improvement over those obtained by direct use of the collected sensor information.
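The core correction mechanism described above — accumulated odometry drift collapsed by re-observing a known landmark — can be illustrated by a one-dimensional Kalman-style fusion. All numbers are illustrative and not taken from the thesis:

```python
def fuse(mean, var, meas_mean, meas_var):
    """Kalman-style fusion of a pose estimate with a landmark-derived
    measurement; the fused variance is never larger than either input."""
    k = var / (var + meas_var)            # Kalman gain
    return mean + k * (meas_mean - mean), (1 - k) * var

# Odometry alone: uncertainty grows with every step.
pose, var = 0.0, 0.0
for _ in range(100):
    pose, var = pose + 1.0, var + 0.04    # 1 m steps, noisy odometry

# Re-observing a landmark of known position yields an independent
# pose measurement and collapses the accumulated variance.
pose, var = fuse(pose, var, meas_mean=99.2, meas_var=0.25)
```

After 100 steps the odometry-only variance has grown to 4.0; a single landmark re-observation with variance 0.25 pulls the fused variance below 0.24, which is the drift-elimination effect the abstract describes.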
3

Sistema de visão omnidirecional aplicado no controle de robôs móveis. / Omnidirectional vision system applied to mobile robots control.

Grassi Júnior, Valdir 07 May 2002 (has links)
Omnidirectional vision systems capture images with a 360-degree field of view, which makes them well suited to tasks such as robot navigation, tele-operation and visual servoing. Such a system does not require moving the camera toward the robot's direction of attention; on the other hand, it requires non-conventional image processing, since the captured image is mapped in non-linear polar coordinates. One effective way to obtain an omnidirectional image is through the combined use of lenses and mirrors. Several shapes of convex mirror can be used, mounting the camera with its optical axis aligned with the centre of the mirror; the most commonly used shapes are conic, parabolic, hyperbolic and spherical. 
In this work an omnidirectional vision system was built using a hyperbolic mirror. The system was mounted on a mobile robot and applied to a control task: tracking a moving target in real time while keeping the distance between the robot and the target constant. The task is accomplished by feeding visual information about the target, acquired in real time by the vision system, back to the robot controller in a visual servoing approach.
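The distance-keeping task above can be sketched as a simple proportional visual-servo loop. This is a generic illustration with made-up gains and a toy simulation step, not the controller actually used in the thesis:

```python
def servo_step(distance, target, gain=0.5):
    """Proportional visual-servo step: command a forward velocity that
    drives the measured target distance toward the desired value."""
    return gain * (distance - target)     # forward velocity command

# Simulated loop: at each frame the robot measures the range to the
# target from the omnidirectional image and closes the loop.
distance, target = 3.0, 1.0
for _ in range(20):
    v = servo_step(distance, target)
    distance -= v * 0.5                   # robot advances v*dt toward target
```

With these gains the range error shrinks geometrically (by a factor of 0.75 per frame here), so after 20 frames the robot sits within a centimetre of the desired 1 m standoff.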
5

Grades de evidência com visão estéreo omnidirecional para robôs móveis. / Evidence grids with omnidirectional stereovision for mobile robots.

Fabiano Rogério Corrêa 27 August 2004 (has links)
Autonomous mobile robots depend on information acquired by their sensors for decision-making during their tasks. Vision systems provide a large amount of data about the environment in which the robot operates; in particular, an omnidirectional vision system provides information about all directions of the environment in a single image. By processing a pair (or more) of omnidirectional images, the distances between the robot and the objects in its workspace can be obtained. Because of the uncertainty inherent in any sensing, a probabilistic sensor model is needed so that the acquired information can be used in the robot's internal decision-making processes. 
With an omnidirectional stereo vision system as the sole sensor, used as the source of information for a stochastic spatial representation of the environment known as Evidence Grids, the robot can determine the probability of occupancy of the space around it and thus navigate autonomously. This work presents a stereo algorithm for omnidirectional images and a model of the omnidirectional stereo vision system for updating the Evidence Grid. It is the first stage of a project aiming at navigation and exploration of unknown, unstructured environments, using a probabilistic model based on Evidence Grids as the robot's knowledge base.
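The Evidence Grid update described above is, in its standard form, a Bayesian accumulation of sensor evidence per cell, usually carried in log-odds. A minimal sketch follows; the per-reading occupancy probabilities are illustrative, not the thesis's stereo sensor model:

```python
import math

def logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def update_cell(l, p_meas):
    """Bayesian evidence-grid update in log-odds form: each sensor
    reading adds its log-odds to the cell's accumulated evidence."""
    return l + logodds(p_meas)

def prob(l):
    """Convert accumulated log-odds back to an occupancy probability."""
    return 1 - 1 / (1 + math.exp(l))

# A cell repeatedly observed as occupied (p = 0.7 per stereo reading)
# accumulates evidence; a contradictory reading pulls it back.
l = 0.0                        # prior: p = 0.5
for _ in range(3):
    l = update_cell(l, 0.7)    # three "occupied" readings
l = update_cell(l, 0.3)        # one "free" reading
```

The log-odds form makes the fusion a plain addition, which is why evidence grids scale well to the stream of readings an omnidirectional stereo rig produces.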
8

Omnidirectional Optical Flow and Visual Motion Detection for Autonomous Robot Navigation

Stratmann, Irem 06 December 2007 (has links)
Autonomous robot navigation in dynamic environments requires robust detection of egomotion and of independent motion. This thesis introduces a novel solution to the problem of visual independent-motion detection by interpreting the topological features of the omnidirectional dense optical flow field and determining the background (egomotion) direction. The problem is solved in four interdependent subtasks. Independent-motion detection can only be accomplished if egomotion detection yields a relevant background-motion model; therefore the problem of Egomotion Detection is solved first, by exploiting the topological structures of the global omnidirectional optical flow fields. Estimating the optical flow field is the prerequisite of the egomotion-detection task. Since the omnidirectional projection introduces non-affine deformations on the image plane, known optical-flow calculation methods must be modified to yield accurate results; this modification is introduced here as another subtask, Omnidirectional Optical Flow Estimation. The experiments concerning 3D omnidirectional scene capture are grouped under the fourth subtask, 3D Omni-Image Processing.
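For a purely translating camera, the background egomotion direction corresponds to the focus of expansion (FOE) of the flow field: every flow vector lies on a line through the FOE. A least-squares intersection of those lines sketches the idea on a flat synthetic field; this is a generic construction, not the topological omnidirectional method developed in the thesis:

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Estimate the FOE of a translational flow field by solving the
    least-squares intersection of the lines carried by the flow vectors."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, v in zip(points, flows):
        n = np.array([-v[1], v[0]], dtype=float)   # normal to the flow vector
        n /= np.linalg.norm(n)
        A += np.outer(n, n)                        # constraint: n . (x - p) = 0
        b += np.outer(n, n) @ p
    return np.linalg.solve(A, b)

# Synthetic radial field expanding from (10, 5), as for a camera
# translating forward: flow at each point p points away from the FOE.
foe_true = np.array([10.0, 5.0])
pts = np.array([[0., 0.], [20., 0.], [0., 10.], [20., 10.], [15., 2.]])
flows = pts - foe_true
foe = focus_of_expansion(pts, flows)
```

On an omnidirectional sensor the same singular point of the flow field appears as a topological feature (a source/sink pair), which is the structure the thesis exploits.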
9

Omnidirectional Vision for an Autonomous Surface Vehicle

Gong, Xiaojin 07 February 2009 (has links)
Due to their wide field of view, omnidirectional cameras have been used extensively in many applications, including surveillance and autonomous navigation. To implement a fully autonomous system, one of the essential problems is the construction of an accurate, dynamic environment model; in computer vision this is called structure from stereo or motion (SFSM). The work in this dissertation addresses omnidirectional-vision-based SFSM for the navigation of an autonomous surface vehicle (ASV), and implements a vision system capable of locating stationary obstacles and detecting moving objects in real time. The environments in which the ASV navigates are complex and full of noise, so system performance is a primary concern. In this dissertation we thoroughly investigate the range-estimation performance of our omnidirectional vision system with respect to different omnidirectional stereo configurations and under various kinds of noise, for instance disturbances in calibration, stereo configuration and image processing. The result of this performance analysis is very important for our application: it not only affects the ASV's navigation but also guides the development of our omnidirectional stereo vision system. Another big challenge is dealing with the noisy image data obtained in riverine environments. In our vision system a four-step image-processing procedure is designed: feature detection, feature tracking, motion detection and outlier rejection. The choice of point-wise features and an outlier-rejection-based method makes motion detection and stationary-obstacle detection efficient. Long outdoor experiments were conducted in real time and show the effectiveness of the system. / Ph. D.
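The range-estimation sensitivity analysed above has a standard first-order form for a rectified stereo pair: range error grows quadratically with range, which bounds the useful sensing radius. A sketch with illustrative numbers (not the ASV's actual calibration; focal length, baseline and disparity noise are assumptions):

```python
def range_from_disparity(f, baseline, d):
    """Triangulated range Z = f * B / d for a rectified stereo pair
    (f in pixels, baseline in metres, disparity d in pixels)."""
    return f * baseline / d

def range_stddev(f, baseline, z, sigma_d):
    """First-order propagation of disparity noise:
    sigma_Z = (Z^2 / (f * B)) * sigma_d."""
    return (z ** 2 / (f * baseline)) * sigma_d

# Illustrative numbers: f = 400 px, baseline = 0.5 m, 0.5 px noise.
z5 = range_stddev(400, 0.5, 5.0, 0.5)     # error at 5 m
z20 = range_stddev(400, 0.5, 20.0, 0.5)   # error at 20 m
```

With these numbers the 1-sigma range error is about 6 cm at 5 m but a full metre at 20 m, which is why the stereo configuration (baseline, resolution) matters so much for obstacle detection at a distance.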
10

Local visual feature based localisation and mapping by mobile robots

Andreasson, Henrik January 2008 (has links)
This thesis addresses the problems of registration, localisation and simultaneous localisation and mapping (SLAM), relying particularly on local visual features extracted from camera images. These fundamental problems in mobile robot navigation are tightly coupled. Localisation requires a representation of the environment (a map) and registration methods to estimate the pose of the robot relative to the map given the robot's sensory readings. To create a map, sensor data must be accumulated into a consistent representation and therefore the pose of the robot needs to be estimated, which is again the problem of localisation. The major contributions of this thesis are new methods proposed to address the registration, localisation and SLAM problems, considering two different sensor configurations. The first part of the thesis concerns a sensor configuration consisting of an omnidirectional camera and odometry, while the second part assumes a standard camera together with a 3D laser range scanner. The main difference is that the former configuration allows for a very inexpensive set-up and (considering the possibility to include visual odometry) the realisation of purely visual navigation approaches. By contrast, the second configuration was chosen to study the usefulness of colour or intensity information in connection with 3D point clouds (“coloured point clouds”), both for improved 3D resolution (“super resolution”) and for approaches to the fundamental problems of navigation that exploit the complementary strengths of visual and range information. Considering the omnidirectional camera/odometry setup, the first part introduces a new registration method based on a measure of image similarity. This registration method is then used to develop a localisation method that is robust to changes in dynamic environments, and a visual approach to metric SLAM that does not require position estimation of local image features and thus provides a very efficient approach. 
The second part, which considers a standard camera together with a 3D laser range scanner, starts with the proposal and evaluation of non-iterative interpolation methods. These methods use colour information from the camera to obtain range information at the resolution of the camera image, or even with sub-pixel accuracy, from the low resolution range information provided by the range scanner. Based on the ability to determine depth values for local visual features, a new registration method is then introduced, which combines the depth of local image features and variance estimates obtained from the 3D laser range scanner to realise a vision-aided 6D registration method, which does not require an initial pose estimate. This is possible because of the discriminative power of the local image features used to determine point correspondences (data association). The vision-aided registration method is further developed into a 6D SLAM approach where the optimisation constraint is based on distances of paired local visual features. Finally, the methods introduced in the second part are combined with a novel adaptive normal distribution transform (NDT) representation of coloured 3D point clouds into a robotic difference detection system.
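The appearance-based localisation idea in the first part — compare the current image against stored map images by a similarity measure and localise at the best match — can be sketched minimally. This is a hypothetical cosine-similarity matcher over global descriptors; the names and toy 2-D "descriptors" are illustrative, not Andreasson's actual image-similarity measure:

```python
import numpy as np

def localise(query_desc, map_descs):
    """Appearance-based localisation: return the index of the stored
    map image whose global descriptor has the highest cosine
    similarity to the query descriptor, plus that similarity."""
    q = query_desc / np.linalg.norm(query_desc)
    sims = [float(q @ (d / np.linalg.norm(d))) for d in map_descs]
    best = int(np.argmax(sims))
    return best, sims[best]

# Toy descriptors standing in for whole-image descriptors of map nodes.
map_descs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
best, sim = localise(np.array([0.9, 1.1]), map_descs)
```

Because the match is made on whole-image similarity rather than on the estimated 3D positions of individual features, this style of localisation sidesteps per-feature position estimation, which is the efficiency argument made above.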
