1 |
Infrared based monocular relative navigation for active debris removal
Yilmaz, Özgün January 2018
In space, vision-based relative navigation systems suffer from the harsh illumination conditions of the target (e.g. eclipse conditions, solar glare, etc.). In current Rendezvous and Docking (RvD) missions, most of these issues are addressed by advanced mission planning techniques (e.g. strict manoeuvre timings). However, such planning would not always be feasible for Active Debris Removal (ADR) missions, which have more unknowns. Fortunately, thermal infrared technology can operate under any lighting conditions and therefore has the potential to be exploited in the ADR scenario. In this context, this study investigates the benefits and challenges of infrared-based relative navigation. The infrared environment of ADR differs considerably from that of terrestrial applications. This study proposes a methodology for modelling this environment in a computationally cost-effective way, creating a simulation environment in which the navigation solution can be tested. Through an intelligent classification of possible target surface coatings, the study is generalised to simulate the thermal environment of space debris in different orbit profiles. By modelling various scenarios, the study also discusses the possible challenges of the infrared technology. These theoretical findings were replicated in laboratory conditions providing the thermal-vacuum environment of ADR. Using this novel space debris set-up, the study investigates the behaviour of infrared cues extracted by different techniques and identifies the issue of short-lifespan features in ADR scenarios. Based on these findings, the study proposes two relative navigation methods distinguished by the degree of target cooperativeness: one for partially cooperative targets, and one for uncooperative targets. Both algorithms provide the navigation solution with respect to an online reconstruction of the target.
The method for partially cooperative targets provides a solution for smooth trajectories by exploiting the subsequent image tracks of features extracted from the first frame. The second algorithm, for uncooperative targets, exploits the target motion (e.g. tumbling) by formulating the problem in terms of a static target and a moving map (i.e. the target structure) within a filtering framework. The optical flow information is related to the target motion derivatives and the target structure. A novel technique that uses the quality of the infrared cues to improve the algorithm's performance is introduced. The problem of short measurement duration due to the target's tumbling motion is addressed by an innovative smart initialisation procedure. Both navigation solutions were tested in a number of different scenarios using computer simulations and a dedicated laboratory set-up with a real infrared camera. It is shown that these methods can perform well as infrared-based navigation solutions using monocular cameras where knowledge of the infrared appearance of the target is limited.
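The short-lifespan feature problem identified above suggests pruning tracks that survive only a few frames before they corrupt the navigation filter. The sketch below is a hedged illustration of such a pruning step; the track format and the threshold are assumptions for illustration, not the thesis's actual implementation.

```python
# Hypothetical sketch: discard feature tracks shorter than a minimum
# lifespan, motivated by the short-lived infrared cues described above.
# The track representation and threshold are illustrative assumptions.

def filter_tracks(tracks, min_lifespan=5):
    """Keep only feature tracks observed in at least `min_lifespan` frames.

    `tracks` maps a feature id to the list of frame indices in which the
    feature was successfully matched.
    """
    return {fid: frames for fid, frames in tracks.items()
            if len(frames) >= min_lifespan}

tracks = {
    1: [0, 1, 2, 3, 4, 5],   # long-lived cue, kept
    2: [0, 1],               # short-lived infrared cue, discarded
    3: [2, 3, 4, 5, 6],      # kept
}
stable = filter_tracks(tracks, min_lifespan=5)
```

In practice the threshold would trade off between discarding noise and starving the filter of measurements during fast tumbling.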
|
2 |
Indoor Positioning Using Opportunistic Multi-Frequency RSS With Foot-Mounted INS / Inomhuspositionering baserat på opportunistiska signalstyrkemätningar och fotmonterad TNS
Nilsson, Martin January 2014
Reliable and accurate positioning systems are expected to significantly improve the safety of first responders and enhance their operational efficiency. To be effective, a first responder positioning system must provide room-level accuracy during extended periods of indoor operation. This thesis presents a system which combines a zero-velocity-update (ZUPT) aided inertial navigation system (INS), using a foot-mounted inertial measurement unit (IMU), with opportunistic multi-frequency received signal strength (RSS) measurements. The system does not rely on maps or pre-collected data from surveys of the radio-frequency (RF) environment; instead, it builds its own database of collected RSS measurements during the course of the operation. New RSS measurements are continuously compared with the stored values in the database, so a return to a previously visited area can be detected. This enables loop closures to be detected online, which can be used to correct error drift. The system utilises a distributed particle simultaneous localisation and mapping (DP-SLAM) algorithm which provides a flexible 2-D navigation platform that can be extended with more sensors. The experimental results presented in this thesis indicate that the developed RSS-SLAM algorithm can, in many cases, significantly improve the positioning performance of a foot-mounted INS.
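The ZUPT aiding described above hinges on detecting the stance phase of each step, when the foot is momentarily stationary. A common detector thresholds the windowed variance of the accelerometer signal; the sketch below illustrates that idea under assumed window length and threshold values, not the thesis's actual detector.

```python
import numpy as np

# Illustrative stance-phase (zero-velocity) detector for a foot-mounted
# IMU: when the windowed acceleration-norm variance falls below a
# threshold, the foot is assumed stationary and a ZUPT can be applied.
# Window length and threshold are assumptions for illustration only.

def zupt_detect(accel_norm, window=5, threshold=0.01):
    """Return a boolean array marking samples judged to be stationary."""
    n = len(accel_norm)
    stationary = np.zeros(n, dtype=bool)
    for i in range(n - window + 1):
        seg = accel_norm[i:i + window]
        if np.var(seg) < threshold:
            stationary[i:i + window] = True
    return stationary

# Synthetic signal: stationary (near-constant gravity norm), then a step.
accel = np.array([9.81] * 10 + [9.8, 12.0, 7.5, 11.0, 9.0] * 2)
mask = zupt_detect(accel)
```

During the flagged intervals, the INS velocity error is directly observable and can be fed back as a pseudo-measurement of zero velocity.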
|
3 |
Autonomous Robotic Strategies for Urban Search and Rescue
Ryu, Kun Jin 16 November 2012
This dissertation proposes autonomous robotic strategies for urban search and rescue (USAR): map-based semi-autonomous robot navigation, and fully-autonomous robotic search, tracking, localization and mapping (STLAM) using a team of robots. Since the prerequisite for these solutions is accurate robot localization in the environment, this dissertation first presents a novel grid-based scan-to-map matching technique for accurate simultaneous localization and mapping (SLAM). At every acquisition of a new scan and estimation of the robot pose, the proposed technique corrects the estimation error by matching the new scan to the globally defined grid map. To improve the accuracy of the correction, each grid cell of the map is represented by multiple normal distributions (NDs). The new scan to be matched to the map is also represented by NDs, so the scan-to-map matching is achieved by ND-to-ND matching. In the map-based semi-autonomous navigation strategy, a robot placed in an environment creates a map of the environment and sends it to a human operator at a distant location. The human operator then makes decisions based on the map and controls the robot via tele-operation. In case of communication loss, the robot semi-autonomously returns to the home position by inversely tracking its trajectory, with additional optimal path planning. In the fully-autonomous robotic solution to USAR, multiple robots communicate with one another while operating together as a team. The base station collects information from each robot and assigns tasks to the robots. Unlike the semi-autonomous strategy, there is no control by a human operator. To further enhance the efficiency of their cooperation, each member of the team works on its own specific task.
A series of numerical and experimental studies were conducted to demonstrate the applicability of the proposed solutions to USAR scenarios. The effectiveness of the scan-to-map matching with the multi-ND representation was confirmed by analyzing the error accumulation and by comparing it with the single-ND representation. The applicability of the scan-to-map matching to the real SLAM problem was also verified in three different real environments. The results of the map-based semi-autonomous robot navigation showed the effectiveness of the approach as an immediately usable solution to USAR. The effectiveness of the proposed fully-autonomous solution was first confirmed with two real robots in a real environment. The cooperative performance of the strategy was further investigated using the developed platform- and hardware-in-the-loop simulator. The results showed significant potential as a future solution to USAR. / Ph. D.
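The ND-based map representation above can be pictured as fitting a normal distribution to the scan points that fall in each grid cell, so that scan-to-map matching reduces to comparing distributions. The sketch below shows that grouping step under simplifying assumptions: a single ND per cell (the dissertation uses multiple) and an arbitrary cell size.

```python
import numpy as np

# Hedged sketch: represent each grid cell by a normal distribution (ND)
# of the scan points inside it, in the spirit of the scan-to-map
# matching described above. The single-ND-per-cell case and the cell
# size are simplifying assumptions (the dissertation uses multiple NDs).

def cell_nd(points, cell_size=1.0):
    """Group 2-D points by grid cell; return (mean, covariance) per cell."""
    cells = {}
    for p in points:
        key = (int(np.floor(p[0] / cell_size)),
               int(np.floor(p[1] / cell_size)))
        cells.setdefault(key, []).append(p)
    # Covariance needs at least two points per cell.
    return {k: (np.mean(v, axis=0), np.cov(np.array(v).T))
            for k, v in cells.items() if len(v) >= 2}

scan = np.array([[0.2, 0.3], [0.4, 0.5], [0.3, 0.4], [2.1, 2.2], [2.3, 2.4]])
nds = cell_nd(scan)
```

Matching a new scan's NDs against these map NDs (rather than raw point-to-point) is what gives the technique its robustness to sensor noise.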
|
4 |
Gaussian Mixture Model Based SLAM: Theory and Application / Department of Aerospace Engineering
Turnowicz, Matthew Ryan 08 December 2017
This dissertation describes the development of a simultaneous localization and mapping (SLAM) algorithm suitable for high-dimensional vehicle and map states. The goal of SLAM is to enable vehicles to navigate autonomously without the use of external aiding sources. SLAM's combination of the localization and mapping problems makes it especially difficult to solve accurately and efficiently, due to the sheer size of the unknown state vector. The vehicle states are typically constant in number, while the map states increase with time. The growing number of unknowns in the map state makes it impossible to use traditional Kalman filters to solve the problem: the covariance matrix grows too large and the computational complexity becomes overwhelming. Particle filters have proved beneficial for alleviating the complexity of the SLAM problem for low-dimensional vehicle states, but there is little work on higher-dimensional states. This research provides a Gaussian Mixture Model based alternative to particle filtering SLAM methods, and provides a further partition that alleviates the vehicle-state dimensionality problem of the standard particle filter. A SLAM background and basic theory are provided in the early chapters, followed by a detailed description of the new algorithm. Simulations are run to demonstrate the performance of the algorithm, and an aerial SLAM platform is then developed for further testing. The aerial SLAM system uses an RGB-D camera as well as an inertial measurement unit to collect SLAM data, and the ground truth is captured using an indoor optical motion capture system. Details on image processing and specifics of the inertial integration are provided. The performance of the algorithm is compared to a state-of-the-art particle filtering based SLAM algorithm, and the results are discussed.
Further work performed in industry is described, involving SLAM for adding transponders onto long-baseline acoustic arrays and stereo-inertial SLAM for 3D reconstruction of deep-water sub-sea structures. Finally, a neatly packaged production-line version of the stereo-inertial SLAM system is presented.
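The core idea above is replacing a particle cloud with a Gaussian mixture representation of the state posterior. As a minimal illustration of what such a representation looks like, the sketch below evaluates a 1-D Gaussian mixture density; the component weights, means, and variances are made up, and a real GMM-SLAM filter would of course maintain and update high-dimensional components.

```python
import numpy as np

# Minimal sketch of a Gaussian mixture representation of a posterior,
# the kind of density a GMM-based filter maintains in place of a
# particle cloud. Components here are 1-D and purely illustrative.

def gmm_pdf(x, weights, means, variances):
    """Evaluate a 1-D Gaussian mixture density at point x."""
    total = 0.0
    for w, m, s2 in zip(weights, means, variances):
        total += (w * np.exp(-0.5 * (x - m) ** 2 / s2)
                  / np.sqrt(2 * np.pi * s2))
    return total

weights   = [0.6, 0.4]   # mixture weights, sum to 1
means     = [0.0, 3.0]   # two hypotheses about the state
variances = [1.0, 0.5]
p = gmm_pdf(0.0, weights, means, variances)
```

A small number of well-placed components can represent a multi-modal posterior far more compactly than the many particles needed for the same fidelity, which is the motivation for the GMM alternative.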
|
5 |
Visual simultaneous localization and mapping in a noisy static environment
Makhubela, J. K. 03 1900
M. Tech. (Department of Information and Communication Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology / Simultaneous Localization and Mapping (SLAM) has seen tremendous interest amongst the research community in recent years due to its ability to make a robot truly independent in navigation. Visual Simultaneous Localization and Mapping (VSLAM) arises when an autonomous mobile robot is equipped with a vision sensor, such as a monocular, stereo-vision, omnidirectional or Red Green Blue Depth (RGB-D) camera, to localize and map an unknown environment. The purpose of this research is to address the problem of environmental noise, such as varying light intensity in a static environment, which renders a VSLAM system ineffective. In this study, we introduce a Light Filtering Algorithm into the VSLAM method to reduce the amount of noise and improve the robustness of the system in a static environment, together with the Extended Kalman Filter (EKF) algorithm for localization and mapping and the A* algorithm for navigation. Simulation is used to carry out the experimental evaluation. Experimental results show detection of 60% of the total landmarks or land-features within the simulation environment, and a root mean square error (RMSE) of 0.13 m, which is small when compared with other SLAM systems from the literature. The inclusion of the Light Filtering Algorithm enables the VSLAM system to navigate in an obscure environment.
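The 0.13 m figure above is a trajectory RMSE. As a hedged sketch of how such a metric is typically computed (the thesis's exact evaluation procedure is not given here), the snippet below takes the root mean square of the per-step Euclidean distance between an estimated and a ground-truth path; the trajectories are made-up examples.

```python
import numpy as np

# Sketch of a root-mean-square-error (RMSE) metric of the kind used to
# report the 0.13 m localization error above: RMSE over per-step
# Euclidean distances between estimated and true positions.
# The trajectories below are illustrative, not the thesis's data.

def trajectory_rmse(estimated, ground_truth):
    """RMSE of Euclidean position errors between two equal-length paths."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    errors = np.linalg.norm(est - gt, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

est = [[0.0, 0.0], [1.1, 0.0], [2.0, 0.1]]
gt  = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
rmse = trajectory_rmse(est, gt)
```

Because RMSE squares the errors before averaging, it penalizes occasional large deviations more heavily than the mean absolute error would.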
|
6 |
Localização e mapeamento simultâneos com auxílio visual omnidirecional / Simultaneous localization and mapping with omnidirectional vision
Guizilini, Vitor Campanholo 12 August 2008
O problema da localização e mapeamento simultâneos, conhecido como problema do SLAM, é um dos maiores desafios que a robótica móvel autônoma enfrenta atualmente. Esse problema surge devido à dificuldade que um robô apresenta ao navegar por um ambiente desconhecido, construindo um mapa das regiões por onde já passou ao mesmo tempo em que se localiza dentro dele. O acúmulo de erros gerados pela imprecisão dos sensores utilizados para estimar os estados de localização e mapeamento impede que sejam obtidos resultados confiáveis após períodos de navegação suficientemente longos. Algoritmos de SLAM procuram eliminar esses erros resolvendo ambos os problemas simultaneamente, utilizando as informações de uma etapa para aumentar a precisão dos resultados alcançados na outra e vice-versa. Uma das maneiras de se alcançar isso se baseia no estabelecimento de marcos no ambiente que o robô pode utilizar como pontos de referência para se localizar conforme navega. Esse trabalho apresenta uma solução para o problema do SLAM que faz uso de um sensor de visão omnidirecional para estabelecer esses marcos. O uso de sistemas de visão permite a extração de marcos naturais ao ambiente que podem ser correspondidos de maneira robusta sob diferentes pontos de vista. A visão omnidirecional amplia o campo de visão do robô e com isso aumenta a quantidade de marcos observados a cada instante. Ao ser detectado, o marco é adicionado ao mapa que o robô possui do ambiente e, ao ser reconhecido, o robô pode utilizar essa informação para refinar suas estimativas de localização e mapeamento, eliminando os erros acumulados e conseguindo mantê-las precisas mesmo após longos períodos de navegação. Essa solução foi testada em situações reais de navegação, e os resultados mostram uma melhora significativa em relação àqueles obtidos com a utilização direta das informações coletadas.
/ The problem of simultaneous localization and mapping, known as the SLAM problem, is one of the greatest obstacles that the field of autonomous robotics faces nowadays. This problem concerns a robot's ability to navigate through an unknown environment, constructing a map of the regions it has already visited while at the same time localizing itself on this map. The imprecision inherent in the sensors used to collect information generates errors that accumulate over time, preventing a precise estimation of localization and mapping when the sensor data are used directly. SLAM algorithms try to eliminate these errors by taking advantage of their mutual dependence and solving both problems simultaneously, using the results of one step to refine the estimates of the other. One possible way to achieve this is the establishment of landmarks in the environment that the robot can use as points of reference to localize itself while it navigates. This work presents a solution to the SLAM problem that uses an omnidirectional vision system to detect these landmarks. The choice of visual sensors allows for the extraction of natural landmarks and robust matching under different points of view, as the robot moves through the environment. The omnidirectional vision amplifies the field of view of the robot, increasing the number of landmarks observed at each instant. The detected landmarks are added to the map, and when they are later recognized they generate information that the robot can use to refine its estimates of localization and mapping, eliminating accumulated errors and keeping the estimates precise even after long periods of navigation. This solution was tested in real navigation situations, and the results show a substantial improvement over those obtained through the direct use of the collected information.
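Recognizing a previously mapped landmark from a new viewpoint, as described above, is usually done by comparing feature descriptors. The sketch below illustrates one common recipe, nearest-neighbour matching with a Lowe-style ratio test; the descriptors and threshold are invented for illustration and are not the thesis's actual matching scheme.

```python
import numpy as np

# Hedged sketch: match observed landmark descriptors to map landmarks
# by nearest-neighbour distance with a ratio test, one common way to
# obtain robust correspondences under viewpoint change as described
# above. Descriptors and threshold are illustrative assumptions.

def match_landmarks(observed, mapped, ratio=0.7):
    """Return (obs_idx, map_idx) pairs passing the ratio test."""
    matches = []
    for i, d in enumerate(observed):
        dists = np.linalg.norm(mapped - d, axis=1)
        order = np.argsort(dists)
        # Accept only if the best match is clearly better than the
        # second best, rejecting ambiguous correspondences.
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

mapped = np.array([[1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])   # map descriptors
observed = np.array([[0.9, 0.1], [4.8, 5.1]])             # new observations
pairs = match_landmarks(observed, mapped)
```

Each accepted pair can then be fed to the estimator as a re-observation of a known landmark, which is what closes the loop and removes accumulated drift.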
|
7 |
Mapeamento e localização simultâneos para multirobôs cooperativos / Cooperative multi-robot simultaneous localization and mapping
Romero Cano, Victor Adolfo 05 October 2010
Neste trabalho foi desenvolvido um estudo comparativo entre duas estratégias básicas para a combinação de mapas parciais baseados em marcos para sistemas multirobô: a estratégia por associação de marcos e a estratégia por distância relativa entre os robôs (também conhecida por rendez-vous). O ambiente simulado corresponde a um entorno plano povoado de árvores que são mapeadas por uma equipe de dois robôs móveis equipados com sensores laser para medir a largura e localização de cada árvore (marco). Os mapas parciais são estimados usando o algoritmo FastSLAM. Além do estudo comparativo propõe-se também um algoritmo alternativo de localização e mapeamento simultâneos para multirrobôs cooperativos, utilizando as observações entre os robôs não só para o cálculo da transformação de coordenadas, mas também no desenvolvimento de um processo seqüencial para atualizar o alinhamento entre os mapas, explorando de forma mais eficiente as observações entre robôs. Os experimentos realizados demonstraram que o algoritmo proposto pode conduzir a resultados significativamente melhores em termos de precisão quando comparado com a abordagem de combinação de mapas tradicional (usando distância relativa entre os robôs). / In this text a comparative survey between the two basic strategies used to combine partial landmark-based maps in multi-robot systems, data association and inter-robot observations (known as rendez-vous), is presented. The simulated environment is a flat place populated by trees, which are mapped by a two-robot mobile team equipped with laser range finders in order to measure every tree (landmark) location and width. Partial maps are estimated using the FastSLAM algorithm. Besides the comparative study, an alternative algorithm for Simultaneous Localization and Mapping (SLAM) in multi-robot cooperative systems is also proposed.
It uses observations between robots (detections) not only for calculating the coordinate transformation but also in a sequential process for updating the alignment between maps, exploiting the inter-robot observations more efficiently. The experiments showed that the algorithm can lead to significantly better results in terms of accuracy when compared with the traditional approach to combining maps (using the relative distance between the robots).
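The coordinate-transformation step in a rendez-vous map merge can be pictured as composing poses: robot A knows its own pose in its map, observes robot B's relative pose, and composes the two to express B in A's frame. The SE(2) sketch below illustrates that composition; the pose values are made up, and the thesis's actual alignment process is a sequential update over many such observations.

```python
import numpy as np

# Illustrative sketch of the coordinate-transformation step in a
# rendez-vous map merge: compose robot A's pose in its own map with
# A's observation of B's relative pose to place B in map A's frame.
# Poses are (x, y, theta); the numbers are made-up examples.

def compose(pose_a, rel):
    """Compose SE(2) pose `rel` (expressed in A's frame) onto `pose_a`."""
    xa, ya, ta = pose_a
    xr, yr, tr = rel
    return (xa + np.cos(ta) * xr - np.sin(ta) * yr,
            ya + np.sin(ta) * xr + np.cos(ta) * yr,
            ta + tr)

pose_a = (2.0, 1.0, np.pi / 2)   # robot A in map A, facing +y
rel_b  = (1.0, 0.0, 0.0)         # B observed 1 m directly ahead of A
pose_b = compose(pose_a, rel_b)  # B expressed in map A's frame
```

With B located in A's frame, every landmark in B's partial map can be transformed by the same composition, which is what merges the two maps.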
|
8 |
Vision-enhanced localization for cooperative robotics
Boga, Sreekanth, Roppel, Thaddeus A. January 2009
Thesis--Auburn University, 2009. / Abstract. Vita. Includes bibliographic references (pp. 44-49).
|
9 |
Analysis of independent motion detection in 3D scenes
Floren, Andrew William 30 October 2012
In this thesis, we develop an algorithm for detecting independent motion in real time from 2D image sequences of arbitrarily complex 3D scenes. We discuss the necessary background in image formation, optical flow, multiple-view geometry, robust estimation, and real-time camera and scene pose estimation for constructing and understanding the operation of our algorithm. Furthermore, we provide an overview of existing independent motion detection techniques and compare them to our proposed solution. Unfortunately, the existing techniques were not evaluated quantitatively, nor was their source code made publicly available, so direct comparisons are not possible. Instead, we constructed several comparison algorithms whose performance should be comparable to these previous approaches. We developed methods for quantitatively comparing independent motion detection algorithms and found that our solution had the best performance. By establishing a method for quantitatively evaluating these algorithms and publishing our results, we hope to foster better research in this area and help future investigators advance the state of the art more quickly.
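One simple baseline in the spirit of the comparison algorithms mentioned above is to estimate the dominant, camera-induced flow robustly and flag flow vectors that deviate from it as candidate independent motion. The sketch below uses a per-axis median as the robust dominant-flow estimate; the pure-translation camera model, the threshold, and the flow values are all simplifying assumptions, not the thesis's method.

```python
import numpy as np

# Hedged sketch of a baseline independent-motion detector: estimate the
# dominant (camera-induced) flow robustly via the median, then flag
# flow vectors whose residual exceeds a threshold. A pure-translation
# camera model is a simplifying assumption for illustration.

def flag_independent(flow, threshold=1.0):
    """Boolean mask of flow vectors inconsistent with the dominant flow."""
    flow = np.asarray(flow, dtype=float)
    dominant = np.median(flow, axis=0)          # robust camera-flow estimate
    residual = np.linalg.norm(flow - dominant, axis=1)
    return residual > threshold

# Mostly camera-induced flow (~[1, 0]) plus one independently moving point.
flow = [[1.0, 0.0], [1.1, 0.1], [0.9, -0.1], [4.0, 3.0]]
mask = flag_independent(flow)
```

In a real 3D scene the camera-induced flow also depends on depth and rotation, which is why the thesis brings in multiple-view geometry and robust estimation rather than a single global flow model.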
|
10 |
Semantic mapping for service robots: building and using maps for mobile manipulators in semi-structured environments
Trevor, Alexander J. B. 08 June 2015
Although much progress has been made in the field of robotic mapping, many challenges remain, including efficient semantic segmentation using RGB-D sensors, map representations that include complex features (structures and objects), and interfaces for interactive annotation of maps. This thesis addresses how prior knowledge of semi-structured human environments can be leveraged to improve segmentation, mapping, and semantic annotation of maps. We present an organized connected-component approach for segmenting RGB-D data into planes and clusters. These segments serve as input to our mapping approach, which utilizes them as planar landmarks and object landmarks for Simultaneous Localization and Mapping (SLAM), providing necessary information for service robot tasks and improving data association and loop closure. These features are meaningful to humans, so annotating mapped features establishes common ground and simplifies tasking. A modular, open-source software framework, the OmniMapper, is also presented; it allows a number of different sensors and features to be combined into a unified map representation and enables easy addition of new feature types.
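The "organized" in the connected-component approach above refers to exploiting the grid adjacency that RGB-D sensors provide: neighbouring pixels can be merged directly, without a costly nearest-neighbour search. The sketch below illustrates that idea on a toy depth grid; the depth-difference criterion is a simplification (the thesis's segmentation also compares surface normals), and the grid and tolerance are invented.

```python
# Minimal sketch of organized connected-component labelling: in a
# grid-ordered range image, 4-connected neighbouring pixels are merged
# into one segment when their values are close. The depth-only
# criterion is a simplifying assumption; the thesis's segmentation
# also compares plane normals.

def label_components(grid, tol=0.1):
    """Label 4-connected components of similar values in a 2-D grid."""
    rows, cols = len(grid), len(grid[0])
    labels = [[-1] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] != -1:
                continue
            # Flood-fill this component with an explicit stack.
            stack = [(r, c)]
            labels[r][c] = current
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and labels[ny][nx] == -1
                            and abs(grid[ny][nx] - grid[y][x]) < tol):
                        labels[ny][nx] = current
                        stack.append((ny, nx))
            current += 1
    return labels

depth = [[1.0, 1.0, 3.0],
         [1.0, 1.0, 3.0],
         [1.0, 3.0, 3.0]]   # two surfaces at different depths
labels = label_components(depth)
```

Each resulting component can then be fit with a plane or treated as an object cluster, yielding the planar and object landmarks used by the SLAM back end.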
|