11 |
An Implementation of Mono and Stereo SLAM System Utilizing Efficient Map Management Strategy. Kalay, Adnan. 01 September 2008 (has links) (PDF)
For an autonomous mobile robot, localization and map building are vital capabilities. Localization provides the robot with its position, so that it can navigate in the environment; map building provides a model of the environment (map information) with which the robot can interact. These two capabilities depend on each other, and their simultaneous operation is called SLAM (Simultaneous Localization and Map Building). While various sensors have been used for this problem, vision-based approaches are relatively new and have attracted growing interest in recent years.
In this thesis work, a versatile Visual SLAM system is constructed and presented. At the core of this work is a vision-based simultaneous localization and map building algorithm which uses point features in the environment as visual landmarks and an Extended Kalman Filter for state estimation. A detailed analysis of this algorithm is made, including the state estimation, feature extraction and data association steps. The algorithm is extended to work with both stereo and single-camera systems; the core of the two variants is the same, and we discuss the differences that arise from their dissimilar measurement models. The algorithm is also run in different motion modes, namely predefined, manual and autonomous. Secondly, a map management strategy is developed, especially for extended environments. When the robot runs the SLAM algorithm in a large environment, the constructed map obviously contains a great number of landmarks. The efficiency algorithm comes into play when the total number of features exceeds a critical value for the system; in this case, the current map is rarefied without losing the geometrical distribution of the landmarks. Furthermore, a well-organized graphical user interface is implemented which enables the operator to select operational modes, change various parameters of the main SLAM algorithm and see the results of the SLAM operation both textually and graphically. Finally, a basic mission concept is defined in our system in order to illustrate what the robot can do using the outputs of the SLAM algorithm. All of the ideas mentioned are implemented in this thesis, experiments are conducted using a real robot, and the analysis results are discussed by comparing the algorithm outputs with ground-truth measurements.
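The map-rarefaction idea described above can be illustrated with a minimal sketch (an assumption-laden toy, not the thesis's actual implementation): once the landmark count exceeds a threshold, thin the map by keeping one landmark per spatial cell, so the overall geometric distribution survives while the EKF state stays bounded.

```python
import math

def rarefy_map(landmarks, cell_size):
    """Thin a landmark map by keeping one landmark per grid cell.

    This bounds the number of landmarks the EKF must maintain while
    roughly preserving the geometric distribution of the map.
    """
    kept = {}
    for (x, y) in landmarks:
        cell = (math.floor(x / cell_size), math.floor(y / cell_size))
        # Keep only the first landmark encountered in each cell.
        if cell not in kept:
            kept[cell] = (x, y)
    return list(kept.values())
```

A real system would weigh which landmark to keep per cell (e.g. by observation count or covariance), but the thinning principle is the same.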
|
12 |
Localisation temps-réel d'un robot par vision monoculaire et fusion multicapteurs / Real-time robot location by monocular vision and multi-sensor fusion. Charmette, Baptiste. 14 December 2012 (has links)
This dissertation presents a vision-based localization system for a mobile robot in an urban context. The robot is first driven manually to record a learning image sequence; these images are then processed offline to build a 3D map of the area. The vehicle can afterwards be driven in the area, either automatically or manually, and the images seen by the camera are used to compute its position in the map. In contrast to previous works, the trajectory can differ from the learning sequence: the algorithm is able to keep localizing in spite of important viewpoint changes with respect to the learning images. To do so, the features are modeled as locally planar patches whose orientation is known. While the vehicle moves, its position is predicted and the patches are warped to model the viewpoint change. Matching the patches with points in the current image is thereby eased, because their appearances are almost identical. After matching, the 3D positions of the patches associated with 2D points in the image are used to compute the robot position. The warp of a patch is computationally expensive; to achieve real-time performance, the algorithm was implemented on a GPU architecture, with many improvements made using the tools the GPU provides. In order to obtain a pose prediction that is as precise as possible, a motion model of the robot was developed which uses, in addition to the camera, information acquired from odometric sensors.
Experiments with this prediction model show that the system is more robust, especially in case of image loss during processing. Finally, many experiments in real conditions are described at the end of the dissertation; the localization accuracy is evaluated by comparison with a reference recorded by a differential GPS.
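The patch-warping step can be sketched with the standard homography induced by a plane: in calibrated coordinates, a plane with normal n at distance d in the reference view maps to the predicted view through H = R + t·nᵀ/d. This is a minimal sketch of the idea, not the thesis's GPU implementation:

```python
import numpy as np

def patch_homography(R, t, n, d):
    """Homography mapping the learning view of a locally planar patch
    to the predicted current view (calibrated camera coordinates).

    R, t : predicted rotation / translation between the two camera poses
    n, d : patch plane normal and distance in the learning camera frame
    """
    n = np.asarray(n, dtype=float).reshape(3, 1)
    t = np.asarray(t, dtype=float).reshape(3, 1)
    # H = R + t * n^T / d  (plane-induced homography)
    return np.asarray(R, dtype=float) + (t @ n.T) / d
```

Warping the reference patch with this H before matching makes its appearance close to what the camera should currently see, which is what eases the matching step.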
|
13 |
Outdoor localization system for mobile robots based on radio-frequency signal strength. Maidana, Renan Guedes. 02 March 2018 (has links)
In the field of Mobile Robotics, the localization problem consists of determining a robot's position and orientation in three-dimensional space from sensor information. The most common solution is to employ a Global Positioning System (GPS) receiver, which reports absolute position with respect to an Earth-centered fixed coordinate system. However, GPS signals are greatly affected by atmospheric conditions and line-of-sight occlusion, sometimes providing very poor position estimates, if any at all. Motivated by these problems, this project proposes a localization system for a ground robot in an uncontrolled outdoor environment where GPS measurements are poor or unavailable. As common sensors provide inaccurate position estimates due to environmental factors (e.g. rough terrain), we propose the use of radio-frequency receiver-transmitter pairs, in which the Received Signal Strength Indicator (RSSI) is used to estimate the distances between receiver and transmitter, which are in turn used for positioning. This measurement has the advantage of being independent of lighting conditions and the state of the terrain, factors which affect other localization methods such as visual or wheel odometry. A mean positioning error of 0.41 m was achieved by fusing wheel odometry, angular velocity from a gyroscope and the received signal strength in an Augmented Extended Kalman Filter, an improvement of 82.66% over the mean error of 2.38 m obtained with a common GPS sensor.
|
14 |
Worst-case robot navigation in deterministic environments. Mudgal, Apurva. 02 December 2009 (has links)
We design and analyze algorithms for the following two robot navigation problems:
1. TARGET SEARCH. Given a robot located at a point s in the plane, how should the robot navigate to a goal t in the presence of unknown obstacles?
2. LOCALIZATION. A robot is "lost" in an environment for which it has a map of its surroundings. How can it determine its true location while traveling the minimum distance?
Since efficient algorithms for these two problems will make a robot completely autonomous, they have held the interest of both robotics and computer science communities.
Previous work has focused mainly on designing competitive algorithms, where the robot's performance is compared to that of an omniscient adversary. For example, a competitive algorithm for target search compares the distance traveled by the robot with the shortest path from s to t.
We analyze these problems from the worst-case perspective, which, in our view, is a more appropriate measure. Our results are:
1. For target search, we analyze an algorithm called Dynamic A* (D*): the robot continuously moves toward the goal along the shortest path, which it recomputes whenever it discovers an obstacle. A variant of this algorithm has been employed in Mars Rover prototypes.
We show that D* takes O(n log n) time on planar graphs and show a comparable bound for arbitrary graphs. Thus, our results show that D* combines the optimistic possibility of reaching the goal very soon with competing against depth-first search within a logarithmic factor.
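The behaviour analysed here, repeatedly following the current shortest path and replanning when an obstacle is discovered, can be sketched with a plain Dijkstra-based replanner (a toy illustration, not an optimized D* implementation):

```python
import heapq

def shortest_path(adj, s, t):
    """Dijkstra's algorithm; returns (cost, path) or (inf, [])."""
    dist, prev, pq = {s: 0.0}, {}, [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            path = [t]
            while path[-1] != s:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return float("inf"), []

def navigate(adj, s, t, hidden_blocked):
    """Follow the current shortest path; on discovering a blocked edge,
    delete it and replan (the D*-style behaviour). Returns distance
    travelled, or None if the goal becomes unreachable."""
    pos, travelled = s, 0.0
    while pos != t:
        _, path = shortest_path(adj, pos, t)
        if not path:
            return None
        nxt = path[1]
        if (pos, nxt) in hidden_blocked:
            # Obstacle discovered on this edge: remove it and replan.
            adj[pos] = [(v, w) for v, w in adj[pos] if v != nxt]
            continue
        travelled += next(w for v, w in adj[pos] if v == nxt)
        pos = nxt
    return travelled
```

The worst-case analysis in the thesis concerns exactly how much extra distance and recomputation this replanning loop can incur.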
2. For the localization problem, worst-case analysis compares the performance of the robot with the optimal decision tree over the set of possible locations.
No approximation algorithm was previously known. We give a polylogarithmic approximation algorithm and also show a near-tight lower bound for the grid graphs commonly used in practice. The key idea is to plan travel on a "majority-rule map", which eliminates uncertainty and permits a link to the half-Group Steiner problem. We also extend the problem to polygonal maps by discretizing the domain using novel geometric techniques.
|
15 |
Vision-based Robot Localization Using Artificial and Natural Landmarks. Arican, Zafer. 01 August 2004 (links) (PDF)
In mobile robot applications, it is important for a robot to know where it is. Accurate localization becomes crucial for navigation and map building, because both the route to follow and the positions of the objects to be inserted into the map depend heavily on the position of the robot in the environment.
For localization, the robot uses measurements taken by various devices such as laser rangefinders, sonars, odometry devices and vision. These devices generally give the distances from objects in the environment to the robot, and by processing this distance information the robot finds its location in the environment.
In this thesis, two vision-based robot localization algorithms are implemented. The first algorithm uses artificial landmarks whose locations are known; by measuring the positions of these landmarks with respect to the camera system, the robot locates itself in the environment. The second algorithm, instead of using artificial landmarks, estimates the robot's location by measuring the positions of objects that naturally exist in the environment. These objects are treated as natural landmarks, and their locations are not known initially.
A three-wheeled robot base with a stereo camera system mounted on it is used as the mobile robot unit; the stereo camera system is the measurement device for this robot. The processing and control tasks of the system are performed by a stationary PC, and experiments are performed on this robot system.
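The core computation of the first algorithm, recovering the robot pose from measurements of landmarks with known positions, can be sketched as a 2D least-squares rigid alignment (a standard Kabsch/Procrustes formulation, not necessarily the exact method of the thesis):

```python
import numpy as np

def pose_from_landmarks(world_pts, robot_pts):
    """Recover the robot pose (heading theta, position t) from known
    landmark world positions and their measured robot-frame positions,
    assuming world = R(theta) * robot + t for each landmark."""
    W = np.asarray(world_pts, dtype=float)
    P = np.asarray(robot_pts, dtype=float)
    cw, cp = W.mean(axis=0), P.mean(axis=0)
    # Kabsch: cross-covariance, SVD, reflection-safe rotation.
    H = (P - cp).T @ (W - cw)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T          # rotation robot frame -> world frame
    t = cw - R @ cp             # robot position in the world frame
    theta = np.arctan2(R[1, 0], R[0, 0])
    return theta, t
```

With at least two non-degenerate landmarks, this recovers the pose exactly in the noise-free case and in the least-squares sense otherwise.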
|
16 |
AUTONOMOUS NAVIGATION AND ROOM CATEGORIZATION FOR AN ASSISTANT ROBOT. Doga Y Ozgulbas (10756674). 07 May 2021 (links)
Globally, there are more than 727 million people aged 65 years and older, and the elderly population is expected to more than double in the next three decades. Families' search for affordable, quality care for their senior loved ones will affect the care-giving profession. A personal robot assistant could help with daily tasks, such as carrying things and keeping track of routines, relieving the burdens of human caregivers. Performing such tasks usually requires the robot to navigate autonomously. An autonomously navigating robot must acquire knowledge of its surroundings by mapping the environment, find its position in the map, and calculate trajectories that avoid obstacles. Furthermore, to be assigned tasks in various locations, the robot has to categorize the rooms in addition to memorizing their coordinates. In this research, methods have been developed to achieve autonomous navigation and room categorization for a mobile robot within indoor environments. A Simultaneous Localization and Mapping (SLAM) algorithm has been used to build the map and localize the robot: Gmapping, a SLAM method, was applied using odometry and a 2D Light Detection and Ranging (LiDAR) sensor. The trajectory to the goal position along an optimal path is provided by path planning algorithms, divided into global and local planners: global path planning is produced by Dijkstra's algorithm and local path planning by the Dynamic Window Approach (DWA). While exploring new environments with Gmapping and the trajectory planning algorithms, rooms in the generated map were classified by a deep learning algorithm, a Convolutional Neural Network (CNN). Once the environment is explored, the robot's localization in 2D space is done by Adaptive Monte Carlo Localization (AMCL).
To utilize and test the above methods, the Gazebo simulator with the Robot Operating System (ROS) was used, and simulations were performed prior to real-life experiments. After trouble-shooting based on feedback from the simulations, the robot was able to perform the above tasks and was later tested in various indoor environments. The environment was mapped successfully by Gmapping and the robot was localized within the map by AMCL. Compared to the theoretically optimal path, the robot was able to plan trajectories with acceptable deviation. In addition, room names were classified with a minimum of 85% accuracy by the CNN. The autonomous navigation results show that the robot can assist elderly people in their home environment by successfully exploring, categorizing and navigating between the rooms.
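The particle-filter idea behind AMCL can be sketched in one dimension: each step applies a motion update, reweights particles by the likelihood of a range measurement, and resamples. This is a toy illustration of the principle, not the ROS `amcl` implementation:

```python
import math
import random

def mcl_step(particles, move, measured_dist, beacon, noise_std=0.5):
    """One Monte Carlo localization step on a 1D corridor:
    motion update, measurement weighting, systematic resampling."""
    # Motion update: commanded motion plus small Gaussian noise.
    moved = [p + move + random.gauss(0.0, 0.05) for p in particles]

    # Weight each particle by the likelihood of the measured range
    # to a beacon at a known position.
    def likelihood(p):
        err = abs(beacon - p) - measured_dist
        return math.exp(-0.5 * (err / noise_std) ** 2)

    weights = [likelihood(p) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]

    # Systematic (low-variance) resampling.
    n = len(moved)
    step = 1.0 / n
    u = random.uniform(0.0, step)
    resampled, c, i = [], weights[0], 0
    for _ in range(n):
        while u > c and i < n - 1:
            i += 1
            c += weights[i]
        resampled.append(moved[i])
        u += step
    return resampled
```

After a few steps the particle cloud concentrates near the true position; AMCL applies the same cycle in 2D with a laser scan likelihood and an adaptive particle count.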
|
17 |
Robot Localization Using Inertial and RF Sensors. Elesev, Aleksandr. 14 August 2008 (links)
No description available.
|
18 |
Localisation et détection de fermeture de boucle basées saillance visuelle : algorithmes et architectures matérielles / Localization and loop-closure detection based on visual saliency: algorithms and hardware architectures. Birem, Merwan. 12 March 2015 (links)
In several robotic tasks, vision is considered the essential modality through which perception of the environment or interaction with other users can be realized. However, artifacts potentially present in the captured images make the task of recognizing and interpreting visual information extremely complicated. It is therefore very important to use robust, stable primitives with a high repeatability rate in order to achieve good performance. This thesis deals with the problems of localization and loop-closure detection for a mobile robot using visual saliency.
The accuracy and efficiency of the localization and loop-closure detection applications are evaluated and compared with the results obtained by state-of-the-art approaches on different sequences of images acquired in outdoor environments. The main drawback of the models proposed for extracting salient regions is their computational complexity, which leads to significant processing times. To obtain real-time processing, this thesis also presents the implementation of the salient-region detector on the reconfigurable platform DreamCam.
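A salient-region detector in the center-surround spirit can be sketched as the difference between a fine and a coarse local mean (an illustrative toy model, not the detector implemented on the DreamCam):

```python
import numpy as np

def box_blur(img, r):
    """Mean filter over a (2r+1)x(2r+1) window, edge-replicated padding."""
    padded = np.pad(img, r, mode="edge")
    out = np.zeros(img.shape)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def center_surround_saliency(img, r_center=1, r_surround=4):
    """Mark as salient the pixels whose fine local mean differs
    strongly from the coarse surround mean, normalized to [0, 1]."""
    sal = np.abs(box_blur(img, r_center) - box_blur(img, r_surround))
    return sal / sal.max() if sal.max() > 0 else sal
```

The computational cost of this kind of multi-scale filtering over every pixel is exactly what motivates a hardware implementation on a reconfigurable platform.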
|
19 |
Použití mobilního robotu v inteligentním domě / Mobile robot in smart house. Kuparowitz, Tomáš. January 2013 (links)
The aim of this thesis is to search the market for a suitable autonomous robot to be used in a smart house. The research in this work partly covers the range of abilities of smart houses in terms of sensor systems, data processing, and their use by mobile robots. The output of this thesis is a robotics application written with Microsoft Robotics Developer Studio (C#) and simulated in the Visual Simulation Environment. The main feature of this robotics application is the interface between the robot and the smart house, and between the robot and the user. This interface enables the user to control the robot's movement directly or to use automated pathfinding. The robot is able to navigate in a dynamic environment and to register, interact with, and eventually forget temporary obstacles.
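The register/forget behaviour for temporary obstacles can be sketched with a confidence grid whose entries decay over time (a hypothetical toy sketch, not the thesis's MRDS implementation):

```python
class ObstacleMemory:
    """Grid memory of temporary obstacles: each observation refreshes
    a cell's confidence, which decays every update cycle so stale
    obstacles are eventually forgotten by the path planner."""

    def __init__(self, decay=0.2, threshold=0.1):
        self.confidence = {}
        self.decay = decay
        self.threshold = threshold

    def observe(self, cell):
        # A fresh sensor observation gives full confidence.
        self.confidence[cell] = 1.0

    def update(self):
        # Decay all confidences; drop cells that fall below threshold.
        for cell in list(self.confidence):
            self.confidence[cell] -= self.decay
            if self.confidence[cell] < self.threshold:
                del self.confidence[cell]

    def blocked(self, cell):
        return cell in self.confidence
```

A pathfinder that treats `blocked` cells as impassable then automatically re-routes around recent obstacles and reclaims old routes once they are forgotten.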
|