1 |
Building safety maps using vision for safe local mobile robot navigation
Murarka, Aniket, 18 March 2011 (has links)
In this work we focus on building local maps that enable wheeled mobile robots to navigate safely and autonomously in urban environments. Urban environments present a variety of hazards that mobile robots must detect and represent in their maps to navigate safely. Examples include obstacles such as furniture, drop-offs such as those at downward stairs, and inclined surfaces such as wheelchair ramps. We address two shortcomings perceived in the mapping literature: the extensive use of expensive laser-based sensors, and the focus on detecting only obstacles when other hazards, such as drop-offs, clearly need to be detected to ensure safety. We therefore develop algorithms for building maps using only relatively inexpensive stereo cameras that allow safe local navigation by detecting and modeling hazards such as overhangs, drop-offs, and ramps in addition to static obstacles.

The hazards are represented using 2D annotated grid maps called local safety maps. Each cell in the map is annotated with one of several labels: Level, Inclined, Non-ground, or Unknown. Level cells are safe for travel, whereas Inclined cells require caution. Non-ground cells are unsafe for travel and represent obstacles, overhangs, or regions lower than safe ground. Level and Inclined cells can be further annotated as Drop-off Edges.

The process of building safety maps consists of three main steps: (i) computing a stereo depth map; (ii) building a 3D model using the stereo depths; and (iii) analyzing the 3D model for safety to construct the safety map. We make significant contributions to each of these steps: we develop global stereo methods for computing disparity maps that use edge and color information; we introduce a probabilistic data association method for building 3D models from stereo range points; and we devise a novel method for segmenting and fitting planes to 3D models, allowing a precise safety analysis.
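As an illustration of the data structure, a minimal sketch of such an annotated grid map might look like the following. The class names, the cell API, and the rule that a drop-off edge is not fully safe are assumptions for the example, not the thesis's actual code:

```python
from enum import Enum

class CellLabel(Enum):
    UNKNOWN = 0
    LEVEL = 1        # safe for travel
    INCLINED = 2     # traversable with caution
    NON_GROUND = 3   # obstacle, overhang, or region lower than safe ground

class SafetyMap:
    """Toy 2D annotated grid in the spirit of a 'local safety map'."""
    def __init__(self, width, height):
        self.labels = [[CellLabel.UNKNOWN] * width for _ in range(height)]
        self.drop_off_edge = [[False] * width for _ in range(height)]

    def is_safe(self, r, c):
        # Assumption for the sketch: only Level cells that are not
        # drop-off edges count as fully safe.
        return (self.labels[r][c] is CellLabel.LEVEL
                and not self.drop_off_edge[r][c])

m = SafetyMap(4, 4)
m.labels[1][2] = CellLabel.LEVEL
m.labels[1][3] = CellLabel.LEVEL
m.drop_off_edge[1][3] = True   # a Level cell annotated as a Drop-off Edge
```

A planner would then restrict its search to cells for which `is_safe` holds, treating Inclined cells as higher-cost terrain.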
In addition, we develop a stand-alone method for detecting drop-offs in front of the robot that uses motion and occlusion cues and relies only on monocular images. We introduce a framework for evaluating (and comparing) our algorithms on real-world data sets collected by driving a robot in various environments. Accuracy is measured by comparing the constructed safety maps against ground truth safety maps and computing error rates; the ground truth maps are obtained by manually annotating maps built using laser data. As part of the framework we also estimate the latencies introduced by our algorithms and the accuracy of the plane fitting process. We believe this framework can be used to compare the performance of a variety of vision-based mapping systems, and for this purpose we make our datasets, ground truth maps, and evaluation code publicly available. We also implement a real-time version of one of the safety map algorithms on a wheelchair robot and demonstrate it working in various environments. The constructed safety maps allow safe local motion planning and also support the extraction of local topological structures that can be used to build global maps.
|
2 |
Autonomous navigation of a wheeled mobile robot in farm settings
February 2014 (has links)
This research concerns the autonomous navigation of an agricultural wheeled mobile robot in an unstructured outdoor setting. The project has four distinct phases: (i) navigation and control of a wheeled mobile robot for point-to-point motion; (ii) navigation and control of a wheeled mobile robot following a given path (the path-following problem); (iii) navigation and control of a mobile robot keeping a constant proximity distance to given paths or plant rows (proximity-following); and (iv) navigation of the mobile robot in rut following in farm fields. A rut is a long, deep track formed by the repeated passage of wheeled vehicles in soft terrain such as mud, sand, or snow.
To develop reliable navigation approaches for each part of this project, three main steps were carried out: a literature review, modeling and computer simulation of wheeled mobile robots, and experimental tests in outdoor settings. First, point-to-point motion planning of a mobile robot is studied; a fuzzy-logic-based (FLB) approach is proposed for real-time autonomous path planning of the robot in an unstructured environment. Simulation and experimental evaluations show that the FLB approach is able to cope with different dynamic and unforeseen situations by tuning a safety margin. Comparison of the FLB results with the vector field histogram (VFH) and preference-based fuzzy (PBF) approaches reveals that the FLB approach produces shorter and smoother paths toward the goal in almost all of the test cases examined. Then, a novel human-inspired method (HIM) is introduced. HIM is inspired by human navigation from one point to a specified goal point: the robot is given a human-like ability to reason about situations so as to reach a predefined goal while avoiding static, moving, and unforeseen obstacles. Comparison of the HIM results with FLB suggests that HIM is more efficient and effective than FLB.
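To make the fuzzy-logic idea concrete, a toy two-rule steering controller in this spirit might look as follows. The membership shape, the gains, and the blending rule are illustrative assumptions, not the thesis's FLB design:

```python
import math

def mu_near(d, safety_margin=1.0):
    """Degree to which an obstacle at distance d is 'near': 1 at contact,
    0 beyond the safety margin, linear in between (assumed membership)."""
    return max(0.0, min(1.0, (safety_margin - d) / safety_margin))

def steer(goal_bearing, obstacle_bearing, obstacle_dist):
    """Blend two fuzzy rules into one steering command (radians)."""
    w_avoid = mu_near(obstacle_dist)
    w_goal = 1.0 - w_avoid
    # Rule 1: IF obstacle is near THEN turn 90 degrees away from it.
    away = obstacle_bearing - math.copysign(math.pi / 2, obstacle_bearing)
    # Rule 2: IF obstacle is far THEN head toward the goal.
    return w_avoid * away + w_goal * goal_bearing

cmd = steer(goal_bearing=0.3, obstacle_bearing=0.1, obstacle_dist=5.0)
# far obstacle: pure goal seeking, so cmd equals the goal bearing
```

Tuning the `safety_margin` parameter changes how early the avoidance rule takes over, which mirrors the abstract's remark that the FLB approach copes with unforeseen situations by tuning a safety margin.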
Afterward, navigation strategies are built up for path following, rut following, and proximity-following control of a wheeled mobile robot in outdoor (farm) settings and off-road terrain. The proposed system is composed of several modules: sensor data analysis, obstacle detection, obstacle avoidance, goal seeking, and path tracking. The capabilities of the proposed navigation strategies are evaluated in a variety of field experiments; the results show that the proposed approach is able to detect and follow rows of bushes robustly. This capability is used for spraying plant rows in farm fields.
Finally, obstacle detection and obstacle avoidance modules are developed within the navigation system. These modules enable the robot to detect, in real time and at a safe distance, holes or ground depressions (negative obstacles), which are inherent parts of farm settings, as well as above-ground obstacles (positive obstacles). Experimental tests were carried out on two mobile robots (PowerBot and Grizzly) outdoors and in real farm fields. Grizzly uses a 3D laser range-finder to detect objects and perceive the environment, and an RTK-DGPS unit for localization; PowerBot uses sonar sensors and a laser range-finder for obstacle detection. The experiments demonstrate the capability of the proposed technique in successfully detecting and avoiding different types of obstacles, both positive and negative, in a variety of scenarios.
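The positive/negative distinction can be illustrated with a toy detector on a 1D profile of ground heights along a scan ray, such as a laser range-finder might provide. The step threshold and the profile values are assumptions for the sketch:

```python
def detect_obstacles(heights, step_threshold=0.15):
    """Return indices where the ground steps down (negative obstacle)
    or up (positive obstacle) by more than step_threshold metres."""
    negative, positive = [], []
    for i in range(1, len(heights)):
        dz = heights[i] - heights[i - 1]
        if dz < -step_threshold:
            negative.append(i)   # sudden drop: hole or depression
        elif dz > step_threshold:
            positive.append(i)   # sudden rise: above-ground obstacle
    return negative, positive

# Flat ground, a 0.3 m deep rut starting at index 3, a 0.4 m box at index 7
profile = [0.0, 0.0, 0.01, -0.3, -0.3, 0.0, 0.0, 0.4, 0.4]
neg, pos = detect_obstacles(profile)
```

A real system would work on full 3D point clouds and account for sensor noise and the robot's clearance, but the core signal (a height discontinuity larger than a threshold, in either direction) is the same.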
|
3 |
Localização e navegação de robô autônomo através de odometria e visão estereoscópica / Localization and navigation of an autonomous mobile robot through odometry and stereoscopic vision
Delgado Vargas, Jaime Armando, 1986-, 20 August 2018 (has links)
Advisor: Paulo Roberto Gardel Kurka / Master's dissertation (mestrado), Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Issue date: 2012. This work presents the implementation of a navigation system with stereoscopic vision on a mobile robot, which allows environment map building and localization. This requires knowledge of the robot's kinematic model, control techniques, algorithms for identifying image features (such as SIFT), 3D reconstruction with stereoscopic vision, and navigation algorithms. Camera calibration methods developed within the FEM/UNICAMP research group and taken from the literature are used. Results of experimental and theoretical analyses are compared. Additional results validate the camera calibration algorithm and assess the accuracy of the sensors, the response of the control system, and the 3D reconstruction. These results are of importance for future studies of robot navigation and camera calibration. / Master's degree in Mechanical Engineering (Solid Mechanics and Mechanical Design)
|
4 |
Návrh a realizace řídících systému pro mobilní robot / Design and implementation of control systems for a mobile robot
Krysl, Jakub, January 2016 (has links)
This thesis deals with the design and implementation of an autonomous robot using the ROS platform. Its goal is to become familiar with ROS and use it to implement autonomous control of the real robot Leela.
|
5 |
Navigation of Mobile Robots in Human Environments with Deep Reinforcement Learning / Navigering av mobila robotar i mänskliga miljöer med deep reinforcement learning
Coors, Benjamin, January 2016 (links)
For mobile robots operating in human environments it is not sufficient simply to travel to the target destination as quickly as possible. Instead, mobile robots in human environments need to travel to their destination safely, keeping a comfortable distance from humans and not colliding with any obstacles along the way. As the number of possible human-robot interactions is very large, defining a rule-based navigation approach is difficult in such highly dynamic environments. Current approaches solve this task by predicting the trajectories of humans in the scene and then planning a collision-free path. However, this requires separate components for detecting and predicting human motion and does not scale well to densely populated environments. This work therefore investigates the use of deep reinforcement learning for the navigation of mobile robots in human environments. The approach builds on recent research that uses deep neural networks in reinforcement learning to play Atari 2600 video games at human level. A deep convolutional neural network is trained end-to-end from one-dimensional laser scan data to command velocities. Discrete and continuous action space implementations are evaluated in simulation and are shown to outperform a Social Force Model baseline on the navigation problem for mobile robots in human environments.
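The end-to-end, discrete-action idea can be pictured with a toy forward pass: a small fully connected network (standing in for the thesis's convolutional one) maps a 1D laser scan to one Q-value per discrete command velocity, and the greedy action is executed. All sizes, the random weights, and the action set are illustrative assumptions; in the thesis the weights are learned by deep reinforcement learning:

```python
import random

random.seed(0)
N_BEAMS, N_HIDDEN = 16, 8
ACTIONS = [(0.5, 0.0), (0.3, 0.6), (0.3, -0.6)]  # assumed (v, omega) pairs

# Random stand-in weights for a two-layer network
W1 = [[random.uniform(-0.1, 0.1) for _ in range(N_BEAMS)] for _ in range(N_HIDDEN)]
W2 = [[random.uniform(-0.1, 0.1) for _ in range(N_HIDDEN)] for _ in range(len(ACTIONS))]

def q_values(scan):
    """Forward pass: laser scan -> hidden ReLU layer -> one Q-value per action."""
    h = [max(0.0, sum(w * s for w, s in zip(row, scan))) for row in W1]
    return [sum(w * a for w, a in zip(row, h)) for row in W2]

def select_velocity(scan):
    """Greedy policy: execute the action with the highest Q-value."""
    q = q_values(scan)
    return ACTIONS[q.index(max(q))]

scan = [5.0] * N_BEAMS          # nothing nearby on any beam
v, omega = select_velocity(scan)
```

The continuous-action variant evaluated in the thesis replaces the argmax over a fixed action set with a network that outputs the command velocities directly.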
|
6 |
Vision based navigation in a dynamic environment / Navigation référencée vision dans un environnement dynamique
Futterlieb, Marcus, 10 July 2017 (links)
This thesis addresses the problem of long-range autonomous navigation of wheeled mobile robots in dynamic environments. It was carried out within the FUI Air-Cobot project. This project, led by Akka Technologies, brought together several companies (Akka, Airbus, 2MORROW, Sterela) and two research laboratories, LAAS and Mines Albi. The objective is to develop a collaborative robot (or cobot) able to inspect an aircraft before take-off or in a hangar. Several aspects were therefore addressed: non-destructive testing, the navigation strategy, the development of the robotic system and its instrumentation, etc. This thesis tackles the second of these problems, navigation. The environment under consideration is an airport, so it is highly structured and subject to very strict movement rules (forbidden zones, etc.). It may be cluttered with static obstacles (expected or not) and dynamic ones (various vehicles, pedestrians, ...) which must be avoided to guarantee the safety of people and property. This thesis presents two contributions. The first concerns the synthesis of a visual servoing controller allowing the robot to travel long distances (around the aircraft or in the hangar) thanks to a topological map and the choice of dedicated targets. Moreover, this visual servoing exploits the information provided by all the on-board cameras. The second contribution concerns safety and obstacle avoidance. A control law based on equiangular spirals exploits only the sensory data provided by the on-board lasers. It is therefore purely sensor-based and makes it possible to go around any obstacle, whether fixed or mobile, providing a general solution that guarantees non-collision. Finally, experimental results, obtained at LAAS and at the Airbus site in Blagnac, show the effectiveness of the developed strategy.
/ This thesis is directed towards the autonomous long-range navigation of wheeled robots in dynamic environments. It takes place within the Air-Cobot project, which aims at designing a collaborative robot (cobot) able to perform the preflight inspection of an aircraft. The considered environment is highly structured (airport runways and hangars) and may be cluttered with both static and dynamic unknown obstacles (luggage or refueling trucks, pedestrians, etc.). Our navigation framework relies on previous works and is based on switching between different control laws (go-to-goal controller, visual servoing, obstacle avoidance) depending on the context. Our contribution is twofold. First, we have designed a visual servoing controller able to make the robot move over a long distance thanks to a topological map and to the choice of suitable targets. In addition, multi-camera visual servoing control laws have been built to benefit from the image data provided by the different cameras embedded on the Air-Cobot system. The second contribution is related to obstacle avoidance. A control law based on equiangular spirals has been designed to guarantee non-collision. This control law is fully sensor-based and allows the robot to avoid static and dynamic obstacles alike. It thus provides a general solution to deal efficiently with the collision problem. Experimental results, obtained both at LAAS and in Airbus hangars and runways, show the efficiency of the developed techniques.
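The geometric object behind the avoidance law can be sketched directly: an equiangular (logarithmic) spiral r = r0 · exp(k·θ) with k = 1/tan(α), where α is the constant angle between the radius vector and the tangent. Following such a curve around an obstacle keeps that angle constant. The parameter values below are illustrative, not those of the thesis:

```python
import math

def spiral_points(r0, alpha, n=50, dtheta=0.1):
    """Sample points of an equiangular spiral r = r0 * exp(theta / tan(alpha)).

    alpha is the constant angle between radius and tangent; alpha < 90 deg
    gives a radius that grows with theta (spiralling away from the centre).
    """
    k = 1.0 / math.tan(alpha)
    pts = []
    for i in range(n):
        theta = i * dtheta
        r = r0 * math.exp(k * theta)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

# alpha close to 90 degrees: the radius grows slowly, giving a gentle detour
pts = spiral_points(r0=1.0, alpha=math.radians(80))
```

Centring the spiral on the detected obstacle and choosing α from the laser data yields a path whose distance to the obstacle evolves monotonically, which is what makes collision-free avoidance provable for this family of curves.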
|
7 |
Dynamic Programming: An Optimization Tool Applied to Mobile Robot Navigation and Resource Allocation for Wildfire Fighting
Krothapalli, Ujwal Karthik, 29 November 2010 (links)
No description available.
|
8 |
A snake-based scheme for path planning and control with constraints by distributed visual sensors
Cheng, Yongqiang; Jiang, Ping; Hu, Yim Fun, 09 August 2013 (links)
This paper proposes a robot navigation scheme using wireless visual sensors deployed in an environment. Unlike conventional autonomous robot approaches, the scheme intends to offload the massive on-board information processing required by a robot onto its environment, so that a robot or vehicle with less intelligence can exhibit sophisticated mobility. A three-state snake mechanism is developed for coordinating a series of sensors to form a reference path. Wireless visual sensors communicate internal forces with each other along the reference snake for dynamic adjustment, react to repulsive forces from obstacles, and activate a state change in the snake body from a flexible state to a rigid or even a broken state due to kinematic or environmental constraints. A control snake is further proposed as a tracker of the reference path, taking into account the robot's non-holonomic constraint and limited steering power. A predictive control algorithm is developed to obtain an optimal velocity profile under robot dynamic constraints for snake tracking. Together they form a unified solution for robot navigation by distributed sensors that deals with the kinematic and dynamic constraints of a robot and reacts to dynamic changes in advance. Simulations and experiments demonstrate the capability of a wireless sensor network to carry out low-level control activities for a vehicle. / Royal Society; Natural Science Funding Council (China)
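The force balance that shapes the reference snake can be illustrated with one discrete update step: each interior node is pulled toward the midpoint of its neighbours (internal smoothing/tension force) and pushed away from a nearby obstacle (repulsive force). The gains, the influence radius, and the single-obstacle model are assumptions for the sketch, not the paper's formulation:

```python
def snake_step(nodes, obstacle, k_int=0.5, k_rep=0.3, influence=1.0):
    """One relaxation step of a discrete snake of (x, y) path nodes."""
    new = [nodes[0]]                      # endpoints stay fixed
    for i in range(1, len(nodes) - 1):
        x, y = nodes[i]
        # Internal force: pull toward the neighbours' midpoint
        mx = (nodes[i - 1][0] + nodes[i + 1][0]) / 2 - x
        my = (nodes[i - 1][1] + nodes[i + 1][1]) / 2 - y
        # Repulsive force: push away from the obstacle inside its influence radius
        dx, dy = x - obstacle[0], y - obstacle[1]
        d = (dx * dx + dy * dy) ** 0.5
        rx = ry = 0.0
        if 1e-9 < d < influence:
            push = k_rep * (influence - d) / d
            rx, ry = push * dx, push * dy
        new.append((x + k_int * mx + rx, y + k_int * my + ry))
    new.append(nodes[-1])
    return new

nodes = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.0)]
updated = snake_step(nodes, obstacle=(1.0, 1.0))  # middle node pushed away
```

Iterating such steps until the forces balance yields a smooth, obstacle-respecting reference path; the paper's three-state mechanism additionally freezes ("rigid") or splits ("broken") segments when constraints make further flexing invalid.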
|
9 |
Navigation and Control of an Autonomous Vehicle
Schworer, Ian Josef, 19 May 2005 (links)
The navigation and control of an autonomous vehicle is a highly complex task. Making a vehicle intelligent and able to operate "unmanned" requires extensive theoretical as well as practical knowledge. An autonomous vehicle must be able to make decisions and respond to situations completely on its own. Navigation and control are the major factors limiting the overall performance, accuracy, and robustness of an autonomous vehicle. This thesis will address this problem and propose a unique navigation and control scheme for an autonomous lawn mower (ALM).
Navigation is a key aspect of designing an autonomous vehicle. An autonomous vehicle must be able to sense its location, navigate toward its destination, and avoid obstacles it encounters. Since this thesis attempts to automate the lawn mowing process, it will present a navigation algorithm that covers a bounded region in a systematic way while avoiding obstacles. This algorithm has many applications, including search and rescue, floor cleaning, and lawn mowing. The robustness and utility of this algorithm are demonstrated in a 3D simulation.
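The "cover a bounded region systematically" idea can be sketched as a simple boustrophedon (back-and-forth) sweep over a grid, skipping obstacle cells. The grid resolution and the skip-only obstacle handling are illustrative assumptions, not the thesis's algorithm:

```python
def boustrophedon(rows, cols, obstacles=frozenset()):
    """Visit every free cell of a rows x cols grid in alternating sweeps."""
    path = []
    for r in range(rows):
        # Sweep left-to-right on even rows, right-to-left on odd rows
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            if (r, c) not in obstacles:
                path.append((r, c))
    return path

path = boustrophedon(3, 4, obstacles={(1, 1)})  # 11 free cells visited
```

A fielded coverage planner would also replan around newly sensed obstacles and account for the mower's turning radius, but the alternating-sweep skeleton is the core of systematic coverage.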
This thesis will specifically study the dynamics of a two-wheeled differential drive vehicle. Using this dynamic model, various control techniques can then be applied to control the movement of the vehicle. Both open-loop and closed-loop control schemes are considered. Optimal control, path following, and trajectory tracking are all considered, simulated, and evaluated as practical solutions for control of an ALM.
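The kinematic core of the two-wheeled differential drive model can be written in a few lines: wheel speeds determine forward and angular velocity, which are integrated with an Euler step. The wheel radius, axle length, and time step below are illustrative values, not the thesis's parameters:

```python
import math

def diff_drive_step(x, y, theta, wl, wr, r=0.1, L=0.4, dt=0.1):
    """One Euler step of the differential drive kinematics.

    wl, wr: left/right wheel angular speeds (rad/s)
    r: wheel radius (m), L: axle length (m), dt: time step (s)
    """
    v = r * (wr + wl) / 2.0          # forward velocity
    omega = r * (wr - wl) / L        # yaw rate
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Equal wheel speeds drive straight along the current heading
x, y, th = diff_drive_step(0.0, 0.0, 0.0, wl=5.0, wr=5.0)
```

Closed-loop controllers for path following or trajectory tracking wrap a feedback law around exactly this update, choosing `wl` and `wr` from the pose error at each step.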
To design and build an autonomous vehicle requires the integration of many sensors, actuators, and controllers. Software serves as the glue to fuse all these devices together. This thesis will suggest various sensors and actuators that could be used to physically implement an ALM. This thesis will also describe the operation of each sensor and actuator, present the software used to control the system, and discuss physical limitations and constraints that might be encountered while building an ALM. / Master of Science
|
10 |
Construção de mapas de ambiente para navegação de robôs móveis com visão omnidirecional estéreo / Map building for mobile robot navigation with omnidirectional stereo vision
Deccó, Cláudia Cristina Ghirardello, 23 April 2004 (links)
The problem of mobile robot navigation has been studied for many years, with the aim of building a robot with a high degree of autonomy. The autonomy of a mobile robot increases with its capacity to acquire information and to automate tasks, such as building maps of the environment. Vision systems are widely used in autonomous robot tasks because of the great amount of information contained in an image. Moreover, catadioptric omnidirectional sensors provide visual information in a single 360° image, removing the need to move the camera toward directions of interest for the robot's task. Environment maps can be built to implement more autonomous navigation strategies. In this work a methodology is developed for building navigation maps, which represent the geometry of the environment. A map contains the information acquired by a stereo omnidirectional catadioptric sensor, built from a camera and a hyperbolic mirror. For map building, the processes of alignment, correspondence, and integration are performed using metrics of angular difference and distance between points. A global map of the environment is created by fusing the local maps. The process developed here for global map building can be coupled with algorithms for path planning, free-space estimation, and self-localization, so that autonomous robot navigation can be obtained.
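The correspondence-and-integration step can be illustrated with a toy fusion routine: each point of a new local map is matched to the nearest global-map point under a distance threshold, merged by averaging on a match, and appended as a new point otherwise. The threshold and the averaging merge rule are assumptions for the sketch, not the thesis's metrics:

```python
import math

def fuse_maps(global_pts, local_pts, max_dist=0.5):
    """Fuse a local map's (x, y) points into a growing global map."""
    fused = list(global_pts)
    for p in local_pts:
        # Correspondence: nearest global point within max_dist, if any
        j, best = None, max_dist
        for i, q in enumerate(fused):
            d = math.dist(p, q)
            if d < best:
                j, best = i, d
        if j is None:
            fused.append(p)                       # new region: integrate as-is
        else:
            q = fused[j]                          # match: merge by averaging
            fused[j] = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return fused

fused = fuse_maps([(0.0, 0.0), (1.0, 0.0)], [(0.1, 0.0), (3.0, 3.0)])
```

In the full pipeline an alignment (pose) estimate is applied to the local points before matching, and the angular-difference metric supplements plain Euclidean distance when deciding correspondences.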
|