11

Autonomous Robotic Strategies for Urban Search and Rescue

Ryu, Kun Jin 16 November 2012 (has links)
This dissertation proposes autonomous robotic strategies for urban search and rescue (USAR): map-based semi-autonomous robot navigation, and fully-autonomous robotic search, tracking, localization and mapping (STLAM) using a team of robots. Since the prerequisite for these solutions is accurate robot localization in the environment, this dissertation first presents a novel grid-based scan-to-map matching technique for accurate simultaneous localization and mapping (SLAM). At every acquisition of a new scan and estimation of the robot pose, the proposed technique corrects the estimation error by matching the new scan to the globally defined grid map. To improve the accuracy of the correction, each grid cell of the map is represented by multiple normal distributions (NDs). The new scan to be matched to the map is also represented by NDs, so the scan-to-map matching is achieved by ND-to-ND matching. In the map-based semi-autonomous navigation strategy, a robot placed in an environment creates a map of the environment and sends it to a human operator at a distant location. The operator then makes decisions based on the map and controls the robot via tele-operation. In case of communication loss, the robot semi-autonomously returns to the home position by inversely tracking its trajectory, with additional optimal path planning. In the fully-autonomous robotic solution to USAR, multiple robots communicate with one another while operating together as a team. The base station collects information from each robot and assigns tasks to the robots. Unlike the semi-autonomous strategy, there is no control from the human operator. To further enhance the efficiency of their cooperation, each member of the team works specifically on its own task. A series of numerical and experimental studies were conducted to demonstrate the applicability of the proposed solutions to USAR scenarios. The effectiveness of the scan-to-map matching with the multi-ND representation was confirmed by analyzing the error accumulation and by comparing with the single-ND representation. The applicability of the scan-to-map matching to the real SLAM problem was also verified in three different real environments. The results of the map-based semi-autonomous robot navigation showed the effectiveness of the approach as an immediately usable solution to USAR. The effectiveness of the proposed fully-autonomous solution was first confirmed by two real robots in a real environment. The cooperative performance of the strategy was further investigated using the developed platform- and hardware-in-the-loop simulator. The results showed significant potential as a future solution to USAR. / Ph. D.
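The multi-ND grid representation described above is in the spirit of the normal distributions transform (NDT). As a rough illustration of the idea only, the sketch below builds per-cell Gaussians from map points and scores a scan against them under a candidate pose; the cell size, the single Gaussian per cell, and all names are assumptions for illustration, not details taken from the dissertation.

```python
import numpy as np

# Hypothetical sketch of ND-based scan-to-map scoring, in the spirit of the
# normal distributions transform (NDT). Cell size, names, and the use of a
# single Gaussian per cell are illustrative simplifications.

CELL = 1.0  # grid cell size in metres (assumed)

def build_nd_map(points):
    """Group 2-D map points by grid cell and fit a Gaussian to each cell."""
    cells = {}
    for p in points:
        cells.setdefault((int(p[0] // CELL), int(p[1] // CELL)), []).append(p)
    nd_map = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) >= 3:                            # need a few points for a covariance
            mean = pts.mean(axis=0)
            cov = np.cov(pts.T) + 1e-6 * np.eye(2)   # regularise near-singular cells
            nd_map[key] = (mean, np.linalg.inv(cov))
    return nd_map

def score_scan(nd_map, scan, pose):
    """Score a scan under a candidate pose (x, y, theta): higher is better."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    total = 0.0
    for p in scan:
        q = R @ np.asarray(p) + np.array([x, y])     # scan point in map frame
        cell = (int(q[0] // CELL), int(q[1] // CELL))
        if cell in nd_map:
            mean, inv_cov = nd_map[cell]
            d = q - mean
            total += np.exp(-0.5 * d @ inv_cov @ d)  # Gaussian likelihood term
    return total
```

In practice such a score would be maximised over candidate poses (for example by gradient ascent or a local search) to obtain the pose correction the abstract describes.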
12

Generalized Landmark Recognition in Robot Navigation

Zhou, Qiang January 2004 (has links)
No description available.
13

Visual navigation for mobile robots using the Bag-of-Words algorithm

Botterill, Tom January 2011 (has links)
Robust long-term positioning for autonomous mobile robots is essential for many applications. In many environments this task is challenging, as errors accumulate in the robot’s position estimate over time. The robot must also build a map so that these errors can be corrected when mapped regions are re-visited; this is known as Simultaneous Localisation and Mapping, or SLAM. Successful SLAM schemes have been demonstrated which accurately map tracks of tens of kilometres; however, these schemes rely on expensive sensors such as laser scanners and inertial measurement units. A more attractive, low-cost sensor is a digital camera, which captures images that can be used to recognise where the robot is, and to incrementally position the robot as it moves. SLAM using a single camera is challenging, however, and many contemporary schemes suffer complete failure in dynamic or featureless environments, or during erratic camera motion. An additional problem, known as scale drift, is that cameras do not directly measure the scale of the environment, and errors in relative scale accumulate over time, introducing errors into the robot’s speed and position estimates. Key to a successful visual SLAM system is the ability to continue operation despite these difficulties, and to recover from positioning failure when it occurs. This thesis describes the development of such a scheme, which is known as BoWSLAM. BoWSLAM enables a robot to reliably navigate and map previously unknown environments, in real-time, using only a single camera. In order to position a camera in visually challenging environments, BoWSLAM combines contemporary visual SLAM techniques with four new components. Firstly, a new Bag-of-Words (BoW) scheme is developed, which allows a robot to recognise places it has visited previously, without any prior knowledge of its environment. This BoW scheme is also used to select the best set of frames to reconstruct positions from, and to find efficient wide-baseline correspondences between many pairs of frames. Secondly, BaySAC, a new outlier-robust relative pose estimation scheme based on the popular RANSAC framework, is developed. BaySAC allows the efficient computation of multiple position hypotheses for each frame. Thirdly, a graph-based representation of these position hypotheses is proposed, which enables the selection of only reliable position estimates in the presence of gross outliers. Fourthly, as the robot explores, objects in the world are recognised and measured. These measurements enable scale drift to be corrected. BoWSLAM is demonstrated mapping a 25-minute, 2.5 km trajectory through a challenging and dynamic outdoor environment in real-time, and without any other sensor input; considerably further than previous single camera SLAM schemes.
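As a hedged illustration of the Bag-of-Words place-recognition ingredient described above, the sketch below quantises local feature descriptors against a visual vocabulary and compares TF-IDF-weighted histograms. The weighting scheme and all names are generic assumptions, not BoWSLAM's actual design.

```python
import numpy as np

# Generic Bag-of-Words place-recognition sketch. Vocabulary construction and
# TF-IDF weighting are standard ingredients of such schemes; this is not
# BoWSLAM's exact formulation.

def bow_histogram(descriptors, vocab):
    """Quantise local feature descriptors (N, D) against a vocabulary (K, D)."""
    # nearest visual word for each descriptor
    words = np.argmin(
        ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(axis=2), axis=1)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / max(hist.sum(), 1.0)

def best_match(query_hist, keyframe_hists, idf):
    """Return (index, score) of the most similar previously seen keyframe."""
    scores = []
    for h in keyframe_hists:
        a, b = query_hist * idf, h * idf        # down-weight common visual words
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        scores.append(a @ b / denom if denom else 0.0)
    return int(np.argmax(scores)), max(scores)
```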
14

Localização e mapeamento simultâneos com auxílio visual omnidirecional. / Simultaneous localization and mapping with omnidirectional vision.

Guizilini, Vitor Campanholo 12 August 2008 (has links)
Simultaneous localization and mapping, known as the SLAM problem, is one of the greatest challenges that autonomous mobile robotics faces today. The problem concerns a robot's ability to navigate through an unknown environment, constructing a map of the regions it has already visited while at the same time localizing itself on that map. The imprecision inherent in the sensors used to collect information generates errors that accumulate over time, preventing precise estimates of localization and mapping when the sensor data are used directly. SLAM algorithms seek to eliminate these errors by exploiting the mutual dependence of the two problems and solving them simultaneously, using the results of one step to refine the estimates of the other. One way to achieve this is to establish landmarks in the environment that the robot can use as reference points to localize itself as it navigates. This work presents a solution to the SLAM problem that uses an omnidirectional vision system to detect such landmarks. The choice of visual sensors allows natural landmarks to be extracted and matched robustly under different points of view as the robot moves through the environment. Omnidirectional vision widens the robot's field of view, increasing the number of landmarks observed at each instant. Detected landmarks are added to the map, and when they are later recognized they provide information that the robot can use to refine its localization and mapping estimates, eliminating accumulated errors and keeping the estimates precise even after long periods of navigation. The solution was tested in real navigation scenarios, and the results show a substantial improvement over those obtained through direct use of the collected information.
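The abstract does not specify which filter realises the landmark-based correction; as one conventional way to do so, the sketch below shows a textbook EKF range-bearing landmark update. All symbols and the measurement model are standard assumptions, not details taken from this thesis.

```python
import numpy as np

# Textbook EKF range-bearing landmark update, shown only to illustrate how a
# re-observed landmark can correct a pose estimate; the thesis's actual filter
# and measurement model are not specified in the abstract.

def ekf_landmark_update(x, P, z, landmark, R):
    """x = [px, py, theta]; z = [range, bearing] to a known landmark position."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
                  [dy / q,           -dx / q,         -1.0]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi      # wrap bearing residual
    S = H @ P @ H.T + R                              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```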
15

Mobile Robot Localization Using Sonar

Drumheller, Michael 01 January 1985 (has links)
This paper describes a method by which range data from a sonar or other type of rangefinder can be used to determine the 2-dimensional position and orientation of a mobile robot inside a room. The plan of the room is modeled as a list of segments indicating the positions of walls. The method works by extracting straight segments from the range data and examining all hypotheses about pairings between the segments and walls in the model of the room. Inconsistent pairings are discarded efficiently by using local constraints based on distances between walls, angles between walls, and ranges between walls along their normal vectors. These constraints are used to obtain a small set of possible positions, which is further pruned using a test for physical consistency. The approach is extremely tolerant of noise and clutter. Transient objects such as furniture and people need not be included in the room model, and very noisy, low-resolution sensors can be used. The algorithm's performance is demonstrated using a Polaroid Ultrasonic Rangefinder, a low-resolution, high-noise sensor.
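As a rough sketch of the pairing-and-pruning idea, the code below enumerates segment-to-wall assignments depth-first and prunes any branch whose pairwise angles disagree with the room model. The tolerance and all names are assumptions; the paper's full constraint set also includes the distance and normal-range tests mentioned above.

```python
import numpy as np

# Hedged sketch of interpretation-tree style pruning: a candidate pairing of a
# data segment to a model wall is kept only if it is pairwise consistent (here,
# by relative angle) with every pairing already in the partial hypothesis.

ANGLE_TOL = np.radians(5.0)   # assumed tolerance

def angle(seg):
    (x1, y1), (x2, y2) = seg
    return np.arctan2(y2 - y1, x2 - x1)

def consistent(seg_a, wall_a, seg_b, wall_b):
    """Pairwise angle constraint between two segment-to-wall pairings."""
    da = angle(seg_a) - angle(seg_b)
    dw = angle(wall_a) - angle(wall_b)
    diff = (da - dw + np.pi) % (2 * np.pi) - np.pi
    return abs(diff) < ANGLE_TOL

def search(segments, walls, partial=()):
    """Depth-first enumeration of pairings, discarding inconsistent branches."""
    if len(partial) == len(segments):
        yield partial                 # one complete, mutually consistent hypothesis
        return
    i = len(partial)
    for w in range(len(walls)):
        if all(consistent(segments[i], walls[w], segments[j], walls[pj])
               for j, pj in enumerate(partial)):
            yield from search(segments, walls, partial + (w,))
```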
16

Cooperative Navigation for Teams of Mobile Robots

Peasgood, Mike January 2007 (has links)
Teams of mobile robots have numerous applications, such as space exploration, underground mining, warehousing, and building security. Multi-robot teams can provide a number of practical benefits in such applications, including simultaneous presence in multiple locations, improved system performance, and greater robustness and redundancy compared to individual robots. This thesis addresses three aspects of coordination and navigation for teams of mobile robots: localization, the estimation of the position of each robot in the environment; motion planning, the process of finding collision-free trajectories through the environment; and task allocation, the selection of appropriate goals to be assigned to each robot. Each of these topics is investigated in the context of many robots working in a common environment. A particle-filter based system for cooperative global localization is presented. The system combines the sensor data from three robots, including measurements of the distances between robots, to cooperatively estimate the global position of each robot in the environment. The method is developed for a single triad of robots, then extended to larger groups of robots. The algorithm is demonstrated in a simulation of robots equipped with only simple range sensors, and is shown to successfully achieve global localization of robots that are unable to localize using only their own local sensor data. Motion planning is investigated for large teams of robots operating in tunnel and corridor environments, where coordinated planning is often required to avoid collision or deadlock conditions. A complete and scalable motion planning algorithm is presented and evaluated in simulation with up to 150 robots. In contrast to popular decoupled approaches to motion planning (which cannot guarantee a solution), this algorithm uses a multi-phase approach to create and maintain obstacle-free paths through a graph representation of the environment. The resulting plan is a set of collision-free trajectories, guaranteeing that every robot will reach its goal. The problem of task allocation is considered in the same type of tunnel and corridor environments, where tasks are defined as locations in the environment that must be visited by one of the robots in the team. To find efficient solutions to the task allocation problem, an optimization approach is used to generate potential task assignments and select the best solution. The multi-phase motion planner is applied within this system as an efficient method of evaluating potential task assignments for many robots in a large environment. The algorithm is evaluated in simulations with up to 20 robots in a map of a large underground mine. A real-world implementation with three physical robots was used to demonstrate the multi-phase motion planning and task allocation systems. A centralized motion planning and task allocation system was developed, incorporating localization and time-dependent trajectory tracking on the robot processors, enabling cooperative navigation in a shared hallway environment.
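As a minimal illustration of the cooperative update described above, the sketch below reweights one robot's position particles against a measured range to a teammate whose position is treated as known. The Gaussian noise model and all names are assumptions rather than the thesis's exact filter.

```python
import numpy as np

# Hedged sketch of one cooperative-localization ingredient: fusing an
# inter-robot range measurement into a particle filter. The Gaussian range
# noise and the fixed teammate position are illustrative simplifications.

def reweight_by_range(particles, weights, teammate_xy, measured_range, sigma=0.3):
    """particles: (N, 2) candidate positions; returns normalised weights."""
    dists = np.linalg.norm(particles - teammate_xy, axis=1)
    likelihood = np.exp(-0.5 * ((dists - measured_range) / sigma) ** 2)
    w = weights * likelihood
    return w / w.sum()

def resample(particles, weights, rng=np.random.default_rng()):
    """Resample after a cooperative update to concentrate on likely positions."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```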
18

Intention prediction for interactive navigation in distributed robotic systems

Bordallo Micó, Alejandro January 2017 (has links)
Modern applications of mobile robots require them to have the ability to safely and effectively navigate in human environments. New challenges arise when these robots must plan their motion in a human-aware fashion. Current methods addressing this problem have focused mainly on the activity-forecasting aspect, aiming at improving predictions without considering the active nature of the interaction, i.e. the robot’s effect on the environment and consequent issues such as reciprocity. Furthermore, many methods rely on computationally expensive offline training of predictive models that may not be well suited to rapidly evolving dynamic environments. This thesis presents a novel approach for enabling autonomous robots to navigate socially in environments with humans. Following formulations of the inverse planning problem, agents reason about the intentions of other agents and make predictions about their future interactive motion. A technique is proposed to implement counterfactual reasoning over a parametrised set of light-weight reciprocal motion models, thus making it more tractable to maintain beliefs over the future trajectories of other agents towards plausible goals. The speed of inference and the effectiveness of the algorithms are demonstrated via physical robot experiments, where computationally constrained robots navigate amongst humans in a distributed multi-sensor setup, able to infer other agents’ intentions as fast as 100 ms after the first observation. While intention inference is a key aspect of successful human-robot interaction, executing any task requires planning that takes into account the predicted goals and trajectories of other agents, e.g. pedestrians. It is well known that robots demonstrate unwanted behaviours, such as freezing or becoming sluggishly responsive, when placed in dynamic and cluttered environments, due to the way safety margins derived from simple heuristics end up covering the entire feasible space of motion. The presented approach makes more refined predictions about future movement, which enables robots to find collision-free paths quickly and efficiently. This thesis describes a novel technique for generating "interactive costmaps", a representation of the planner’s costs and rewards across time and space, providing an autonomous robot with the information required to navigate socially given the estimate of other agents’ intentions. This multi-layered costmap deters the robot from obstructing other agents while encouraging social navigation respectful of their activity. Results show that this approach minimises collisions and near-collisions, minimises travel times for agents, and, importantly, offers the same computational cost as the most common costmap alternatives for navigation. A key part of the practical deployment of such technologies is their ease of implementation and configuration. Since every use case and environment is different and distinct, the presented methods use online adaptation to learn parameters of the navigating agents during runtime. Furthermore, this thesis includes a novel technique for allocating tasks in distributed robotic systems, where a tool is provided to maximise the performance of any distributed setup by automatic parameter tuning. All of these methods are implemented in ROS and distributed as open-source. The ultimate aim is to provide an accessible and efficient framework that may be seamlessly deployed on modern robots, enabling widespread use of intention prediction for interactive navigation in distributed robotic systems.
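As a hedged sketch of the "interactive costmap" idea, the code below stacks, for each future timestep, a static obstacle layer with a social layer built from predicted agent positions. The grid, the Gaussian cost kernel, and the max-combination rule are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

# Illustrative multi-layer costmap over time: one combined cost slice per
# predicted future timestep. The kernel and combination rule are assumed.

def social_layer(shape, predictions, sigma=2.0):
    """predictions: list of (row, col) cells where agents are expected."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    layer = np.zeros(shape)
    for r, c in predictions:
        layer += np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2 * sigma ** 2))
    return np.clip(layer, 0.0, 1.0)

def interactive_costmap(static_layer, predictions_per_step, sigma=2.0):
    """Stack combined cost slices into a (time, rows, cols) array."""
    return np.stack([
        np.maximum(static_layer, social_layer(static_layer.shape, preds, sigma))
        for preds in predictions_per_step
    ])
```

A time-parametrised planner could then charge each candidate trajectory the cost of the slice matching its arrival time in each cell, which is what lets the planner route around where agents will be rather than where they are.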
20

Localização e navegação de robô autônomo através de odometria e visão estereoscópica / Localization and navigation of an autonomous mobile robot trough odometry and stereoscopic vision

Delgado Vargas, Jaime Armando, 1986- 20 August 2018 (has links)
Advisor: Paulo Roberto Gardel Kurka / Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica / Abstract: This work presents the implementation of a navigation system with stereoscopic vision on a mobile robot, enabling environment map construction and localization. This requires knowledge of the robot's kinematic model, control techniques, algorithms for identifying image features (such as SIFT), 3D reconstruction with stereoscopic vision, and navigation algorithms. Camera calibration methods developed within the FEM/UNICAMP research group and taken from the literature are used, and results of experimental and theoretical analyses are compared. Additional results validate the camera calibration algorithm, the accuracy of the sensors, the response of the control system, and the 3D reconstruction. These results are of importance for future studies of robotic navigation and camera calibration. / Master's in Mechanical Engineering, Solid Mechanics and Mechanical Design
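As a rough illustration of the two ingredients named in the abstract, the sketch below computes depth from rectified stereo disparity and integrates differential-drive odometry. The camera parameters and wheel geometry are made-up placeholders, not values from the dissertation.

```python
import numpy as np

# Illustrative sketch: depth from disparity on a rectified stereo pair, and a
# differential-drive odometry update. All numeric parameters are assumed.

F_PX, BASELINE = 700.0, 0.12      # assumed focal length (px) and baseline (m)

def depth_from_disparity(xl, xr):
    """Rectified stereo: Z = f * B / d for matched column coordinates."""
    d = xl - xr
    return F_PX * BASELINE / d if d > 0 else np.inf

def odometry_step(pose, d_left, d_right, wheelbase=0.35):
    """Integrate left/right wheel displacements into a new (x, y, theta) pose."""
    x, y, th = pose
    ds = 0.5 * (d_left + d_right)              # distance travelled by the centre
    dth = (d_right - d_left) / wheelbase       # heading change
    return (x + ds * np.cos(th + dth / 2),
            y + ds * np.sin(th + dth / 2),
            th + dth)
```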
