  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Řídící systém pro autonomního robota / Autonomous Robot Control System

Pilát, Ondřej January 2015 (has links)
This master's thesis describes the design and implementation of a control system for an autonomous robot that is able to drive through user-defined points in an unknown environment without colliding with obstacles. The work contains an analysis of the available hardware and software solutions and a modular design in which the control system is divided into separate subsystems (control, localization, route planning, driving the robot along Hermite curves, and low-level hardware control). The work also explains the rework of the school robotic platform, and the implementation was tested on the resulting platform. Driving the robot along a Hermite curve allows a smooth and, in some cases, quicker passage through the defined points than a passage consisting of on-the-spot rotations and straight-line movements.
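To illustrate the kind of Hermite-curve driving the abstract mentions, here is a minimal sketch (not taken from the thesis) of cubic Hermite interpolation between two waypoints with chosen tangents; the waypoints and tangents below are hypothetical:

```python
import numpy as np

def cubic_hermite(p0, p1, m0, m1, t):
    """Evaluate a cubic Hermite curve at parameter t in [0, 1].

    p0, p1 : endpoints (2D numpy arrays)
    m0, m1 : tangents at the endpoints
    """
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

# Hypothetical waypoints and tangents; sampling the curve yields a smooth
# path the robot can follow instead of turning on the spot between points.
p0, p1 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
m0, m1 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
path = [cubic_hermite(p0, p1, m0, m1, t) for t in np.linspace(0.0, 1.0, 20)]
```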
12

Une architecture de contrôle distribuée pour l'autonomie des robots / A distributed control architecture for the autonomy of robots

Degroote, Arnaud 05 October 2012 (has links)
Pour des tâches simples ou dans un environnement contrôlé, la coordination des différents processus internes d'un robot est un problème relativement trivial, souvent implémenté de manière ad-hoc. Toutefois, avec le développement de robots plus complexes travaillant dans des environnements non contrôlés et dynamiques, le robot doit en permanence se reconfigurer afin de s'adapter aux conditions extérieures et à ses objectifs. La définition d'une architecture de contrôle efficace permettant de gérer ces reconfigurations devient alors primordiale pour l'autonomie de tels robots. Dans ces travaux, nous avons d'abord étudié les différentes architectures proposées dans la littérature, dont l'analyse a permis d'identifier les grandes problématiques qu'une architecture de contrôle doit résoudre. Cette analyse nous a mené à proposer une nouvelle architecture de contrôle décentralisée, générique et réutilisable, selon une démarche qui intègre une approche "intelligence artificielle" (utilisation de raisonneur logique, propagation dynamique de contraintes) et une approche "génie logiciel" (programmation par contrats, agents). Après une présentation des concepts qui sous-tendent cette architecture et une description approfondie de son fonctionnement, nous en décrivons une implémentation, qui est exploitée pour assurer le contrôle d'un robot terrestre d'extérieur dans le cadre de tâches de navigation, d'exploration ou de suivi. Des résultats sont présentés et analysés. Dans une seconde partie, nous nous sommes penchés sur la modélisation et la vérifiabilité d'une telle architecture de contrôle. Après avoir analysé différentes solutions, nous décrivons un modèle complet de l'architecture qui utilise la logique linéaire. Nous discutons ensuite des différentes approches possibles pour montrer des propriétés d'atteignabilité et de sûreté de fonctionnement en exploitant ce modèle. Enfin nous abordons différentes voies d'enrichissement de ces travaux. En particulier, nous discutons des extensions possibles pour le contrôle d'un ensemble de robots coopérants entre eux, mais aussi de la nécessité d'avoir des liens plus forts entre cette couche de contrôle, et les approches de modélisation des fonctionnalités sous-jacentes. / For simple tasks in a controlled environment, the coordination of the internal processes of a robot is a relatively trivial problem, often implemented on an ad-hoc basis. However, with the development of more complex robots that must operate in uncontrolled and dynamic environments, the robot must constantly reconfigure itself to adapt to external conditions and to its own goals. The definition of a control architecture to manage these reconfigurations becomes of paramount importance for the autonomy of such robots. In this work, we first study the different architectures proposed in the literature and analyse the major issues that a control architecture must address. This analysis led us to propose a new architecture, decentralized, generic and reusable, integrating an artificial-intelligence approach (use of logical reasoning, dynamic propagation of constraints) and a software-engineering approach (programming by contract, agents). After a presentation of the concepts underlying this architecture and an in-depth description of its operation, we describe an implementation which is used to control an outdoor ground robot for navigation, exploration and monitoring tasks. Results are presented and analyzed. In a second part, we focus on the modeling and verifiability of such a control architecture. After analyzing different solutions, we present a comprehensive model of the proposed architecture that uses linear logic. We then discuss the different possible approaches for establishing reachability and safety properties within this model. Finally, we discuss different ways to enrich this work: in particular, possible extensions to the control of multiple cooperating robots, but also the need for stronger links between this control layer and the approaches used to model the underlying functionalities.
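As a generic illustration of the "programming by contract" ingredient mentioned in the abstract (a sketch with assumed module names, not the architecture developed in the thesis), a component can guard its reconfiguration steps with explicit pre- and postconditions:

```python
class ContractViolation(Exception):
    pass

class LocalizationModule:
    """Toy component whose start transition is guarded by contracts."""

    def __init__(self):
        self.running = False
        self.has_sensor = True  # hypothetical resource flag

    def start(self):
        # Precondition: the sensor resource must be available.
        if not self.has_sensor:
            raise ContractViolation("precondition failed: sensor unavailable")
        self.running = True
        # Postcondition: the module must now be running.
        if not self.running:
            raise ContractViolation("postcondition failed: module not running")

module = LocalizationModule()
module.start()
```

A supervising agent can catch such violations and trigger a reconfiguration instead of letting an inconsistent state propagate.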
13

Localização e mapeamento simultâneos com auxílio visual omnidirecional. / Simultaneous localization and mapping with omnidirectional vision.

Guizilini, Vitor Campanholo 12 August 2008 (has links)
O problema da localização e mapeamento simultâneos, conhecido como problema do SLAM, é um dos maiores desafios que a robótica móvel autônoma enfrenta atualmente. Esse problema surge devido à dificuldade que um robô apresenta ao navegar por um ambiente desconhecido, construindo um mapa das regiões por onde já passou ao mesmo tempo em que se localiza dentro dele. O acúmulo de erros gerados pela imprecisão dos sensores utilizados para estimar os estados de localização e mapeamento impede que sejam obtidos resultados confiáveis após períodos de navegação suficientemente longos. Algoritmos de SLAM procuram eliminar esses erros resolvendo ambos os problemas simultaneamente, utilizando as informações de uma etapa para aumentar a precisão dos resultados alcançados na outra e vice-versa. Uma das maneiras de se alcançar isso se baseia no estabelecimento de marcos no ambiente que o robô pode utilizar como pontos de referência para se localizar conforme navega. Esse trabalho apresenta uma solução para o problema do SLAM que faz uso de um sensor de visão omnidirecional para estabelecer esses marcos. O uso de sistemas de visão permite a extração de marcos naturais ao ambiente que podem ser correspondidos de maneira robusta sob diferentes pontos de vista. A visão omnidirecional amplia o campo de visão do robô e com isso aumenta a quantidade de marcos observados a cada instante. Ao ser detectado, o marco é adicionado ao mapa que o robô possui do ambiente e, ao ser reconhecido, o robô pode utilizar essa informação para refinar suas estimativas de localização e mapeamento, eliminando os erros acumulados e conseguindo mantê-las precisas mesmo após longos períodos de navegação. Essa solução foi testada em situações reais de navegação, e os resultados mostram uma melhora significativa em relação àqueles obtidos com a utilização direta das informações coletadas. / The problem of simultaneous localization and mapping, known as the SLAM problem, is one of the greatest obstacles that the field of autonomous mobile robotics faces nowadays. This problem is related to a robot's ability to navigate through an unknown environment, constructing a map of the regions it has already visited while at the same time localizing itself on this map. The imprecision inherent in the sensors used to collect information generates errors that accumulate over time, preventing a precise estimate of localization and mapping when the data is used directly. SLAM algorithms try to eliminate these errors by taking advantage of their mutual dependence and solving both problems simultaneously, using the results of one step to refine the estimates of the other. One possible way to achieve this is the establishment of landmarks in the environment that the robot can use as points of reference to localize itself while it navigates. This work presents a solution to the SLAM problem that uses an omnidirectional vision system to detect these landmarks. The choice of visual sensors allows for the extraction of natural landmarks and for robust matching under different points of view as the robot moves through the environment. Omnidirectional vision widens the robot's field of view, increasing the number of landmarks observed at each instant. The detected landmarks are added to the map, and when they are later recognized they provide information that the robot can use to refine its estimates of localization and mapping, eliminating accumulated errors and keeping the estimates precise even after long periods of navigation. This solution was tested in real navigation situations, and the results show a substantial improvement over those obtained through the direct use of the collected information.
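As a rough sketch of how a recognized landmark can refine a pose estimate, the following is a generic EKF-style range-bearing update (illustrative only; the thesis does not necessarily use this exact estimator, and all numbers below are hypothetical):

```python
import numpy as np

def ekf_pose_update(x, P, z, landmark, R):
    """Correct a pose estimate x = [px, py, theta] (covariance P) using a
    range-bearing observation z = [r, phi] of a known landmark (lx, ly)."""
    px, py, theta = x
    dx, dy = landmark[0] - px, landmark[1] - py
    q = dx * dx + dy * dy
    r = np.sqrt(q)

    # Predicted observation and its Jacobian with respect to the pose.
    z_hat = np.array([r, np.arctan2(dy, dx) - theta])
    H = np.array([[-dx / r, -dy / r, 0.0],
                  [dy / q, -dx / q, -1.0]])

    nu = z - z_hat
    nu[1] = (nu[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ nu, (np.eye(3) - K @ H) @ P

# Hypothetical numbers: a noisy pose corrected by one recognized landmark.
x = np.array([1.0, 2.0, 0.1])
P = np.diag([0.5, 0.5, 0.2])
z = np.array([3.2, 0.4])           # measured range and bearing
R = np.diag([0.1, 0.05])           # sensor noise covariance
x_new, P_new = ekf_pose_update(x, P, z, landmark=(4.0, 3.0), R=R)
```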
14

On Multiple Moving Objects

Erdmann, Michael, Lozano-Perez, Tomas 01 May 1986 (has links)
This paper explores the motion planning problem for multiple moving objects. The approach taken consists of assigning priorities to the objects, then planning motions one object at a time. For each moving object, the planner constructs a configuration space-time that represents the time-varying constraints imposed on the moving object by the other moving and stationary objects. The planner represents this space-time approximately, using two-dimensional slices. The space-time is then searched for a collision-free path. The paper demonstrates this approach in two domains. One domain consists of translating planar objects; the other domain consists of two-link planar articulated arms.
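A minimal sketch of planning in a discretized configuration space-time, where waiting is an allowed action and obstacles may move over time (an illustrative grid search, far simpler than the slice-based representation used in the paper; the obstacle model is hypothetical):

```python
from collections import deque

def spacetime_bfs(start, goal, width, height, occupied, max_t=50):
    """Breadth-first search over (x, y, t). `occupied(x, y, t)` returns True
    if cell (x, y) is blocked at time step t (moving or static obstacle)."""
    moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # wait or step
    frontier = deque([(start[0], start[1], 0)])
    parents = {(start[0], start[1], 0): None}
    while frontier:
        x, y, t = frontier.popleft()
        if (x, y) == goal:
            # Reconstruct the space-time path back to the start.
            path, node = [], (x, y, t)
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        if t >= max_t:
            continue
        for dx, dy in moves:
            nx, ny, nt = x + dx, y + dy, t + 1
            if 0 <= nx < width and 0 <= ny < height \
                    and not occupied(nx, ny, nt) \
                    and (nx, ny, nt) not in parents:
                parents[(nx, ny, nt)] = (x, y, t)
                frontier.append((nx, ny, nt))
    return None  # no collision-free path within the time horizon

# Hypothetical scenario: an obstacle sweeps across row y == 1 over time.
blocked = lambda x, y, t: (y == 1 and x == t % 5)
print(spacetime_bfs((0, 0), (4, 4), 5, 5, blocked))
```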
15

Robust Agent Control of an Autonomous Robot with Many Sensors and Actuators

Ferrell, Cynthia 01 May 1993 (has links)
This thesis presents methods for implementing robust hexapod locomotion on an autonomous robot with many sensors and actuators. The controller is based on the Subsumption Architecture and is fully distributed over approximately 1500 simple, concurrent processes. The robot, Hannibal, weighs approximately 6 pounds and is equipped with over 100 physical sensors, 19 degrees of freedom, and 8 onboard computers. We investigate the following topics in depth: distributed control of a complex robot, insect-inspired locomotion control for gait generation and rough-terrain mobility, and fault tolerance. The controller was implemented, debugged, and tested on Hannibal. Through a series of experiments, we examined Hannibal's gait generation, rough-terrain locomotion, and fault-tolerance performance. These results demonstrate that Hannibal exhibits robust, flexible, real-time locomotion over a variety of terrain and tolerates a multitude of hardware failures.
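A toy sketch of the subsumption idea, where a higher-priority layer suppresses the output of lower ones; this generic example with hypothetical behaviors is vastly simpler than the ~1500-process controller described above:

```python
def walk_forward(sensors):
    # Lowest layer: always propose a nominal forward gait command.
    return "step-forward"

def avoid_obstacle(sensors):
    # Higher layer: suppress the lower layer when an obstacle is sensed.
    return "turn-away" if sensors.get("obstacle") else None

def recover_leg(sensors):
    # Highest layer: react to a hypothetical leg-fault signal.
    return "lift-and-reposition" if sensors.get("leg_fault") else None

# Layers ordered from highest to lowest priority; the first non-None
# command wins, subsuming everything below it.
LAYERS = [recover_leg, avoid_obstacle, walk_forward]

def arbitrate(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command

print(arbitrate({"obstacle": True}))    # -> turn-away
print(arbitrate({"leg_fault": True}))   # -> lift-and-reposition
print(arbitrate({}))                    # -> step-forward
```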
16

Intention prediction for interactive navigation in distributed robotic systems

Bordallo Micó, Alejandro January 2017 (has links)
Modern applications of mobile robots require them to have the ability to safely and effectively navigate in human environments. New challenges arise when these robots must plan their motion in a human-aware fashion. Current methods addressing this problem have focused mainly on the activity-forecasting aspect, aiming at improving predictions without considering the active nature of the interaction, i.e. the robot's effect on the environment and consequent issues such as reciprocity. Furthermore, many methods rely on computationally expensive offline training of predictive models that may not be well suited to rapidly evolving dynamic environments. This thesis presents a novel approach for enabling autonomous robots to navigate socially in environments with humans. Following formulations of the inverse planning problem, agents reason about the intentions of other agents and make predictions about their future interactive motion. A technique is proposed to implement counterfactual reasoning over a parametrised set of lightweight reciprocal motion models, thus making it more tractable to maintain beliefs over the future trajectories of other agents towards plausible goals. The speed of inference and the effectiveness of the algorithms are demonstrated via physical robot experiments, where computationally constrained robots navigate amongst humans in a distributed multi-sensor setup, able to infer other agents' intentions as fast as 100 ms after the first observation. While intention inference is a key aspect of successful human-robot interaction, executing any task requires planning that takes into account the predicted goals and trajectories of other agents, e.g. pedestrians. It is well known that robots demonstrate unwanted behaviours, such as freezing or becoming sluggishly responsive, when placed in dynamic and cluttered environments, because safety margins derived from simple heuristics end up covering the entire feasible space of motion. The presented approach makes more refined predictions about future movement, which enables robots to find collision-free paths quickly and efficiently. This thesis describes a novel technique for generating "interactive costmaps", a representation of the planner's costs and rewards across time and space, providing an autonomous robot with the information required to navigate socially given the estimate of other agents' intentions. This multi-layered costmap deters the robot from obstructing other agents while encouraging social navigation respectful of their activity. Results show that this approach minimises collisions and near-collisions, minimises travel times for agents, and, importantly, has the same computational cost as the most common costmap alternatives for navigation. A key part of the practical deployment of such technologies is their ease of implementation and configuration. Since every use case and environment is distinct, the presented methods use online adaptation to learn the parameters of the navigating agents at runtime. Furthermore, this thesis includes a novel technique for allocating tasks in distributed robotic systems, where a tool is provided to maximise the performance of any distributed setup by automatic parameter tuning. All of these methods are implemented in ROS and distributed as open source. The ultimate aim is to provide an accessible and efficient framework that may be seamlessly deployed on modern robots, enabling widespread use of intention prediction for interactive navigation in distributed robotic systems.
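As a rough illustration of the intention-inference step (a plain Bayesian update over candidate goals with a simple progress-based likelihood, not the counterfactual reciprocal models developed in the thesis; goals and positions here are hypothetical):

```python
import numpy as np

def update_goal_belief(belief, goals, prev_pos, curr_pos, beta=2.0):
    """Re-weight a belief over candidate goals given one observed step.
    The likelihood of a goal grows with the progress made toward it
    (a simple rationality assumption, with softness parameter beta)."""
    prev_pos, curr_pos = np.asarray(prev_pos), np.asarray(curr_pos)
    progress = np.array([
        np.linalg.norm(prev_pos - g) - np.linalg.norm(curr_pos - g)
        for g in map(np.asarray, goals)
    ])
    likelihood = np.exp(beta * progress)
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Hypothetical goals and a pedestrian step that heads toward the second one.
goals = [(5.0, 0.0), (0.0, 5.0)]
belief = np.array([0.5, 0.5])
belief = update_goal_belief(belief, goals, prev_pos=(0.0, 0.0), curr_pos=(0.1, 0.6))
print(belief)   # belief shifts toward the goal at (0, 5)
```

Repeating this update at each observation keeps the belief current, and the predicted goal can then feed a planner or costmap layer.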
17

Hormonal modulation of developmental plasticity in an epigenetic robot

Lones, John January 2017 (has links)
In autonomous robotics, there is still a tendency to develop and tune controllers with highly specific goals and environments in mind. This tuning means that such robotic models often lack the developmental and behavioral flexibility seen in biological organisms, which leaves the robot vulnerable to changes in environmental conditions: any environmental change may render its behaviors unsuitable or even dangerous. In this manuscript we look at a potential, biologically plausible mechanism that may be used in robotic controllers to allow them to adapt to different environments. This mechanism consists of a hormone-driven epigenetic mechanism that regulates a robot's internal environment in relation to its current environmental conditions. As we show in the early chapters, this epigenetic mechanism allows an autonomous robot to rapidly adapt to a range of different environmental conditions without any explicit knowledge of the environment, allowing a single architecture to adapt to a range of challenges and develop unique behaviors. In later chapters, we find that this mechanism allows for the regulation not only of short-term behavior but also of long-term development. We show how this system permits a robot to develop in a way that is suitable for its current environment. During this developmental process we also notice similarities to infant development, along with the acquisition of unplanned skills and abilities. These unplanned developments appear to lead to the emergence of potential cognitive abilities such as object permanence, which we assess using a range of different real-world tests.
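A minimal, hypothetical sketch of hormone-driven modulation in the spirit described above (not the epigenetic architecture from the thesis): a hormone level rises with a stimulus, decays over time, and scales a behavioral parameter.

```python
class Hormone:
    """Toy hormone: concentration rises with a stimulus and decays each step."""

    def __init__(self, decay=0.9, gain=0.5):
        self.level = 0.0
        self.decay = decay
        self.gain = gain

    def step(self, stimulus):
        self.level = self.decay * self.level + self.gain * stimulus
        return self.level

# Hypothetical use: an ambient-temperature stimulus modulates how cautiously
# the robot moves, so the same controller behaves differently in different
# environments without explicit knowledge of them.
heat_hormone = Hormone()
base_speed = 1.0
for temperature_stimulus in [0.0, 0.2, 0.8, 0.8, 0.1]:
    level = heat_hormone.step(temperature_stimulus)
    speed = base_speed / (1.0 + level)   # higher hormone level -> slower, more cautious
    print(f"hormone={level:.2f} speed={speed:.2f}")
```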
19

Integrating actions of perception to the electric field approach

Samuelsson, Ted January 2003 (has links)
The Electric Field Approach (EFA) is a heuristic for behavior selection in autonomous robot control. This thesis deals with the problem of using the EFA as a heuristic for perceptive behaviors. The integration is done by extending the EFA to three electric fields, each with its own probes and charges. The extended EFA was tested in the RoboCup domain.
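A generic sketch of a Coulomb-style field computation of the sort such an approach relies on, where behaviors place probe points and the strongest summed field wins; the exact EFA formulation in the thesis may differ, and all names and values here are hypothetical:

```python
import math

def field_strength(probe, charges, epsilon=1e-6):
    """Sum of inverse-square contributions of point charges at a probe point."""
    total = 0.0
    for (cx, cy), q in charges:
        d2 = (probe[0] - cx) ** 2 + (probe[1] - cy) ** 2
        total += q / (d2 + epsilon)
    return total

def select_behavior(behaviors, charges):
    """Pick the behavior whose probe points see the strongest total field."""
    return max(behaviors,
               key=lambda b: sum(field_strength(p, charges) for p in b["probes"]))

# Hypothetical setup: charges placed at objects of interest; each behavior
# places probes where it 'looks' (e.g. a perceptive behavior probing ahead).
charges = [((2.0, 0.0), 1.0), ((0.0, 3.0), 0.5)]
behaviors = [
    {"name": "look-ahead", "probes": [(1.0, 0.0), (2.0, 0.0)]},
    {"name": "look-left", "probes": [(0.0, 1.0), (0.0, 2.0)]},
]
print(select_behavior(behaviors, charges)["name"])
```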
20

Autonomous Fire Detection Robot Using Modified Voting Logic

Rehman, Adeel ur January 2015 (has links)
Recent events at the Fukushima Nuclear Power Plant in Japan have created urgency in the scientific community to come up with solutions for hostile industrial environments in the case of a breakdown or natural disaster. There are many hazardous scenarios in an indoor industrial environment, such as the risk of fire, failure of high-speed rotary machines, and chemical leaks. Fire is one of the leading causes of workplace injuries and fatalities. The fire protection systems currently available on the market mainly consist of sprinkler systems and personnel on duty. With a sprinkler system, several things can go wrong: spraying water on a fire caused by an oil leak may spread it, the water may harm machinery that is not under threat from the fire, and it could destroy expensive raw material, finished goods, and valuable electronic and printed data. There is a dire need for an inexpensive autonomous system that can detect and approach the source of these hazardous scenarios. This thesis focuses mainly on industrial fires, but using the same or similar techniques with different sensors may allow the system to detect and approach other hostile situations in an industrial workplace. Autonomous robots can be equipped to detect potential fire threats and locate the source while avoiding obstacles during navigation. The proposed system uses Modified Voting Logic Fusion to approach and declare a potential fire source autonomously. The robot follows the increasing gradient of light and heat intensity to identify the threat and approach the source.
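A simplified sketch of the two ideas in the abstract, plain k-of-n sensor voting to declare a fire followed by climbing the light/heat intensity gradient; this is a generic illustration with hypothetical readings, not the Modified Voting Logic Fusion scheme itself:

```python
def fire_detected(sensor_votes, threshold=2):
    """Declare a fire when at least `threshold` sensors vote 'yes'
    (a plain k-of-n vote; the thesis modifies this basic scheme)."""
    return sum(sensor_votes.values()) >= threshold

def next_heading(intensity_readings):
    """Steer toward the direction with the highest light/heat intensity."""
    return max(intensity_readings, key=intensity_readings.get)

# Hypothetical readings from light, heat and smoke sensors.
votes = {"light": True, "heat": True, "smoke": False}
if fire_detected(votes):
    # Intensity sampled to the robot's left, front and right.
    readings = {"left": 0.4, "front": 0.9, "right": 0.6}
    print("approaching fire, heading:", next_heading(readings))
```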
