31

Navigation of Mobile Robots in Human Environments with Deep Reinforcement Learning / Navigering av mobila robotar i mänskliga miljöer med deep reinforcement learning

Coors, Benjamin January 2016
For mobile robots which operate in human environments it is not sufficient to simply travel to their target destination as quickly as possible. Instead, mobile robots in human environments need to travel to their destination safely, keeping a comfortable distance from humans and not colliding with any obstacles along the way. As the number of possible human-robot interactions is very large, defining a rule-based navigation approach is difficult in such highly dynamic environments. Current approaches solve this task by predicting the trajectories of humans in the scene and then planning a collision-free path. However, this requires separate components for detecting and predicting human motion and does not scale well to densely populated environments. Therefore, this work investigates the use of deep reinforcement learning for the navigation of mobile robots in human environments. This approach is based on recent research on utilizing deep neural networks in reinforcement learning to successfully play Atari 2600 video games at human level. A deep convolutional neural network is trained end-to-end from one-dimensional laser scan data to command velocities. Discrete and continuous action space implementations are evaluated in simulation and are shown to outperform a Social Force Model baseline approach on the navigation problem for mobile robots in human environments.
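The abstract describes a convolutional network trained end-to-end from one-dimensional laser scans to command velocities, with a discrete action space variant. The sketch below illustrates that idea; the scan length, layer sizes, and action set are assumptions for illustration, not the architecture used in the thesis.

```python
# Minimal sketch of a DQN-style policy for laser-based navigation.
# Architecture, action set, and scan length are illustrative assumptions,
# not the network actually used in the thesis.
import torch
import torch.nn as nn

N_BEAMS = 180                        # assumed number of laser beams per scan
ACTIONS = [(0.5, 0.0),               # (linear, angular) command velocities
           (0.3, 0.5), (0.3, -0.5),
           (0.0, 1.0), (0.0, -1.0)]

class LaserDQN(nn.Module):
    def __init__(self, n_beams=N_BEAMS, n_actions=len(ACTIONS)):
        super().__init__()
        self.features = nn.Sequential(         # 1-D convolutions over the scan
            nn.Conv1d(1, 16, kernel_size=7, stride=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
        )
        with torch.no_grad():                  # infer the flattened feature size
            n_flat = self.features(torch.zeros(1, 1, n_beams)).numel()
        self.head = nn.Sequential(
            nn.Linear(n_flat, 128), nn.ReLU(),
            nn.Linear(128, n_actions),         # one Q-value per discrete action
        )

    def forward(self, scan):                   # scan: (batch, n_beams)
        return self.head(self.features(scan.unsqueeze(1)).flatten(1))

policy = LaserDQN()
scan = torch.rand(1, N_BEAMS) * 10.0           # fake range readings in metres
best = policy(scan).argmax(dim=1).item()
v, w = ACTIONS[best]                           # greedy command velocity
print(f"linear {v:.2f} m/s, angular {w:.2f} rad/s")
```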
32

Vision based navigation in a dynamic environment / Navigation référencée vision dans un environnement dynamique

Futterlieb, Marcus 10 July 2017
This thesis addresses the long-range autonomous navigation of wheeled mobile robots in dynamic environments. It was carried out within the FUI Air-Cobot project, led by Akka Technologies, which brought together several companies (Akka, Airbus, 2MORROW, Sterela) and two research laboratories, LAAS and Mines Albi. The project's goal is to develop a collaborative robot (cobot) able to perform the preflight inspection of an aircraft, either on the tarmac or in a hangar, and it covered several aspects: non-destructive testing, the navigation strategy, and the development of the robotic system and its instrumentation. This thesis tackles the navigation problem. The considered environment is an airport, so it is highly structured and subject to strict traffic rules (forbidden zones, etc.), and it may be cluttered with static obstacles (expected or not) and dynamic ones (various vehicles such as luggage or refueling trucks, pedestrians, ...), which must be avoided to guarantee the safety of people and equipment. The navigation framework builds on previous work and switches between different control laws (go-to-goal controller, visual servoing, obstacle avoidance) depending on the context. The contribution is twofold. First, a visual-servoing controller was designed that lets the robot travel over long distances (around the aircraft or in a hangar) thanks to a topological map and the choice of dedicated targets; multi-camera control laws exploit the image data provided by all the cameras embedded on the Air-Cobot system. The second contribution concerns safety and obstacle avoidance: a control law based on equiangular spirals uses only the data provided by the on-board lasers, so it is purely sensor-based and can steer the robot around any obstacle, whether static or moving, providing a general solution that guarantees non-collision. Finally, experimental results obtained at LAAS and at the Airbus site in Blagnac demonstrate the efficiency of the developed strategy.
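The obstacle-avoidance contribution relies on equiangular (logarithmic) spirals: the robot keeps a constant angle between its heading and the line of sight to the nearest obstacle point, which makes its path a spiral around that point. The sketch below illustrates this geometric idea with a simple unicycle model; the spiral angle, the steering gain, and the side-selection logic are assumptions, not the control law derived in the thesis.

```python
# Rough sketch of an equiangular-spiral avoidance heading: the robot keeps a
# constant angle alpha between its heading and the line of sight to the closest
# obstacle point. Alpha slightly above 90 deg makes it circle the obstacle while
# slowly increasing its distance (an outward equiangular spiral). All constants
# are illustrative assumptions.
import numpy as np

def spiral_avoidance_heading(obstacle_xy, alpha=np.radians(95.0), pass_left=True):
    """obstacle_xy: closest obstacle point in the robot frame (x forward, y left).
    Returns the desired heading (rad, robot frame) that traces an equiangular spiral."""
    bearing = np.arctan2(obstacle_xy[1], obstacle_xy[0])   # line of sight to obstacle
    side = 1.0 if pass_left else -1.0
    return bearing + side * alpha                          # constant spiral angle

def simulate(steps=200, dt=0.05, v=0.4):
    """Integrate a unicycle that always steers toward the spiral heading."""
    pose = np.array([0.0, 0.0, 0.0])          # x, y, heading in the world frame
    obstacle = np.array([2.0, 0.3])           # fixed obstacle in the world frame
    path = [pose[:2].copy()]
    for _ in range(steps):
        # express the obstacle in the robot frame
        c, s = np.cos(-pose[2]), np.sin(-pose[2])
        rel = np.array([[c, -s], [s, c]]) @ (obstacle - pose[:2])
        # desired heading in the robot frame equals the heading error
        heading_err = spiral_avoidance_heading(rel)
        w = np.clip(2.0 * heading_err, -1.5, 1.5)          # simple P steering law
        pose += dt * np.array([v * np.cos(pose[2]), v * np.sin(pose[2]), w])
        path.append(pose[:2].copy())
    return np.array(path)

if __name__ == "__main__":
    traj = simulate()
    dists = np.linalg.norm(traj - np.array([2.0, 0.3]), axis=1)
    print(f"closest approach to obstacle: {dists.min():.2f} m")
```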
33

Dynamic Programming: An Optimization tool Applied to Mobile Robot Navigation and Resource Allocation for Wildfire Fighting

Krothapalli, Ujwal Karthik 29 November 2010
No description available.
34

A snake-based scheme for path planning and control with constraints by distributed visual sensors

Cheng, Yongqiang, Jiang, Ping, Hu, Yim Fun 09 August 2013
This paper proposes a robot navigation scheme using wireless visual sensors deployed in an environment. Unlike conventional autonomous robot approaches, the scheme aims to offload the massive on-board information processing required by a robot onto its environment, so that a robot or vehicle with less on-board intelligence can exhibit sophisticated mobility. A three-state snake mechanism is developed for coordinating a series of sensors to form a reference path. Wireless visual sensors communicate internal forces with each other along the reference snake for dynamic adjustment, react to repulsive forces from obstacles, and trigger a state change in the snake body from a flexible state to a rigid or even broken state due to kinematic or environmental constraints. A control snake is further proposed as a tracker of the reference path, taking into account the robot's non-holonomic constraint and limited steering power. A predictive control algorithm is developed to obtain an optimal velocity profile under the robot's dynamic constraints for the snake tracking. Together these form a unified solution for robot navigation by distributed sensors, dealing with the kinematic and dynamic constraints of the robot and reacting to dynamic changes in advance. Simulations and experiments demonstrate the capability of a wireless sensor network to carry out low-level control activities for a vehicle. / Royal Society, Natural Science Funding Council (China)
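The reference snake described above can be pictured as a chain of path nodes deformed by internal elastic forces and by repulsive forces from obstacles. The sketch below shows one such iteration in its simplest form; the force models and gains are illustrative assumptions, and the paper's three-state (flexible/rigid/broken) logic and distributed sensing are omitted.

```python
# Minimal sketch of one snake iteration: a chain of path nodes is pulled by
# internal elastic forces (toward its neighbours) and pushed by repulsive
# forces from nearby obstacles. Gains and the obstacle model are illustrative
# assumptions; the paper's three-state snake logic is not reproduced here.
import numpy as np

def snake_step(nodes, obstacles, k_int=0.2, k_rep=0.3, influence=1.0):
    """nodes: (N, 2) ordered path points; obstacles: (M, 2) obstacle points."""
    new = nodes.copy()
    for i in range(1, len(nodes) - 1):          # endpoints stay fixed
        # internal force: pull toward the midpoint of the two neighbours
        f_int = 0.5 * (nodes[i - 1] + nodes[i + 1]) - nodes[i]
        # repulsive force: push away from obstacles inside the influence radius
        f_rep = np.zeros(2)
        for obs in obstacles:
            diff = nodes[i] - obs
            d = np.linalg.norm(diff)
            if 1e-6 < d < influence:
                f_rep += (influence - d) / influence * diff / d
        new[i] = nodes[i] + k_int * f_int + k_rep * f_rep
    return new

# straight reference path from (0,0) to (5,0) with an obstacle on the way
path = np.linspace([0.0, 0.0], [5.0, 0.0], 21)
obstacles = np.array([[2.5, 0.05]])
for _ in range(100):
    path = snake_step(path, obstacles)
print("max deviation from the straight line: %.2f m" % np.abs(path[:, 1]).max())
```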
35

Navigation and Control of an Autonomous Vehicle

Schworer, Ian Josef 19 May 2005
The navigation and control of an autonomous vehicle is a highly complex task. Making a vehicle intelligent and able to operate "unmanned" requires extensive theoretical as well as practical knowledge. An autonomous vehicle must be able to make decisions and respond to situations completely on its own. Navigation and control serve as the major limitation on the overall performance, accuracy and robustness of an autonomous vehicle. This thesis addresses this problem and proposes a unique navigation and control scheme for an autonomous lawn mower (ALM). Navigation is a key aspect when designing an autonomous vehicle: it must be able to sense its location, navigate toward its destination, and avoid obstacles it encounters. Since this thesis attempts to automate the lawn-mowing process, it presents a navigational algorithm that covers a bounded region in a systematic way while avoiding obstacles. This algorithm has many applications, including search and rescue, floor cleaning, and lawn mowing, and its robustness and utility are demonstrated in a 3D simulation. The thesis specifically studies the dynamics of a two-wheeled differential-drive vehicle. Using this dynamic model, various control techniques can then be applied to control the movement of the vehicle. Both open-loop and closed-loop control schemes are considered; optimal control, path following, and trajectory tracking are all simulated and evaluated as practical solutions for control of an ALM. Designing and building an autonomous vehicle requires the integration of many sensors, actuators, and controllers, with software serving as the glue that fuses all these devices together. The thesis suggests various sensors and actuators that could be used to physically implement an ALM, describes the operation of each sensor and actuator, presents the software used to control the system, and discusses physical limitations and constraints that might be encountered while building an ALM. / Master of Science
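For reference, the standard kinematic model of a two-wheeled differential-drive vehicle maps left and right wheel speeds to a forward velocity and a yaw rate, which are then integrated to update the pose. The sketch below shows this textbook model only, not the thesis's full dynamic model; the wheel radius, track width, and sample time are assumed values.

```python
# Standard differential-drive kinematic update (not the thesis's full dynamic
# model): wheel speeds map to body velocities, which are integrated to a pose.
# Wheel radius, track width, and sample time are assumed values.
import math

WHEEL_RADIUS = 0.1   # m, assumed
TRACK_WIDTH = 0.5    # m, distance between the two drive wheels, assumed

def step(pose, omega_l, omega_r, dt=0.02):
    """pose = (x, y, theta); omega_l/omega_r are wheel angular speeds (rad/s)."""
    v = WHEEL_RADIUS * (omega_r + omega_l) / 2.0          # forward speed
    w = WHEEL_RADIUS * (omega_r - omega_l) / TRACK_WIDTH  # yaw rate
    x, y, theta = pose
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)

# drive straight for 1 s, then turn in place for 1 s
pose = (0.0, 0.0, 0.0)
for _ in range(50):
    pose = step(pose, 5.0, 5.0)       # equal wheel speeds -> straight line
for _ in range(50):
    pose = step(pose, -2.0, 2.0)      # opposite speeds -> rotation in place
print("pose: x=%.2f m, y=%.2f m, heading=%.1f deg"
      % (pose[0], pose[1], math.degrees(pose[2])))
```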
36

Vision & laser for road based navigation

Napier, Ashley A. January 2014
This thesis presents novel solutions for two fundamental problems associated with autonomous road driving. The first is accurate and persistent localisation and the second is automatic extrinsic sensor calibration. We start by describing a stereo Visual Odometry (VO) system, which forms the basis of later chapters. This sparse approach to ego-motion estimation leverages the efficacy and speed of the BRIEF descriptor to measure frame-to-frame correspondences and infer subsequent motion. The system is able to output locally metric trajectory estimates as demonstrated on many kilometres of data. We then present a robust vision-only localisation system based on a two-stage approach. Firstly we gather a representative survey in ideal weather and lighting conditions. We then leverage locally accurate VO trajectories to synthesise a high-resolution orthographic image strip of the road surface. This road image provides a highly descriptive and stable template against which to match subsequent traversals. During the second phase, localisation, we use the VO to provide high-frequency pose updates, but correct for the drift inherent in all locally derived pose estimates with low-frequency updates from a dense image matching technique. Here a live image stream is registered against synthesised views of the road image generated from the survey. We use an information-theoretic measure, Mutual Information, to determine the alignment of live images and synthesised views. Using this measure we are able to successfully localise subsequent traversals of surveyed routes under even the most intense lighting changes expected in outdoor applications. We demonstrate our system localising in multiple environments with accuracy commensurate with that of an Inertial Navigation System. Finally we present a technique for automatically determining the extrinsic calibration between a camera and Light Detection And Ranging (LIDAR) sensor in natural scenes. Rather than requiring a stationary platform as with prior art, we actually exploit platform motion, allowing us to aggregate data and adopt a retrospective approach to calibration. Coupled with accurate timing, this retrospective approach allows sensors with non-overlapping fields of view to be calibrated as long as at some point the observed workspaces overlap. We then show how we can improve the accuracy of our calibration estimates by treating each single-shot estimate as a noisy measurement and fusing them together using a recursive Bayes filter. We evaluate the calibration algorithm in multiple environments and demonstrate millimetre precision in translation and deci-degrees in rotation.
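The calibration chapter treats each single-shot extrinsic estimate as a noisy measurement and fuses them with a recursive Bayes filter. The sketch below shows that fusion step for a single scalar parameter with Gaussian noise, which is an illustrative simplification; the real calibration estimates a full 6-DoF transform, and the noise values here are assumptions.

```python
# Hedged sketch of fusing single-shot calibration estimates as noisy
# measurements with a recursive Bayes (Kalman-style) update. A 1-D parameter
# with Gaussian noise is assumed purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_offset = 0.25                     # metres, hypothetical true translation
meas_std = 0.05                        # assumed single-shot noise

mean, var = 0.0, 1.0                   # broad Gaussian prior over the offset
for k in range(30):
    z = true_offset + rng.normal(0.0, meas_std)   # one single-shot estimate
    gain = var / (var + meas_std ** 2)            # standard Gaussian update
    mean = mean + gain * (z - mean)
    var = (1.0 - gain) * var
print(f"fused estimate: {mean:.3f} m (+/- {np.sqrt(var):.3f}), true: {true_offset} m")
```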
37

Construção de mapas de ambiente para navegação de robôs móveis com visão omnidirecional estéreo. / Map building for mobile robot navigation with omnidirectional stereo vision.

Cláudia Cristina Ghirardello Deccó 23 April 2004
The problem of mobile robot navigation has been studied for many years, with the aim of building a robot with a high degree of autonomy. The autonomy of a mobile robot increases with its capacity to acquire information and to automate tasks such as environment map building. Vision systems are widely used in autonomous robot tasks because of the great amount of information contained in an image. In addition, catadioptric omnidirectional sensors provide visual information in a single 360° image, eliminating the need to move the camera toward directions of interest for the robot's task. Environment maps can be built to implement more autonomous navigation strategies. In this work a methodology is developed for building navigation maps, which represent the geometry of the environment and contain the information acquired by a stereo omnidirectional catadioptric sensor built from a camera and a hyperbolic mirror. For map building, the alignment, correspondence, and integration processes are carried out using angular-difference and distance metrics between points. A global map of the environment is then created by fusing the local maps. The global map-building process developed here can be coupled with path-planning, free-space estimation, and self-localization algorithms in order to achieve autonomous navigation.
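Fusing local maps into a global map requires aligning them before integration. The sketch below shows a standard least-squares (Kabsch) rigid alignment of two small 2-D point maps with known correspondences, followed by a naive merge; this is a generic illustration of the alignment step, not necessarily the metric-based procedure used in the thesis.

```python
# Generic sketch of fusing two local maps: given corresponding points in two
# local maps, estimate the rigid transform (rotation + translation) that aligns
# them and merge into a global map. Standard least-squares (Kabsch) alignment.
import numpy as np

def align(src, dst):
    """Least-squares 2-D rigid transform mapping src points onto dst points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# local map B: the same landmarks seen from a pose rotated 30 deg and shifted
map_a = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.5, 1.5]])
th = np.radians(30)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
map_b = (map_a - np.array([1.0, 0.5])) @ R_true.T

R, t = align(map_b, map_a)                           # express map_b in map_a's frame
global_map = np.vstack([map_a, map_b @ R.T + t])     # merged global map
print("alignment error:", np.abs(map_b @ R.T + t - map_a).max())
```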
39

The Hippocampus code: a computational study of the structure and function of the hippocampus

Rennó Costa, César 17 September 2012
There is no consensual understanding of what the activity of hippocampal neurons represents. While experiments with humans foster a dominant view of the hippocampus as an episodic memory system, experiments with rodents promote its role as a spatial cognitive system. Although there is abundant evidence pointing to an overlap between these two theories, the dissociation is sustained by conflicting physiological data. This thesis proposes that the functional role of the hippocampus should be analyzed in terms of its structure and function rather than through the correlation of neuronal activity and behavioral performance. The identification of the hippocampus code, i.e. the set of computational principles underlying the input-output transformations of neural activity, might ultimately provide a unifying understanding of its role. In this thesis we present a theoretical model that quantitatively describes and interprets the selectivity of regions of the hippocampus to spatial and non-spatial variables observed in experiments with rats. The results suggest that the multiple aspects of memory expressed in human and rodent data derive from similar principles. This approach suggests new principles for memory, pattern completion, and plasticity. In addition, by creating a causal tie between the neural circuitry and behavior through a robotic control framework, we show that the conjunctive nature of neural selectivity observed in the hippocampus is needed for effective problem solving in real-world tasks such as foraging. Altogether, these results advance the concept that the hippocampal code is generic to the different aspects of memory highlighted in the literature.
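Pattern completion, one of the principles mentioned above, can be illustrated with a textbook attractor network that recovers a stored pattern from a corrupted cue. The sketch below is such a generic Hopfield-style demonstration, not the hippocampal model developed in the thesis; the pattern count, network size, and update rule are arbitrary choices.

```python
# Textbook illustration of pattern completion in a Hopfield-style attractor
# network -- a generic stand-in for the concept, not the thesis's model.
import numpy as np

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 64))          # 3 stored binary memories
W = sum(np.outer(p, p) for p in patterns) / 64.0      # Hebbian weights
np.fill_diagonal(W, 0.0)

cue = patterns[0].copy()
cue[:20] = rng.choice([-1, 1], size=20)               # corrupt part of the memory

state = cue
for _ in range(10):                                    # synchronous updates
    state = np.sign(W @ state)
    state[state == 0] = 1
overlap = (state * patterns[0]).mean()
print(f"overlap with the stored memory after completion: {overlap:.2f}")
```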
40

Navegação robótica em redes de sensores sem fio baseada no RSSI / RSSI-based robot navigation in wireless sensor networks

Carvalho Júnior, Antônio Ramos de 27 March 2013
Wireless Sensor Networks (WSNs) are commonly used in monitoring applications because of their sensing, processing, and communication capabilities and their low cost. However, one limitation of WSNs is energy: each device (sensor node) in the network must have low energy consumption, which rules out extra hardware such as GPS. On the other hand, robots can assist in the monitoring performed by a WSN. One possible application of robots in WSNs is the search for events of interest, in which a robot navigates through the network until it finds a specific event, using the received signal strength (RSSI) as a reference for navigation. Solutions to this problem exist in the literature; however, they assume an idealized propagation model in which the regression curve of RSSI versus distance is ideal for the scenario at hand. This dissertation presents two algorithms that solve the problem of RSSI-based robot navigation toward an event. The first algorithm is based on detecting the edge of a node's signal coverage, and the second uses probability to estimate the distance and direction of the target node. To this end, we carried out experiments in the Amazon rainforest to measure RSSI as a function of distance and reproduced the resulting signal propagation model in a simulator. Simulations based on the solutions from the literature showed that their arrival rate is inversely related to the starting distance from the target when they are subjected to the propagation model observed in the experiments. The two algorithms presented here were developed taking this measured propagation model into account, and both reach their target in 100% of the cases.
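A common way to relate RSSI to distance is the log-distance path-loss model, which can also be inverted to obtain a range estimate from a reading. The sketch below uses that generic model with illustrative constants; the reference power, path-loss exponent, and noise level are assumptions, not the values measured in the thesis's Amazon-rainforest experiments.

```python
# Generic log-distance path-loss sketch for estimating range from RSSI.
# Reference power, path-loss exponent, and noise level are illustrative values,
# not the ones measured in the thesis's experiments.
import numpy as np

P0 = -45.0        # assumed RSSI (dBm) at the reference distance d0 = 1 m
N_EXP = 2.8       # assumed path-loss exponent (typically higher in vegetation)

def rssi_at(distance_m, sigma=4.0, rng=None):
    """Simulated RSSI reading with log-normal shadowing noise (dB)."""
    rng = rng or np.random.default_rng()
    return P0 - 10.0 * N_EXP * np.log10(distance_m) + rng.normal(0.0, sigma)

def distance_from_rssi(rssi_dbm):
    """Invert the log-distance model to get a point estimate of the range."""
    return 10.0 ** ((P0 - rssi_dbm) / (10.0 * N_EXP))

rng = np.random.default_rng(42)
true_d = 12.0
readings = [rssi_at(true_d, rng=rng) for _ in range(20)]
est = distance_from_rssi(np.mean(readings))   # averaging readings reduces noise
print(f"true distance {true_d} m, estimated {est:.1f} m")
```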
