31

A Deep Learning Approach to Autonomous Relative Terrain Navigation

Campbell, Tanner January 2017 (has links)
Autonomous relative terrain navigation is a problem at the forefront of many space missions involving close-proximity operations around a target body. No definitive solution exists: many techniques using both passive and active sensors help cope with the problem, but almost all require high-fidelity models of the associated dynamics in the environment. Convolutional Neural Networks (CNNs) trained with images rendered from a digital terrain map (DTM) of the body's surface provide a way to side-step the issue of unknown or complex dynamics while still providing reliable autonomous navigation. This is achieved by directly mapping an image to a position relative to the target body. The portability of trained CNNs allows "offline" training that can yield a mature network capable of being loaded onto a spacecraft for real-time position acquisition. In this thesis the lunar surface is used as the proving ground for this optical navigation technique, but the methods are not unique to the Moon and are applicable in general.
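A minimal sketch of the image-to-position mapping, assuming PyTorch; the layer sizes and the three-component output are illustrative stand-ins, not the network trained in the thesis:

    import torch
    import torch.nn as nn

    class TerrainPoseCNN(nn.Module):
        """Maps a grayscale terrain image to a 3-vector relative position."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
            self.regressor = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
                nn.Linear(128, 3),  # (x, y, z) relative to the target body
            )

        def forward(self, image):
            return self.regressor(self.features(image))

    # Training pairs: images rendered from the DTM, each labeled with the
    # camera position used to render it.
    model = TerrainPoseCNN()
    loss_fn = nn.MSELoss()
    pred = model(torch.randn(8, 1, 128, 128))   # batch of rendered views
    loss = loss_fn(pred, torch.randn(8, 3))     # known render positions

In the offline regime described above, the labels come for free: every training image is rendered from the DTM at a known camera position, which serves as the regression target.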
32

Geração de mapas de ambientes utilizando um sistema de percepção LIDAR - 3D / Environmental maps generation using LIDAR - 3D perception system

Alvarez-Jácobo, Justo Emilio, 1973- 12 June 2013 (has links)
Advisor: Pablo Siqueira Meirelles / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica / Abstract: This work presents the study and development of a perception system based on LIDAR telemetric sensors. An LMS-3D three-dimensional laser scanning platform is built for autonomous robot navigation. The navigable area is obtained from telemetric maps, characterized with occupancy grid (OG) algorithms (in two dimensions with a colored third dimension, and in 3D) and with the calculation of vector gradients. Two types of navigable area are characterized: (i) the primary navigation area, a free area inside the OG; and (ii) the continuous navigation area, the sum of the continuous areas and the gradients classified against a given threshold. This threshold indicates whether an area is navigable given the robot's characteristics. The proposal was evaluated experimentally in a real environment, covering obstacle detection and the identification of discontinuities / Doctorate / Solid Mechanics and Mechanical Design / Doctor of Mechanical Engineering
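As a hedged illustration of the vector-gradient step, the following NumPy sketch classifies cells of an elevation grid as navigable by thresholding the local gradient magnitude; the grid values and threshold are made up, not the thesis parameters:

    import numpy as np

    def navigable_mask(elevation, cell_size, max_slope):
        """Mark grid cells whose local slope the robot can traverse.

        elevation : 2-D array of cell heights (from the LIDAR map)
        cell_size : grid resolution in metres
        max_slope : threshold (rise/run) set by the robot's capability
        """
        gy, gx = np.gradient(elevation, cell_size)
        slope = np.hypot(gx, gy)      # magnitude of the height gradient
        return slope <= max_slope     # True where navigation is possible

    grid = np.random.rand(200, 200) * 0.3   # synthetic 0-30 cm height map
    free = navigable_mask(grid, cell_size=0.1, max_slope=0.4)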
33

Next generation low-cost automated guided vehicle

Dzezhyts, Yevheniy January 2020 (has links)
Automated guided vehicles (AGVs) are key equipment in flexible production systems and an important means of realizing a modern logistics system that meets the demands of Industry 4.0. AGVs have been used since the mid-1950s to delegate the monotonous work of delivering products from humans to automated devices. In the long run, the use of AGVs brings large benefits to manufacturing companies, but the purchase and installation of these devices significantly increase operational costs. This halts small and medium-sized enterprises from adopting the technology on their shop floors. The idea of this thesis work is to design and create a device that can be retailed at a significantly lower price without compromising flexibility and functional properties, so that it can be used by smaller businesses. To that end, more affordable parts are used to bring down the cost of the final product. This work describes the process of developing a differential-drive mobile platform under the control of the Robot Operating System (ROS). The process includes the development of a virtual model; selection of required components and investigation of their compatibility; development of the chassis, suspension, and gear system; development of a hardware interface to interact with hardware components; configuration of different algorithms for control, cartography, and navigation; and evaluation of the device. The research method used in this work is design and creation, owing to the necessity of creating a physical prototype. The budget specification for the project was set to 50000 SEK and the desired payload capacity to 100 kg. The work has resulted in the creation of a prototype AGV at a project cost of 20595 SEK. Evaluation of the prototype showed a maximum towing force of 300 N; the load capacity, limited by the mobile base, is 400 kg. Safety sensors are not used in this project, as the device is meant to operate in a controlled environment. The work also evaluates the Gmapping algorithm using the RPlidar A1 laser scanner and two navigation-stack local planners: TrajectoryPlannerROS and the DWA planner. The final prototype is evaluated to support autonomous movement within a controlled environment.
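A minimal sketch of the differential-drive command mapping that such a platform's hardware interface would implement; the wheel radius and track width are hypothetical values, not the prototype's:

    def diff_drive_wheel_speeds(v, omega, wheel_radius, track_width):
        """Convert a body command (v in m/s, omega in rad/s) to wheel
        angular velocities for a differential-drive base, the conversion
        a ROS base controller performs for each velocity command."""
        v_left = v - omega * track_width / 2.0
        v_right = v + omega * track_width / 2.0
        return v_left / wheel_radius, v_right / wheel_radius

    # Example: 0.5 m/s forward while turning at 0.2 rad/s
    wl, wr = diff_drive_wheel_speeds(0.5, 0.2, wheel_radius=0.08,
                                     track_width=0.45)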
34

Angles-Only EKF Navigation for Hyperbolic Flybys

Matheson, Iggy 01 August 2019 (has links)
Space travelers in science fiction can drop out of hyperspace and make a pinpoint landing on any strange new world without stopping to get their bearings, but real-life space navigation is an art characterized by limited information and complex mathematics that yield no easy answers. This study investigates, for the first time ever, what position and velocity estimation errors can be expected by a starship arriving at a distant star - specifically, a miniature probe like those proposed by the Breakthrough Starshot initiative arriving at Proxima Centauri. Such a probe consists of nothing but a small optical camera and a small microprocessor, and must therefore rely on relatively simple methods to determine its position and velocity, such as observing the angles between its destination and certain guide stars and processing them in an algorithm known as an extended Kalman filter. However, this algorithm is designed for scenarios in which the position and velocity are already known to high accuracy. This study shows that the extended Kalman filter can reliably estimate the position and velocity of the Starshot probe at speeds characteristic of current space probes, but does not attempt to model the filter’s performance at speeds characteristic of Starshot-style proposals. The gravity of the target star is also estimated using the same methods.
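A minimal sketch of one extended Kalman filter measurement update, assuming NumPy; the planar bearing-only toy problem below stands in for the thesis's star-angle measurements:

    import numpy as np

    def ekf_update(x, P, z, h, H, R):
        """One EKF measurement update.

        x, P : state estimate and covariance
        z    : measured angle(s), e.g. destination-to-guide-star bearings
        h    : nonlinear measurement function h(x)
        H    : Jacobian of h evaluated at x
        R    : measurement noise covariance
        """
        y = z - h(x)                                  # innovation
        S = H @ P @ H.T + R                           # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new

    # Toy example: planar position, one bearing to a beacon at the origin.
    x = np.array([1.0e3, 2.0e3])                      # position guess
    P = np.diag([1.0e4, 1.0e4])
    h = lambda s: np.array([np.arctan2(s[1], s[0])])  # bearing model
    r2 = x[0]**2 + x[1]**2
    H = np.array([[-x[1] / r2, x[0] / r2]])           # d(bearing)/d(position)
    x, P = ekf_update(x, P, np.array([1.11]), h, H, np.diag([1e-6]))

The linearization in H is exactly where the "already known to high accuracy" caveat enters: the Jacobian is evaluated at the current estimate, so a poor prior can make the update diverge.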
35

Contributions to the use of 3D lidars for autonomous navigation : calibration and qualitative localization / Contributions à l'exploitation de lidar 3D pour la navigation autonome : calibrage et localisation qualitative

Muhammad, Naveed 01 February 2012 (has links)
In order to navigate autonomously in an environment, a robot has to perceive its environment correctly. Rich perception information enables the robot to perform tasks like avoiding obstacles, building terrain maps, and localizing itself. Classically, outdoor robots have perceived their environment using vision or 2D lidar sensors. The introduction of novel 3D lidar sensors such as the Velodyne HDL-64E S2 has enabled robots to rapidly acquire rich 3D data about their surroundings. These novel sensors call for the development of techniques that efficiently exploit their capabilities for autonomous navigation. The first part of this thesis presents a technique for the calibration of 3D lidar devices, based on the comparison of acquired 3D lidar data to a ground-truth model in order to estimate the optimal values of the calibration parameters. The second part presents a technique for qualitative localization and loop-closure detection for autonomous mobile robots, by extracting and indexing small-sized signatures from 3D lidar data. The signatures are based on histograms of local surface normal information that is efficiently extracted from the lidar data by exploiting the layout of the laser beams in the device. Experimental results illustrate the developments throughout the manuscript.
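A hedged sketch of a normal-histogram signature, assuming NumPy and SciPy; the neighborhood size and bin count are illustrative, and the thesis extracts normals more efficiently by exploiting the beam layout rather than the generic k-nearest-neighbour PCA used here:

    import numpy as np
    from scipy.spatial import cKDTree

    def normal_histogram_signature(points, k=10, bins=16):
        """Build a compact signature from a 3-D point cloud: estimate a
        surface normal per point (PCA of its k nearest neighbours), then
        histogram the normals' inclination angles."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        inclinations = []
        for nbrs in idx:
            patch = points[nbrs] - points[nbrs].mean(axis=0)
            # least-variance direction of the local patch = surface normal
            _, _, vt = np.linalg.svd(patch, full_matrices=False)
            normal = vt[-1]
            inclinations.append(np.arccos(abs(normal[2])))  # angle to vertical
        hist, _ = np.histogram(inclinations, bins=bins, range=(0, np.pi / 2))
        return hist / hist.sum()  # normalized, comparable between scans

    cloud = np.random.rand(500, 3)
    signature = normal_histogram_signature(cloud)

Because the signature is a short fixed-length vector, scans can be indexed and compared cheaply, which is what makes loop-closure candidates fast to retrieve.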
36

Comparing Learned Representations Between Unpruned and Pruned Deep Convolutional Neural Networks

Mitchell, Parker 01 June 2022 (has links) (PDF)
While deep neural networks have shown impressive performance in computer vision tasks, natural language processing, and other domains, the sizes and inference times of these models can often prevent them from being used on resource-constrained systems. Furthermore, as these networks grow larger in size and complexity, it can become even harder to understand the learned representations of the input data that these networks form through training. These issues of growing network size, increasing complexity and runtime, and ambiguity in the understanding of internal representations serve as guiding points for this work. In this thesis, we create a neural network that is capable of predicting up to three path waypoints given an input image. This network will be used in conjunction with other networks to help guide an autonomous robotic vehicle. Since this neural network will be deployed to an embedded system, it is important that our network is efficient. As such, we use a network compression technique known as L1 norm pruning to reduce the size of the network and speed up the inference time, while retaining similar loss. Furthermore, we investigate the effects that pruning has on the internal learned representations of models by comparing unpruned and pruned network layers using projection weighted canonical correlation analysis (PWCCA). Our results show that for deep convolutional neural networks (CNN), PWCCA similarity scores between early convolutional layers start low and then gradually increase towards the final layers of the network, with some peaks in the intermediate layers. We also show that for our deep CNN, linear layers at the end of the network also exhibit very high similarity, serving to guide the dissimilar representations from intermediate convolutional layers to a common representation that yields similar network performance between unpruned and pruned networks.
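A minimal sketch of L1-norm structured pruning using PyTorch's pruning utilities; the toy convolutional stack and the 30% pruning ratio are stand-ins for the waypoint-prediction network studied in the thesis:

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # Toy conv stack standing in for the waypoint-prediction network.
    model = nn.Sequential(
        nn.Conv2d(3, 32, 3), nn.ReLU(),
        nn.Conv2d(32, 64, 3), nn.ReLU(),
    )

    # L1-norm structured pruning: in each conv layer, zero out the 30%
    # of filters (dim=0) with the smallest L1 norm of their weights.
    for module in model:
        if isinstance(module, nn.Conv2d):
            prune.ln_structured(module, name="weight", amount=0.3,
                                n=1, dim=0)
            prune.remove(module, "weight")  # make the mask permanent

Pruning whole filters (rather than individual weights) is what yields real speedups on embedded hardware, since entire output channels, and the computation that produces them, disappear.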
37

Omnidirectional Vision for an Autonomous Surface Vehicle

Gong, Xiaojin 07 February 2009 (has links)
Due to their wide field of view, omnidirectional cameras have been extensively used in many applications, including surveillance and autonomous navigation. In order to implement a fully autonomous system, one of the essential problems is the construction of an accurate, dynamic environment model; in computer vision this is called structure from stereo or motion (SFSM). The work in this dissertation addresses omnidirectional-vision-based SFSM for the navigation of an autonomous surface vehicle (ASV), and implements a vision system capable of locating stationary obstacles and detecting moving objects in real time. The environments where the ASV navigates are complex and full of noise, so system performance is a primary concern. In this dissertation, we thoroughly investigate the range-estimation performance of our omnidirectional vision system across different omnidirectional stereo configurations and under various kinds of noise, for instance disturbances in calibration, stereo configuration, and image processing. The result of this performance analysis is important for our applications: it not only impacts the ASV's navigation but also guides the development of our omnidirectional stereo vision system. Another big challenge is dealing with the noisy image data obtained in riverine environments. In our vision system, a four-step image processing procedure is designed: feature detection, feature tracking, motion detection, and outlier rejection. The choice of point-wise features and an outlier-rejection-based method makes motion detection and stationary obstacle detection efficient. Long-duration outdoor experiments were conducted in real time and show the effectiveness of the system. / Ph. D.
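A hedged OpenCV sketch of the four-step procedure; the feature and RANSAC parameters are illustrative, not the dissertation's tuned values:

    import cv2

    def track_and_filter(prev_gray, cur_gray):
        """Detect point features, track them with optical flow, then
        reject outliers with RANSAC so that independently moving objects
        stand out from the static scene."""
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                      qualityLevel=0.01, minDistance=7)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                  pts, None)
        good_prev = pts[status.ravel() == 1]
        good_next = nxt[status.ravel() == 1]
        # RANSAC fit of the dominant (camera-induced) motion; tracks that
        # disagree with it are candidate moving objects.
        _, inliers = cv2.findHomography(good_prev, good_next,
                                        cv2.RANSAC, 3.0)
        inliers = inliers.ravel().astype(bool)
        return good_next[inliers], good_next[~inliers]  # static, moving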
38

[pt] DETECÇÃO VISUAL DE FILEIRA DE PLANTAÇÃO COM TAREFA AUXILIAR DE SEGMENTAÇÃO PARA NAVEGAÇÃO DE ROBÔS MÓVEIS / [en] VISUAL CROP ROW DETECTION WITH AUXILIARY SEGMENTATION TASK FOR MOBILE ROBOT NAVIGATION

IGOR FERREIRA DA COSTA 07 November 2023 (has links)
[en] Autonomous robots for agricultural tasks have been researched to a great extent in recent years, as they could greatly improve field efficiency. Navigating an open crop field is still a great challenge. RTK-GNSS is an excellent tool to track the robot's position, but it needs precise mapping and planning while also being expensive and signal-dependent. As such, onboard systems that can sense the field directly to guide the robot are a good alternative. Those systems detect the rows with image processing techniques and estimate the position by applying algorithms such as the Hough transform or linear regression to the obtained mask. In this work, a direct approach is presented: a neural network model is trained to obtain the position of crop lines directly from an RGB image. While the camera in these kinds of systems usually looks down at the field, a camera near the ground is proposed to take advantage of the tunnels or walls of plants formed between rows. A simulation environment for evaluating both the model's performance and camera placement was developed and made available on Github. Four datasets to train the models are also proposed: two for the simulations and two for the real-world tests. The results from the simulation are shown across different resolutions and stages of plant growth, indicating the system's capabilities and limitations. Some of the best configurations are then verified in two types of agricultural environments.
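For contrast with the direct CNN approach, a minimal OpenCV sketch of the classical mask-plus-Hough baseline described above; the vegetation-index threshold and Hough parameters are illustrative:

    import cv2
    import numpy as np

    def crop_rows_hough(image_bgr):
        """Classical baseline: segment vegetation with an excess-green
        index, then fit row lines with the probabilistic Hough transform.
        The thesis replaces this pipeline with a CNN regressing the row
        position directly from the RGB image."""
        b, g, r = cv2.split(image_bgr.astype(np.float32))
        exg = 2 * g - r - b                  # excess-green vegetation index
        mask = (exg > 20).astype(np.uint8) * 255
        lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180,
                                threshold=80, minLineLength=60,
                                maxLineGap=15)
        return lines  # each entry: (x1, y1, x2, y2)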
39

Navegação de veículos autônomos em ambientes externos não estruturados baseada em visão computacional / Autonomous vehicles navigation on external unstructured terrains based in computer vision

Klaser, Rafael Luiz 06 June 2014 (has links)
This work presents a system for autonomous vehicle navigation focused on unstructured environments, with applications in open fields with sparse vegetation and in agricultural scenarios as its primary goal. Computer vision is applied as the main perception system, using a stereo camera on a vehicle with an Ackermann kinematic model. Navigation is performed deliberatively by a path planner based on a lattice state space over a cost map, with localization by odometry and GPS. The cost map is obtained through a probabilistic occupancy model built on an OctoMap. A sensor model is described to update the OctoMap's spatial occupancy information from point clouds obtained by stereo vision. The points are segmented and filtered taking into account the noise inherent in image acquisition and in the disparity calculation used to obtain point distances. Tests were performed in simulation, allowing replication and repetition of the experiments. The vehicle was modeled for the Gazebo physics simulator in accordance with the real platform CaRINA I (the LRM-ICMC/USP automated electric vehicle), taking into account the kinematic model and the limitations of this vehicle. The development is based on ROS (Robot Operating System), customizing the basic navigation architecture of this framework. System validation was performed in a real environment, in scenarios with uneven terrain and diverse obstacles. The system showed satisfactory performance considering an approach based on only one stereo camera. This dissertation presents the main components of an autonomous navigation system and the steps necessary for its conception, as well as results of experiments both in simulation and with a real autonomous vehicle
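A minimal OpenCV sketch of the stereo front end that would feed such an OctoMap occupancy model; the synthetic images, matcher parameters, and reprojection matrix are placeholders, not CaRINA I's calibration:

    import cv2
    import numpy as np

    # Dense disparity from a rectified stereo pair, reprojected to a 3-D
    # point cloud that would then update the occupancy model.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                    blockSize=9)
    left = np.random.randint(0, 256, (240, 320), dtype=np.uint8)   # stand-in
    right = np.random.randint(0, 256, (240, 320), dtype=np.uint8)  # pair
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0
    Q = np.eye(4, dtype=np.float32)      # placeholder reprojection matrix
    points = cv2.reprojectImageTo3D(disparity, Q)
    valid = points[disparity > 0]        # drop pixels with no stereo match

Filtering out low-confidence disparities before insertion is the practical counterpart of the noise-aware sensor model the abstract describes: depth error grows quickly with distance in stereo, so distant points deserve less trust in the occupancy update.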
40

NeuroFSM: aprendizado de Autômatos Finitos através do uso de Redes Neurais Artificiais aplicadas à robôs móveis e veículos autônomos / NeuroFSM: finite state machines learning using artificial neural networks applied to mobile robots and autonomous vehicles

Sales, Daniel Oliva 23 July 2012 (has links)
Autonomous navigation is a fundamental task in mobile robotics. To perform this task accurately, an intelligent navigation and control system must be associated with the sensorial system. This project presents the development of a control system for the navigation of autonomous mobile robots and vehicles. The adopted approach uses Artificial Neural Networks to learn Finite State Machines, allowing robots to deal with sensor data even when that data is imprecise or erroneous, while simultaneously taking into account the different situations and states the robots find themselves in (context detection). This way, it is possible to decide how to proceed with motion control and then execute navigation and control tasks from the simplest up to the most complex and high-level ones. This work therefore uses Artificial Neural Networks to recognize the robot's current state (context) in the environment where it operates. Once the state is detected, which can include identifying the robot's position relative to elements of the environment, the robot is able to determine the action/behavior to be executed. The navigation and control system implements a Finite State Machine that decides the current action from the current state, identifying state changes and thus alternating between different previously defined behaviors. To validate this approach, many experiments were performed using a robotic simulator (Player-Stage) and tests with real robots (Pioneer P3-AT, SRV-1, and autonomous vehicles)
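A minimal sketch of the NeuroFSM pattern in plain Python; the states, behaviors, and stand-in classifier are hypothetical, not the trained networks from the dissertation:

    import numpy as np

    # A trained classifier maps noisy sensor readings to a discrete state,
    # and a finite state machine maps each state to a behavior.
    STATES = ["corridor", "corner", "intersection", "dead_end"]
    BEHAVIOR = {
        "corridor": "follow_wall",
        "corner": "turn",
        "intersection": "choose_branch",
        "dead_end": "u_turn",
    }

    def classify_state(sensor_vector, network):
        """The network replaces hand-written threshold rules, absorbing
        sensor noise; here it is any callable returning class scores."""
        scores = network(sensor_vector)
        return STATES[int(np.argmax(scores))]

    def next_behavior(sensor_vector, network):
        """One FSM step: detect the state, return its bound behavior."""
        state = classify_state(sensor_vector, network)
        return state, BEHAVIOR[state]

    dummy_net = lambda x: np.random.rand(len(STATES))  # stand-in classifier
    state, action = next_behavior(np.random.rand(8), dummy_net)

The division of labor is the point of the approach: the network handles the noisy perception-to-context mapping, while the FSM keeps the behavior switching explicit and easy to define in advance.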
