41

Contributions to the use of 3D lidars for autonomous navigation: calibration and qualitative localization

Muhammad, Naveed 01 February 2012
In order to autonomously navigate in an environment, a robot has to perceive its environment correctly. Rich perception information about the environment enables the robot to perform tasks like avoiding obstacles, building terrain maps, and localizing itself. Classically, outdoor robots have perceived their environment using vision or 2D lidar sensors. The introduction of novel 3D lidar sensors such as the Velodyne HDL-64E S2 has enabled robots to rapidly acquire rich 3D data about their surroundings. These novel sensors call for the development of techniques that efficiently exploit their capabilities for autonomous navigation. The first part of this thesis presents a technique for the calibration of 3D lidar devices. The calibration technique is based on the comparison of acquired 3D lidar data to a ground-truth model in order to estimate the optimal values of the calibration parameters. The second part of the thesis presents a technique for qualitative localization and loop-closure detection for autonomous mobile robots, by extracting and indexing small-sized signatures from 3D lidar data. The signatures are based on histograms of local surface-normal information that is efficiently extracted from the lidar data by exploiting the layout of the laser beams within the device. Experimental results illustrate the developments throughout the manuscript.
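To make the signature construction concrete, here is a minimal Python sketch of a histogram-of-normals descriptor for a 3D point cloud. It is an illustration under stated assumptions, not the thesis' implementation: normals come from a generic k-nearest-neighbor plane fit rather than from the lidar's beam layout, and the function name, neighborhood size, and bin count are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def normal_histogram_signature(points, k=10, n_bins=16):
    """Small-sized signature for a point cloud: a normalized histogram of
    local surface-normal inclinations (angle between normal and vertical).

    points: (N, 3) array of lidar returns; k: neighborhood for the plane
    fit; n_bins: signature length. All values are illustrative defaults.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)          # k nearest neighbors per point
    normals = np.empty((len(points), 3))
    for i, neigh in enumerate(idx):
        # Normal of the local plane fit = eigenvector of the smallest
        # eigenvalue of the neighborhood covariance matrix.
        cov = np.cov(points[neigh].T)
        _, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
        normals[i] = eigvecs[:, 0]
    # Fold normals into [0, pi/2] so the signature ignores the normal's sign.
    inclination = np.arccos(np.clip(np.abs(normals[:, 2]), 0.0, 1.0))
    hist, _ = np.histogram(inclination, bins=n_bins, range=(0.0, np.pi / 2))
    return hist / max(hist.sum(), 1)
```

Two scans can then be compared with any histogram distance (chi-square, for instance) to index places and hypothesize loop closures.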
42

Visual Crop Row Detection with Auxiliary Segmentation Task for Mobile Robot Navigation

Igor Ferreira da Costa 07 November 2023
Autonomous robots for agricultural tasks have been researched extensively in recent years, as they could greatly improve field efficiency. However, navigating an open crop field is still a great challenge. RTK-GNSS is an excellent tool for tracking the robot's position, but it needs precise mapping and planning, while also being expensive and dependent on signal quality. As such, onboard systems that can sense the field directly to guide the robot are a good alternative. Those systems detect the rows with image processing techniques and estimate the position by applying algorithms, such as the Hough transform or linear regression, to the obtained mask. In this work, a direct approach is presented by training a neural network model to obtain the position of crop rows directly from an RGB image. While the camera in these systems usually looks down at the field, a camera near the ground is proposed to take advantage of the tunnels or walls of plants formed between rows. A simulation environment for evaluating both the model's performance and camera placement was developed and made available on GitHub. Four datasets for training the models are also proposed: two for the simulations and two for the real-world tests. The results from the simulation are shown across different resolutions and stages of plant growth, indicating the system's capabilities and limitations, and some of the best configurations are then verified in two types of agricultural environments.
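For contrast with the learned model, the following is a minimal sketch of the classical baseline mentioned above: a Hough transform applied to a binary plant mask to recover the dominant row's heading error and lateral offset. The function name, threshold, and near-vertical-row geometry are assumptions for illustration, not code from the thesis.

```python
import cv2
import numpy as np

def crop_row_offset(mask):
    """Estimate (angle_deg, offset_px) of the dominant crop row from a
    binary plant mask (plant pixels = 255), relative to the image's
    vertical centerline. Returns None if no usable line is found.
    """
    h, w = mask.shape
    lines = cv2.HoughLines(mask, 1, np.pi / 180, 100)  # rho/theta accumulator
    if lines is None:
        return None
    rho, theta = lines[0][0]                  # strongest accumulator peak
    c, s = np.cos(theta), np.sin(theta)
    if abs(c) < 1e-6:
        return None                           # horizontal line: not a row here
    # theta is measured from the x axis to the line's normal, so for a
    # near-vertical row it is also the angular deviation from vertical.
    angle_deg = np.degrees(theta if theta <= np.pi / 2 else theta - np.pi)
    # Intersect the line x*cos(theta) + y*sin(theta) = rho with y = h/2.
    x_mid = (rho - (h / 2) * s) / c
    return angle_deg, x_mid - w / 2
```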
43

Comparing Learned Representations Between Unpruned and Pruned Deep Convolutional Neural Networks

Mitchell, Parker 01 June 2022
While deep neural networks have shown impressive performance in computer vision tasks, natural language processing, and other domains, the sizes and inference times of these models can often prevent them from being used on resource-constrained systems. Furthermore, as these networks grow larger in size and complexity, it can become even harder to understand the learned representations of the input data that these networks form through training. These issues of growing network size, increasing complexity and runtime, and ambiguity in the understanding of internal representations serve as guiding points for this work. In this thesis, we create a neural network that is capable of predicting up to three path waypoints given an input image. This network is used in conjunction with other networks to help guide an autonomous robotic vehicle. Since this neural network will be deployed to an embedded system, it is important that our network is efficient. As such, we use a network compression technique known as L1-norm pruning to reduce the size of the network and speed up the inference time while retaining similar loss. Furthermore, we investigate the effects that pruning has on the internal learned representations of models by comparing unpruned and pruned network layers using projection-weighted canonical correlation analysis (PWCCA). Our results show that for deep convolutional neural networks (CNNs), PWCCA similarity scores between early convolutional layers start low and then gradually increase towards the final layers of the network, with some peaks in the intermediate layers. We also show that for our deep CNN, linear layers at the end of the network also exhibit very high similarity, serving to guide the dissimilar representations from intermediate convolutional layers to a common representation that yields similar network performance between unpruned and pruned networks.
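As a concrete illustration of the pruning criterion, here is a short PyTorch sketch of magnitude-based L1-norm filter scoring. It shows the common scheme only; the thesis' exact pruning ratios, layer handling, and fine-tuning schedule are not reproduced, and both function names are hypothetical.

```python
import torch
import torch.nn as nn

def l1_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    """Score each output filter by the L1 norm of its weights; low-norm
    filters contribute least and are candidates for removal."""
    # conv.weight shape: (out_channels, in_channels, kH, kW)
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def prune_mask(conv: nn.Conv2d, keep_ratio: float = 0.7) -> torch.Tensor:
    """Boolean mask over output channels (True = keep the filter)."""
    scores = l1_filter_scores(conv)
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = scores.topk(k).values.min()
    return scores >= threshold
```

A full pipeline then rebuilds the layer with only the kept filters (trimming the next layer's input channels to match) and fine-tunes; PWCCA is computed afterwards between the activations of corresponding unpruned and pruned layers.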
44

Omnidirectional Vision for an Autonomous Surface Vehicle

Gong, Xiaojin 07 February 2009
Due to their wide field of view, omnidirectional cameras have been extensively used in many applications, including surveillance and autonomous navigation. In order to implement a fully autonomous system, one of the essential problems is the construction of an accurate, dynamic environment model. In computer vision this is called structure from stereo or motion (SFSM). The work in this dissertation addresses omnidirectional-vision-based SFSM for the navigation of an autonomous surface vehicle (ASV), and implements a vision system capable of locating stationary obstacles and detecting moving objects in real time. The environments where the ASV navigates are complex and full of noise, so system performance is a primary concern. In this dissertation, we thoroughly investigate the range-estimation performance of our omnidirectional vision system with regard to different omnidirectional stereo configurations and various kinds of noise, for instance disturbances in calibration, stereo configuration, and image processing. The result of this performance analysis is very important for our applications: it not only impacts the ASV's navigation, but also guides the development of our omnidirectional stereo vision system. Another big challenge is dealing with the noisy image data obtained in riverine environments. In our vision system, a four-step image processing procedure is designed: feature detection, feature tracking, motion detection, and outlier rejection. The choice of point-wise features and an outlier-rejection-based method makes motion detection and stationary obstacle detection efficient. Long-run outdoor experiments were conducted in real time and show the effectiveness of the system. / Ph. D.
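The quadratic growth of stereo range error with distance is at the heart of such a sensitivity analysis. Here is a small sketch under a simplifying assumption: a rectified pinhole stereo pair rather than the dissertation's omnidirectional geometry, with purely illustrative numbers.

```python
import numpy as np

def stereo_range(f_px, baseline_m, disparity_px):
    """Range from a rectified stereo pair: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def range_error(f_px, baseline_m, disparity_px, disparity_noise_px):
    """First-order range uncertainty caused by matching noise:
    |dZ| ~ Z^2 / (f * B) * |dd|, so error grows quadratically with range.
    This is why the stereo configuration (baseline, focal length)
    dominates performance at a distance."""
    z = stereo_range(f_px, baseline_m, disparity_px)
    return z ** 2 / (f_px * baseline_m) * disparity_noise_px

# Example: 0.5 px of matching noise at ~20 m with a 30 cm baseline.
z = stereo_range(f_px=700, baseline_m=0.3, disparity_px=10.5)
print(f"range {z:.1f} m, error {range_error(700, 0.3, 10.5, 0.5):.2f} m")
```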
45

A Collection of Computer Vision Algorithms Capable of Detecting Linear Infrastructure for the Purpose of UAV Control

Smith, Evan McLean 06 July 2016
One of the major application areas for UAVs is the automated traversal and inspection of infrastructure. Much of this infrastructure is linear, such as roads, pipelines, rivers, and railroads. Rather than hard-coding all of the GPS coordinates along these linear components into a flight plan for the UAV to follow, one can take advantage of computer vision and machine learning techniques to detect and travel along them. For roads and railroads, two separate algorithms were developed to detect the angle and distance offset of the UAV from these linear infrastructure components, to serve as control inputs for a flight controller. The road algorithm relies on a Gaussian SVM to segment road pixels from rural farmland using color-plane and texture data. This resulted in a classification accuracy of 96.6% across a 62-image dataset collected at Kentland Farm. A trajectory can then be generated by fitting the classified road pixels to polynomial curves. These trajectories can even be used to take specific turns at intersections based on a user-defined turn direction, and have been proven through hardware-in-the-loop simulation to produce a mean cross-track error of only one road width. The combined segmentation and trajectory algorithm was implemented on a PC (i7-4720HQ 2.6 GHz, 16 GB RAM) at 6.25 Hz and on a myRIO 1900 at 1.5 Hz, proving its capability for real-time UAV control. As for the railroad algorithm, template matching was first used to detect railroad patterns. Upon detection, a region of interest around the matched pattern was used to guide a custom edge detector and a Hough transform to detect the straight lines of the rails. This algorithm has been shown to detect the rails, and thus the angle and distance offset, correctly on all images matching the railroad pattern template, and can run at 10 Hz on the aforementioned PC. / Master of Science
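A minimal OpenCV sketch of the railroad pipeline described above (template match, then edges and a probabilistic Hough transform inside a region of interest). The confidence threshold, ROI size, and Hough parameters are assumptions, not the thesis' tuned values, and Canny stands in for the custom edge detector.

```python
import cv2
import numpy as np

def detect_rails(frame_gray, template):
    """Find the rail pattern by template matching, then detect the rail
    lines near the match. Returns line segments (x1, y1, x2, y2) from
    which angle and distance offset can be computed, or None."""
    res = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, (x, y) = cv2.minMaxLoc(res)   # best-match score/location
    if score < 0.6:                            # assumed confidence threshold
        return None
    th, tw = template.shape
    roi = frame_gray[max(0, y - th): y + 2 * th,
                     max(0, x - tw): x + 2 * tw]
    edges = cv2.Canny(roi, 50, 150)            # stand-in for the custom detector
    return cv2.HoughLinesP(edges, 1, np.pi / 180, 40,
                           minLineLength=th, maxLineGap=10)
```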
46

Multistage Localization for High Precision Mobile Manipulation Tasks

Mobley, Christopher James 03 March 2017
This thesis presents a multistage localization approach for an autonomous industrial mobile manipulator (AIMM). The approach allows tasks with an operational scope beyond the range of the robot's manipulator to be completed without recalibrating the position of the end-effector each time the robot's mobile base moves. This is achieved by localizing the AIMM within its area of operation (AO) using adaptive Monte Carlo localization (AMCL), which relies on the fused odometry and sensor messages published by the robot, as well as a 2-D map of the AO generated with an optimization-based smoothing simultaneous localization and mapping (SLAM) technique. The robot navigates to a predefined start location in the map, incorporating obstacle avoidance through a technique called trajectory rollout. Once there, the robot uses its RGB-D sensor to localize an augmented reality (AR) tag in the map frame. Once the tag is localized, its identity and its 3-D position and orientation (collectively known as pose) are used to generate a list of initial feature points and their locations based on a priori knowledge. After the end-effector moves to the approximate location of a feature point provided by the AR tag localization, the feature point's location, as well as the end-effector's pose, are refined to within a user-specified tolerance through a control loop, which uses images from a calibrated machine vision camera and a laser pointer, simulating stereo vision, to localize the feature point in 3-D space using computer vision techniques and basic geometry. This approach was implemented on two different ROS-enabled robots, the Clearpath Robotics Husky and the Fetch Robotics Fetch, to show the utility of the multistage localization approach in executing two tasks prevalent in both manufacturing and construction: drilling and sealant application. The proposed approach achieved an average accuracy of ±1 mm in these operations, verifying its efficacy for tasks with a larger operational scope than the range of the AIMM's manipulator and its robustness for general manufacturing applications. / Master of Science
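The hand-off between the coarse and fine stages can be pictured as one rigid-body transform: a-priori feature-point offsets expressed in the tag's frame are mapped into the map frame once the tag's pose is known, and the control loop then refines each point. A minimal numpy sketch with hypothetical names and values:

```python
import numpy as np

def tag_to_map(tag_pose, feature_offsets):
    """tag_pose: 4x4 homogeneous pose of the AR tag in the map frame.
    feature_offsets: (N, 3) feature-point locations in the tag frame,
    known a priori. Returns the coarse (N, 3) map-frame locations that
    the camera/laser control loop subsequently refines."""
    pts = np.hstack([feature_offsets, np.ones((len(feature_offsets), 1))])
    return (tag_pose @ pts.T).T[:, :3]

# Example: two drill points 10 cm either side of a tag at (2.0, 1.0, 0.5).
tag_pose = np.eye(4)
tag_pose[:3, 3] = [2.0, 1.0, 0.5]
offsets = np.array([[-0.10, 0.0, 0.0], [0.10, 0.0, 0.0]])
print(tag_to_map(tag_pose, offsets))
```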
47

Methodology to detect obstacles for autonomous navigation of vessels using computer vision

Munhoz, Alexandre 03 September 2010
This work presents a new obstacle detection method used for the navigation of an autonomous boat. The method is based on computer vision and measures the distance and direction from the obstacle to the video camera. The distance of the possible obstacle to the camera and the predominant contour vector of the image are the parameters used to identify obstacles. Stereo images acquired from the margins of the Náutico de Araraquara club lake, using navigation buoys as obstacles, were used to extract the meaningful characteristics for the experiments. To validate the experiment, images from the Broa Reservoir (Itirapina, SP) were used. The developed approach proved more efficient than the traditional method based on potential fields theory. The images were deliberately taken against the sun, where the glare of the waves is erroneously indicated as obstacles by the potential fields method; the proposed method filters the waves so as to reduce their interference in the detection.
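The "predominant contour vector" can be illustrated with a PCA over the largest blob's contour points; wave glitter tends to produce short, stretched blobs whose direction and range differ from real obstacles. This is a sketch of the idea only (not the thesis' exact criterion), with hypothetical names:

```python
import cv2
import numpy as np

def predominant_contour_vector(mask):
    """Dominant orientation (unit vector in image coordinates) of the
    largest blob in a binary obstacle mask, via PCA of its contour."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float64)
    pts -= pts.mean(axis=0)                   # center the contour points
    # Leading right-singular vector = direction of largest spread.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    return vt[0]
```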
48

Development of an autonomous navigation system through GNSS

Gonçalves, Luiz Felipe Sartori 15 April 2011
Autonomous vehicles are the object of growing research around the world. Within transportation engineering, they are a subject expected to bring about a revolution in the coming decades, as the trend toward their use in society is concrete. Among the major beneficiaries are safety, logistics, traffic flow, the environment, and people with disabilities. With the goal of making a vehicle reach a point with known coordinates autonomously, a reduced-scale terrestrial vehicle platform was used. The platform received a microcontroller-based computational system and the technologies needed for mobility (electric motors for traction and servo motors for steering), satellite positioning (a GNSS receiver, with an electronic compass for orientation), ultrasonic sensing for collision avoidance, and wireless communication for remote monitoring and instruction in real time through a personal computer (PC) application. A navigation algorithm was developed which, using the available resources, gave the vehicle the autonomy to navigate to points with known coordinates without human control. The tests evaluated the vehicle's autonomy, the navigation trajectory performed, and the arrival accuracy at the destination points. The vehicle reached the destination points in all tests, and its navigation algorithm, along with the mobility, positioning, sensing, and communication systems, was deemed functional.
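The core of such a navigation algorithm is the loop "compute the bearing to the waypoint, compare it with the compass heading, steer by the difference". Here is a minimal sketch using the standard great-circle bearing formula; the function names and any gain applied to the error are assumptions:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from the current GNSS fix to the target waypoint,
    in degrees clockwise from true north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def steering_error(compass_heading_deg, target_bearing_deg):
    """Signed heading error wrapped to [-180, 180): the sign chooses the
    steering direction, the magnitude scales the servo command."""
    return (target_bearing_deg - compass_heading_deg + 180.0) % 360.0 - 180.0
```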
49

Autonomous vehicle navigation in unstructured outdoor terrains based on computer vision

Klaser, Rafael Luiz 06 June 2014
This work presents a system for autonomous vehicle navigation focused on unstructured environments, with primary applications in open fields with sparse vegetation and in agricultural scenarios. Computer vision is applied as the main perception system, using a stereo camera on a vehicle with an Ackermann kinematic model. Navigation is performed deliberatively by a planner based on a state lattice over a cost map, with localization by odometry and GPS. The cost map is obtained through a probabilistic occupancy model built on an OctoMap. A sensor model is described that updates the OctoMap's spatial occupancy information from point clouds obtained by stereo vision. The points are segmented and filtered taking into account the noise inherent in image acquisition and in the disparity computation used to obtain point distances. Tests were executed in a simulation environment, allowing replication and repetition of the experiments. The vehicle model was described for the Gazebo physics simulator in accordance with the real platform CaRINA I (the LRM-ICMC/USP automated electric vehicle), taking into account this vehicle's kinematic model and limitations. The development is based on ROS (Robot Operating System), customizing the components of this framework's basic navigation architecture. The system was validated in a real environment, in scenarios with uneven terrain and diverse obstacles, and showed satisfactory performance given an approach based on a single stereo camera. This dissertation presents the main components of an autonomous navigation system and the steps necessary for its conception, as well as results of experiments both in simulation and with a real autonomous vehicle.
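The probabilistic occupancy model can be summarized by OctoMap-style log-odds updates with clamping, which lets noisy stereo evidence accumulate and decay per voxel. The constants below are illustrative assumptions, not the dissertation's tuned sensor model:

```python
import math

# Illustrative log-odds constants (assumed, not the tuned values).
L_HIT, L_MISS = 0.85, -0.4      # increments for occupied / free observations
L_MIN, L_MAX = -2.0, 3.5        # clamping bounds, as in OctoMap

def update_cell(logodds, hit):
    """One measurement update for a voxel crossed by a stereo ray:
    add the hit/miss increment and clamp so cells can recover later."""
    logodds += L_HIT if hit else L_MISS
    return max(L_MIN, min(L_MAX, logodds))

def occupancy(logodds):
    """Back to probability for thresholding into the navigation cost map."""
    return 1.0 / (1.0 + math.exp(-logodds))
```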
50

NeuroFSM: learning finite state machines using artificial neural networks applied to mobile robots and autonomous vehicles

Sales, Daniel Oliva 23 July 2012
Autonomous navigation is a fundamental task in mobile robotics. In order to perform this task correctly, an intelligent navigation and control system coupled to the sensing system is necessary. This project presents the development of a control system for the navigation of autonomous mobile robots and vehicles. The adopted approach uses Artificial Neural Networks to learn Finite State Machines, allowing robots to deal with data from their sensors even when it is imprecise or erroneous, while at the same time taking into account the different situations and states the robots find themselves in (context). This way, it is possible to decide how to act to control the robot's motion, and thus execute navigation and control tasks from the simplest up to the most complex, high-level ones. This work therefore uses Artificial Neural Networks to recognize the robot's current state (context) with respect to the environment it operates in. Once its state is identified, which may include identifying its position relative to elements present in the environment, the robot is able to decide which action/behavior should be executed. The navigation and control system implements a Finite State Machine that defines the current action from the current state, is able to identify state changes, and thus switches between previously defined behaviors. To validate this proposal, numerous experiments were performed using a robotic simulator (Player/Stage) and through tests with real robots (Pioneer P3-AT, SRV-1, and automated vehicles).
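The NeuroFSM idea can be pictured in a few lines: a trained classifier maps a noisy sensor vector to a discrete state, and a state-to-behavior table realizes the finite state machine. Below is a sketch with scikit-learn in which the states, behaviors, and network size are all hypothetical, standing in for whatever model was actually trained:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical states and the behavior each one triggers.
BEHAVIOR = {"corridor": "follow_center",
            "corner": "turn_in_place",
            "open": "go_to_goal"}

def train_state_classifier(X, y):
    """X: (N, n_sensors) range readings; y: state labels for each row."""
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
    clf.fit(X, y)
    return clf

def fsm_step(clf, state, readings):
    """One control cycle: classify the context from raw readings, switch
    state if it changed, and return the behavior to execute."""
    new_state = clf.predict(readings.reshape(1, -1))[0]
    return new_state, BEHAVIOR[new_state]
```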
