About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Monocular Vision based Particle Filter Localization in Urban Environments

Leung, Keith Yu Kit 17 September 2007 (has links)
This thesis presents the design and experimental results of a monocular vision-based particle filter localization system for urban settings that uses aerial orthoimagery as a reference map. The topics of perception and localization are reviewed along with their modeling in a probabilistic framework. Computer vision techniques used to create the feature map and to extract features from camera images are discussed. Localization results indicate that the design is viable.
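The predict-weight-resample cycle at the heart of any particle filter localization system can be sketched in a few lines. This is a minimal 1D illustration only: the thesis matches camera features against aerial orthoimagery, which is stood in for here by a hypothetical range measurement to a landmark at a known map position, and all parameters are illustrative.

```python
import math
import random

def predict(particles, motion, noise=0.1):
    """Propagate each particle through a noisy motion model."""
    return [p + motion + random.gauss(0.0, noise) for p in particles]

def update(particles, z, landmark=10.0, sigma=0.5):
    """Weight particles by measurement likelihood, then resample."""
    weights = [math.exp(-((z - (landmark - p)) ** 2) / (2 * sigma ** 2))
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resampling concentrates particles near likely poses.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(1)
true_x = 2.0
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(10):
    true_x += 0.5                        # robot drives forward
    particles = predict(particles, motion=0.5)
    particles = update(particles, z=10.0 - true_x)
estimate = sum(particles) / len(particles)
```

The initially uniform particle cloud collapses around the true pose as measurements accumulate; the posterior mean serves as the pose estimate.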
23

Monocular Vision-Based Obstacle Detection for Unmanned Systems

Wang, Carlos January 2011 (has links)
Many potential indoor applications exist for autonomous vehicles, such as automated surveillance, inspection, and document delivery. A key requirement for autonomous operation is that the vehicles be able to detect and map obstacles in order to avoid collisions. This work develops a comprehensive 3D scene reconstruction algorithm, based on known vehicle motion and vision data, that is specifically tailored to the indoor environment. Visible-light cameras are one of the many sensors available for capturing information from the environment; their key advantages over other sensors are that they are lightweight, power-efficient, and cost-effective, and they provide abundant information about the scene. The emphasis on 3D indoor mapping enables the assumption that a large majority of the area to be mapped consists of planar surfaces such as floors, walls, and ceilings, which can be exploited to simplify the complex task of dense reconstruction of the environment from monocular vision data. In this thesis, the Planar Surface Reconstruction (PSR) algorithm is presented. It extracts surface information from images and combines it with 3D point estimates to generate a reliable and complete environment map. It was designed for single cameras, with the primary assumptions that the objects in the environment are flat, static, and chromatically unique. The algorithm finds and tracks Scale Invariant Feature Transform (SIFT) features across a sequence of images to calculate 3D point estimates. Individual surface information is extracted using a combination of the Kuwahara filter and mean shift segmentation, and is then coupled with the 3D point estimates to fit these surfaces into the environment map. The resultant map consists of both surfaces and points that are assumed to represent obstacles in the scene. A ground vehicle platform was developed for the real-time implementation of the algorithm, and experiments were conducted to assess the PSR algorithm.
Both clean and cluttered scenarios were used to evaluate the quality of the surfaces generated by the algorithm. The clean scenario satisfies the primary assumptions underlying the PSR algorithm and, as a result, produced accurate surface details of the scene, while the cluttered scenario generated lower-quality, but still promising, results. These findings show that incorporating object surface recognition into dense 3D reconstruction can significantly improve the overall quality of the environment map.
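The core geometric step of estimating 3D points from tracked features under known camera motion can be illustrated with standard linear (DLT) triangulation from two views. This is a generic sketch, not the thesis's PSR pipeline: the camera matrices and the observed point are illustrative noise-free values.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                  # dehomogenize

# First camera at the origin; second camera translated 1 unit along x
# (known from vehicle odometry), both with identity intrinsics.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]                   # projection in view 1
X2 = X_true + np.array([-1.0, 0.0, 0.0])      # point in camera-2 frame
x2 = X2[:2] / X2[2]                           # projection in view 2
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free observations the SVD recovers the point exactly; with real tracked features, many such estimates would feed the surface-fitting stage.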
24

Structure from motion using omni-directional vision and certainty grids

Ortiz, Steven Rey 15 November 2004 (has links)
This thesis describes a method to create local maps from an omni-directional vision system (ODVS) mounted on a mobile robot. Range finding is performed by a structure-from-motion method, which recovers the three-dimensional position of objects in the environment from omni-directional images. This leads to map-making, which is accomplished using certainty grids to fuse information from multiple readings into a two-dimensional world model. The system is demonstrated both on noise-free data from a custom-built simulator and on real data from an omni-directional vision system on-board a mobile robot. Finally, to account for the particular error characteristics of a real omni-directional vision sensor, a new sensor model for the certainty grid framework is also created and compared to the traditional sonar sensor model.
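The certainty-grid fusion step described above is conventionally implemented with log-odds updates per cell. The sketch below uses generic hit/miss probabilities, not the omni-directional sensor model the thesis derives; all values are illustrative.

```python
import math

P_HIT = 0.7     # P(occupied | reading reports occupied), assumed value
P_MISS = 0.3    # P(occupied | reading reports free), assumed value

def update_cell(log_odds, hit):
    """Fuse one reading into a cell's log-odds occupancy estimate."""
    p = P_HIT if hit else P_MISS
    return log_odds + math.log(p / (1.0 - p))

def probability(log_odds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

# Three consistent "occupied" readings drive the cell toward certainty.
cell = 0.0                   # log-odds 0 corresponds to probability 0.5
for _ in range(3):
    cell = update_cell(cell, hit=True)
print(round(probability(cell), 3))   # → 0.927
```

Additive log-odds updates make fusing many readings cheap and numerically stable, which is why certainty grids scale well to repeated observations.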
25

Visual Teach and Repeat Using Appearance-based Lidar - A Method For Planetary Exploration

McManus, Colin 14 December 2011 (has links)
Future missions to Mars will place heavy emphasis on scientific sample-and-return operations, which will require a rover to revisit sites of interest. Visual Teach and Repeat (VT&R) has proven to be an effective method of autonomously repeating any previously driven route without a global positioning system. However, one of the major challenges in recognizing previously visited locations is lighting change, which can drastically alter the appearance of the scene. In an effort to achieve lighting invariance, this thesis details the design of a VT&R system that uses a laser scanner as the primary sensor. The key novelty is to apply appearance-based vision techniques traditionally used with camera systems to laser intensity images for motion estimation. Field tests were conducted in an outdoor environment over an entire diurnal cycle, covering more than 11 km with an autonomy rate of 99.7% by distance.
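The teach-and-repeat pattern itself is simple to sketch: the teach pass stores keyframes along the route, and the repeat pass localizes against the best-matching stored keyframe. The scalar "descriptor" below is a hypothetical stand-in for the lidar intensity-image features the thesis actually uses.

```python
class TeachAndRepeat:
    def __init__(self):
        self.keyframes = []              # list of (pose, descriptor)

    def teach(self, pose, descriptor):
        """Teach pass: record a keyframe along the driven route."""
        self.keyframes.append((pose, descriptor))

    def localize(self, descriptor):
        """Repeat pass: return the taught pose whose descriptor
        best matches the current observation."""
        return min(self.keyframes,
                   key=lambda kf: abs(kf[1] - descriptor))[0]

route = TeachAndRepeat()
for pose, desc in [(0.0, 10.0), (1.0, 20.0), (2.0, 30.0)]:
    route.teach(pose, desc)
print(route.localize(21.5))   # → 1.0
```

In a full system, the match against the nearest keyframe seeds a relative motion estimate, so no globally consistent map or GPS is ever needed.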
28

Estudo comparativo de métodos de localização para robôs móveis baseados em mapa / Comparative study of localization methods for mobile robots based on map

Rodrigues, Diego Pereira, 1986- 07 December 2013 (has links)
Advisor: Eleri Cardozo / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: This dissertation presents research on map-based robot localization, analyzing important aspects of the subject and discussing alternatives to some commonly encountered problems.
All methods employed in this work use a probabilistic approach, since this allows the system to incorporate the uncertainty that exists in different parts of the process, making it more robust and reliable. The implemented techniques are the Bayes filter, Markov localization, the dynamic Kalman filter, the extended Kalman filter, and the particle filter (Monte Carlo), with experiments performed in a real environment using metric and grid maps. Some points crucial to the proper functioning of the techniques are also addressed, such as sensor error modeling and how feature extraction from the environment was implemented. Finally, a comparison between these techniques is made, exploring key issues in order to contribute to the literature on the subject. This comparison discusses quantitative aspects, such as the estimation error produced by each method and the runtime of its algorithm, as well as qualitative aspects, such as the difficulty of implementation and the deficiencies of each. A robotic platform was used to carry out this study / Master's degree / Computer Engineering / Master in Electrical Engineering
29

Design and Implementation of Control Techniques for Differential Drive Mobile Robots: An RFID Approach

Miah, Suruz January 2012 (has links)
Localization and motion control (navigation) are two major tasks for successful mobile robot navigation. The motion controller determines the appropriate action for the robot's actuators based on its current state in an operating environment. A robot perceives its environment through sensors and executes physical actions through actuation mechanisms. However, sensory information is noisy, and hence actions generated from this information may be non-deterministic; a mobile robot therefore commands its actuators with a certain degree of uncertainty. Moreover, when no prior knowledge of the environment is available, the problem becomes even more difficult, as the robot has to build a map of its surroundings as it moves in order to determine its position. Skilled navigation of a differential drive mobile robot (DDMR) requires solving these tasks in conjunction, since they are inter-dependent. Having resolved these tasks, mobile robots can be employed in many indoor and outdoor contexts, such as delivering payloads in a dynamic environment, building safety and security, building measurement, research, and driving on highways. This dissertation exploits the emerging Radio Frequency IDentification (RFID) technology for the design and implementation of cost-effective, modular control techniques for navigating a mobile robot in an indoor environment. This process is realized with three separate navigation modules. The first module is devoted to the development of an indoor navigation system with a customized RFID reader: a multiple-antenna RFID reader is mounted on the robot and RFID tags are placed in the three-dimensional workspace, where the tags' orthogonal positions on the ground define the desired positions that the robot is supposed to reach.
The robot generates control actions based on the information provided by the RFID reader in order to navigate those pre-defined points. In contrast, the second and third navigation modules employ custom-made RFID tags (instead of the RFID reader), which are attached at different locations in the navigation environment (on the ceiling of an indoor office, or on posts, for instance). The robot's controller generates appropriate control actions for its actuators based on the information provided by the RFID tags in order to reach target positions or to track a pre-defined trajectory in the environment. All three navigation modules were shown to be able to guide a mobile robot in a highly reverberant environment with varying degrees of accuracy.
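A standard go-to-goal controller for a differential drive robot gives a sense of how a tag-defined target position becomes wheel commands. This is a textbook unicycle-model sketch, not the dissertation's RFID-specific controller; the goal coordinates, gains, and wheelbase are hypothetical.

```python
import math

def go_to_goal(x, y, theta, gx, gy, k_v=0.5, k_w=2.0):
    """Return (v, w) steering the robot toward the goal (gx, gy)."""
    dist = math.hypot(gx - x, gy - y)
    heading = math.atan2(gy - y, gx - x)
    # Wrap the heading error into [-pi, pi].
    err = math.atan2(math.sin(heading - theta), math.cos(heading - theta))
    return k_v * dist, k_w * err

def wheel_speeds(v, w, wheelbase=0.3):
    """Differential drive inverse kinematics: (left, right) speeds."""
    return v - w * wheelbase / 2, v + w * wheelbase / 2

# Simulate the unicycle model driving toward a tag-defined goal (2, 1).
x = y = theta = 0.0
dt = 0.05
for _ in range(400):
    v, w = go_to_goal(x, y, theta, 2.0, 1.0)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
```

Because the forward speed is proportional to the remaining distance and the angular speed to the heading error, the robot turns toward the goal and decelerates smoothly into it.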
30

Sistema para localização robótica de veículos autônomos baseado em visão computacional por pontos de referência / Autonomous robotic vehicle localization system based on computer vision though distinctive features

Leandro Nogueira Couto 18 May 2012 (has links)
The integration of Computer Vision and Mobile Robotics systems is a field of great research interest. This work demonstrates a global localization method for Autonomous Mobile Robots based on the creation of a visual memory map, through the detection and description of reference points from captured images using the SURF method, associated with odometry data in an indoor environment. The proposed procedure, coupled with specific knowledge of the environment, allows localization to be achieved at a later stage by pairing these memorized features with the scene being observed in real time. Experiments are conducted to show the effectiveness of the method for the localization of mobile robots in indoor environments. Improvements aimed at difficult situations such as traversing doors are presented. Results are analyzed, and navigation alternatives and possible future refinements are discussed.
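The visual-memory idea can be sketched as a voting scheme: each memorized place stores feature descriptors plus the pose where they were captured, and a query frame is localized at the place collecting the most nearest-neighbor matches. Random 8-D vectors below are hypothetical stand-ins for SURF's 64-D descriptors.

```python
import numpy as np

rng = np.random.default_rng(42)
memory = []                          # (pose, descriptor array) pairs
for pose in [0.0, 5.0, 10.0]:
    memory.append((pose, rng.normal(size=(20, 8))))

def localize(frame, ratio=0.8):
    """Return the memorized pose gathering the most descriptor
    matches against the query frame."""
    votes = {}
    for pose, descs in memory:
        n = 0
        for d in frame:
            dists = np.linalg.norm(descs - d, axis=1)
            two = np.sort(dists)[:2]
            if two[0] < ratio * two[1]:   # Lowe-style ratio test
                n += 1
        votes[pose] = n
    return max(votes, key=votes.get)

# Query: noisy copies of place 5.0's descriptors should vote for 5.0.
query = memory[1][1] + rng.normal(scale=0.05, size=(20, 8))
```

The ratio test rejects ambiguous matches, so unrelated places collect few votes; the winning pose then seeds fine localization against the paired odometry data.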
