1

Localização e mapeamento simultâneos com auxílio visual omnidirecional. / Simultaneous localization and mapping with omnidirectional vision.

Guizilini, Vitor Campanholo (12 August 2008)
The problem of simultaneous localization and mapping, known as SLAM, is one of the greatest challenges that autonomous mobile robotics faces today. The problem arises from a robot's need to navigate through an unknown environment, building a map of the regions it has already visited while simultaneously localizing itself within that map. The imprecision inherent in the sensors used to estimate the localization and mapping states generates errors that accumulate over time, preventing reliable results after sufficiently long periods of navigation. SLAM algorithms try to eliminate these errors by exploiting the mutual dependence of the two problems and solving them simultaneously, using the results of one step to refine the estimates of the other, and vice versa. One way to achieve this is to establish landmarks in the environment that the robot can use as reference points to localize itself as it navigates. This work presents a solution to the SLAM problem that uses an omnidirectional vision sensor to establish these landmarks. Visual sensors allow the extraction of landmarks that occur naturally in the environment and can be matched robustly from different points of view, and omnidirectional vision widens the robot's field of view, increasing the number of landmarks observed at each instant. When a landmark is detected, it is added to the robot's map of the environment; when it is later recognized, the robot uses that information to refine its localization and mapping estimates, eliminating accumulated errors and keeping the estimates precise even after long periods of navigation.
This solution was tested in real navigation scenarios, and the results show a substantial improvement over those obtained through direct use of the collected sensor information.
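The landmark-driven refinement loop the abstract describes is, in essence, the correction step of a filter-based SLAM algorithm. Below is a minimal sketch of an EKF-style update when one known landmark is re-observed; the state layout, observation model, and noise values are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

# Minimal EKF-SLAM correction step for one re-observed landmark.
# State x = [rx, ry, rtheta, lx, ly]: robot pose plus one landmark position.
# All models and noise values below are illustrative assumptions.

def observe(x):
    """Predicted range-bearing measurement of the landmark from the robot."""
    dx, dy = x[3] - x[0], x[4] - x[1]
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - x[2]])

def observation_jacobian(x):
    """Jacobian of observe() with respect to the full state."""
    dx, dy = x[3] - x[0], x[4] - x[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)
    return np.array([
        [-dx / r, -dy / r,  0.0,  dx / r,  dy / r],
        [ dy / q, -dx / q, -1.0, -dy / q,  dx / q],
    ])

def ekf_landmark_update(x, P, z, R):
    """Fuse measurement z, jointly correcting robot pose and map."""
    H = observation_jacobian(x)
    innovation = z - observe(x)
    innovation[1] = (innovation[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ innovation               # corrected state: pose AND landmark
    P = (np.eye(len(x)) - K @ H) @ P     # corrected covariance
    return x, P

# Example: a drifted pose estimate corrected by re-observing a mapped landmark.
x = np.array([1.2, 0.9, 0.1, 5.0, 4.0])   # estimated robot pose + landmark
P = np.diag([0.5, 0.5, 0.1, 0.2, 0.2])    # accumulated uncertainty
R = np.diag([0.05, 0.02])                 # measurement noise (range, bearing)
z = np.array([5.1, 0.62])                 # observed range and bearing
x, P = ekf_landmark_update(x, P, z, R)
```

Because the landmark coordinates sit in the same state vector as the pose, a single re-observation shrinks the uncertainty of both at once, which is the error-cancelling effect the abstract refers to.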
2

Low-Cost Visual/Inertial Hybrid Motion Capture System for Wireless 3D Controllers

Wong, Alexander (02 May 2007)
It is my thesis that a cost-effective motion capture system for wireless 3D controllers can be developed through the use of low-cost inertial measurement devices and camera systems. Current optical motion capture systems require a number of expensive high-speed cameras, and their high cost makes them impractical for many applications, particularly consumer-level wireless 3D controllers. More importantly, optical systems can directly track an object with only three degrees of freedom. The proposed system addresses these issues by combining a low-cost camera system with low-cost micro-machined inertial measurement devices, such as accelerometers and gyro sensors, to provide accurate motion tracking with a full six degrees of freedom. The system combines the data collected from its various sensors, using a number of calibration, error-correction, and sensor fusion techniques, to obtain position information about the wireless 3D controller in all six degrees of freedom. The key advantage of the proposed system is that it complements the high long-term accuracy but low update frequency of the camera system with the low long-term accuracy but high update frequency of the inertial measurement devices, producing a system with high long-term accuracy and detailed high-frequency information about the motion of the wireless 3D controller.
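The fusion principle described above, slow but drift-free camera fixes correcting fast but drifting inertial integration, is the classic complementary-filter arrangement. The sketch below illustrates it for a single position axis; the sample rates, blend gain, and simulated signals are assumptions for illustration, not the thesis's actual algorithm.

```python
# Complementary fusion of a fast, drifting inertial estimate with slow,
# drift-free camera fixes, for one position axis. Rates, gain, and the
# simulated signals are illustrative assumptions.

IMU_DT = 0.005      # inertial samples at 200 Hz (assumed)
CAM_EVERY = 40      # one camera fix per 40 IMU samples, i.e. 5 Hz (assumed)
ALPHA = 0.05        # blend weight given to each camera correction

def fuse(accel_samples, camera_fixes):
    """Dead-reckon position from acceleration; pull toward camera fixes."""
    pos, vel = 0.0, 0.0
    estimates = []
    for i, a in enumerate(accel_samples):
        vel += a * IMU_DT                  # integrate acceleration (drifts)
        pos += vel * IMU_DT                # integrate velocity (drifts faster)
        if i % CAM_EVERY == 0 and i // CAM_EVERY < len(camera_fixes):
            cam = camera_fixes[i // CAM_EVERY]
            pos += ALPHA * (cam - pos)     # low-frequency fix bounds the drift
        estimates.append(pos)
    return estimates

# Example: constant true acceleration, biased IMU samples, ideal camera fixes.
true_acc = 0.2
accel = [true_acc + 0.05 for _ in range(400)]                       # 2 s of data
cam = [0.5 * true_acc * (k * CAM_EVERY * IMU_DT) ** 2 for k in range(10)]
estimates = fuse(accel, cam)
```

Between camera fixes the estimate follows the detailed high-frequency inertial motion; each fix then removes the accumulated bias, matching the long-term/short-term trade-off the abstract claims.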
3

A snake-based scheme for path planning and control with constraints by distributed visual sensors

Cheng, Yongqiang; Jiang, Ping; Hu, Yim Fun (09 August 2013)
This paper proposes a robot navigation scheme using wireless visual sensors deployed in an environment. Unlike conventional autonomous robot approaches, the scheme offloads the massive on-board information processing normally required of a robot onto its environment, so that a robot or vehicle with little on-board intelligence can exhibit sophisticated mobility. A three-state snake mechanism is developed for coordinating a series of sensors to form a reference path. The wireless visual sensors communicate internal forces with each other along the reference snake for dynamic adjustment, react to repulsive forces from obstacles, and trigger state changes in the snake body from a flexible state to a rigid or even a broken state in response to kinematic or environmental constraints. A control snake is further proposed as a tracker of the reference path, taking into account the robot's non-holonomic constraint and limited steering power, and a predictive control algorithm is developed to produce an optimal velocity profile for snake tracking under the robot's dynamic constraints. Together they form a unified solution for robot navigation by distributed sensors that handles the kinematic and dynamic constraints of a robot and reacts to dynamic changes in advance. Simulations and experiments demonstrate the capability of a wireless sensor network to carry out low-level control activities for a vehicle. / Funding: Royal Society; Natural Science Funding Council (China)
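The reference-snake adjustment, with internal forces smoothing the node chain while obstacle repulsion pushes nodes aside, follows the standard active-contour force balance. Below is a minimal sketch of one node-update iteration; the force definitions and gains are generic active-contour assumptions, not the paper's exact three-state formulation.

```python
import numpy as np

# One relaxation step of a snake (active contour) of path nodes:
# internal elastic/smoothing forces plus obstacle repulsion.
# Gains and force models are generic assumptions, not the paper's.

def snake_step(nodes, obstacles, k_elastic=0.3, k_repulse=0.8, influence=1.5):
    """nodes: (N, 2) array of path points; obstacles: (M, 2) array of centers."""
    new_nodes = nodes.copy()
    for i in range(1, len(nodes) - 1):        # endpoints stay anchored
        # Internal force: pull each node toward the midpoint of its neighbors,
        # which shortens and smooths the chain (discrete elasticity).
        midpoint = 0.5 * (nodes[i - 1] + nodes[i + 1])
        force = k_elastic * (midpoint - nodes[i])
        # Repulsive force: push away from any obstacle within its influence range.
        for obs in obstacles:
            diff = nodes[i] - obs
            dist = np.linalg.norm(diff)
            if 1e-9 < dist < influence:
                force += k_repulse * (influence - dist) * diff / dist
        new_nodes[i] = nodes[i] + force
    return new_nodes

# Example: a straight line of nodes deformed around a single obstacle.
nodes = np.column_stack([np.linspace(0, 10, 21), np.zeros(21)])
obstacles = np.array([[5.0, 0.2]])
for _ in range(50):
    nodes = snake_step(nodes, obstacles)
```

In the paper's distributed setting, each visual sensor would own a segment of these nodes and exchange only the neighbor forces, which is what lets the network, rather than the vehicle, do the path computation.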
4

Vision-Based Obstacle Avoidance for Multiple Vehicles Performing Time-Critical Missions

Dippold, Amanda (11 June 2009)
This dissertation discusses vision-based static obstacle avoidance for a fleet of nonholonomic robots tasked to arrive at a final destination simultaneously. The path for each vehicle is generated by a single polynomial function that incorporates the vehicle's velocity and acceleration constraints and satisfies the boundary conditions by construction; the arrival criterion and a preliminary obstacle avoidance scheme are also incorporated into the path generation. Each robot is equipped with an inertial measurement unit that provides measurements of the vehicle's position and velocity, and a monocular camera that detects obstacles. The obstacle avoidance algorithm deforms the vehicle's original path around at most one obstacle per vehicle, in a direction that minimizes an obstacle avoidance potential function, and deconfliction of the vehicles during obstacle avoidance is achieved by imposing a separation condition at the path generation level. Two estimation schemes are applied to estimate the unknown obstacle parameters: an existing method known in the literature as the Identifier-Based Observer, and a recently developed fast estimator. It is shown that, compared with the Identifier-Based Observer, the performance of the fast estimator and its effect on the obstacle avoidance algorithm can be arbitrarily improved by an appropriate choice of parameters. Coordination in time of all vehicles is completed in an outer loop that adjusts the desired velocity profile of each vehicle to meet the simultaneous arrival constraint. Simulation results illustrate the theoretical findings. / Ph.D.
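Generating a path as a single polynomial that meets boundary conditions by construction amounts to solving a small linear system for the coefficients. The sketch below fits a quintic to prescribed endpoint position, velocity, and acceleration; the polynomial degree and the boundary values are illustrative assumptions, not details taken from the dissertation.

```python
import numpy as np

# Fit a quintic polynomial p(t) on [0, T] so that position, velocity, and
# acceleration at both endpoints match prescribed boundary conditions.
# Degree and boundary values are illustrative assumptions.

def quintic_path(p0, v0, a0, pT, vT, aT, T):
    """Return coefficients c[0..5] of p(t) = sum(c[k] * t**k)."""
    # Rows encode p(0), p'(0), p''(0), p(T), p'(T), p''(T) in the monomial basis.
    A = np.array([
        [1, 0, 0,    0,       0,        0],
        [0, 1, 0,    0,       0,        0],
        [0, 0, 2,    0,       0,        0],
        [1, T, T**2, T**3,    T**4,     T**5],
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],
    ], dtype=float)
    b = np.array([p0, v0, a0, pT, vT, aT], dtype=float)
    return np.linalg.solve(A, b)  # six conditions determine six coefficients

# Example: start at rest at 0 m, arrive at rest at 10 m after T = 5 s.
c = quintic_path(p0=0.0, v0=0.0, a0=0.0, pT=10.0, vT=0.0, aT=0.0, T=5.0)
t = np.linspace(0.0, 5.0, 6)
positions = np.polyval(c[::-1], t)  # boundary conditions hold by construction
```

Velocity and acceleration limits can then be checked by sampling the polynomial's derivatives over [0, T], and the arrival time T adjusted in an outer loop, mirroring the coordination step the abstract describes.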
