21

Localização baseada em odometria visual / Localization based on visual odometry

Nishitani, André Toshio Nogueira 26 June 2015 (has links)
The localization problem consists of estimating the position of a robot with respect to some external reference frame, and it is an essential part of navigation systems for robots and autonomous vehicles. Localization based on visual odometry stands out against encoder-based odometry in estimating the rotation and direction of movement. This kind of approach is also an attractive choice for autonomous vehicle control systems in urban environments, where visual information is needed to extract semantic content from street signs, traffic lights, and other markings. In this context, this work proposes the development of a visual odometry system based on 3D reconstruction that uses visual information from a monocular camera to estimate the vehicle's pose. The absolute scale problem, inherent to the use of monocular cameras, is resolved using prior knowledge of the metric relation between image points and world points lying on a common plane.
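A minimal sketch of the scale-recovery idea this abstract describes: if the camera's metric height above the road plane is known, the up-to-scale monocular reconstruction can be rescaled by comparing the estimated and known plane distances. All names, values, and the plane-fitting approach below are illustrative assumptions, not the thesis's implementation.

```python
# Hypothetical sketch: metric scale from a known ground-plane distance.
import numpy as np

def fit_plane(points):
    """Fit a plane n.x + d = 0 to Nx3 points via SVD; returns unit normal and offset."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                    # direction of least variance
    return normal, -normal.dot(centroid)

def metric_scale(ground_points_up_to_scale, known_camera_height):
    """Scale factor mapping an up-to-scale reconstruction to metric units.

    ground_points_up_to_scale: Nx3 triangulated points assumed to lie on the
    road plane, expressed in the camera frame (camera at the origin).
    """
    n, d = fit_plane(ground_points_up_to_scale)
    est_height = abs(d)                # distance from camera (origin) to plane
    return known_camera_height / est_height

# toy usage: points on a plane 1 unit below the camera, true height 1.7 m
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-5, 5, 200),
                       np.full(200, -1.0),
                       rng.uniform(4, 20, 200)])
print(f"scale factor: {metric_scale(pts, known_camera_height=1.7):.2f}")  # ~1.70
```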
22

High precision monocular visual odometry / Estimação 3D aplicada a odometria visual

Pereira, Fabio Irigon January 2018 (has links)
Recovering three-dimensional information from two-dimensional images is an important problem in computer vision with several applications: robotics, the entertainment industry, medical diagnosis and prosthetics, and even interplanetary exploration benefit from vision-based 3D estimation. The problem can be divided into two interdependent operations: estimating the camera position and orientation at the moment each image was produced, and estimating the 3D scene structure. This work focuses on computer vision techniques used to estimate the trajectory of a vehicle equipped with a camera, a problem known as visual odometry. In order to provide objective measures of efficiency and accuracy, and to compare the achieved results to the state of the art, a popular high-precision dataset widely used by the scientific community was selected. In the course of this work, new techniques for feature tracking, camera pose estimation, 3D point position calculation, and scale recovery are proposed. The achieved results outperform the best-ranked entries on the chosen dataset as of the publication of this thesis.
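The camera-pose step in a monocular pipeline of this kind is commonly solved with an essential-matrix estimate from matched features; the sketch below uses OpenCV's RANSAC five-point solver. It illustrates only the standard formulation — the thesis proposes its own techniques, which are not reproduced here, and the parameter values are assumptions.

```python
# Minimal sketch of up-to-scale relative pose from 2D-2D correspondences.
import cv2

def relative_pose(pts_prev, pts_curr, K):
    """Estimate relative rotation R and unit translation t between two frames.

    pts_prev, pts_curr: Nx2 float arrays of matched pixel coordinates.
    K: 3x3 camera intrinsic matrix.
    """
    E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
    return R, t   # ||t|| = 1: monocular translation is recovered up to scale
```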
23

Um sistema de localização robótica para ambientes internos baseado em redes neurais. / An indoor robot localization system based on neural networks.

Sanches, Vitor Luiz Martinez 15 April 2009 (has links)
This research studies aspects of the robot localization problem, and a robot localization system is built. To determine the localization of a mobile robot relative to a topological map of its environment, a deterministic solution is proposed. This solution is applied to provide localization for position-tracking problems, although it is also of interest to observe the performance of the proposed method on global localization problems. The proposed system is based on feature vectors composed of instantaneous measurements extracted from the environment through the robot's sensory perception. Estimates from odometry, ultrasonic sensor readings, and a magnetic compass are combined in these feature vectors to characterize the scenes observed by the robot. The localization problem is then solved as a pattern recognition problem. The topology of the environment is known, and the correlation between each place in this environment and its features is stored using artificial neural networks. The localization system was experimentally evaluated, in the field, on a real robotic platform; promising results were obtained and are presented.
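A hedged sketch of the pattern-recognition formulation described above: a feature vector of range and compass readings is classified into one of the known places of a topological map by a small neural network. The sensor layout, network size, and synthetic data below are all assumptions for illustration only.

```python
# Place recognition as classification of sensor feature vectors (sketch).
import numpy as np
from sklearn.neural_network import MLPClassifier

N_SONARS = 8
rng = np.random.default_rng(1)
places = [0, 1, 2, 3]                               # nodes of the topological map
signatures = {p: rng.uniform(0.3, 4.0, N_SONARS) for p in places}  # fake range signatures

def make_reading(place_id):
    """Synthetic observation: noisy sonar ranges plus a compass heading."""
    ranges = signatures[place_id] + 0.05 * rng.standard_normal(N_SONARS)
    heading = np.array([(place_id * 0.7) % (2 * np.pi)])
    return np.concatenate([ranges, heading])

X = np.array([make_reading(p) for p in places for _ in range(50)])
y = np.repeat(places, 50)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)
print("predicted place:", net.predict(make_reading(2).reshape(1, -1))[0])
```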
24

Motion Conflict Detection and Resolution in Visual-Inertial Localization Algorithm

Wisely Babu, Benzun 30 July 2018 (has links)
In this dissertation, we have focused on conflicts that occur due to disagreeing motions in multi-modal localization algorithms. In spite of the recent achievements in robust localization by means of multi-sensor fusion, these algorithms are not applicable to all environments. This is primarily attributed to the following fundamental assumptions: (i) the environment is predominantly stationary, (ii) only ego-motion of the sensor platform exists, and (iii) multiple sensors are always in agreement with each other regarding the observed motion. Recently, studies have shown how to relax the static-environment assumption using outlier rejection techniques and dynamic object segmentation. Additionally, to handle non-ego-motion, approaches that extend the localization algorithm to multi-body tracking have been studied. However, no attention has been given to the conditions where multiple sensors contradict each other with regard to the motions observed. Vision-based localization has become an attractive approach for both indoor and outdoor applications due to the large information bandwidth provided by images and the reduced cost of cameras. In order to improve robustness and overcome the limitations of vision, an Inertial Measurement Unit (IMU) may be used. Even though visual-inertial localization has better accuracy and improved robustness due to the complementary nature of the camera and IMU sensors, it is affected by disagreements in motion observations. We term such dynamic situations environments with motion conflict, because they arise when multiple different but self-consistent motions are observed by different sensors. Tightly coupled visual-inertial fusion approaches that disregard such challenging situations exhibit drift that can lead to catastrophic errors. We provide a probabilistic model for motion conflict. Additionally, a novel algorithm to detect and resolve motion conflicts is presented. Our method to detect motion conflicts is based on per-frame positional estimate discrepancy and per-landmark reprojection errors. Motion conflicts are resolved by eliminating inconsistent IMU and landmark measurements. Finally, a Motion Conflict aware Visual Inertial Odometry (MC-VIO) algorithm that combines both detection and resolution of motion conflict was implemented. Both quantitative and qualitative evaluations of MC-VIO on visually and inertially challenging datasets were performed. Experimental results indicate that the MC-VIO algorithm reduces the absolute trajectory error by 70% and the relative pose error by 34% in scenes with motion conflict, in comparison to the reference VIO algorithm. Motion conflict detection and resolution enables the application of visual-inertial localization algorithms to real dynamic environments. This paves the way for articulated object tracking in robotics, and may also find numerous applications in active, long-term augmented reality.
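An illustrative sketch (not the authors' code) of the two conflict tests the abstract names: a per-frame check on the discrepancy between IMU-predicted and vision-estimated positions, and a per-landmark check on reprojection error. The thresholds and the plain pinhole model are made-up placeholders.

```python
# Two conflict tests, per-frame and per-landmark (illustrative sketch).
import numpy as np

POS_DISCREPANCY_THRESH = 0.15   # metres, assumed placeholder
REPROJ_ERROR_THRESH = 2.0       # pixels, assumed placeholder

def frame_in_conflict(p_imu, p_vision):
    """Per-frame test: do the two sensors agree on where the camera is?"""
    return np.linalg.norm(p_imu - p_vision) > POS_DISCREPANCY_THRESH

def conflicting_landmarks(landmarks_3d, observations_px, K, R, t):
    """Per-landmark test: flag points whose reprojection error is too large.

    landmarks_3d: Nx3 world points; observations_px: Nx2 measured pixels;
    K: intrinsics; R, t: world-to-camera pose.
    """
    cam_pts = (R @ landmarks_3d.T + t.reshape(3, 1)).T   # world -> camera frame
    proj = (K @ cam_pts.T).T
    proj = proj[:, :2] / proj[:, 2:3]                    # pinhole projection
    errors = np.linalg.norm(proj - observations_px, axis=1)
    return errors > REPROJ_ERROR_THRESH   # flagged measurements get eliminated
```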
25

Stereo Vision-based Autonomous Vehicle Navigation

Meira, Guilherme Tebaldi 26 April 2016 (has links)
Research efforts on the development of autonomous vehicles date back to the 1920s, and recent announcements indicate that those cars are close to becoming commercially available. However, the most successful prototypes that are currently being demonstrated rely on an expensive set of sensors. This study investigates the use of an affordable vision system as a planner for the Robocart, an autonomous golf cart prototype developed by the Wireless Innovation Laboratory at WPI. The proposed approach relies on a stereo vision system composed of a pair of Raspberry Pi computers, each equipped with a Camera Module. They are connected to a server and their clocks are synchronized using the Precision Time Protocol (PTP). The server uses timestamps to obtain a pair of simultaneously captured images. Images are processed to generate a disparity map using stereo matching, and points in this map are reprojected to the 3D world as a point cloud. Then, an occupancy grid is built and used as input for an A* graph search that finds a collision-free path for the robot. Due to the non-holonomic constraints of a car-like robot, a Pure Pursuit algorithm is used as the control method to guide the robot along the computed path. The cameras are also used by a visual odometry algorithm that tracks points across a sequence of images to estimate the position and orientation of the vehicle. The algorithms were implemented in C++ using the open-source library OpenCV. Tests in a controlled environment show promising results, and the interfaces between the server and the Robocart have been defined, so that the proposed method can be used on the golf cart as soon as the mechanical systems are fully functional.
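A compact sketch of the Pure Pursuit control law mentioned above, which steers a car-like robot toward a look-ahead point on the planned path. The parameter values are illustrative, not those used on the Robocart, and the thesis's C++ implementation is not reproduced here.

```python
# Pure Pursuit steering for a bicycle-model vehicle (sketch).
import numpy as np

def pure_pursuit_steering(pose, path, lookahead=2.0, wheelbase=1.6):
    """Return a front-wheel steering angle (rad) for pose = (x, y, heading)."""
    x, y, theta = pose
    # pick the first path point at least `lookahead` metres away
    d = np.hypot(path[:, 0] - x, path[:, 1] - y)
    far = d >= lookahead
    idx = np.argmax(far) if np.any(far) else len(path) - 1
    gx, gy = path[idx]
    # express the goal point in the vehicle frame; only its lateral
    # offset matters for the pure pursuit arc
    dx, dy = gx - x, gy - y
    local_y = -np.sin(theta) * dx + np.cos(theta) * dy
    # curvature of the arc through the goal, then bicycle-model steering
    curvature = 2.0 * local_y / lookahead ** 2
    return np.arctan(wheelbase * curvature)

path = np.column_stack([np.linspace(0, 10, 50), 0.5 * np.linspace(0, 10, 50)])
print(f"steering: {pure_pursuit_steering((0.0, 0.0, 0.0), path):.3f} rad")
```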
26

Coopération stéréo mouvement pour la détection des objets dynamiques / Stereo-Motion Cooperation - Dynamic Objects Detection

Bak, Adrien 14 October 2011 (has links)
Many embedded robotic applications could benefit from an explicit detection of mobile objects. To date, most approaches rely on classification or on a structural analysis of the scene (V-Disparity is a good example of such approaches). In recent years there has been growing interest in methods that make structural analysis and motion analysis actively collaborate; these two processes are indeed closely related. In this context, this thesis proposes two different approaches. While the first uses the full stereo/motion information, the second addresses monocular sensors and recovers partial information. The first approach consists of a novel visual odometry system. We show that, even though the wide majority of authors treat the visual odometry problem as non-linear, it can be posed as purely linear. We also show that our approach achieves performance as good as, or better than, that of high-end hardware such as inertial measurement units (IMUs). Building on this visual odometry system, we define a procedure for detecting mobile objects, which relies on compensating the influence of ego-motion and then measuring the residual motion. We then examine the limitations and possible sources of improvement of this system. It appears that the main parameters of the vision system (baseline, focal length) have a first-order impact on the detector's performance; to the best of our knowledge, this impact had never been described in the literature. We believe our conclusions can serve as a set of recommendations useful to any designer of an intelligent vision system. The second part of this work concerns monocular vision systems, and more precisely the concept of C-Velocity. Where V-Disparity defines a transform of the disparity map that highlights certain planes of the image, C-Velocity defines a transform of the optical flow field, using the position of the focus of expansion (FoE), that allows easy detection of certain specific planes. In this work we present a modification of C-Velocity: instead of using a prior on the ego-motion (the position of the FoE) to infer the scene structure, we use a prior on the scene structure to localize the FoE, and thus estimate the translational ego-motion. The first results of this work are encouraging and open several directions for future research.
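A minimal sketch of the geometric object this abstract revolves around: under pure translation, every optical-flow vector points along a line through the FoE, so the FoE can be located as the least-squares intersection of those lines. This illustrates the quantity the C-Velocity transform operates on, not the thesis's own method.

```python
# Locating the focus of expansion (FoE) from an optical flow field (sketch).
import numpy as np

def estimate_foe(points, flows):
    """points: Nx2 pixel positions; flows: Nx2 flow vectors -> FoE (x, y)."""
    # line through p with direction v:  v_y*(x - p_x) - v_x*(y - p_y) = 0
    A = np.column_stack([flows[:, 1], -flows[:, 0]])
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# synthetic check: radial flow expanding from pixel (320, 240)
rng = np.random.default_rng(2)
pts = rng.uniform(0, 640, (100, 2))
flow = 0.02 * (pts - np.array([320.0, 240.0]))   # expansion about the FoE
print(estimate_foe(pts, flow))                   # ~ [320. 240.]
```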
27

Visual odometry from omnidirectional camera

Diviš, Jiří January 2012 (has links)
We present a system that estimates the motion of a robot relying solely on images from an onboard omnidirectional camera (visual odometry). Compared to other visual odometry hardware, ours is unusual in utilizing a high-resolution, low frame-rate (1 to 3 Hz) omnidirectional camera mounted on a robot that is propelled using continuous tracks. We focus on high-precision estimates in scenes where objects are far away from the camera. This is achieved by exploiting the omnidirectional camera, which stabilizes motion estimates between frames that would be ill-conditioned for narrow field-of-view cameras, and the fact that the low frame rate of the imaging system allows us to focus computational resources on high-resolution images. We employ a feature-based approach for estimating camera motion. Given our hardware, large camera rotations between frames can occur; thus, we use feature matching rather than feature tracking.
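A hedged sketch of the matching step the abstract favours (matching across frames rather than tracking, to survive large inter-frame rotation). ORB features with Lowe's ratio test are one common choice; the thesis may use a different detector and parameters.

```python
# Feature matching between consecutive frames (sketch).
import cv2

def match_features(img_prev, img_curr, n_features=2000, ratio=0.75):
    """Return a list of ((x1, y1), (x2, y2)) matched keypoint locations."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    # Lowe's ratio test: keep a match only if its best distance is clearly
    # better than the second best, rejecting ambiguous correspondences
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
```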
28

Stereo Camera Pose Estimation to Enable Loop Detection / Estimering av kamera-pose i stereo för att återupptäcka besökta platser

Ringdahl, Viktor January 2019 (has links)
Visual Simultaneous Localization And Mapping (SLAM) allows for three-dimensional reconstruction from a camera's output and simultaneous positioning of the camera within the reconstruction. With use cases ranging from autonomous vehicles to augmented reality, the SLAM field has garnered interest both commercially and academically. A SLAM system performs odometry as it estimates the camera's movement through the scene. The incremental estimation of odometry is not error-free and exhibits drift over time, with map inconsistencies as a result. Detecting the return to a previously seen place, a loop, means that this new information regarding our position can be incorporated to correct the trajectory retroactively. Loop detection can also facilitate relocalization if the system loses tracking due to e.g. heavy motion blur. This thesis proposes an odometric system making use of bundle adjustment within a keyframe-based stereo SLAM application. The system is capable of detecting loops by utilizing the FAB-MAP algorithm. Two aspects of this system are evaluated, the odometry and the capability to relocalize. Both are evaluated using the EuRoC MAV dataset, with an absolute trajectory RMS error ranging from 0.80 m to 1.70 m for the machine hall sequences. The capability to relocalize is evaluated using a novel methodology that can be interpreted intuitively. Results are given for different levels of strictness to encompass different use cases. The method makes use of reprojection of points seen in keyframes to define whether a relocalization is possible or not. The system shows a capability to relocalize in up to 85% of all cases when a keyframe exists that can project 90% of its points into the current view. Errors in estimated poses were found to be correlated with the relative distance, with errors less than 10 cm in 23% to 73% of all cases. The evaluation of the whole system is augmented with an evaluation of local image descriptors and pose estimation algorithms. The descriptor SIFT was found to perform best overall, but is demanding to compute; BRISK was deemed the best alternative for a fast yet accurate descriptor. A conclusion that can be drawn from this thesis is that FAB-MAP works well for detecting loops as long as the addition of keyframes is handled appropriately.
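A sketch of the relocalization criterion described above: a keyframe is a valid candidate if a large fraction of its landmarks project inside the current camera view. The 90% threshold follows the figure quoted in the abstract; the plain pinhole model and the 752x480 image size (typical of EuRoC MAV imagery) are assumptions.

```python
# Reprojection-based relocalization test (sketch).
import numpy as np

def visible_fraction(landmarks, R, t, K, width, height):
    """Fraction of a keyframe's 3D landmarks projecting into the image."""
    cam = (R @ landmarks.T + t.reshape(3, 1)).T      # world -> camera frame
    in_front = cam[:, 2] > 0
    uv = (K @ cam.T).T
    uv = uv[:, :2] / np.where(uv[:, 2:3] != 0, uv[:, 2:3], 1.0)
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < width) & \
             (uv[:, 1] >= 0) & (uv[:, 1] < height)
    return np.mean(in_front & inside)

def can_relocalize(landmarks, R, t, K, width=752, height=480, thresh=0.9):
    """True if at least `thresh` of the keyframe's points are in view."""
    return visible_fraction(landmarks, R, t, K, width, height) >= thresh
```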
29

An Optimization Based Approach to Visual Odometry Using Infrared Images

Nilsson, Emil January 2010 (has links)
The goal of this work has been to improve the accuracy of a pre-existing algorithm for vehicle pose estimation, which uses intrinsic measurements of vehicle motion and measurements derived from far-infrared images.

Estimating the pose of a vehicle, based on images from an on-board camera and intrinsic measurements of vehicle motion, is a problem of simultaneous localization and mapping (SLAM), and it can be solved using the extended Kalman filter (EKF). The EKF is a causal filter, so if the pose estimation problem is to be solved off-line, acausal methods are expected to increase estimation accuracy significantly. In this work the EKF has been compared with an acausal method for solving the SLAM problem called smoothing and mapping (SAM), an optimization-based method that minimizes process and measurement noise.

Analyses of how improvements to the vehicle motion model, using a number of different model extensions, affect the accuracy of pose estimates have also been performed.
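A toy illustration (not the thesis code) of the smoothing-and-mapping idea: instead of filtering causally, solve one least-squares problem over the whole trajectory, minimizing odometry (process) and measurement residuals jointly. A 1-D trajectory with assumed noise levels keeps the sketch short.

```python
# Smoothing-and-mapping as batch linear least squares, 1-D toy problem.
import numpy as np

def sam_smooth_1d(u, z, sigma_u=0.5, sigma_z=1.0):
    """u: N-1 odometry increments; z: N position measurements -> N poses."""
    n = len(z)
    rows, rhs = [], []
    # process residuals: (x[k+1] - x[k] - u[k]) / sigma_u
    for k in range(n - 1):
        r = np.zeros(n)
        r[k], r[k + 1] = -1.0, 1.0
        rows.append(r / sigma_u)
        rhs.append(u[k] / sigma_u)
    # measurement residuals: (x[k] - z[k]) / sigma_z
    for k in range(n):
        r = np.zeros(n)
        r[k] = 1.0
        rows.append(r / sigma_z)
        rhs.append(z[k] / sigma_z)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x

rng = np.random.default_rng(3)
truth = np.cumsum(np.ones(20))                     # constant-velocity ground truth
u = np.diff(truth) + 0.1 * rng.standard_normal(19)
z = truth + 0.5 * rng.standard_normal(20)
print(np.round(sam_smooth_1d(u, z), 2))
```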
30

Navigering och styrning av ett autonomt markfordon / Navigation and control of an autonomous ground vehicle

Johansson, Sixten January 2006 (has links)
In this thesis, a system for navigation and control of an autonomous ground vehicle has been implemented. The purpose of the work is to further develop a vehicle that is to be used in studies and evaluations of path-planning algorithms, as well as studies of other autonomy functions. With different sensor configurations and sensor models, it is also possible to evaluate different navigation strategies. The work has been performed on a given platform that measures the vehicle's movement using only simple ultrasonic sensors and wheel-mounted pulse encoders. The vehicle is able to navigate autonomously and follow a simple given path in a known environment. State estimation is performed using a particle filter, with models of the vehicle and its sensors.

The work is a continuation of a previous project, Collision Avoidance för autonomt fordon, carried out at Linköping University in the spring of 2005.
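A minimal sketch of the particle-filter state estimation described above: propagate particles with noisy odometry (motion model), weight them by how well a range measurement matches the map (sensor model), then resample. The unicycle model, noise levels, and interfaces are placeholders, not the thesis's implementation.

```python
# Particle filter for (x, y, theta) state estimation (sketch).
import numpy as np

rng = np.random.default_rng(4)

def predict(particles, v, w, dt, noise=(0.05, 0.02)):
    """Propagate particles with noisy unicycle odometry (speed v, turn rate w)."""
    n = len(particles)
    v_n = v + noise[0] * rng.standard_normal(n)
    w_n = w + noise[1] * rng.standard_normal(n)
    particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
    particles[:, 1] += v_n * dt * np.sin(particles[:, 2])
    particles[:, 2] += w_n * dt
    return particles

def update(particles, measured_range, expected_range_fn, sigma=0.1):
    """Weight particles by one ultrasonic range measurement, then resample.

    expected_range_fn(particles) must return the range each particle would
    measure given the known map (a placeholder for the map lookup).
    """
    expected = expected_range_fn(particles)
    w = np.exp(-0.5 * ((measured_range - expected) / sigma) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```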
