71 |
Cartographie dense basée sur une représentation compacte RGB-D dédiée à la navigation autonome / A compact RGB-D map representation dedicated to autonomous navigation
Gokhool, Tawsif Ahmad Hussein 05 June 2015 (has links)
Dans ce travail, nous proposons une représentation efficace de l’environnement adaptée à la problématique de la navigation autonome. Cette représentation topométrique est constituée d’un graphe de sphères de vision augmentées d’informations de profondeur. Localement la sphère de vision augmentée constitue une représentation égocentrée complète de l’environnement proche. Le graphe de sphères permet de couvrir un environnement de grande taille et d’en assurer la représentation. Les "poses" à 6 degrés de liberté calculées entre sphères sont facilement exploitables par des tâches de navigation en temps réel. Dans cette thèse, les problématiques suivantes ont été considérées : Comment intégrer des informations géométriques et photométriques dans une approche d’odométrie visuelle robuste ; comment déterminer le nombre et le placement des sphères augmentées pour représenter un environnement de façon complète ; comment modéliser les incertitudes pour fusionner les observations dans le but d’augmenter la précision de la représentation ; comment utiliser des cartes de saillances pour augmenter la précision et la stabilité du processus d’odométrie visuelle. / Our aim is to build ego-centric topometric maps, represented as a graph of keyframe nodes, that can be used efficiently by autonomous agents. Each keyframe node combines a spherical image and a depth map (an augmented visual sphere) and synthesises the information collected in a local area of space by an embedded acquisition system. The representation of the global environment consists of a collection of augmented visual spheres that provide the necessary coverage of an operational area. A "pose" graph that links these spheres together in six degrees of freedom also defines the domain potentially exploitable for navigation tasks in real time.
As part of this research, an approach to map-based representation has been proposed by considering the following issues: how to robustly apply visual odometry by making the most of both the photometric and geometric information available from our augmented spherical database; how to determine the quantity and optimal placement of these augmented spheres to cover an environment completely; how to model sensor uncertainties and update the dense information of the augmented spheres; how to compactly represent the information contained in the augmented sphere to ensure robustness, accuracy and stability along an explored trajectory by making use of saliency maps.
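The sphere-graph idea above lends itself to a compact sketch: each edge of the graph is a 6-DoF transform between neighbouring augmented spheres, and a global keyframe pose is recovered by composing the edges. The following Python/numpy fragment is an illustrative sketch only (the helper names and example poses are invented here, not code from the thesis):

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis (yaw), used here to build example poses."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def se3(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain_poses(relative_edges):
    """Compose relative 6-DoF edges T_{k-1,k} into global keyframe poses T_{0,k}."""
    poses = [np.eye(4)]
    for T_rel in relative_edges:
        poses.append(poses[-1] @ T_rel)
    return poses

# Two edges: move 1 m forward, then turn 90 degrees and move 1 m forward again.
edges = [se3(np.eye(3), [1.0, 0.0, 0.0]),
         se3(rot_z(np.pi / 2), [1.0, 0.0, 0.0])]
poses = chain_poses(edges)
print(np.round(poses[-1][:3, 3], 3))  # global position of the last sphere
```

Because only relative transforms are stored, adding or refining one edge leaves the rest of the graph untouched, which is what makes such a representation convenient for real-time navigation tasks.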
|
72 |
Détection d’obstacles par stéréovision en environnement non structuré / Obstacles detection by stereovision in unstructured environments
Dujardin, Aymeric 03 July 2018 (has links)
Les robots et véhicules autonomes représentent le futur des modes de déplacement et de production. Les enjeux de l’avenir reposent sur la robustesse de leur perception et leur flexibilité face aux environnements changeants et aux situations inattendues. Les capteurs stéréoscopiques sont des capteurs passifs qui permettent d'obtenir à la fois image et information 3D de la scène à la manière de la vision humaine. Dans ces travaux, nous avons développé un système de localisation par odométrie visuelle permettant de déterminer la position dans l'espace du capteur de façon efficace et performante en tirant parti de la carte de profondeur dense, également associé à un système de SLAM, rendant la localisation robuste aux perturbations et aux décalages potentiels. Nous avons également développé plusieurs solutions de cartographie et d’interprétation d’obstacles, à la fois pour les véhicules aériens et terrestres. Ces travaux sont en partie intégrés dans des produits commerciaux. / Autonomous vehicles and robots represent the future of the transportation and production industries. The challenge ahead lies in the robustness of perception and in flexibility in the face of unexpected situations and changing environments. Stereoscopic cameras are passive sensors that provide color images and depth information of the scene by correlating two images, much as human vision does. In this work, we developed a localization system based on visual odometry that can efficiently determine the position of the sensor in space by exploiting the dense depth map. It is also combined with a SLAM system that makes localization robust to disturbances and potential drifts. Additionally, we developed several mapping and obstacle detection solutions, for both aerial and terrestrial vehicles. These algorithms are now partly integrated into commercial products.
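Since this abstract turns on exploiting a dense depth map from a stereo pair, it may help to recall the basic stereo relation Z = f·B/d that converts a disparity map into metric depth. The snippet below is a hedged numpy illustration (the focal length, baseline and `disparity_to_depth` helper are made up for the example, not taken from the thesis):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, min_disp=0.5):
    """Convert a disparity map (pixels) to metric depth via Z = f * B / d.
    Invalid or near-zero disparities are mapped to inf (no reliable depth)."""
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > min_disp
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# A 700 px focal length and 12 cm baseline, typical of a small stereo head:
disp = np.array([[42.0, 0.0],
                 [7.0, 84.0]])
print(disparity_to_depth(disp, focal_px=700.0, baseline_m=0.12))
```

Note how depth precision degrades quadratically with range: a one-pixel disparity error matters far more at 12 m than at 1 m, which is one reason outdoor stereo perception is harder than indoor.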
|
73 |
[en] USING DENSE 3D RECONSTRUCTION FOR VISUAL ODOMETRY BASED ON STRUCTURE FROM MOTION TECHNIQUES / [pt] UTILIZANDO RECONSTRUÇÃO 3D DENSA PARA ODOMETRIA VISUAL BASEADA EM TÉCNICAS DE STRUCTURE FROM MOTION
MARCELO DE MATTOS NASCIMENTO 08 April 2016 (has links)
[pt] Alvo de intenso estudo da visão computacional, a reconstrução densa
3D teve um importante marco com os primeiros sistemas em tempo real
a alcançarem precisão milimétrica com uso de câmeras RGBD e GPUs.
Entretanto estes métodos não são aplicáveis a dispositivos de menor poder
computacional. Tendo a limitação de recursos computacionais como requisito, o
objetivo deste trabalho é apresentar um método de odometria visual utilizando
câmeras comuns e sem a necessidade de GPU, baseado em técnicas de Structure
from Motion (SFM) com features esparsos, utilizando as informações de uma
reconstrução densa. A Odometria visual é o processo de estimar a orientação
e posição de um agente (um robô, por exemplo), a partir das imagens. Esta
dissertação fornece uma comparação entre a precisão da odometria calculada
pelo método proposto e pela reconstrução densa utilizando o Kinect Fusion.
O resultado desta pesquisa é diretamente aplicável na área de realidade
aumentada, tanto pelas informações da odometria que podem ser usadas para
definir a posição de uma câmera, como pela reconstrução densa, que pode
tratar aspectos como oclusão dos objetos virtuais com reais. / [en] The object of intense research in the field of computational vision, dense 3D reconstruction reached an important milestone with the first real-time systems to achieve millimetric precision using RGBD cameras and GPUs. However, these methods are not suitable for devices with low computational power. Taking limited computational resources as a requirement, the goal of this work is to present a visual odometry method that uses regular cameras, without the need for a GPU. The proposed method is based on sparse Structure from Motion (SFM) techniques, using data provided by a dense 3D reconstruction. Visual odometry is the process of estimating the position and orientation of an agent (a robot, for instance) from images. This dissertation compares the precision of the odometry computed by the proposed method against the dense reconstruction obtained with Kinect Fusion. The results of this research are directly applicable to augmented reality: the odometry can be used to define the position of a camera, and the dense reconstruction can handle aspects such as occlusion between virtual and real objects.
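As a concrete taste of the sparse Structure from Motion machinery such a method builds on, the classic linear (DLT) two-view triangulation recovers a 3D point from its projections in two calibrated views. This numpy sketch is illustrative only; the camera intrinsics and the test point are arbitrary, not values from the dissertation:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 projection
    matrices P1, P2 and its pixel observations x1, x2 (each a 2-vector)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Identity camera and a second camera offset 0.1 m along x (stereo-like):
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 2.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(np.round(triangulate_dlt(P1, P2, x1, x2), 6))
```

In a full sparse pipeline this step runs over many matched features per frame pair, and the triangulated points then feed the pose estimation for the next frame.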
|
74 |
Event-based Visual Odometry using Asynchronous Corner Feature Detection and Tracking : A Master of Science Thesis / Eventbaserad Visuell Odometri med Asynkron Detektion och Spårning av Hörn : En Masteruppsats i Datorseende och Signalanalys
Torberntsson, William January 2024 (has links)
This master's thesis, conducted at SAAB Dynamics in Linköping, studies Visual Odometry (VO), or camera pose estimation, using a monocular event camera. Event cameras are not yet widely used in industry, and there is significant interest in understanding the methodological differences between performing VO with event cameras and with traditional frame cameras. Event cameras have the potential to capture information between frames, which may include data that is lost or never captured by frame cameras. This thesis compares two different feature detectors and evaluates their performance against a frame-based method. Visual odometry was conducted both with and without known 3D points. Attempts to perform VO without known 3D points did not yield a robust pipeline within the limited time frame of this thesis, and this approach was therefore not developed further into a purely event-based method. With known 3D points, on the other hand, VO achieved continuous 6-DoF pose estimation. The results demonstrate that event cameras have the potential to detect and track features in challenging scenes, such as those with dark or bright lighting conditions, for example, objects passing in front of the sun. This thesis suggests that robust 6-DoF pose estimation would be feasible with a more reliable 3D-2D point-pair matching technique or a more sophisticated VO pipeline.
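A common way to make an asynchronous event stream digestible for corner detection is to maintain a "time surface", where each pixel stores an exponentially decayed score of its most recent event. The fragment below is a minimal, hypothetical illustration of that idea; the `time_surface` helper, its parameters, and the toy event stream are not taken from the thesis:

```python
import numpy as np

def time_surface(events, shape, t_now, tau=0.05):
    """Build an exponentially decayed time surface from an event stream.
    events: iterable of (t, x, y, polarity); recently active pixels score
    close to 1, stale pixels decay toward 0, silent pixels stay at 0."""
    last_t = np.full(shape, -np.inf)
    for t, x, y, _pol in events:
        last_t[y, x] = t  # keep only the most recent event per pixel
    surface = np.zeros(shape)
    seen = np.isfinite(last_t)
    surface[seen] = np.exp(-(t_now - last_t[seen]) / tau)
    return surface

# Three events on a tiny 3x4 sensor; timestamps in seconds:
events = [(0.00, 1, 1, +1), (0.05, 2, 1, -1), (0.10, 2, 1, +1)]
S = time_surface(events, shape=(3, 4), t_now=0.10)
print(np.round(S, 3))
```

Corner detectors adapted to events can then look for gradient structure in this surface each time a new event arrives, instead of waiting for a full frame.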
|
75 |
Vision based control and landing of Micro aerial vehicles / Visionsbaserad styrning och landning av drönare
Karlsson, Christoffer January 2019 (has links)
This bachelor's thesis presents a vision based control system for the quadrotor aerial vehicle Crazyflie 2.0, developed by Bitcraze AB. The main goal of this thesis is to design and implement an off-board control system based on visual input, in order to control the position and orientation of the vehicle with respect to a single fiducial marker. By integrating a camera and a wireless video transmitter onto the MAV platform, we are able to achieve autonomous navigation and landing in relatively close proximity to the dedicated target location. The control system was developed in the programming language Python and all processing of the vision data takes place on an off-board computer. This thesis describes the methods used for developing and implementing the control system, and a number of experiments have been carried out in order to determine the performance of the overall vision control system. With the proposed method of using fiducial markers for calculating the control demands for the quadrotor, we are able to achieve autonomous targeted landing within a radius of 10 centimetres from the target location. / I detta examensarbete presenteras ett visionsbaserat kontrollsystem för drönaren Crazyflie 2.0 som har utvecklats av Bitcraze AB. Målet med detta arbete är att utforma och implementera ett externt kontrollsystem baserat på data som inhämtas av en kamera för att reglera fordonets position och riktning med avseende på en markör placerad i synfältet av kameran. Genom att integrera kameran tillsammans med en trådlös videosändare på plattformen visar vi i denna avhandling att det är möjligt att åstadkomma autonom navigering och landning i närheten av markören. Kontrollsystemet utvecklades i programmeringsspråket Python och all processering av visionsdatan sker på en extern dator.
Metoderna som används för att utveckla kontrollsystemet och som beskrivs i denna rapport har testats under ett flertal experiment som visar på hur väl systemet kan detektera markören och hur väl de olika ingående komponenterna samspelar för att kunna utföra den autonoma styrningen. Genom den metod som presenteras i den här rapporten för att beräkna styrsignalerna till drönaren med hjälp av visuell data visar vi att det är möjligt att åstadkomma autonom styrning och landning mot målet inom en radie av 10 centimeter.
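The marker-relative control loop described above can be caricatured as a proportional law that maps the fiducial marker's position in the camera frame to a clipped velocity command. This Python sketch is purely illustrative; the gains, limits and the helper name are invented here, not the thesis's actual controller:

```python
import numpy as np

def landing_velocity_command(marker_pos_cam, kp_xy=0.8, kp_z=0.4, v_max=0.3):
    """Map the fiducial marker position in the camera frame (metres) to a
    clipped proportional velocity command (vx, vy, vz) for the quadrotor."""
    err = np.asarray(marker_pos_cam, dtype=float)
    cmd = np.array([kp_xy * err[0], kp_xy * err[1], kp_z * err[2]])
    return np.clip(cmd, -v_max, v_max)  # saturate for safe, gentle motion

# Marker seen 0.1 m to the right, 0.05 m forward, 1.0 m below the camera:
print(landing_velocity_command([0.1, 0.05, 1.0]))
```

Saturating the command is what keeps the final descent slow even when the marker is first detected from far away; a real implementation would also add an integral or derivative term to damp oscillation around the target.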
|
76 |
Relative Navigation of Micro Air Vehicles in GPS-Degraded Environments
Wheeler, David Orton 01 December 2017 (has links)
Most micro air vehicles rely heavily on reliable GPS measurements for proper estimation and control, and therefore struggle in GPS-degraded environments. When GPS is not available, the global position and heading of the vehicle are unobservable. This dissertation establishes the theoretical and practical advantages of a relative navigation framework for MAV navigation in GPS-degraded environments. This dissertation explores how the consistency, accuracy, and stability of current navigation approaches degrade during prolonged GPS dropout and in the presence of heading uncertainty. Relative navigation (RN) is presented as an alternative approach that maintains observability by working with respect to a local coordinate frame. RN is compared with several current estimation approaches in a simulation environment and in hardware experiments. While still subject to global drift, RN is shown to produce consistent state estimates and stable control. Estimating relative states requires unique modifications to current estimation approaches. This dissertation further provides a tutorial exposition of the relative multiplicative extended Kalman filter, presenting how to properly ensure observable state estimation while maintaining consistency. The filter is derived using both inertial and body-fixed state definitions and dynamics. Finally, this dissertation presents a series of prolonged flight tests, demonstrating the effectiveness of the relative navigation approach for autonomous GPS-degraded MAV navigation in varied, unknown environments. The system is shown to utilize a variety of vision sensors, work indoors and outdoors, run in real-time with onboard processing, and not require special tuning for particular sensors or environments. Despite leveraging off-the-shelf sensors and algorithms, the flight tests demonstrate stable front-end performance with low drift.
The flight tests also demonstrate the onboard generation of a globally consistent, metric, and localized map by identifying and incorporating loop-closure constraints and intermittent GPS measurements. With this map, mission objectives are shown to be autonomously completed.
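The relative reset at the heart of this framework can be sketched in a few lines: the front end integrates odometry with respect to the current keyframe, and declaring a keyframe pushes that relative pose onto the back end's edge list and zeroes the front-end state. The following planar (SE(2)) toy version is a hypothetical illustration of the bookkeeping, not the dissertation's filter:

```python
import numpy as np

def compose(a, b):
    """Compose two SE(2) poses (x, y, yaw): apply b in the frame of a."""
    x, y, th = a
    c, s = np.cos(th), np.sin(th)
    return (x + c * b[0] - s * b[1], y + s * b[0] + c * b[1], th + b[2])

class RelativeNavigator:
    """Front end works relative to the current keyframe; the back end keeps
    the list of keyframe-to-keyframe edges for global reconstruction."""
    def __init__(self):
        self.edges = []
        self.rel = (0.0, 0.0, 0.0)  # state w.r.t. the current keyframe

    def integrate(self, dx, dy, dyaw):
        self.rel = compose(self.rel, (dx, dy, dyaw))

    def declare_keyframe(self):
        self.edges.append(self.rel)  # push the edge, then reset the front end
        self.rel = (0.0, 0.0, 0.0)

    def global_pose(self):
        pose = (0.0, 0.0, 0.0)
        for e in self.edges:
            pose = compose(pose, e)
        return compose(pose, self.rel)

nav = RelativeNavigator()
nav.integrate(1.0, 0.0, np.pi / 2)
nav.declare_keyframe()
nav.integrate(1.0, 0.0, 0.0)       # forward, now in the new keyframe frame
x, y, yaw = nav.global_pose()
print(round(x, 3), round(y, 3), round(yaw, 3))
```

The front-end state stays small and observable; global drift is confined to the edge list, where loop closures or intermittent GPS can later correct it without disturbing the running estimator.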
|
77 |
Stereo vision and LIDAR based Dynamic Occupancy Grid mapping : Application to scenes analysis for Intelligent Vehicles / Cartographie de grilles d'occupation dynamiques basée sur la vision stéréo et le LIDAR : application à l'analyse de scènes pour les véhicules intelligents
Li, You 03 December 2013 (has links)
Les systèmes de perception, qui sont à la base du concept du véhicule intelligent, doivent répondre à des critères de performance à plusieurs niveaux afin d’assurer des fonctions d’aide à la conduite et/ou de conduite autonome. Aujourd’hui, la majorité des systèmes de perception pour véhicules intelligents sont basés sur la combinaison de données issues de plusieurs capteurs (caméras, lidars, radars, etc.). Les travaux de cette thèse concernent le développement d’un système de perception à base d’un capteur de vision stéréoscopique et d’un capteur lidar pour l’analyse de scènes dynamiques en environnement urbain. Les travaux présentés sont divisés en quatre parties.La première partie présente une méthode d’odométrie visuelle basée sur la stéréovision, avec une comparaison de différents détecteurs de primitives et différentes méthodes d’association de ces primitives. Un couple de détecteur et de méthode d’association de primitives a été sélectionné sur la base d’évaluation de performances à base de plusieurs critères. Dans la deuxième partie, les objets en mouvement sont détectés et segmentés en utilisant les résultats d’odométrie visuelle et l’image U-disparité. Ensuite, des primitives spatiales sont extraites avec une méthode basée sur la technique KPCA et des classifieurs sont enfin entrainés pour reconnaitre les objets en mouvement (piétons, cyclistes, véhicules). La troisième partie est consacrée au calibrage extrinsèque d’un capteur stéréoscopique et d’un Lidar. La méthode de calibrage proposée, qui utilise une mire plane, est basée sur l’exploitation d’une relation géométrique entre les caméras du capteur stéréoscopique. Pour une meilleure robustesse, cette méthode intègre un modèle de bruit capteur et un processus d’optimisation basé sur la distance de Mahalanobis. 
La dernière partie de cette thèse présente une méthode de construction d’une grille d’occupation dynamique en utilisant la reconstruction 3D de l’environnement, obtenue des données de stéréovision et Lidar de manière séparée puis conjointement. Pour une meilleure précision, l’angle entre le plan de la chaussée et le capteur stéréoscopique est estimé. Les résultats de détection et de reconnaissance (issus des première et deuxième parties) sont incorporés dans la grille d’occupation pour lui associer des connaissances sémantiques. Toutes les méthodes présentées dans cette thèse sont testées et évaluées avec la simulation et avec des données réelles acquises avec la plateforme expérimentale “véhicule intelligent SetCar” du laboratoire IRTES-SET. / Intelligent vehicles require perception systems with high performance. Usually, a perception system consists of multiple sensors, such as cameras, 2D/3D lidars or radars. The work presented in this Ph.D. thesis concerns several topics in camera- and lidar-based perception for understanding dynamic scenes in urban environments. It is composed of four parts. In the first part, a stereo vision based visual odometry method is proposed by comparing several approaches to image feature detection and feature point association. After a comprehensive comparison, a suitable feature detector and feature point association approach are selected to achieve better stereo visual odometry performance. In the second part, independent moving objects are detected and segmented using the results of visual odometry and the U-disparity image. Then, spatial features are extracted by a kernel-PCA method and classifiers are trained on these spatial features to recognize different types of common moving objects, e.g. pedestrians, vehicles and cyclists. In the third part, an extrinsic calibration method between a 2D lidar and a stereoscopic system is proposed.
This method solves the extrinsic calibration problem by placing a common calibration chessboard in front of the stereoscopic system and the 2D lidar, and by considering the geometric relationship between the cameras of the stereoscopic system. The calibration method also integrates sensor noise models and Mahalanobis distance optimization for more robustness. Finally, dynamic occupancy grid mapping is proposed based on a 3D reconstruction of the environment, obtained from stereovision and lidar data separately and then jointly. An improved occupancy grid map is obtained by estimating the pitch angle between the ground plane and the stereoscopic system. The moving object detection and recognition results (from the first and second parts) are incorporated into the occupancy grid map to add semantic meaning. All the proposed methods are tested and evaluated in simulation and on real data acquired with the experimental platform “intelligent vehicle SetCar” of the IRTES-SET laboratory.
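The occupancy grid construction in the last part can be illustrated with a minimal counting grid that bins 2D sensor returns into cells around the vehicle. This numpy sketch is a deliberate simplification for illustration; real dynamic occupancy grids also model free space, uncertainty, and cell state over time:

```python
import numpy as np

def occupancy_grid(points_xy, resolution=0.5, size=10.0):
    """Bin sensor returns (x, y in metres, vehicle at the grid centre) into
    a counting grid; cells with at least one hit are marked occupied."""
    n = int(size / resolution)
    grid = np.zeros((n, n), dtype=int)
    half = size / 2.0
    for x, y in points_xy:
        i = int((x + half) / resolution)   # column index from x
        j = int((y + half) / resolution)   # row index from y
        if 0 <= i < n and 0 <= j < n:      # ignore returns outside the map
            grid[j, i] += 1
    return grid > 0

# Two nearby returns fall in the same cell; a third lands elsewhere:
hits = [(1.2, 0.3), (1.3, 0.4), (-4.0, -4.9)]
g = occupancy_grid(hits)
print(g.sum())  # number of occupied cells
```

Fusing stereo and lidar in this representation amounts to binning both point sources into the same grid, with per-sensor weights reflecting their respective noise models.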
|
78 |
Analýza vlastností stereokamery ZED ve venkovním prostředí / Analysis of ZED stereocamera in outdoor environment
Svoboda, Ondřej January 2019 (has links)
This master's thesis focuses on analyzing the ZED stereo camera in an outdoor environment. ZEDfu visual odometry is compared with commonly used methods such as GPS and wheel odometry. The thesis also includes an analysis of SLAM in a changing outdoor environment. Simultaneous localization and mapping in RTAB-Map was processed separately with SIFT and BRISK descriptors. The aim of this thesis is to analyze the behaviour of the ZED camera in an outdoor environment for future implementation in mobile robotics.
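Comparing ZEDfu visual odometry against a GPS or wheel-odometry track typically comes down to a trajectory error metric. The sketch below computes a simple absolute trajectory error (RMSE) after aligning only the starting points; it is an illustrative stand-in, since a full evaluation would also fit rotation and scale (e.g. with a Umeyama alignment):

```python
import numpy as np

def ate_rmse(estimated, reference):
    """Absolute trajectory error (RMSE) between an odometry track and a
    reference (e.g. GPS) track, after removing the initial offset only."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(reference, dtype=float)
    est = est - est[0] + ref[0]  # align starting points (no rotation fit)
    return float(np.sqrt(np.mean(np.sum((est - ref) ** 2, axis=1))))

# Straight-line reference vs. an estimate with a constant offset and wobble:
ref = [(0, 0), (1, 0), (2, 0), (3, 0)]
est = [(5, 5), (6, 5.1), (7, 4.9), (8, 5)]
print(round(ate_rmse(est, ref), 4))
```

Because the constant offset is removed during alignment, only the residual wobble contributes to the score, which is exactly the behaviour wanted when comparing drift rather than absolute georeferencing.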
|
79 |
Cooperative Navigation of Fixed-Wing Micro Air Vehicles in GPS-Denied Environments
Ellingson, Gary James 05 November 2019 (has links)
Micro air vehicles have recently gained popularity due to their potential as autonomous systems. Their future impact, however, will depend in part on how well they can navigate in GPS-denied and GPS-degraded environments. In response to this need, this dissertation investigates a potential solution for GPS-denied operations called relative navigation. The method utilizes keyframe-to-keyframe odometry estimates and their covariances in a global back end that represents the global state as a pose graph. The back end is able to effectively represent nonlinear uncertainties and incorporate opportunistic global constraints. The GPS-denied research community has, for the most part, neglected to consider fixed-wing aircraft. This dissertation enables fixed-wing aircraft to utilize relative navigation by accounting for their sensing requirements. The development of an odometry-like, front-end, EKF-based estimator that utilizes only a monocular camera and an inertial measurement unit is presented. The filter uses the measurement model of the multi-state-constraint Kalman filter and regularly performs relative resets in coordination with keyframe declarations. In addition to the front-end development, a method is provided to account for front-end velocity bias in the back-end optimization. Finally, a method is presented for enabling multiple vehicles to improve navigational accuracy by cooperatively sharing information. Modifications to the relative navigation architecture are presented that enable decentralized, cooperative operations amidst temporary communication dropouts. The proposed framework also includes the ability to incorporate inter-vehicle measurements and utilizes a new concept called the coordinated reset, which is necessary for optimizing the cooperative odometry and improving localization. Each contribution is demonstrated through simulation and/or hardware flight testing. Simulation and Monte-Carlo testing are used to show the expected quality of the results.
Hardware flight-test results show the front-end estimator performance, several back-end optimization examples, and cooperative GPS-denied operations.
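A core ingredient of a back end that consumes odometry estimates *and their covariances* is information-weighted fusion of two independent estimates of the same quantity, for example one's own odometry edge and a cooperatively shared estimate of it. This numpy fragment is a generic textbook sketch of that fusion step, not the dissertation's optimizer:

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Information-weighted fusion of two independent Gaussian estimates of
    the same quantity: invert to information form, add, and invert back."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    P1, P2 = np.asarray(P1, float), np.asarray(P2, float)
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)          # fused covariance is always tighter
    x = P @ (I1 @ x1 + I2 @ x2)         # covariance-weighted mean
    return x, P

# Own odometry edge vs. a cooperative estimate of the same edge (2D position):
x, P = fuse([1.0, 0.0], np.diag([0.04, 0.04]),
            [1.2, 0.2], np.diag([0.04, 0.04]))
print(np.round(x, 3))  # equal confidence, so the fused mean is the average
```

With unequal covariances the fused estimate leans toward the more confident source, which is why carrying honest covariances on every shared edge matters for cooperative localization.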
|
80 |
Enabling Autonomous Operation of Micro Aerial Vehicles Through GPS to GPS-Denied Transitions
Jackson, James Scott 11 November 2019 (has links)
Micro aerial vehicles and other autonomous systems have the potential to truly transform life as we know it; however, much of the potential of autonomous systems remains unrealized because reliable navigation is still an unsolved problem with significant challenges. This dissertation presents solutions to many aspects of autonomous navigation. First, it presents ROSflight, a software and hardware architecture that allows for rapid prototyping and experimentation of autonomy algorithms on MAVs with lightweight, efficient flight control. Next, this dissertation presents improvements to the state of the art in optimal control of quadrotors by utilizing the error-state formulation frequently used in state estimation. It is shown that performing optimal control directly over the error state results in a vastly more computationally efficient system than competing methods while also dealing with the non-vector rotation components of the state in a principled way. In addition, real-time robust flight planning is considered with a method to navigate cluttered, potentially unknown scenarios with real-time obstacle avoidance. Robust state estimation is a critical component of reliable operation, and this dissertation focuses on improving the robustness of visual-inertial state estimation in a filtering framework by extending the state of the art to include better modeling and sensor fusion. Further, this dissertation takes concepts from the visual-inertial estimation community and applies them to tightly coupled GNSS-visual-inertial state estimation. This method is shown to provide significantly more reliable state estimation than visual-inertial or GNSS-inertial state estimation alone in a hardware experiment, through a GNSS to GNSS-denied transition flying under a building and back out into open sky. Finally, this dissertation explores a novel method to combine measurements from multiple agents into a coherent map.
Traditional approaches to this problem attempt to solve for the positions of multiple agents at specific times along their trajectories. This dissertation instead solves the problem in a relative context, resulting in a much more robust approach that is able to handle much greater initial error than traditional approaches.
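The error-state formulation mentioned above rests on a minimal parameterization of rotation error: the difference between two unit quaternions is mapped to a 3-vector through the log map of their relative rotation. The following sketch is illustrative only (Hamilton (w, x, y, z) convention assumed; the helper names are invented here) and implements that "box-minus" operator:

```python
import numpy as np

def q_mul(a, b):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def q_conj(q):
    """Conjugate (inverse for unit quaternions)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def box_minus(q, q_ref):
    """Minimal 3-vector rotation error between two unit quaternions,
    via the log map of the relative rotation q_ref^{-1} * q."""
    dq = q_mul(q_conj(q_ref), q)
    if dq[0] < 0:               # take the short way around
        dq = -dq
    v = dq[1:]
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return np.zeros(3)
    return 2.0 * np.arctan2(nv, dq[0]) * v / nv

# A 10 degree rotation about z, measured relative to the identity:
ang = np.deg2rad(10.0)
q = np.array([np.cos(ang / 2), 0.0, 0.0, np.sin(ang / 2)])
err = box_minus(q, np.array([1.0, 0.0, 0.0, 0.0]))
print(np.round(np.rad2deg(err), 6))
```

Controlling or estimating over this 3-vector instead of the 4-component quaternion avoids the unit-norm constraint entirely, which is the source of the computational savings the dissertation reports.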
|