51

Localization of Combat Aircraft at High Altitude using Visual Odometry

Nilsson Boij, Jenny January 2022 (has links)
Most of the navigation systems used in today's aircraft rely on Global Navigation Satellite Systems (GNSS). However, GNSS is not fully reliable: it can, for example, be jammed by attacks on the space or ground segments of the system, or denied in inaccessible areas. To ensure successful navigation, it is therefore of great importance to be able to continuously establish the aircraft's location without relying on external reference systems. Localization is one of many sub-problems in navigation and is the focus of this thesis. This brings us to the field of visual odometry (VO), which involves determining position and orientation with the help of images from one or more camera sensors. To date, most VO systems have been demonstrated on ground vehicles and low-flying multi-rotor systems. This thesis seeks to extend VO to new applications by exploring it in a fairly new context: a fixed-wing piloted combat aircraft, for vision-only pose estimation in applications of extremely large scene depth. A major part of this research work is the data gathering, where the data is collected using the flight simulator X-Plane 11. Three flight routes are flown (a straight line, a curve, and a loop) under two visual conditions: clear weather with daylight, and sunset. The method used in this work is ORB-SLAM3, an open-source library for visual simultaneous localization and mapping (SLAM). It has shown excellent results in previous works and has become a benchmark method in the field of visual pose estimation. ORB-SLAM3 tracks the straight line of 78 km very well at an altitude above 2700 m. The absolute trajectory error (ATE) is 0.072% of the total distance traveled in daylight and 0.11% during sunset. These results are of the same magnitude as ORB-SLAM3 achieves on the EuRoC MAV dataset. For the curved trajectory of 79 km, the ATE is 2.0% and 1.2% of the total distance traveled in daylight and sunset respectively. The longest flight route of 258 km shows the challenges of visual pose estimation: although the system manages to close loops in daylight, its ATE is 3.6%. During sunset the features do not possess enough invariant characteristics to close loops, resulting in an even larger ATE of 14% of the total distance traveled. Hence, to be able to properly rely on vision for localization, more sensor information is needed. Since all aircraft already carry an inertial measurement unit (IMU), future work naturally includes IMU data in the system. Nevertheless, the results from this research show that vision is useful, even at the high altitudes and speeds of a combat aircraft.
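The ATE figures above are reported as a percentage of the total distance traveled. As a rough illustration of how such a number is obtained, the minimal sketch below computes the ATE root-mean-square error over aligned trajectories and normalizes it by path length; the alignment step and sampling assumptions are generic choices, not the thesis's exact evaluation code.

```python
import numpy as np

def ate_percent(gt_xyz, est_xyz):
    """Absolute trajectory error (ATE) as a percentage of distance traveled.

    Assumes both trajectories are time-synchronized, equally sampled
    (N, 3) position arrays in meters, already aligned (e.g., by a
    least-squares rigid-body fit). Generic evaluation sketch, not the
    thesis's exact pipeline.
    """
    errors = np.linalg.norm(gt_xyz - est_xyz, axis=1)   # per-pose error
    ate_rmse = np.sqrt(np.mean(errors ** 2))            # RMSE in meters
    distance = np.sum(np.linalg.norm(np.diff(gt_xyz, axis=0), axis=1))
    return 100.0 * ate_rmse / distance

# On the 78 km straight line, an ATE of 0.072% corresponds to roughly
# a 56 m RMSE over the whole route.
```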
52

Odométrie visuelle directe et cartographie dense de grands environnements à base d'images panoramiques RGB-D / Direct visual odometry and dense large-scale environment mapping from panoramic RGB-D images

Martins, Renato 27 October 2017 (has links)
This thesis is in the context of self-localization and 3D mapping from RGB-D cameras for mobile robots and autonomous systems. We present image alignment and mapping techniques to perform camera localization (tracking), notably for large camera motions or low frame rates. Possible domains of application are the localization of autonomous vehicles, 3D reconstruction of environments, security, and virtual and augmented reality. We propose a consistent localization and 3D dense mapping framework that takes as input a sequence of RGB-D images acquired from a mobile platform. The core of this framework explores and extends the domain of applicability of direct/dense appearance-based image registration methods. Compared with feature-based techniques, direct/dense image registration (or image alignment) techniques are more accurate and allow a more consistent dense representation of the scene. However, they have a smaller domain of convergence and rely on the assumption that the camera motion between images is small.
In the first part of the thesis, we propose two formulations to relax this assumption. Firstly, we describe a fast pose estimation strategy to compute a rough estimate of large motions, based on the normal vectors of the scene surfaces and on the geometric properties between the RGB-D images. This rough estimate can be used to initialize direct registration methods for refinement. Secondly, we propose a direct RGB-D camera tracking method that adaptively exploits the photometric and geometric error properties to improve the convergence of the image alignment. In the second part of the thesis, we propose regularization and fusion techniques to create compact and accurate representations of large-scale environments. The regularization is performed from a segmentation of spherical frames into piecewise patches, using the photometric and geometric information simultaneously to improve the accuracy and consistency of the 3D scene reconstruction. This segmentation is also adapted to tackle the non-uniform resolution of panoramic images. Finally, the regularized frames are combined to build a compact keyframe-based map composed of spherical RGB-D panoramas optimally distributed in the environment. These representations are helpful for autonomous navigation and guiding tasks as they allow constant-time access with limited storage that does not depend on the size of the environment.
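The adaptive combination of photometric and geometric errors can be pictured as a weighted stacking of the two residual types inside the registration cost. The sketch below illustrates only that general pattern; the inverse-scale weighting rule shown is an assumption for illustration, not the adaptive scheme derived in the thesis.

```python
import numpy as np

def combined_residuals(photo_res, geom_res):
    """Stack photometric and geometric residuals into one cost vector.

    photo_res: (N,) intensity differences I_ref - I_warped
    geom_res:  (N,) depth (or point-to-plane) differences
    Weighting each term by the inverse of a robust scale estimate (MAD)
    is an illustrative assumption, not the thesis's exact adaptive rule.
    """
    def mad(r):
        return 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-6

    return np.concatenate([photo_res / mad(photo_res),
                           geom_res / mad(geom_res)])
```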
53

Vers un système de capture du mouvement humain en 3D pour un robot mobile évoluant dans un environnement encombré / Toward a motion capture system in 3D for a mobile robot moving in a cluttered environment

Dib, Abdallah 24 May 2016 (has links)
In this thesis we are interested in designing a mobile robot able to analyze the behavior and movement of a person in an indoor, cluttered environment, for example the home of an elderly person. More precisely, our goal is to equip the robot with visual perception capabilities for human posture, so that it can better handle situations that require understanding the intention of the people it interacts with, detect risk situations such as falls, or analyze the motor skills of the person in its care.
Motion capture in a dynamic and cluttered environment raises several challenges, such as continuously learning the background of the scene and extracting the silhouette, which may be only partially observable when the person is in occluded places. These difficulties make posture tracking a hard task. Most existing methods assume that the scene is static and that the person is always fully visible to the camera; such approaches do not work under realistic conditions. In this thesis we propose a new motion capture system capable of tracking the posture of a person under these real-world conditions. Our approach uses a 3D occupancy grid with a hidden Markov model to continuously learn the evolution of the scene and extract the silhouette of the person; a hierarchical particle filtering algorithm is then used to reconstruct the posture. We also propose a novel occlusion management algorithm able to identify and discard hidden body parts from the pose estimation process. Finally, we provide a new database containing RGB-D images with ground-truth data, in order to establish a new benchmark for the assessment of motion capture systems in a real environment with occlusions. The ground truth is obtained from a high-precision marker-based motion capture system with eight infrared cameras. All data is available online. The second contribution of this thesis is the development of a new visual localization method for an RGB-D camera mounted on a robot moving in a dynamic environment. Indeed, the motion capture system we developed is meant to equip a robot moving through a scene, so estimating the robot's motion is important to guarantee correct silhouette extraction for tracking. The major difficulty of localizing a camera in a dynamic environment is that mobile objects in the scene induce additional motion that generates outlier pixels. These pixels must be excluded from the camera motion estimation process to produce an accurate and precise localization. We thus propose an extension of the dense localization method based on optical flow, which isolates outlier pixels using the RANSAC algorithm.
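For the localization contribution, the key step is separating background pixels from those on moving objects before estimating camera motion. A minimal RANSAC sketch of this idea follows; it uses a deliberately simplified background-flow model (a single dominant 2D translation), whereas the thesis fits the full camera motion to the dense optical flow.

```python
import numpy as np

def background_inliers(flow, n_iters=200, tol=1.0, seed=0):
    """RANSAC split of optical-flow vectors into background/moving-object sets.

    flow: (N, 2) flow vectors at sampled pixels. Simplified model: the
    background is approximated by one dominant 2D translation; the thesis
    instead fits the camera motion itself. Returns a boolean inlier mask.
    """
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(flow), dtype=bool)
    for _ in range(n_iters):
        candidate = flow[rng.integers(len(flow))]            # 1-point hypothesis
        mask = np.linalg.norm(flow - candidate, axis=1) < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```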
54

Camera Motion Estimation for Multi-Camera Systems

Kim, Jae-Hak, Jae-Hak.Kim@anu.edu.au January 2008 (has links)
The estimation of the motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, several issues have been raised about general camera models and multi-camera systems. Using many cameras as a single camera has been studied [60], and the epipolar geometry constraints of general camera models have been theoretically derived. Methods for calibration, including a self-calibration method for general camera models, have been studied [78, 62]. Multi-camera systems are an example of practically implementable general camera models, and they are widely used in many applications nowadays because of both the low cost of digital charge-coupled device (CCD) cameras and the high resolution of multiple images with wide fields of view. To our knowledge, no research has been conducted on the relative motion of multi-camera systems with non-overlapping views to obtain a geometrically optimal solution.

In this thesis, we solve the camera motion problem for multi-camera systems by using linear methods and convex optimization techniques, and we make five substantial and original contributions to the field of computer vision. First, we focus on the problem of translational motion of omnidirectional cameras, which are multi-camera systems, and present a constrained minimization method to obtain robust estimation results. Given known rotation, we show that bilinear and trilinear relations can be used to build a system of linear equations, and singular value decomposition (SVD) is used to solve the equations. Second, we present a linear method that estimates the relative motion of generalized cameras, in particular in the case of non-overlapping views. We also present four types of generalized cameras that can be solved using our proposed modified SVD method. This is the first study to find linear relations for certain types of generalized cameras and to perform experiments using the proposed linear method. Third, we present a linear 6-point method (5 points from the same camera and 1 point from another camera) that estimates the relative motion of multi-camera systems where the cameras have no overlapping views. In addition, we provide theoretical and geometric analyses of multi-camera systems as well as certain critical configurations where the scale of translation cannot be determined. Fourth, we develop a global solution under an L∞ norm error for the relative motion problem of multi-camera systems using second-order cone programming. Finally, we present a fast search method to obtain a global solution under an L∞ norm error for the relative motion problem of multi-camera systems with non-overlapping views, using a branch-and-bound algorithm and linear programming (LP). By testing the feasibility of the LP at an early stage, we reduce the computation time of solving the LP.

We tested the proposed methods in experiments with synthetic and real data. The Ladybug2 camera, for example, was used in the experiments on estimating the translation of omnidirectional cameras and the relative motion of non-overlapping multi-camera systems. These experiments showed that a global solution under an L∞ norm error for the relative motion of multi-camera systems can be achieved.
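Once rotation is known, the linear methods above reduce to solving a homogeneous system built from the bilinear and trilinear relations. The standard SVD step for such systems is sketched below; it is the generic building block only, not the modified SVD tailored to generalized cameras in the thesis.

```python
import numpy as np

def solve_homogeneous(A):
    """Least-squares solution of A @ x = 0 subject to ||x|| = 1.

    The minimizer is the right singular vector associated with the
    smallest singular value of A. Generic building block; the thesis
    uses a modified SVD adapted to generalized cameras.
    """
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]
```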
55

Stereo vision for simultaneous localization and mapping

Brink, Wikus 12 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2012.

Simultaneous localization and mapping (SLAM) is vital for autonomous robot navigation. The robot must build a map of its environment while tracking its own motion through that map. Although many solutions to this intricate problem have been proposed, one of the most prominent issues that still needs to be resolved is to accurately measure and track landmarks over time. In this thesis we investigate the use of stereo vision for this purpose. In order to find landmarks in images we explore the use of two feature detectors: the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF). Both algorithms find salient points in images and calculate a descriptor for each point that is invariant to scale, rotation and illumination. Using the descriptors, we match image features between stereo images and use the geometry of the system to calculate a set of 3D landmark measurements. A Taylor approximation of this transformation is used to derive a Gaussian noise model for the measurements. The measured landmarks are matched to landmarks in a map to find correspondences. We find that this process often matches ambiguous landmarks incorrectly. To find these mismatches we develop a novel outlier detection scheme based on the random sample consensus (RANSAC) framework. We use a similarity transformation for the RANSAC model and derive a probabilistic consensus measure that takes the uncertainties of landmark locations into account. Through simulation and practical tests we find that this method is a significant improvement on the standard approach of using the fundamental matrix.

With accurately identified landmarks we are able to perform SLAM. We investigate the use of three popular SLAM algorithms: EKF SLAM, FastSLAM and FastSLAM 2. EKF SLAM uses a Gaussian distribution to describe the system states and linearizes the motion and measurement equations with Taylor approximations. The two FastSLAM algorithms are based on the Rao-Blackwellized particle filter, which uses particles to describe the robot states and EKFs to estimate the landmark states. FastSLAM 2 uses a refinement process to decrease the size of the proposal distribution and thereby decreases the number of particles needed for accurate SLAM. We test the three SLAM algorithms extensively in a simulation environment and find that all three are capable of very accurate results under the right circumstances. EKF SLAM displays extreme sensitivity to landmark mismatches. FastSLAM, on the other hand, is considerably more robust against landmark mismatches but is unable to describe the six-dimensional state vector required for 3D SLAM. FastSLAM 2 offers a good compromise between efficiency and accuracy, and performs well overall.

In order to evaluate the complete system we test it with real-world data. We find that our outlier detection algorithm is very effective and greatly increases the accuracy of the SLAM systems. We compare results obtained by all three SLAM systems, with both feature detection algorithms, against DGPS ground truth data and achieve accuracies comparable to other state-of-the-art systems. From our results we conclude that stereo vision is viable as a sensor for SLAM.
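The conversion of matched stereo features into 3D landmark measurements follows standard rectified stereo geometry, as in the sketch below; the calibration symbols (focal length, principal point, baseline) are the usual pinhole quantities, assumed here rather than taken from the thesis.

```python
import numpy as np

def triangulate(u_left, u_right, v, f, cx, cy, baseline):
    """3D landmark from one matched feature in a rectified stereo pair.

    u_left, u_right: column coordinates of the match in each image (px);
    v: shared row (px); f: focal length (px); (cx, cy): principal point;
    baseline: camera separation (m). Standard pinhole stereo sketch.
    """
    d = u_left - u_right          # disparity; must be positive
    Z = f * baseline / d          # depth
    X = (u_left - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])
```

A first-order Taylor expansion of this mapping is what propagates pixel-level matching noise into the Gaussian measurement model mentioned above.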
56

Field-based measurement of hydrodynamics associated with engineered in-channel structures : the example of fish pass assessment

Kriechbaumer, Thomas January 2016 (has links)
The construction of fish passes has been a longstanding measure to improve river ecosystem status by ensuring the passability of weirs, dams and other in-channel structures for migratory fish. Many fish passes have a low biological effectiveness because unsuitable hydrodynamic conditions hinder fish from rapidly detecting the pass entrance. There has been a need for techniques to quantify the hydrodynamics surrounding fish pass entrances, in order to identify those passes that require enhancement and to improve the design of new passes. This PhD thesis presents the development of a methodology for the rapid, spatially continuous quantification of near-pass hydrodynamics in the field. The methodology involves moving-vessel Acoustic Doppler Current Profiler (ADCP) measurements to quantify the 3-dimensional water velocity distribution around fish pass entrances. The approach presented in this thesis is novel because it integrates a set of techniques to make ADCP data robust against errors associated with the environmental conditions near engineered in-channel structures. These techniques provide solutions to (i) ADCP compass errors from magnetic interference, (ii) bias in water velocity data caused by spatial flow heterogeneity, (iii) accurate ADCP positioning in locales with a constrained line of sight to navigation satellites, and (iv) accurate and cost-effective sensor deployment following pre-defined sampling strategies. The effectiveness and transferability of the methodology were evaluated at three fish pass sites covering conditions of low, medium and high discharge. The methodology outputs enabled a detailed quantitative characterisation of the fish pass attraction flow and its interaction with other hydrodynamic features. The outputs are suitable for formulating novel indicators of hydrodynamic fish pass attractiveness, and they revealed the need to refine traditional fish pass design guidelines.
57

From low level perception towards high level action planning

Reich, Simon Martin 30 October 2018 (has links)
No description available.
58

SiameseVO-Depth: odometria visual através de redes neurais convolucionais siamesas / SiameseVO-Depth: visual odometry through siamese neural networks

Santos, Vinícius Araújo 11 October 2018 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES

Visual Odometry (VO) is an important process in image-based robot navigation. The standard methods in this field rely on good feature matching between frames, and feature detection in images is a well-studied problem within Computer Vision. Such techniques are subject to illumination problems, noise and poor feature localization accuracy. Thus, 3D information about a scene may mitigate the uncertainty of the features in images. Deep Learning techniques show great results when dealing with common difficulties of VO, such as low illumination conditions and bad feature selection. While Visual Odometry and Deep Learning have been connected previously, as far as this work's literature review could determine, no technique applies Siamese Convolutional Networks to the depth information given by disparity maps. This work aims to fill this gap by applying Deep Learning to estimate egomotion from disparity maps in a siamese architecture. The SiameseVO-Depth architecture is compared to state-of-the-art VO techniques using the KITTI Vision Benchmark Suite. The results reveal that the chosen methodology succeeds in estimating Visual Odometry, although it does not outperform the state-of-the-art techniques. The proposed approach involves fewer steps than standard VO techniques, since it is an end-to-end solution, and it demonstrates a new application of Deep Learning to Visual Odometry.
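The abstract does not spell out the SiameseVO-Depth layers, but the general siamese pattern it names (two weight-sharing convolutional branches over consecutive disparity maps, fused into a pose regression head) can be sketched as follows; every layer size here is an illustrative assumption, not the thesis's architecture.

```python
import torch
import torch.nn as nn

class SiameseVOSketch(nn.Module):
    """Minimal siamese pattern for disparity-based visual odometry.

    Two weight-sharing convolutional branches encode consecutive
    disparity maps; the concatenated codes are regressed to a 6-DoF
    relative pose (3 translation + 3 rotation parameters). All layer
    sizes are illustrative assumptions.
    """

    def __init__(self):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(1, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(2 * 32 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 6),
        )

    def forward(self, disp_t, disp_t1):
        # Both frames pass through the same branch (shared weights).
        code = torch.cat([self.branch(disp_t), self.branch(disp_t1)], dim=1)
        return self.head(code)
```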
59

Caméras 3D pour la localisation d'un système mobile en environnement urbain / 3D cameras for the localization of a mobile platform in urban environment

Mittet, Marie-Anne 15 June 2015 (has links)
The aim of this thesis is to develop a new kind of mobile localization system, composed of three 3D cameras of the Kinect type and an additional fisheye camera. The algorithmic solution is based on visual odometry and calculates the trajectory of the mobile platform in real time from the data provided by the 3D cameras. The originality of the method lies in the exploitation of orthoimages created from the point clouds acquired in real time by the three cameras. The successive positions of the platform, and hence its trajectory, are derived from the differences between successive orthoimages.
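The orthoimage at the heart of this pipeline is a top-down rasterization of each point cloud, against which successive frames can then be registered. A minimal sketch follows, with grid resolution and image size chosen arbitrarily for illustration.

```python
import numpy as np

def orthoimage(points, cell=0.05, size=256):
    """Rasterize a point cloud into a top-down height image (orthoimage).

    points: (N, 3) array with x and y in the ground plane, z up, origin
    at the sensor. cell: grid resolution (m); size: image side in cells.
    Keeps the maximum height per cell. Parameters are illustrative.
    """
    img = np.full((size, size), np.nan)
    cols = np.floor(points[:, 0] / cell).astype(int) + size // 2
    rows = np.floor(points[:, 1] / cell).astype(int) + size // 2
    ok = (rows >= 0) & (rows < size) & (cols >= 0) & (cols < size)
    for r, c, z in zip(rows[ok], cols[ok], points[ok, 2]):
        if np.isnan(img[r, c]) or z > img[r, c]:
            img[r, c] = z
    return img
```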
60

Odhad rychlosti vozidla ze záznamu on-board kamery / Vehicle Speed Estimation from On-Board Camera Recording

Janíček, Kryštof January 2018 (has links)
This thesis describes the design and implementation of a system for vehicle speed estimation from an on-board camera recording. Speed estimation is based on optical flow estimation and a convolutional neural network. The designed system is able to estimate speed with an average error of 20% on the created data set when the actual speed is greater than 35 kilometers per hour.
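The optical-flow front end of such a system can be sketched with a standard dense flow routine; using Farneback flow with common OpenCV defaults is an assumption here, since the abstract does not state which flow estimator or settings the thesis uses.

```python
import cv2
import numpy as np

def flow_input(prev_bgr, next_bgr):
    """Dense optical flow between two consecutive on-board frames.

    Returns an (H, W, 2) flow field usable as CNN input. Farneback
    flow with common default parameters is an assumption, not the
    thesis's documented configuration.
    """
    prev_g = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_g = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        prev_g, next_g, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return flow.astype(np.float32)
```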
