41 |
Monocular Visual Odometry for Underwater Navigation : An examination of the performance of two methods. Voisin-Denoual, Maxime, January 2018
This thesis examines two methods for monocular visual odometry, FAST + KLT and ORB-SLAM2, in the case of underwater environments. This is done by implementing and testing the methods on different underwater datasets. The results for FAST + KLT provide no evidence that this method is effective in underwater settings. However, the results for ORB-SLAM2 indicate that good performance is possible when the method is properly tuned and provided with good camera calibration. Still, there remain challenges related to, for example, sand-bottom environments and scale estimation in monocular setups. The conclusion is therefore that ORB-SLAM2 is the more promising of the two methods tested for underwater monocular visual odometry.
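For orientation, the sketch below (an illustration, not the thesis code) shows the FAST segment test that underlies a "FAST + KLT" front end: a pixel is a corner if enough contiguous pixels on a surrounding circle are all brighter or all darker than the center. The threshold and image here are invented for the example, and the KLT tracking stage is a separate step, not shown.

```python
# Illustrative sketch of the FAST-n segment test. A pixel is a corner if at
# least n contiguous pixels on a 16-pixel circle of radius 3 are all brighter
# or all darker than the center by a threshold t.

# Offsets (dy, dx) of the 16 circle pixels, in circular order.
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_fast_corner(img, y, x, t=20, n=9):
    center = img[y][x]
    # +1 = brighter than center by t, -1 = darker, 0 = similar.
    labels = [1 if img[y + dy][x + dx] > center + t
              else -1 if img[y + dy][x + dx] < center - t
              else 0
              for dy, dx in CIRCLE]
    # Longest run of equal non-zero labels; doubling the list handles wrap-around.
    best = run = prev = 0
    for lab in labels + labels:
        run = run + 1 if lab != 0 and lab == prev else (1 if lab != 0 else 0)
        prev = lab
        best = max(best, run)
    return best >= n

# Synthetic 20x20 image: dark background with a bright square in one quadrant.
img = [[100 if r >= 10 and c >= 10 else 0 for c in range(20)] for r in range(20)]
```

On this synthetic image, the square's corner at (10, 10) passes the test, while a flat interior pixel at (5, 5) and a pixel on the straight edge at (10, 15) are rejected, which is exactly the behavior that makes FAST output trackable corners rather than edges.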
|
42 |
LDD: Learned Detector and Descriptor of Points for Visual Odometry. Aksjonova, Jevgenija, January 2018
Simultaneous localization and mapping is an important problem in robotics that can be solved using visual odometry -- the process of estimating ego-motion from subsequent camera images. In turn, visual odometry systems rely on point matching between different frames. This work presents a novel method for matching key-points by applying neural networks to point detection and description. Traditionally, point detectors are used to select good key-points (such as corners), and these key-points are then matched using features extracted with descriptors. In this work, however, a descriptor is trained to match points densely, and a detector is then trained to predict which points are more likely to be matched correctly with the descriptor. This information is further used to select good key-points. The results of this project show that this approach can lead to more accurate results compared to model-based methods.
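As background for the matching step described above, the following sketch (an assumption about the matching stage, not the thesis code) shows mutual nearest-neighbor matching, a standard way to turn descriptor distances into point correspondences: a pair is kept only if each descriptor is the other's nearest neighbor.

```python
def l2sq(a, b):
    # Squared Euclidean distance between two descriptor vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mutual_nearest_neighbors(desc_a, desc_b):
    """Keep only pairs (i, j) where i and j are each other's nearest neighbor."""
    nn_ab = [min(range(len(desc_b)), key=lambda j: l2sq(a, desc_b[j])) for a in desc_a]
    nn_ba = [min(range(len(desc_a)), key=lambda i: l2sq(b, desc_a[i])) for b in desc_b]
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```

The mutual check discards one-sided matches, which is why dense descriptors still yield a manageable set of reliable correspondences.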
|
43 |
Event-Based Visual SLAM : An Explorative Approach. Rideg, Johan, January 2023
Simultaneous Localization And Mapping (SLAM) is an important topic within the field of robotics, aiming to localize an agent in an unknown or partially known environment while simultaneously mapping the environment. The ability to perform robust SLAM is especially important in hazardous environments such as natural disasters, firefighting and space exploration, where human exploration may be too dangerous or impractical. In recent years, neuromorphic cameras have become commercially available. This new type of sensor does not output conventional frames but instead an asynchronous stream of events at microsecond resolution, and is capable of capturing details in complex lighting scenarios where a standard camera would be either under- or overexposed, making neuromorphic cameras a promising solution in situations where standard cameras struggle. This thesis explores a set of different approaches to virtual frames, a frame-based representation of events, in the context of SLAM. UltimateSLAM, a project fusing events, grayscale frames and IMU data, is investigated using virtual frames of fixed and varying frame rate, both with and without motion compensation. The resulting trajectories are compared to the trajectories produced when using grayscale frames, and the numbers of detected and tracked features are compared. We also use a traditional visual SLAM project, ORB-SLAM, to investigate Gaussian-weighted virtual frames and grayscale frames reconstructed from the event stream using a recurrent network model. While virtual frames can be used for SLAM, the event camera is not a plug-and-play sensor and requires a good choice of parameters when constructing virtual frames, relying on pre-existing knowledge of the scene.
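To make the virtual-frame idea concrete, here is a minimal sketch (my illustration, not the thesis implementation) of one common construction: accumulating a fixed number of events into a per-pixel polarity sum. The thesis's fixed- and varying-rate variants and motion compensation go beyond this, but the basic representation looks like the following.

```python
def virtual_frame(events, width, height, count):
    """Accumulate the first `count` events into a per-pixel polarity sum.

    events: iterable of (t, x, y, p) tuples with polarity p in {-1, +1}.
    Using a fixed event count (rather than a fixed time window) adapts the
    effective frame rate to how much is happening in the scene.
    """
    frame = [[0] * width for _ in range(height)]
    for k, (t, x, y, p) in enumerate(events):
        if k >= count:
            break
        frame[y][x] += p
    return frame
```

The resulting 2D array can then be fed to conventional frame-based feature detectors, which is what makes pipelines like the ones compared in the thesis possible at all.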
|
44 |
ESA ExoMars Rover PanCam System Geometric Modeling and Evaluation. Li, Ding, 14 May 2015
No description available.
|
45 |
Simultaneous Three-Dimensional Mapping and Geolocation of Road Surface. Li, Diya, 23 October 2018
This thesis presents a simultaneous 3D mapping and geolocation of road surface technique that combines local road surface mapping and global camera localization. The local road surface is generated by structure from motion (SfM) with multiple views and optimized by bundle adjustment (BA). A system is developed for the global reconstruction of the 3D road surface. Using this system, the proposed technique globally reconstructs the 3D road surface by estimating the global camera pose with an Adaptive Extended Kalman Filter (AEKF) and integrating it with local road surface reconstruction techniques. The proposed AEKF-based technique uses image shift as a prior, and the camera pose is corrected with sparse low-accuracy Global Positioning System (GPS) data and a digital elevation map (DEM). The AEKF adaptively updates the covariance of the uncertainties so that the estimation works well in environments with varying uncertainty. The image capturing system is designed so that the camera frame rate is dynamically controlled by the vehicle speed, read from on-board diagnostics (OBD), to capture continuous data; the effects of the moving vehicle's shadow are removed from the images with a Random Sample Consensus (RANSAC) algorithm. The proposed technique is tested in both simulation and field experiments, and compared with similar previous work. The results show that the proposed technique achieves better accuracy than the conventional Extended Kalman Filter (EKF) method and a smaller translation error than other similar works. / Master of Science / This thesis presents a simultaneous three-dimensional (3D) mapping and geolocation of road surface technique that combines local road surface mapping and global camera localization. The local road surface is reconstructed by image processing techniques with optimization. The designed system globally reconstructs the 3D road surface by estimating global camera poses using the proposed Adaptive Extended Kalman Filter (AEKF)-based method and integrating them with the local road surface reconstruction technique. The camera pose estimate uses image shift as a prior and is corrected with sparse low-accuracy Global Positioning System (GPS) data and a digital elevation map (DEM). The final 3D road surface map with geolocation is generated by combining the local road surface mapping and global localization results. The proposed technique is tested in both simulation and field experiments, and compared with similar previous work. The results show that it achieves better accuracy than the conventional Extended Kalman Filter (EKF) method and a smaller translation error than other similar works.
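To illustrate the predict-and-correct structure described above, here is a deliberately scalar sketch (my illustration; the thesis's AEKF operates on a full camera pose with GPS and DEM corrections): a motion prior advances the state, and an absolute measurement corrects it, with the measurement variance adapted from the recent innovation sequence, one common heuristic for an adaptive filter.

```python
def kf_predict(x, P, u, Q):
    # Propagate a scalar state with a motion prior u (e.g. an image-shift
    # displacement); the uncertainty P grows by the process noise Q.
    return x + u, P + Q

def kf_update_adaptive(x, P, z, R_min, innovations, window=5):
    # Fuse a noisy absolute measurement z (e.g. a sparse GPS fix). The
    # measurement variance is re-estimated from the recent innovations
    # (innovation-based adaptation); R_min is a floor on the variance.
    v = z - x                        # innovation
    innovations.append(v)
    recent = innovations[-window:]
    R = max(R_min, sum(e * e for e in recent) / len(recent) - P)
    K = P / (P + R)                  # Kalman gain
    return x + K * v, (1 - K) * P
```

Because R is re-estimated online, the filter trusts the GPS less when innovations are large, which is the behavior the abstract attributes to adapting "the covariance of the uncertainties".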
|
46 |
Semi-supervised learning for joint visual odometry and depth estimation. Papadopoulos, Kyriakos, January 2024
Autonomous driving has seen huge interest and improvements in the last few years. Two important functions of autonomous driving are depth and visual odometry estimation. Depth estimation refers to determining the distance from the camera to each point in the scene captured by the camera, while visual odometry refers to the estimation of ego-motion using images recorded by the camera. The algorithm presented by Zhou et al. [1] is a completely unsupervised algorithm for depth and ego-motion estimation. This thesis sets out to minimize ambiguity and enhance the performance of that algorithm [1]. The purpose of the algorithm is to estimate the depth map given an image from a camera attached to the agent, and the ego-motion of the agent; in the case of this thesis, the agent is a vehicle. The algorithm lacks the ability to make predictions at true scale in both depth and ego-motion; said differently, it suffers from scale ambiguity. Two extensions of the method were developed by changing the loss function of the algorithm and supervising the ego-motion. Both methods show a remarkable improvement in performance and reduced ambiguity, utilizing only ego-motion ground truth data, which is significantly easier to access than depth ground truth data.
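The scale ambiguity mentioned above can be made concrete with a small sketch (my illustration, not the thesis method): a monocular system recovers translations only up to a global scale, and given ego-motion ground truth that scale has the closed-form least-squares solution below. The thesis instead builds the supervision into the loss, but the same quantity is what evaluation protocols typically align.

```python
def align_scale(pred, gt):
    """Least-squares scale s minimizing sum ||s * pred_i - gt_i||^2 over
    corresponding translation vectors. Monocular VO recovers translation
    only up to such a global scale; ego-motion ground truth resolves it."""
    num = sum(p * g for pv, gv in zip(pred, gt) for p, g in zip(pv, gv))
    den = sum(p * p for pv in pred for p in pv)
    return num / den
```

Setting the derivative of the squared error to zero gives s = (Σ pred·gt) / (Σ pred·pred), so a single pass over the trajectory suffices.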
|
47 |
Visual odometry: comparing a stereo and a multi-camera approach. Pereira, Ana Rita, 25 July 2017
The purpose of this project is to implement, analyze and compare visual odometry approaches to help with the localization task in autonomous vehicles. The stereo visual odometry algorithm Libviso2 is compared with a proposed omnidirectional multi-camera approach. The proposed method consists of performing monocular visual odometry on all cameras individually and selecting the best estimate through a voting scheme involving all cameras. The omnidirectionality of the vision system allows the part of the surroundings richest in features to be used in the relative pose estimation. Experiments are carried out using Bumblebee XB3 and Ladybug 2 cameras, fixed on the roof of a vehicle. The voting process of the proposed omnidirectional multi-camera method leads to some improvements relative to the individual monocular estimates. However, stereo visual odometry provides considerably more accurate results.
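One simple way to realize the voting scheme described above is consensus voting, sketched below as an assumption (the thesis's actual voting criterion and tolerance may differ): each camera votes for every other camera whose motion estimate lies within a tolerance of its own, and the estimate with the most votes wins.

```python
def vote_best_camera(estimates, tol=0.1):
    """Return the index of the camera whose motion estimate gathers the most
    votes, where each camera votes for every other estimate within `tol`
    of its own. Ties resolve to the lowest index. `estimates` holds one
    scalar motion parameter per camera (e.g. a yaw rate)."""
    def votes(i):
        return sum(1 for j, e in enumerate(estimates)
                   if j != i and abs(e - estimates[i]) <= tol)
    return max(range(len(estimates)), key=votes)
```

An outlying camera (one facing a feature-poor part of the scene) collects no votes and is ignored, which is the robustness the omnidirectional setup is after.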
|
48 |
Visual odometry and mapping in natural environments for arbitrary camera motion models. Terzakis, George, January 2016
This is a thesis on outdoor monocular visual SLAM in natural environments. The techniques proposed herein aim at estimating the camera pose and the 3D geometrical structure of the surrounding environment. This problem statement was motivated by the GPS-denied scenario for a sea-surface vehicle developed at Plymouth University named Springer. The algorithms proposed in this thesis are mainly adapted to Springer's operating environments, so that the vehicle can navigate using a vision-based localization system when GPS is not available; such environments include estuarine areas, forests and the occasional semi-urban territory. The research objectives are constrained versions of long-standing problems in the fields of multiple view geometry and mobile robotics. The research proposes new techniques or improves existing ones for problems such as scene reconstruction, relative camera pose recovery and filtering, always in the context of the aforementioned landscapes (i.e., rivers, forests, etc.). Although visual tracking is paramount for the generation of data point correspondences, this thesis focuses primarily on the geometric aspect of the problem as well as the probabilistic framework in which the optimization of pose and structure estimates takes place. Besides algorithms, the deliverables of this research include the respective implementations and test data in the form of a software library and a dataset containing footage of estuarine regions taken from a boat, along with synchronized sensor logs. This thesis is not the final analysis of vision-based navigation. It merely proposes various solutions for the localization problem of a vehicle navigating in natural environments, either on land or on the surface of the water. Although these solutions can provide position and orientation estimates when GPS is not available, they have limitations, and there is still a vast new world of ideas to be explored.
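As a concrete example of the scene-reconstruction sub-problem mentioned above, here is a sketch (my illustration, not the thesis algorithm) of midpoint triangulation: given a matched point seen from two camera poses, the 3D point is taken halfway between the closest points of the two viewing rays.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: the 3D point halfway between the closest
    points of two viewing rays (camera center c, bearing direction d).
    Assumes the rays are not parallel (non-zero determinant)."""
    a, b, cc = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = [p - q for p, q in zip(c1, c2)]
    d, e = dot(w, d1), dot(w, d2)
    det = b * b - a * cc
    s = (d * cc - b * e) / det      # parameter along ray 1
    t = (d * b - a * e) / det       # parameter along ray 2
    p1 = [p + s * q for p, q in zip(c1, d1)]
    p2 = [p + t * q for p, q in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]
```

The two parameters come from setting the derivatives of the squared ray-to-ray distance to zero, a 2x2 linear system; full pipelines refine such initial points with bundle adjustment.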
|
50 |
A Robust Synthetic Basis Feature Descriptor Implementation and Applications Pertaining to Visual Odometry, Object Detection, and Image Stitching. Raven, Lindsey Ann, 05 December 2017
Feature detection and matching is an important step in many object tracking and detection algorithms. This paper discusses methods to improve upon previous work on the SYnthetic BAsis feature descriptor (SYBA) algorithm, which describes and compares image features in an efficient and discrete manner. SYBA overlays synthetic basis images on a feature region of interest (FRI) to generate binary numbers that uniquely describe the feature contained within the FRI. These binary numbers are then compared against the feature values in subsequent images for matching. However, in a non-ideal environment the accuracy of the feature matching suffers due to variations in image scale and rotation. This paper introduces a new version of SYBA that processes FRIs so that the descriptions developed by SYBA are rotation and scale invariant. To demonstrate the improvements of this robust implementation of SYBA, called rSYBA, this paper includes applications that must cope with high amounts of image variation. The first detects objects along an oil pipeline by transforming and comparing, frame by frame, two surveillance videos recorded at different times. The second plots camera pose for a ground-based vehicle using monocular visual odometry. The third generates panoramic images through image stitching and image transforms. All applications contain large amounts of image variation between frames and therefore require a significant number of correct feature matches to generate acceptable results.
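Binary descriptors like the ones SYBA produces are typically compared with the Hamming distance, sketched below. This generic comparison is my illustration of how binary descriptors are matched, not SYBA's specific construction or metric; the `max_dist` cutoff is an invented parameter.

```python
def hamming(d1, d2):
    # Hamming distance between two binary descriptors packed as Python ints:
    # XOR leaves a 1 bit wherever the descriptors disagree.
    return bin(d1 ^ d2).count("1")

def best_match(query, candidates, max_dist=10):
    """Index of the candidate closest to `query` in Hamming distance,
    or None if no candidate is within `max_dist` differing bits."""
    best_i, best_d = None, max_dist + 1
    for i, cand in enumerate(candidates):
        dist = hamming(query, cand)
        if dist < best_d:
            best_i, best_d = i, dist
    return best_i
```

Because XOR and bit counting are cheap, binary descriptors can be matched far faster than floating-point ones, which is part of what makes descriptors of this family efficient.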
|