1 |
Monocular Odometry using Optical Flow on Autonomous Guided Vehicles / Monokulär Odometri med Optiskt Flöde för Autonoma Guidade Fordon. Kunalic, Asmir, January 2024 (has links)
This thesis investigates the use of monocular computer vision for odometry, specifically employing optical flow techniques. The goal is to develop and evaluate a visual odometry system that accurately estimates the trajectory and rotation of a camera in real time. The system uses a camera and the Lucas-Kanade method to capture high-frame-rate images, detect and track features within these images, and calculate the camera's motion from the movement of those features. Odometry is essential for estimating the position and orientation of moving objects, such as robots or vehicles, over time. Traditional methods rely on wheel encoders and Inertial Measurement Units (IMUs), but visual odometry leverages visual data to improve accuracy and robustness, avoiding wheel slippage and load-induced changes in wheel diameter. Furthermore, a visual odometry system like the one used in this project is not affected by occlusion. In this project, a camera was set up and calibrated, followed by the implementation of feature detection using the Shi-Tomasi corner detection algorithm. The Lucas-Kanade method was then applied to estimate optical flow, and an affine transformation was used to compute the translation and rotation of the camera. The system's performance was evaluated in terms of accuracy, computational efficiency, and robustness to noise. The results demonstrate that the visual odometry system can track the camera's motion with a high degree of accuracy, albeit with limitations in speed. The discussion highlights potential applications in autonomous navigation and areas for future improvement, such as integrating additional sensors and enhancing the feature detection algorithm.
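The core of the pipeline described above, tracking features between frames with Lucas-Kanade, can be illustrated by the single-window least-squares step of that method. The sketch below is a minimal NumPy illustration, not the thesis code: window size, test geometry, and function names are assumptions, and a practical system would use a pyramidal, multi-point tracker.

```python
import numpy as np

def lucas_kanade_flow(I0, I1, x, y, win=9):
    """Estimate optical flow (u, v) at pixel (x, y) by solving the
    Lucas-Kanade least-squares system A d = b over a local window."""
    h = win // 2
    # Spatial gradients of the first frame (np.gradient returns d/drow, d/dcol).
    Iy, Ix = np.gradient(I0)
    # Temporal gradient between the two frames.
    It = I1 - I0
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    # Least-squares solution: the window's dominant translation (u, v).
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d

# Illustrative check with a Gaussian blob shifted by a known sub-pixel amount.
X, Y = np.meshgrid(np.arange(48.0), np.arange(48.0))
blob = lambda cx, cy: np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * 16.0))
u, v = lucas_kanade_flow(blob(24.0, 24.0), blob(24.3, 24.0), 28, 24)
```

In a full system this estimate would be computed at every Shi-Tomasi corner, and an affine transform fitted to the resulting point correspondences to recover camera translation and rotation.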
|
2 |
Ολοκληρωμένο σύστημα οδομετρίας για κινούμενα ρομπότ με χρήση μετρήσεων από πολλαπλούς αισθητήρες / Integrated robotic odometry system using sensor data fusion. Κελασίδη, Ελένη, 10 June 2009 (has links)
The aim of this diploma thesis is the development of an integrated dead-reckoning (odometry) system that calculates the distance travelled by a mobile platform (robot) by means of computer vision techniques. In traditional measurement systems, significant deviations arise between the real and the computed position of the robot. The goal of the integrated odometry setup built here is to minimize these deviations.
|
3 |
Performance Improvements for Lidar-based Visual Odometry. Dong, Hang, 22 November 2013 (has links)
Recent studies have demonstrated that images constructed from lidar reflectance information exhibit superior robustness to lighting changes. However, due to the scanning nature of the lidar and assumptions made in previous implementations, data acquired during continuous vehicle motion suffer from geometric motion distortion and can subsequently result in poor metric visual odometry (VO) estimates, even over short distances (e.g., 5-10 m). The first part of this thesis revisits the measurement timing assumption made in previous systems, and proposes a frame-to-frame VO estimation framework based on a pose-interpolation scheme that explicitly accounts for the exact acquisition time of each intrinsic, geometric feature measurement. The second part of this thesis investigates a novel method of lidar calibration that can be applied without consideration of the internal structure of the sensor. Both methods are validated using experimental data collected from a planetary analogue environment with a real scanning laser rangefinder.
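The pose-interpolation scheme above assigns each lidar feature its exact acquisition time within the sweep instead of one timestamp per frame. A 2D caricature of that idea, linearly interpolating a planar pose across the scan and de-skewing a point accordingly, is sketched below; it is illustrative only (the thesis works with full 3D poses), and all names and values are assumptions.

```python
import numpy as np

def interp_pose(t, t0, pose0, t1, pose1):
    """Linearly interpolate a planar pose (x, y, theta) between two
    keyframe times; heading is interpolated along the shortest arc."""
    a = (t - t0) / (t1 - t0)
    x = (1 - a) * pose0[0] + a * pose1[0]
    y = (1 - a) * pose0[1] + a * pose1[1]
    dth = np.arctan2(np.sin(pose1[2] - pose0[2]), np.cos(pose1[2] - pose0[2]))
    return np.array([x, y, pose0[2] + a * dth])

def deskew_point(p_sensor, t, t0, pose0, t1, pose1):
    """Re-express a scan point captured at time t in the frame at t0,
    compensating for vehicle motion during the sweep."""
    pose = interp_pose(t, t0, pose0, t1, pose1)
    c, s = np.cos(pose[2]), np.sin(pose[2])
    p_world = np.array([[c, -s], [s, c]]) @ p_sensor + pose[:2]
    c0, s0 = np.cos(pose0[2]), np.sin(pose0[2])
    R0 = np.array([[c0, -s0], [s0, c0]])
    return R0.T @ (p_world - pose0[:2])

# A point seen 1 m ahead at mid-sweep, while the vehicle drives 1 m forward.
p = deskew_point(np.array([1.0, 0.0]), 0.5,
                 0.0, np.array([0.0, 0.0, 0.0]),
                 1.0, np.array([1.0, 0.0, 0.0]))
```

Without this correction, every point in the sweep would be treated as if captured at a single instant, producing exactly the geometric motion distortion the abstract describes.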
|
5 |
Online Monocular SLAM : Rittums. Persson, Mikael, January 2014 (has links)
A classic Computer Vision task is the estimation of a 3D map from a collection of images. This thesis explores the online simultaneous estimation of camera poses and map points, often called Visual Simultaneous Localisation and Mapping [VSLAM]. In the near future the use of visual information by autonomous cars is likely, since driving is a vision dominated process. For example, VSLAM could be used to estimate the position of the car in relation to objects of interest, such as the road, other cars and pedestrians. Aimed at the creation of a real-time, robust, loop closing, single camera SLAM system, the properties of several state-of-the-art VSLAM systems and related techniques are studied. The system goals cover several important, if difficult, problems, which makes a solution widely applicable. This thesis makes two contributions: A rigorous qualitative analysis of VSLAM methods and a system designed accordingly. A novel tracking by matching scheme is proposed, which, unlike the trackers used by many similar systems, is able to deal better with forward camera motion. The system estimates general motion with loop closure in real time. The system is compared to a state-of-the-art monocular VSLAM algorithm and found to be similar in speed and performance.
|
6 |
Lifelong localization of robots. Krejčí, Tomáš, January 2018 (has links)
This work presents a novel technique for lifelong localization of robots. It performs a tight fusion of GPS and the Multi-State Constraint Kalman Filter (MSCKF), a visual-inertial odometry method for robot localization. Experiments show that the proposed algorithm achieves better position accuracy than either GPS or the MSCKF alone. Additionally, the experiments demonstrate that the algorithm operates reliably when the GPS signal is highly corrupted by noise, or even in the presence of substantial GPS outages.
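A tightly coupled GPS/MSCKF filter is far beyond a snippet, but the basic operation of folding a GPS position fix into a filter state is a standard Kalman measurement update, sketched generically below. This is not the thesis's filter; the two-dimensional state layout and identity covariances are assumptions for illustration.

```python
import numpy as np

def fuse_gps(x, P, z, R):
    """One Kalman measurement update fusing a GPS position fix z with
    covariance R into the state estimate x with covariance P.
    Measurement model: z = H x + noise, with H selecting position."""
    H = np.eye(len(z), len(x))
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ (z - H @ x)              # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P     # reduced uncertainty
    return x_new, P_new

# Prior at the origin, GPS fix at (1, 1), equal confidence in both.
x_new, P_new = fuse_gps(np.zeros(2), np.eye(2), np.array([1.0, 1.0]), np.eye(2))
```

With equal prior and measurement covariance, the update lands halfway between the odometry prediction and the GPS fix, which is the intuition behind the accuracy gain the abstract reports.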
|
7 |
Vizuální odometrie pro robotické vozidlo Car4 / Visual odometry for the robotic vehicle Car4. Szente, Michal, January 2017 (has links)
This thesis deals with visual odometry algorithms and their application on the experimental vehicle Car4. The first part surveys prior research in this area, on which the solution process is based. The next chapters introduce the theoretical design and ideas behind monocular and stereo visual odometry algorithms. The third part covers the implementation in MATLAB using the Image Processing Toolbox. After tests on real data, the chosen algorithm is applied to the Car4 vehicle under practical indoor and outdoor conditions. The last part summarizes the results of the work and addresses the problems associated with applying visual odometry algorithms.
|
8 |
Robustness of State-of-the-Art Visual Odometry and SLAM Systems / Robusthet hos moderna Visual Odometry och SLAM system. Mannila, Cassandra, January 2023 (has links)
Visual(-Inertial) Odometry (VIO) and Simultaneous Localization and Mapping (SLAM) are hot topics in computer vision today. These technologies have various applications, including robotics, autonomous driving, and virtual reality. They may also be valuable for studying human behavior and navigation through head-mounted visual systems. A complication for SLAM and VIO systems is visual degradation such as motion blur. This thesis evaluates the robustness to motion blur of two open-source state-of-the-art VIO and SLAM systems, namely Delayed Marginalization Visual-Inertial Odometry (DM-VIO) and ORB-SLAM3. No real-world benchmark datasets with varying amounts of motion blur exist today; instead, a semi-synthetic dataset was created by applying a dynamic trajectory-based motion blurring technique to an existing dataset, TUM VI. The systems were evaluated in two sensor configurations, Monocular and Monocular-Inertial, using the Root Mean Square (RMS) of the Absolute Trajectory Error (ATE). Based on the findings, DM-VIO is highly influenced by the visual input, and its performance decreases substantially as motion blur increases, regardless of sensor configuration. In the Monocular setup, performance declines significantly, from centimeter precision to decimeter; the Monocular-Inertial configuration improves it slightly. ORB-SLAM3 is unaffected by motion blur, maintaining centimeter precision, with no significant difference between the sensor configurations. Nevertheless, a stochastic behavior can be noted in ORB-SLAM3 that can cause some sequences to deviate from this. In total, ORB-SLAM3 outperforms DM-VIO on all sequences in the semi-synthetic dataset created for this thesis.
The code used in this thesis is available on GitHub at https://github.com/cmannila, along with forked repositories of DM-VIO and ORB-SLAM3.
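The semi-synthetic blurring idea above, rendering longer exposures from a known camera trajectory, can be caricatured by averaging copies of a frame translated along the motion path. The toy sketch below assumes purely translational, integer-pixel motion and is not the thesis's trajectory-based renderer; all names are placeholders.

```python
import numpy as np

def simulate_motion_blur(frame, shifts):
    """Approximate a longer exposure by averaging copies of the frame
    translated along sampled points of the motion path.
    `shifts` is a list of integer (dy, dx) offsets along the trajectory."""
    acc = np.zeros_like(frame, dtype=float)
    for dy, dx in shifts:
        acc += np.roll(frame, (dy, dx), axis=(0, 1))
    return acc / len(shifts)

# A single bright pixel smeared over a 3-sample horizontal trajectory.
frame = np.zeros((11, 11))
frame[5, 5] = 1.0
blurred = simulate_motion_blur(frame, [(0, 0), (0, 1), (0, 2)])
```

Longer `shifts` lists emulate longer exposure times, which is how varying amounts of motion blur can be injected into an otherwise fixed dataset.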
|
9 |
Standalone and embedded stereo visual odometry based navigation solution. Chermak, Lounis, January 2015 (has links)
This thesis investigates techniques for, and designs, an autonomous stereo-vision-based navigation sensor to improve stereo visual odometry for navigation in unknown environments, in particular in a space-mission context, which imposes challenging constraints on algorithm development and hardware. For instance, the Global Positioning System (GPS) is not available in this context, so a navigation solution cannot rely on similar external sources of information. The problem is addressed by designing an intelligent perception-sensing device that provides precise outputs for absolute and relative 6-degree-of-freedom (DOF) positioning. This is achieved using only images from calibrated stereo cameras, optionally coupled with an inertial measurement unit (IMU), while fulfilling real-time processing requirements; no prior knowledge about the environment is assumed. Robotic navigation has motivated research into complementary areas such as stereo vision, visual motion estimation, optimisation, and data fusion, and several contributions have been made in these areas. Firstly, an efficient feature detection, stereo matching, and feature tracking strategy based on the Kanade-Lucas-Tomasi (KLT) feature tracker is proposed to form the base of the visual motion estimation. Secondly, to cope with extreme illumination changes, a high dynamic range (HDR) imaging solution is investigated and a comparative assessment of feature tracking performance is conducted. Thirdly, a two-view local bundle adjustment scheme based on trust-region minimisation is proposed for precise visual motion estimation. Fourthly, a novel KLT feature tracker using IMU information is integrated into the visual odometry pipeline.
Finally, a smart standalone stereo visual/IMU navigation sensor was designed, integrating an innovative combination of hardware with the novel software solutions proposed above. Thanks to this balanced combination of hardware and software, the system processes up to 750 initial features at 5 fps at a resolution of 1280x960, to our knowledge the highest resolution reached in real time for visual odometry applications. In addition, the visual odometry accuracy of the algorithm matches the state of the art, with less than 1% relative error in the estimated trajectories.
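For a rectified stereo pair like the one this sensor uses, the geometry that turns a feature match into a 3D measurement reduces to depth from disparity. The helper below is a textbook sketch, not the thesis code; the focal length and baseline in the example are invented values.

```python
import numpy as np

def stereo_depth(f_px, baseline_m, disparity_px):
    """Depth Z of a feature matched across a rectified stereo pair:
    Z = f * B / d, with focal length f in pixels, baseline B in metres,
    and disparity d in pixels."""
    return f_px * baseline_m / np.asarray(disparity_px, dtype=float)

# Hypothetical rig: f = 700 px, 12 cm baseline, 7 px disparity.
z = stereo_depth(700.0, 0.12, 7.0)
```

Note how depth resolution degrades with distance: halving the disparity doubles the depth, so sub-pixel matching accuracy matters most for far-away features.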
|
10 |
Using Deep Learning Semantic Segmentation to Estimate Visual Odometry. Unknown Date (has links)
In this research, image segmentation and visual odometry estimation in real time are addressed, and two main contributions are made to this field. First, a new image segmentation and classification algorithm named DilatedU-NET is introduced. This deep-learning-based algorithm processes seven frames per second and achieves over 84% accuracy on the Cityscapes dataset. Second, a new method to estimate visual odometry is introduced. Using the KITTI benchmark dataset as a baseline, the visual odometry error was larger than could be accurately measured; however, the robust frame rate made up for this, processing 15 frames per second. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
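The "Dilated" in DilatedU-NET presumably refers to dilated (atrous) convolutions, which enlarge a kernel's receptive field without adding parameters: a k-by-k kernel with dilation rate r spans k + (k - 1)(r - 1) pixels per axis. A tiny helper, purely illustrative and not from the thesis:

```python
def effective_kernel_size(k, r):
    """Per-axis span of a k x k convolution kernel with dilation rate r:
    the taps spread apart, covering k + (k - 1) * (r - 1) pixels."""
    return k + (k - 1) * (r - 1)

# A 3x3 kernel at dilation rates 1, 2, and 4.
spans = [effective_kernel_size(3, r) for r in (1, 2, 4)]
```

Stacking such layers grows context exponentially while keeping the parameter count fixed, which is why dilation suits dense prediction tasks like segmentation on Cityscapes.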
|