31

Learning-based Visual Odometry - A Transformer Approach

Rao, Anantha N 04 October 2021 (has links)
No description available.
32

Vizuální odometrie pro robotické vozidlo Car4 / Visual odometry for robotic vehicle Car4

Szente, Michal January 2017 (has links)
This thesis deals with visual odometry algorithms and their application to the experimental vehicle Car4. The first part surveys prior research in the area, on which the solution is based. The next chapters introduce the theoretical design and ideas behind monocular and stereo visual odometry algorithms. The third part covers the implementation in MATLAB using the Image Processing Toolbox. After tests on real data, the chosen algorithm is applied to the Car4 vehicle under practical indoor and outdoor conditions. The last part summarizes the results and addresses the problems associated with applying visual odometry algorithms.
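For readers wanting a concrete starting point, the following is a minimal sketch of the kind of monocular visual-odometry step such a thesis describes, written here in Python with OpenCV rather than the MATLAB toolbox the author used; all function and parameter choices are illustrative assumptions, not the thesis's implementation.

```python
# Minimal monocular VO step: match features between two frames, then
# recover the relative pose from the essential matrix (scale is unknown).
import cv2
import numpy as np

def vo_step(prev_img, cur_img, K):
    """Estimate relative rotation R and translation direction t between
    two consecutive grayscale frames, given camera intrinsics K."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_img, None)
    kp2, des2 = orb.detectAndCompute(cur_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Essential matrix with RANSAC rejects outlier correspondences.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # For monocular VO the translation is recovered only up to scale.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```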
33

Amélioration des méthodes de navigation vision-inertiel par exploitation des perturbations magnétiques stationnaires de l’environnement / Improving Visual-Inertial Navigation Using Stationary Environmental Magnetic Disturbances

Caruso, David 01 June 2018 (has links)
This thesis addresses the problem of 6-DOF positioning (position and orientation) for augmented reality applications and focuses on solutions based on embedded sensors. Nowadays, the performance reached by visual-inertial navigation systems is starting to be adequate for AR applications. Nonetheless, those systems rely on position corrections from visual sensors at a relatively high frequency to mitigate the rapid drift of low-cost inertial sensors, which becomes a problem when the visual environment is unfavorable. In parallel, recent work by Sysnav has shown that it is feasible to leverage the magnetic field to reduce inertial integration drift, thanks to a new type of low-cost IMU that includes, in addition to the usual accelerometers and gyrometers, an array of magnetometers. Yet this magnetic dead-reckoning approach fails if the non-uniformity and stationarity hypotheses on the magnetic field are not fulfilled in the vicinity of the sensor. We develop a robust dead-reckoning solution that combines information from all of these sources simultaneously: magnetic, visual, and inertial sensors. We present several approaches to the fusion problem, using either filtering or non-linear optimization paradigms, and we develop an efficient way to introduce magnetic error terms into a classical bundle adjustment, inspired by ideas already used for inertial terms. We evaluate the performance of these estimators on real sensor data and demonstrate the benefit of the fusion compared to visual-inertial and magneto-inertial solutions. Finally, we study theoretical properties of the estimators linked to invariance theory.
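As a rough illustration of how a magnetic error term can enter an optimization-based estimator, here is a toy residual under the stationarity hypothesis the abstract mentions; the first-order formulation and all names are hypothetical simplifications, not the thesis's actual model.

```python
# Toy magnetic residual for a bundle-adjustment-style cost: under a
# stationary field, the field predicted at pose j from a first-order
# Taylor expansion around pose i should match the measurement at j.
import numpy as np

def magnetic_residual(B_i, grad_B_i, p_i, p_j, B_j):
    """B_i, B_j   : 3-vectors, field measured at poses i and j
                    (assumed already rotated into the world frame)
    grad_B_i     : 3x3 spatial gradient of the field at pose i,
                   estimated from the magnetometer array
    p_i, p_j     : 3-vectors, positions of poses i and j"""
    B_pred = B_i + grad_B_i @ (p_j - p_i)  # Taylor expansion of B(p)
    return B_pred - B_j                    # zero if stationarity holds
```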
34

Tracking motion in mineshafts : Using monocular visual odometry

Suikki, Karl January 2022 (has links)
LKAB has a mineshaft trolley used for scanning mineshafts. It is suspended down into a mineshaft by wire, scanning the shaft on both descent and ascent using two LiDAR (Light Detection and Ranging) sensors and an IMU (Inertial Measurement Unit) used for tracking the position. With good tracking, the LiDAR scans could be used to create a three-dimensional model of the mineshaft for monitoring, planning, and visualization. Tracking with an IMU alone is unstable, since most IMUs are susceptible to disturbances and drift over time; we therefore strive to track the movement using monocular visual odometry instead. Visual odometry tracks movement from video or images: it is the process of retrieving the pose of a camera by analyzing a sequence of images from one or multiple cameras. The mineshaft trolley is also equipped with a camera that films the descent and ascent, and we aim to use this video for tracking. We present a simple algorithm for visual odometry and test its tracking on multiple datasets: KITTI datasets of traffic scenes accompanied by ground-truth trajectories, mineshaft data originally intended for the trolley operator, and self-captured data accompanied by an approximate ground-truth trajectory. The algorithm is feature based, meaning that it tracks recognizable keypoints across consecutive images. We compare the performance of our algorithm on the different datasets using two feature detection and description systems, ORB and SIFT. We find that the algorithm tracks the movement of the KITTI datasets well with both: against ground truth, the largest total trajectory errors are 3.1 m for ORB and 0.7 m for SIFT over 51.8 m traveled. Visual inspection of the self-captured dataset shows that the algorithm can also perform well on data captured less carefully than KITTI. We find, however, that we cannot track the movement with the current mineshaft data, because the algorithm finds too few matching features in consecutive images, which breaks the pose estimation of the visual odometry. Comparing how ORB and SIFT find features in the mineshaft images, SIFT performs better by finding more features. The mineshaft data was never intended for visual odometry and is therefore not well suited to it. We argue that tracking could work in the mineshaft if the visual conditions are improved, with more even lighting and better camera placement, or if visual odometry is combined with other sensors, such as an IMU, that assist it when it fails.
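The ORB-versus-SIFT comparison described above can be sketched as follows, assuming OpenCV (SIFT requires OpenCV >= 4.4 or the contrib build); the detector settings are illustrative, not the thesis's configuration.

```python
# Count cross-checked feature matches between two consecutive frames for
# ORB and SIFT; too few matches breaks VO pose estimation downstream.
import cv2

def count_matches(img1, img2):
    results = {}
    for name, (det, norm) in {
        "ORB":  (cv2.ORB_create(2000), cv2.NORM_HAMMING),
        "SIFT": (cv2.SIFT_create(),    cv2.NORM_L2),
    }.items():
        kp1, des1 = det.detectAndCompute(img1, None)
        kp2, des2 = det.detectAndCompute(img2, None)
        if des1 is None or des2 is None:
            results[name] = 0  # no features found at all
            continue
        matches = cv2.BFMatcher(norm, crossCheck=True).match(des1, des2)
        results[name] = len(matches)
    return results
```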
35

Direction estimation using visual odometry / Uppskattning av riktning med visuell odometri

Masson, Clément January 2015 (has links)
This Master's thesis tackles the problem of measuring objects' directions from a motionless observation point. A new method is proposed, based on a single rotating camera and requiring knowledge of only two (or more) landmarks' directions. In a first phase, multi-view geometry is used to estimate camera rotations and key elements' directions from a set of overlapping images. In a second phase, the direction of any object can be estimated by resectioning the camera associated with a picture showing this object. A detailed description of the algorithmic chain is given, along with test results on both synthetic data and real images taken with an infrared camera.
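A minimal sketch of the resectioning phase, assuming OpenCV: landmarks with known directions are placed at unit range from the fixed observation point so a generic PnP solver can recover the camera rotation, after which any pixel can be back-projected into a world-frame direction. The thesis's two-landmark minimal solver is specific to the rotation-only case; the generic solver below needs four or more points, so this is a simplification.

```python
# Recover camera orientation by resectioning against known directions,
# then turn any pixel into a world-frame direction (camera at origin).
import cv2
import numpy as np

def camera_rotation_from_landmarks(directions, image_pts, K):
    """directions: Nx3 unit vectors toward known landmarks (N >= 4 for
    cv2.solvePnP); image_pts: Nx2 pixel coordinates of those landmarks."""
    obj_pts = np.asarray(directions, np.float32)  # unit-range 3D points
    img_pts = np.asarray(image_pts, np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
    R, _ = cv2.Rodrigues(rvec)  # world-to-camera rotation; tvec ~ 0 here
    return R

def pixel_to_direction(u, v, K, R):
    # Back-project a pixel into a world-frame unit direction.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    d = R.T @ ray_cam
    return d / np.linalg.norm(d)
```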
36

Study of vehicle localization optimization with visual odometry trajectory tracking / Fusion de données pour la localisation de véhicule par suivi de trajectoire provenant de l'odométrie visuelle

Awang Salleh, Dayang Nur Salmi Dharmiza 19 December 2018 (has links)
With the growing research on Advanced Driver Assistance Systems (ADAS) for Intelligent Transport Systems (ITS), accurate vehicle localization plays an important role in intelligent vehicles. The Global Positioning System (GPS) is widely used, but its accuracy deteriorates and it is susceptible to positioning error due to factors such as constrained environments that weaken the signal. This problem can be addressed by integrating GPS data with additional information from other sensors, and vehicles are increasingly equipped with sensors for ADAS applications. In this research, fusion of GPS with visual odometry (VO) and a digital map is proposed as a low-cost data-fusion solution for improving localization. Given the published work on VO, it is interesting to see how the generated trajectory can further improve vehicle localization. By integrating the VO output with GPS and OpenStreetMap (OSM) data, estimates of the vehicle's position on the map can be obtained. The lateral positioning error is reduced by exploiting lane-distribution information provided by OSM, while the longitudinal positioning is optimized by curve matching between the VO trajectory trail and segmented roads. To assess the system's robustness, the method was validated on KITTI datasets with different common GPS noise models. Several published VO methods were also used to compare the level of improvement after data fusion. Validation results show a significant gain in positioning accuracy, especially in the longitudinal error thanks to the curve-matching technique. The localization performance is on par with Simultaneous Localization and Mapping (SLAM) techniques despite the drift in the VO trajectory input, as the initial heading error from VO is corrected. The employability of the VO trajectory is further extended to a deterministic lane-change detection task, to assist routing services with lane-level directions in navigation. Lane-change detection was performed with a CUSUM test and a curve-fitting technique, achieving 100% successful detection for stereo VO; further work on the detection strategy is still required to determine the vehicle's current lane for lane-level accurate localization. The results obtained with the proposed low-cost data fusion show a strong prospect for using the VO trajectory with OSM information to improve localization performance. Besides providing the VO trajectory, the camera mounted on the vehicle can also serve other image-processing applications that complement the system. Future work is outlined in the last chapter of this thesis.
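The CUSUM-based lane-change detection mentioned above might look like the following sketch, operating on the lateral offset of the VO trajectory; the drift and threshold parameters are hypothetical tuning values, not those of the thesis.

```python
# Two-sided CUSUM over successive lateral displacements (meters):
# sustained drift in one direction accumulates until a lane change
# is flagged, while small zero-mean noise is absorbed by the drift term.
import numpy as np

def cusum_lane_change(lateral_offsets, drift=0.05, threshold=1.5):
    """Return indices (into the step sequence) where a lane change
    is detected."""
    g_pos, g_neg = 0.0, 0.0
    alarms = []
    for k, s in enumerate(np.diff(lateral_offsets)):
        g_pos = max(0.0, g_pos + s - drift)  # accumulates leftward drift
        g_neg = max(0.0, g_neg - s - drift)  # accumulates rightward drift
        if g_pos > threshold or g_neg > threshold:
            alarms.append(k)
            g_pos, g_neg = 0.0, 0.0          # reset after detection
    return alarms
```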
37

Robustness of State-of-the-Art Visual Odometry and SLAM Systems / Robusthet hos moderna Visual Odometry och SLAM system

Mannila, Cassandra January 2023 (has links)
Visual(-Inertial) Odometry (VIO) and Simultaneous Localization and Mapping (SLAM) are hot topics in computer vision today. These technologies have various applications, including robotics, autonomous driving, and virtual reality. They may also be valuable in studying human behavior and navigation through head-mounted visual systems. A complication for SLAM and VIO systems is visual degradation such as motion blur. This thesis evaluates the robustness to motion blur of two open-source state-of-the-art VIO and SLAM systems, Delayed Marginalization Visual-Inertial Odometry (DM-VIO) and ORB-SLAM3. Since no real-world benchmark datasets with varying amounts of motion blur exist today, a semi-synthetic dataset was created by applying a dynamic, trajectory-based motion-blurring technique to an existing dataset, TUM VI. The systems were evaluated in two sensor configurations, Monocular and Monocular-Inertial, using the Root Mean Square (RMS) of the Absolute Trajectory Error (ATE). Based on the findings, DM-VIO is highly influenced by the visual input, and its performance decreases substantially as motion blur increases, regardless of the sensor configuration. In the Monocular setup the performance declines significantly, going from centimeter to decimeter precision; the Monocular-Inertial configuration improves it slightly. ORB-SLAM3 is unaffected by motion blur, performing at centimeter precision, with no significant difference between the sensor configurations. Nevertheless, a stochastic behavior can be noted in ORB-SLAM3 that causes some sequences to deviate from this. In total, ORB-SLAM3 outperforms DM-VIO on all sequences in the semi-synthetic datasets created for this thesis. The code used in this thesis is available at https://github.com/cmannila, along with forked repositories of DM-VIO and ORB-SLAM3.
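The evaluation metric used above is straightforward to state in code; a minimal sketch, assuming the estimated trajectory has already been time-associated and aligned to ground truth (e.g., by Umeyama alignment):

```python
# RMS of the Absolute Trajectory Error (ATE) over aligned positions.
import numpy as np

def ate_rms(est_xyz, gt_xyz):
    """est_xyz, gt_xyz: Nx3 arrays of aligned, time-associated positions."""
    err = np.linalg.norm(est_xyz - gt_xyz, axis=1)  # per-pose error (m)
    return float(np.sqrt(np.mean(err ** 2)))
```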
38

Ego-Motion Estimation of Drones / Positionsestimering för drönare

Ay, Emre January 2017 (has links)
To remove the dependency on external structure for drone positioning in GPS-denied environments, it is desirable to estimate the ego-motion of drones on board. Visual positioning systems have been studied for quite some time, and the literature on the area is abundant. The aim of this project is to investigate the currently available methods and implement a visual odometry system for drones capable of giving continuous estimates with a lightweight solution. To that end, state-of-the-art systems are investigated and a visual odometry system is implemented based on the resulting design decisions. The system is shown to give acceptable estimates.
39

Monocular Visual Odometry for Underwater Navigation : An examination of the performance of two methods / Monokulär visuell odometri för undervattensnavigation : En undersökning av två metoder

Voisin-Denoual, Maxime January 2018 (has links)
This thesis examines two methods for monocular visual odometry, FAST + KLT and ORB-SLAM2, in underwater environments. This is done by implementing and testing the methods on different underwater datasets. The results for FAST + KLT provide no evidence that this method is effective in underwater settings. Results for ORB-SLAM2, however, indicate that good performance is possible when the system is properly tuned and provided with a good camera calibration. Still, challenges remain related to, for example, sand-bottom environments and scale estimation in monocular setups. The conclusion is therefore that ORB-SLAM2 is the more promising of the two methods tested for underwater monocular visual odometry.
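A minimal FAST + KLT front end of the kind examined above, assuming OpenCV: FAST corners detected in the previous frame are tracked into the current frame with pyramidal Lucas-Kanade optical flow. Parameter values are illustrative assumptions.

```python
# FAST detection + KLT tracking between two consecutive grayscale frames;
# returns the point pairs that were tracked successfully.
import cv2
import numpy as np

def fast_klt_track(prev_gray, cur_gray):
    fast = cv2.FastFeatureDetector_create(threshold=25)
    kps = fast.detect(prev_gray, None)
    p0 = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, p0, None,
        winSize=(21, 21), maxLevel=3)       # pyramidal LK
    good = status.ravel() == 1              # keep successfully tracked points
    return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)
```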
40

LDD: Learned Detector and Descriptor of Points for Visual Odometry

Aksjonova, Jevgenija January 2018 (has links)
Simultaneous localization and mapping is an important problem in robotics that can be solved using visual odometry: the process of estimating ego-motion from successive camera images. In turn, visual odometry systems rely on point matching between different frames. This work presents a novel method for matching keypoints by applying neural networks to point detection and description. Traditionally, point detectors are used to select good keypoints (such as corners), and these keypoints are then matched using features extracted by descriptors. In this work, instead, a descriptor is trained to match points densely, and a detector is then trained to predict which points are most likely to be matched correctly by the descriptor. This information is further used to select good keypoints. The results of this project show that the approach can lead to more accurate results than model-based methods.
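The detect-then-match idea described above can be sketched with PyTorch as follows; `detector_net` and `descriptor_net` are hypothetical stand-ins for the trained networks, and the selection and matching logic is a simplification of the thesis's method.

```python
# Select keypoints by predicted matchability, then match their dense
# descriptors to the second image by nearest-neighbour cosine similarity.
import torch

def match_keypoints(img_a, img_b, detector_net, descriptor_net, k=500):
    desc_a = descriptor_net(img_a)           # 1 x C x H x W dense descriptors
    desc_b = descriptor_net(img_b)
    score_a = detector_net(img_a).flatten()  # 1 x 1 x H x W -> HW scores
    idx = score_a.topk(k).indices            # k most matchable pixel indices
    C = desc_a.shape[1]
    da = desc_a.view(C, -1)[:, idx]          # C x k descriptors at keypoints
    db = desc_b.view(C, -1)                  # C x HW dense descriptors
    sim = torch.nn.functional.normalize(da, dim=0).T @ \
          torch.nn.functional.normalize(db, dim=0)
    nn_idx = sim.argmax(dim=1)               # best match per keypoint in img_b
    return idx, nn_idx
```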
