51

Tracking motion in mineshafts : Using monocular visual odometry

Suikki, Karl January 2022 (has links)
LKAB has a mineshaft trolley used for scanning mineshafts. It is suspended down into a mineshaft by wire, scanning the mineshaft on both descent and ascent using two LiDAR (Light Detection And Ranging) sensors and an IMU (Inertial Measurement Unit) used for tracking the position. With good tracking, one could use the LiDAR scans to create a three-dimensional model of the mineshaft for monitoring, planning and visualization in the future. Tracking with an IMU alone is very unstable, since most IMUs are susceptible to disturbances and drift over time; we strive to track the movement using monocular visual odometry instead. Visual odometry tracks movement based on video or images: it is the process of retrieving the pose of a camera by analyzing a sequence of images from one or multiple cameras. The mineshaft trolley is also equipped with a camera that films the descent and ascent, and we aim to use this video for tracking. We present a simple algorithm for visual odometry and test its tracking on multiple datasets: KITTI datasets of traffic scenes accompanied by their ground-truth trajectories, mineshaft data originally intended for the trolley operator, and self-captured data accompanied by an approximate ground-truth trajectory. The algorithm is feature-based, meaning that it tracks recognizable keypoints across consecutive images. We compare the performance of our algorithm on the different datasets using two different feature detection and description systems, ORB and SIFT. We find that our algorithm tracks the movement of the KITTI datasets well using both ORB and SIFT, with largest total trajectory errors of 3.1 m for ORB and 0.7 m for SIFT over 51.8 m moved, compared against the ground-truth trajectories. The tracking of the self-captured dataset shows by visual inspection that the algorithm can perform well on data that has not been as carefully captured as the KITTI datasets. We do, however, find that we cannot track the movement with the current data from the mineshaft, because the algorithm finds too few matching features in consecutive images, breaking the pose estimation of the visual odometry. Comparing how ORB and SIFT find features in the mineshaft images, we find that SIFT performs better by detecting more features. The mineshaft data was never intended for visual odometry and is therefore not well suited for this purpose. We argue that tracking could work in the mineshaft if the visual conditions are improved, through more even lighting and better camera placement, or if the visual odometry is combined with other sensors, such as an IMU, that assist it when it fails.
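The pipeline described above can be illustrated with a short sketch of one visual-odometry step using OpenCV's ORB or SIFT, from feature matching to essential-matrix pose recovery. This is a minimal illustration of the general technique, not the thesis's implementation; parameter values are assumptions.

```python
import cv2
import numpy as np

def vo_step(img_prev, img_curr, K, use_sift=False):
    """Estimate the relative pose (R, t up to scale) between two frames."""
    detector = cv2.SIFT_create() if use_sift else cv2.ORB_create(2000)
    kp1, des1 = detector.detectAndCompute(img_prev, None)
    kp2, des2 = detector.detectAndCompute(img_curr, None)

    # ORB has binary descriptors (Hamming norm); SIFT has float descriptors (L2).
    norm = cv2.NORM_L2 if use_sift else cv2.NORM_HAMMING
    matches = cv2.BFMatcher(norm, crossCheck=True).match(des1, des2)
    if len(matches) < 8:
        return None, None  # too few matches breaks the pose estimation

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, then cheirality check to recover R and t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # t is a unit direction; monocular scale is unobservable
```

Chaining the per-step poses yields the camera trajectory, up to the global scale that monocular visual odometry cannot observe on its own.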
52

Direction estimation using visual odometry / Uppskattning av riktning med visuell odometri

Masson, Clément January 2015 (has links)
This Master thesis tackles the problem of measuring objects' directions from a motionless observation point. A new method based on a single rotating camera, requiring the knowledge of only two (or more) landmarks' directions, is proposed. In a first phase, multi-view geometry is used to estimate camera rotations and key elements' directions from a set of overlapping images. Then, in a second phase, the direction of any object can be estimated by resectioning the camera associated with a picture showing this object. A detailed description of the algorithmic chain is given, along with test results on both synthetic data and real images taken with an infrared camera.
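As an illustration of the rotation-recovery idea, the absolute orientation of a camera observing two or more landmarks with known directions can be computed as a solution to Wahba's problem via SVD (the Kabsch algorithm). This is a generic sketch under assumed inputs, not the thesis's exact algorithmic chain.

```python
import numpy as np

def rotation_from_landmarks(bearings_cam, directions_world):
    """Find R (camera -> world) such that directions_world[i] ≈ R @ bearings_cam[i].

    Needs at least two non-parallel unit direction pairs.
    """
    B = np.asarray(bearings_cam, dtype=float)      # (N, 3) bearings in camera frame
    D = np.asarray(directions_world, dtype=float)  # (N, 3) known world directions
    H = D.T @ B                                    # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # enforce det(R) = +1
    return U @ S @ Vt

# Once R is known, the world direction of any image point u is
# R @ normalize(K_inv @ [u_x, u_y, 1]), which is the resectioning step above.
```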
53

Pose Estimation using Genetic Algorithm with Line Extraction using Sequential RANSAC for a 2-D LiDAR

Kumat, Ashwin Dharmesh January 2021 (has links)
No description available.
54

Study of vehicle localization optimization with visual odometry trajectory tracking / Fusion de données pour la localisation de véhicule par suivi de trajectoire provenant de l'odométrie visuelle

Awang Salleh, Dayang Nur Salmi Dharmiza 19 December 2018 (has links)
With the growing research on Advanced Driver Assistance Systems (ADAS) for Intelligent Transport Systems (ITS), accurate vehicle localization plays an important role in intelligent vehicles. The Global Positioning System (GPS) is widely used, but its accuracy deteriorates and it is susceptible to positioning error due to factors such as restrictive environments that weaken the signal. This problem can be addressed by integrating the GPS data with additional information from other sensors, and vehicles are nowadays increasingly equipped with sensors for ADAS applications. In this research, fusion of GPS with visual odometry (VO) and a digital map is proposed as a low-cost data-fusion solution to localization improvement. Building on published VO methods, it is interesting to study how the generated trajectory can further improve vehicle localization. By integrating the VO output with GPS and OpenStreetMap (OSM) data, estimates of the vehicle position on the map can be obtained. The lateral positioning error is reduced by utilizing lane distribution information provided by OSM, while the longitudinal positioning is optimized with curve matching between the VO trajectory trail and segmented roads. To assess the system's robustness, the method was validated on KITTI datasets with different common GPS noise models. Several published VO methods were also used to compare the level of improvement after data fusion. Validation results show that positioning accuracy improved significantly, especially for the longitudinal error with the curve-matching technique. The localization performance is on par with Simultaneous Localization and Mapping (SLAM) techniques despite the drift in the VO trajectory input. The applicability of the VO trajectory is further extended to a deterministic task, lane-change detection, to assist the routing service with lane-level directions in navigation. Lane-change detection was conducted with a CUSUM and curve-fitting technique that resulted in 100% successful detection for stereo VO. Further study of the detection strategy is, however, required to obtain the current true lane of the vehicle for lane-level accurate localization. With the results obtained from the proposed low-cost data fusion for localization, we see a bright prospect of utilizing the VO trajectory with information from OSM to improve the performance. In addition to providing the VO trajectory, the camera mounted on the vehicle can also be used for other image-processing applications to complement the system. Future work is outlined in the last chapter of this thesis.
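The CUSUM-based lane-change detection mentioned above can be sketched as a two-sided cumulative sum over the lateral offset of the VO trajectory. The drift and threshold parameters are illustrative assumptions, not the values used in the thesis.

```python
import numpy as np

def cusum_lane_change(lateral_offset, drift=0.05, threshold=1.5):
    """Return indices where a sustained lateral shift (lane change) is flagged."""
    g_pos, g_neg = 0.0, 0.0
    events = []
    for k in range(1, len(lateral_offset)):
        step = lateral_offset[k] - lateral_offset[k - 1]
        g_pos = max(0.0, g_pos + step - drift)   # accumulates leftward drift
        g_neg = max(0.0, g_neg - step - drift)   # accumulates rightward drift
        if g_pos > threshold or g_neg > threshold:
            events.append(k)
            g_pos, g_neg = 0.0, 0.0              # reset after a detection
    return events
```

The two one-sided sums make the detector sensitive to sustained shifts in either direction while ignoring small in-lane oscillations below the drift rate.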
55

Visual-Inertial Odometry for Autonomous Ground Vehicles

Burusa, Akshay Kumar January 2017 (has links)
Monocular cameras are prominently used for estimating the motion of Unmanned Aerial Vehicles. With growing interest in autonomous vehicle technology, the use of monocular cameras in ground vehicles is on the rise. This is especially favorable for localization in situations where the Global Navigation Satellite System (GNSS) is unreliable, such as open-pit mining environments. However, most approaches based on monocular cameras suffer from scale ambiguity, and ground vehicles pose a greater difficulty due to high speeds and fast movements. This thesis aims to estimate the scale of monocular vision data by using an inertial sensor in addition to the camera. It is shown that the simultaneous estimation of pose and scale in autonomous ground vehicles is possible through the fusion of visual and inertial sensors in an Extended Kalman Filter (EKF) framework. However, the convergence of the scale estimate is sensitive to several factors, including the initialization error. An accurate estimate of scale allows accurate estimation of pose, which facilitates the localization of ground vehicles in the absence of GNSS and provides a reliable fall-back option.
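A heavily simplified one-dimensional sketch of the fusion idea: the IMU acceleration drives a metric prediction, while the monocular VO position, known only up to scale, enters as the measurement z = p / scale. The state layout and noise values are assumptions for illustration, not the thesis's filter design.

```python
import numpy as np

def ekf_scale_step(x, P, acc, z_vo, dt, q=1e-3, r=1e-2):
    """One predict/update cycle. State x = [p, v, scale]; P is its 3x3 covariance."""
    p, v, s = x
    # Predict with IMU acceleration (metric units); scale is modeled as constant.
    x_pred = np.array([p + v * dt + 0.5 * acc * dt**2, v + acc * dt, s])
    F = np.array([[1, dt, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
    P = F @ P @ F.T + q * np.eye(3)
    # Update with the monocular VO position, which is metric position / scale.
    p, v, s = x_pred
    h = p / s
    H = np.array([[1 / s, 0.0, -p / s**2]])  # Jacobian of h wrt [p, v, scale]
    S = H @ P @ H.T + r
    K = (P @ H.T) / S                        # Kalman gain, shape (3, 1)
    x_new = x_pred + (K * (z_vo - h)).ravel()
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```

Because the measurement model is nonlinear in the scale, the filter linearizes around the current estimate, which is one reason convergence is sensitive to initialization, as the abstract notes.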
56

Ego-Motion Estimation of Drones / Positionsestimering för drönare

Ay, Emre January 2017 (has links)
To remove the dependency on external structure for drone positioning in GPS-denied environments, it is desirable to estimate the ego-motion of drones on board. Visual positioning systems have been studied for quite some time, and the literature on the area is extensive. The aim of this project is to investigate the currently available methods and implement a visual odometry system for drones capable of giving continuous estimates with a lightweight solution. To that end, state-of-the-art systems are investigated, and a visual odometry system is implemented based on the resulting design decisions. The system is shown to give acceptable estimates.
57

Monocular Visual Odometry for Underwater Navigation : An examination of the performance of two methods / Monokulär visuell odometri för undervattensnavigation : En undersökning av två metoder

Voisin-Denoual, Maxime January 2018 (has links)
This thesis examines two methods for monocular visual odometry, FAST + KLT and ORB-SLAM2, in the case of underwater environments. This is done by implementing and testing the methods on different underwater datasets. The results for FAST + KLT provide no evidence that this method is effective in underwater settings. However, results for ORB-SLAM2 indicate that good performance is possible when it is properly tuned and provided with good camera calibration. Still, there remain challenges related to, for example, sand-bottom environments and scale estimation in monocular setups. The conclusion is therefore that ORB-SLAM2 is the more promising of the two methods tested for underwater monocular visual odometry.
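The FAST + KLT front end examined above can be sketched in a few lines of OpenCV: detect FAST corners, then track them into the next frame with pyramidal Lucas-Kanade optical flow. Parameter values here are illustrative assumptions.

```python
import cv2
import numpy as np

def fast_klt_track(img_prev, img_curr):
    """Track FAST corners from img_prev into img_curr; return matched point pairs."""
    fast = cv2.FastFeatureDetector_create(threshold=20)
    kps = fast.detect(img_prev, None)
    if not kps:
        return None, None  # nothing to track in this frame

    pts_prev = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
    # Pyramidal Lucas-Kanade optical flow from the previous to the current frame.
    pts_curr, status, _err = cv2.calcOpticalFlowPyrLK(
        img_prev, img_curr, pts_prev, None, winSize=(21, 21), maxLevel=3)

    good = status.ravel() == 1  # keep only successfully tracked points
    return pts_prev[good], pts_curr[good]
```

The matched pairs can then feed the same essential-matrix pose recovery as any feature-based pipeline; low-texture scenes such as sand bottoms yield few trackable corners, consistent with the findings above.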
58

Robust Graph SLAM in Challenging GNSS Environments Using Lidar Odometry

Sundström, Jesper, Åström, Alfred January 2023 (has links)
Localization is a fundamental part of achieving fully autonomous vehicles. A localization system needs to constantly provide accurate information about the position of the vehicle, and failure could lead to catastrophic consequences. Global Navigation Satellite Systems (GNSS) can supply accurate positional measurements but are susceptible to disturbances and outages in environments such as indoors, in tunnels, or near tall buildings. A common method called simultaneous localization and mapping (SLAM) creates a spatial map and simultaneously determines the position of a robot or vehicle, and utilizing different sensors can increase the accuracy and robustness of such a system if used correctly. This thesis uses a graph-based version of SLAM, called graph SLAM, which stores previous measurements in a factor graph, making it possible to adjust the trajectory and map as new information is gained. The best position estimate is obtained by optimizing the graph, which represents the log-likelihood of the data. To treat GNSS outliers in a graph SLAM system, robust optimization techniques can be used, and this thesis investigates two such techniques: realizing, reversing, recovering (RRR) and dynamic covariance scaling (DCS). High-end GNSS and lidar sensors are used to gather a dataset on a suburban public road. The position and orientation of the vehicle are inferred from the dataset using graph SLAM together with the robust techniques in three scenarios containing disturbances in the form of multipath, Gaussian noise, and outages. A parameter study examines the free parameter Φ in DCS and the p-value in the RRR method. The localization performance varies less when changing the free parameter in RRR than in DCS: RRR is consistent for most values of p, while DCS shows greater variation across values of Φ. In the tested cases, the results indicate that Φ should be set to 2.5 for the most consistent localization across all states, and that RRR performs best with a p-value of 0.85, since lower values led to too many discarded measurements and decreased performance. DCS outperforms RRR across the tested scenarios, but further testing is needed to determine whether RRR is better suited for handling larger errors.
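The DCS technique compared above admits a compact closed form: each factor's error is scaled by s = min(1, 2Φ / (Φ + χ²)), where χ² is the factor's squared error, so consistent measurements keep full weight while gross outliers are smoothly down-weighted (following Agarwal et al.; integration into a factor-graph optimizer is omitted here). A minimal sketch with the study's best-performing Φ:

```python
def dcs_scale(chi2, phi=2.5):
    """DCS scaling factor for a factor with squared error chi2 (phi = 2.5 as in the study)."""
    return min(1.0, 2.0 * phi / (phi + chi2))

# An inlier with chi2 = 0.5 keeps full weight: dcs_scale(0.5) == 1.0.
# A gross GNSS outlier with chi2 = 100 is suppressed: dcs_scale(100) ≈ 0.049.
```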
59

Light-weighted Deep Learning for LiDAR and Visual Odometry Fusion in Autonomous Driving

Zhang, Dingnan 20 December 2022 (has links)
No description available.
60

LDD: Learned Detector and Descriptor of Points for Visual Odometry

Aksjonova, Jevgenija January 2018 (has links)
Simultaneous localization and mapping is an important problem in robotics that can be solved using visual odometry, the process of estimating ego-motion from subsequent camera images. In turn, visual odometry systems rely on point matching between different frames. This work presents a novel method for matching key-points by applying neural networks to point detection and description. Traditionally, point detectors are used to select good key-points (such as corners), which are then matched using features extracted by descriptors. In this work, however, a descriptor is first trained to match points densely, and a detector is then trained to predict which points are most likely to be matched correctly by the descriptor. This information is further used to select good key-points. The results of this project show that this approach can lead to more accurate results than model-based methods.
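One plausible way the learned detector and descriptor interact at matching time is mutual-nearest-neighbour matching of dense descriptors, gated by the detector score. The shapes and threshold below are assumptions for illustration; this sketch covers only inference-time matching, not the training described above.

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b, score_a, min_score=0.5):
    """desc_*: (N, D) L2-normalized descriptors; score_a: (N,) detector scores."""
    sim = desc_a @ desc_b.T                 # cosine similarity matrix
    nn_ab = sim.argmax(axis=1)              # best match in B for each point in A
    nn_ba = sim.argmax(axis=0)              # best match in A for each point in B
    idx_a = np.arange(len(desc_a))
    mutual = nn_ba[nn_ab] == idx_a          # keep only mutual nearest neighbours
    keep = mutual & (score_a > min_score)   # gate by the learned detector score
    return np.stack([idx_a[keep], nn_ab[keep]], axis=1)  # (M, 2) index pairs
```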
