61

LDD: Learned Detector and Descriptor of Points for Visual Odometry

Aksjonova, Jevgenija January 2018 (has links)
Simultaneous localization and mapping is an important problem in robotics that can be solved using visual odometry, the process of estimating ego-motion from subsequent camera images. In turn, visual odometry systems rely on point matching between different frames. This work presents a novel method for matching key-points by applying neural networks to point detection and description. Traditionally, point detectors are used to select good key-points (such as corners), which are then matched using features extracted with descriptors. In this work, however, a descriptor is first trained to match points densely, and a detector is then trained to predict which points are most likely to be matched correctly by the descriptor. This information is further used to select good key-points. The results of this project show that this approach can lead to more accurate results than model-based methods.
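The select-then-match pipeline described in the abstract can be sketched in a few lines (a minimal NumPy illustration under assumed inputs: a learned per-pixel matchability score map and already-extracted, L2-normalized descriptor arrays, standing in for the thesis's actual networks):

```python
import numpy as np

def select_keypoints(score_map, k=500):
    """Pick the k pixel locations the detector scores as most matchable."""
    h, w = score_map.shape
    idx = np.argsort(score_map.ravel())[-k:]
    return np.stack([idx // w, idx % w], axis=1)  # (k, 2) row/col coordinates

def match_descriptors(desc_a, desc_b):
    """Mutual nearest-neighbour matching of L2-normalised descriptors."""
    sim = desc_a @ desc_b.T                 # cosine-similarity matrix
    nn_ab = sim.argmax(axis=1)              # best match in B for each A
    nn_ba = sim.argmax(axis=0)              # best match in A for each B
    keep = nn_ba[nn_ab] == np.arange(len(desc_a))  # keep mutual agreements
    return np.stack([np.nonzero(keep)[0], nn_ab[keep]], axis=1)
```

Mutual nearest-neighbour filtering is one standard way to suppress ambiguous correspondences before pose estimation; the thesis's contribution is learning which points to feed into such a matcher.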
62

Event-Based Visual SLAM : An Explorative Approach

Rideg, Johan January 2023 (has links)
Simultaneous Localization And Mapping (SLAM) is an important topic within the field of robotics, aiming to localize an agent in an unknown or partially known environment while simultaneously mapping the environment. The ability to perform robust SLAM is especially important in hazardous settings such as natural disasters, firefighting and space exploration, where human exploration may be too dangerous or impractical. In recent years, neuromorphic cameras have become commercially available. This new type of sensor does not output conventional frames but instead an asynchronous stream of events at microsecond resolution, and it can capture details in complex lighting scenarios where a standard camera would be either under- or overexposed, making neuromorphic cameras a promising solution in situations where standard cameras struggle. This thesis explores a set of different approaches to virtual frames, a frame-based representation of events, in the context of SLAM. UltimateSLAM, a project fusing events, grayscale frames and IMU data, is investigated using virtual frames of fixed and varying frame rate, both with and without motion compensation. The resulting trajectories are compared to those produced using grayscale frames, and the numbers of detected and tracked features are compared. We also use a traditional visual SLAM project, ORB-SLAM, to investigate Gaussian-weighted virtual frames and grayscale frames reconstructed from the event stream using a recurrent network model. While virtual frames can be used for SLAM, the event camera is not a plug-and-play sensor: constructing virtual frames requires a good choice of parameters, which relies on pre-existing knowledge of the scene.
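The fixed-event-count virtual frame mentioned above can be sketched as follows (a hypothetical minimal accumulator, not the thesis's implementation; the event layout `(t, x, y, polarity)` and the count `n_events` are illustrative assumptions):

```python
import numpy as np

def events_to_frame(events, shape, n_events=30000):
    """Accumulate the most recent n_events into a polarity image.

    events: (N, 4) array of (t, x, y, polarity in {-1, +1}).
    Fixing the event count rather than the time window adapts the
    effective frame rate to scene motion, one of the virtual-frame
    strategies a frame-based SLAM front end can consume.
    """
    frame = np.zeros(shape, dtype=np.float32)
    for t, x, y, p in events[-n_events:]:
        frame[int(y), int(x)] += p
    # normalise to [0, 1] so the frame can feed a conventional pipeline
    if np.ptp(frame) > 0:
        frame = (frame - frame.min()) / np.ptp(frame)
    return frame
```

With a fixed count, fast motion yields a high virtual frame rate and slow motion a low one, which is exactly the parameter trade-off the conclusion warns about: the right `n_events` depends on the scene.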
63

Doppler radar odometry for localization in difficult underground environment

Fritz, Emil, Nilsson, Annika January 2023 (has links)
Accurate and efficient localization is a fundamental requirement for autonomous operation of robots, especially in areas that deny global navigation services. Localization is even more challenging in environments that present visual and geometric difficulties: not only environmental factors such as darkness, fog and dust, but also geometrically monotone areas. The Center for Applied Autonomous Sensor Systems at Örebro University therefore decided to develop a prototype radar-only simultaneous localization and mapping (SLAM) system, since the radar modality is less susceptible to these environmental factors than, for example, a lidar. Our goal is to support this effort by creating an odometry module that uses radar and inertial data to provide localization for this SLAM prototype. This radar-inertial odometry (RIO) module takes radar point clouds and inertial gyroscope data and outputs an odometry message usable by other components in the Robot Operating System (ROS). The module has been tested on two datasets representing typical deployment areas, one consisting of underground tunnels and the other an outdoor forest environment. The datasets have been processed by two different mappers using lidar as the base modality, a choice that allows us to evaluate the odometry module in a more practical way. The final results are promising: the underground localization closely adheres to reality. The forest dataset is more challenging, although the estimate still resembles the ground-truth position in the horizontal dimensions. The module's biggest shortcoming is a noticeable drift in the vertical z-dimension, for which we propose a constraint that limits this drift.
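The core measurement that makes radar-only odometry possible is the per-target Doppler radial speed: for static targets, each return constrains the sensor's own velocity. A hypothetical sketch of that estimation step (the thesis module additionally fuses gyroscope data and must handle moving targets and outliers):

```python
import numpy as np

def ego_velocity_from_doppler(points, radial_speeds):
    """Estimate sensor velocity from a single radar scan.

    A static target at unit direction d measures a Doppler radial
    speed v_r = -d . v, where v is the sensor velocity. Stacking all
    returns gives an overdetermined linear system, solved here by
    least squares.

    points: (N, 3) target positions in the sensor frame.
    radial_speeds: (N,) measured Doppler speeds.
    """
    dirs = points / np.linalg.norm(points, axis=1, keepdims=True)
    v, *_ = np.linalg.lstsq(-dirs, radial_speeds, rcond=None)
    return v
```

Integrating this velocity over time, with gyroscope data supplying orientation, yields the odometry message consumed by the rest of the ROS pipeline.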
64

ESA ExoMars Rover PanCam System Geometric Modeling and Evaluation

Li, Ding 14 May 2015 (has links)
No description available.
65

From robotics to healthcare: toward clinically-relevant 3-D human pose tracking for lower limb mobility assessments

Mitjans i Coma, Marc 11 September 2024 (has links)
With an increase in age comes an increased risk of frailty and mobility decline, which can lead to dangerous falls and can even be a cause of mortality. Despite these serious consequences, healthcare systems remain reactive, highlighting the need for technologies that predict functional mobility decline. In this thesis, we present an end-to-end autonomous functional mobility assessment system that seeks to bridge the gap between robotics research and clinical rehabilitation practice. Unlike many fully integrated black-box models, our approach emphasizes the need for a system that is both reliable and transparent, to facilitate its endorsement and adoption by healthcare professionals and patients. Our proposed system is characterized by sensor fusion of multimodal data using an optimization framework known as factor graphs. This method, widely used in robotics, enables us to obtain visually interpretable 3-D estimations of the human body in recorded footage. These representations are then used to implement autonomous versions of standardized assessments employed by physical therapists to measure lower-limb mobility, using a combination of custom neural networks and explainable models. To improve the accuracy of the estimations, we investigate the application of the Koopman operator framework to learn linear representations of human dynamics; we leverage these outputs as prior information to enhance temporal consistency across entire movement sequences. Furthermore, inspired by the inherent stability of natural human movement, we propose ways to impose stability constraints on the dynamics during the training of linear Koopman models. In this light, we propose a sufficient condition for the stability of discrete-time linear systems that can be represented as a set of convex constraints, and we demonstrate how it can be seamlessly integrated into larger-scale gradient descent optimization methods.
Lastly, we report the performance of our human pose detection and autonomous mobility assessment systems by evaluating them on mobility outcome datasets collected both in controlled laboratory settings and in unconstrained real-life home environments. While further research is still needed, the results indicate that the system shows promising performance in assessing mobility in home environments. These findings underscore the significant potential of this and similar technologies to transform physical therapy practice.
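The idea of a convex stability constraint for discrete-time linear Koopman models can be illustrated with a classical example: the spectral norm ball {A : ||A||_2 <= 1} is convex, and ||A||_2 < 1 is a standard sufficient (not necessary) condition for stability. A generic sketch of interleaving that constraint with gradient steps via projection (not necessarily the specific condition derived in the thesis):

```python
import numpy as np

def project_to_stable(A, margin=1e-3):
    """Project a matrix onto the convex set {A : ||A||_2 <= 1 - margin}.

    Singular values above the bound are clipped; a spectral norm below
    one guarantees the spectral radius is below one, hence the
    discrete-time system x_{k+1} = A x_k is stable. The projection can
    be applied after each gradient-descent update when fitting A.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.minimum(s, 1.0 - margin)) @ Vt
```

This projected-gradient pattern is what makes a convex sufficient condition attractive for large-scale training: the projection is cheap and composes with any first-order optimizer.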
66

Simultaneous Three-Dimensional Mapping and Geolocation of Road Surface

Li, Diya 23 October 2018 (has links)
This thesis presents a simultaneous 3D mapping and geolocation of road surface technique that combines local road surface mapping with global camera localization. The local road surface is generated by structure from motion (SfM) with multiple views and optimized by bundle adjustment (BA). A system is developed for the global reconstruction of the 3D road surface: the proposed technique estimates the global camera pose using an Adaptive Extended Kalman Filter (AEKF) and integrates it with the local road surface reconstruction. The proposed AEKF-based technique uses image shift as a prior, and the camera pose is corrected with sparse low-accuracy Global Positioning System (GPS) data and a digital elevation map (DEM). The AEKF adaptively updates the covariance of uncertainties so that the estimation works well in environments with varying uncertainty. In the image capturing system, the camera frame rate is dynamically controlled by the vehicle speed read from on-board diagnostics (OBD), both to capture continuous data and to help remove the effects of the moving vehicle's shadow from the images with a Random Sample Consensus (RANSAC) algorithm. The proposed technique is tested in both simulation and field experiments and compared with similar previous work. The results show that it achieves better accuracy than the conventional Extended Kalman Filter (EKF) method and a smaller translation error than other similar works. / Master of Science
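The "adaptive" part of an AEKF can be sketched with an innovation-based update: the filter reshapes its measurement-noise estimate from the observed innovations, so it keeps working as GPS/DEM accuracy varies along the route. A generic one-dimensional-friendly sketch, not the exact formulation in the thesis:

```python
import numpy as np

def aekf_update(x, P, z, H, R, alpha=0.3):
    """One adaptive Kalman measurement update.

    Standard EKF update followed by an innovation-based adaptation of
    the measurement noise covariance R (forgetting factor alpha).
    """
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    # adapt R toward the empirical innovation covariance
    R_new = (1 - alpha) * R + alpha * (np.outer(y, y) - H @ P_new @ H.T)
    return x_new, P_new, R_new
```

Large, persistent innovations inflate `R_new`, down-weighting the sparse low-accuracy GPS/DEM corrections; small innovations shrink it, letting the measurements pull harder on the pose estimate.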
67

Semi-supervised learning for joint visual odometry and depth estimation

Papadopoulos, Kyriakos January 2024 (has links)
Autonomous driving has seen huge interest and improvement in the last few years. Two important functions for autonomous driving are depth and visual odometry estimation. Depth estimation refers to determining the distance from the camera to each point in the scene captured by the camera, while visual odometry refers to the estimation of ego motion using images recorded by the camera. The algorithm presented by Zhou et al. [1] is a completely unsupervised algorithm for depth and ego motion estimation. This thesis sets out to minimize the ambiguity and enhance the performance of that algorithm. The purpose of the algorithm is to estimate the depth map of an image from a camera attached to the agent, together with the ego motion of the agent; in this thesis, the agent is a vehicle. The algorithm cannot make predictions at true scale in either depth or ego motion; in other words, it suffers from scale ambiguity. Two extensions of the method were developed, by changing the loss function of the algorithm and by supervising the ego motion. Both methods show a remarkable improvement in performance and reduced ambiguity, utilizing only ego motion ground truth data, which is significantly easier to access than depth ground truth data.
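The scale argument can be made concrete with a toy supervision term: a monocular photometric loss is invariant to scaling depth and translation by the same factor, so penalizing the predicted translation against ground-truth ego motion anchors the metric scale. A hypothetical sketch of such a term, illustrating the kind of extension described rather than the thesis's actual loss:

```python
import numpy as np

def pose_supervision_loss(t_pred, t_gt, weight=1.0):
    """L1 penalty between predicted and ground-truth translation.

    Added to the photometric loss, this term breaks the global scale
    ambiguity because it is *not* invariant to rescaling t_pred.
    Ego-motion ground truth (e.g. from wheel odometry or GPS) is far
    cheaper to obtain than dense depth ground truth.
    """
    return weight * float(np.mean(np.abs(t_pred - t_gt)))
```

Scaling both depth and `t_pred` by 2 leaves the photometric reconstruction unchanged but doubles this penalty's argument, which is precisely why the combined loss selects the true scale.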
68

Visual odometry: comparing a stereo and a multi-camera approach

Pereira, Ana Rita 25 July 2017 (has links)
The purpose of this project is to implement, analyze and compare visual odometry approaches to help the localization task in autonomous vehicles. The stereo visual odometry algorithm Libviso2 is compared with a proposed omnidirectional multi-camera approach. The proposed method performs monocular visual odometry on each camera individually and selects the best estimate through a voting scheme involving all cameras. The omnidirectionality of the vision system allows the part of the surroundings richest in features to be used in the relative pose estimation. Experiments are carried out using Bumblebee XB3 and Ladybug 2 cameras fixed on the roof of a vehicle. The voting process of the proposed omnidirectional multi-camera method leads to some improvement relative to the individual monocular estimates. However, stereo visual odometry provides considerably more accurate results.
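A voting scheme over per-camera estimates can be sketched as consensus voting (a minimal stand-in for the scheme above; the 6-vector pose parameterization and tolerance `tol` are illustrative assumptions):

```python
import numpy as np

def vote_best_estimate(estimates, tol=0.05):
    """Consensus voting over per-camera monocular VO estimates.

    estimates: (n_cams, 6) relative-pose vectors, one per camera.
    Each estimate receives one vote from every camera whose estimate
    lies within `tol` of it; the estimate with the most votes wins,
    so an outlier camera (e.g. one facing a featureless wall) is
    outvoted by the cameras that agree.
    """
    d = np.linalg.norm(estimates[:, None, :] - estimates[None, :, :], axis=2)
    votes = (d < tol).sum(axis=1)
    return estimates[int(np.argmax(votes))]
```

The design choice here mirrors the motivation for omnidirectionality: whichever cameras happen to see the feature-rich part of the surroundings form the consensus, and their estimate is the one propagated.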
69

Triangulation Based Fusion of Sonar Data with Application in Mobile Robot Mapping and Localization

Wijk, Olle January 2001 (has links)
No description available.
