  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
351

Lidar-based SLAM : Investigation of environmental changes and use of road-edges for improved positioning

Karlsson, Oskar January 2020 (has links)
The ability to determine one's own position and map the surroundings is important for both civilian and military applications. Global navigation satellite systems are very popular and widely used for positioning, but they are easy to disturb and therefore lack robustness. The introduction of autonomous vehicles has accelerated the development of local positioning systems. This thesis work was done in collaboration with FOI in Linköping, using a positioning system with LIDAR and IMU sensors in an EKF-SLAM system built on the GTSAM framework. The goal was to evaluate the system under different conditions and to investigate the possibility of using the road surface for positioning. Data available at FOI were used for the evaluation; these data sets have a known sensor setup that matches the intended hardware. They were gathered on three different occasions, in a residential area, on a country road, and on a forest road, twice in sunny spring weather and once in winter conditions. To evaluate the performance, several different measures were used: common ones such as the positioning error and RMSE, but also the number of found landmarks, the estimated distance between landmarks, and the drift of the vehicle. All results pointed towards the forest road providing the best positioning, the country road the worst, and the residential area in between. When comparing weather conditions, the data set from winter conditions performed best. The two spring data sets differed considerably from each other, which indicates that factors other than weather may be at play. A road-edge detector was implemented to improve mapping and positioning. Vectors with position and orientation, denoted road vectors, were fitted to the edge points, and the changes between consecutive road vectors were used as constraints in the GTSAM system in areas with few landmarks.
The clearest improvement in drift along the vehicle's direction of travel was on the longer country-road stretch, where the error was lowered by 6.4%, with increased sideways and orientation errors as side effects. The implemented method has a significant impact on the computational cost of the system and requires precise tuning of the uncertainties to give a noticeable improvement without worsening the overall results.
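The road-vector idea can be illustrated with a small sketch. The snippet below is not the thesis's implementation; it assumes 2D edge points and fits a road vector (anchor point plus unit direction) by principal component analysis, then derives the relative change between two such vectors, the kind of quantity that could be fed to a factor graph as an odometry-like constraint. The function names are hypothetical.

```python
import numpy as np

def fit_road_vector(edge_points):
    """Fit a 'road vector' (anchor position + unit direction) to 2D
    road-edge points via PCA: the principal axis of the point cloud."""
    pts = np.asarray(edge_points, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal direction = eigenvector of the covariance with the largest eigenvalue.
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]
    if direction[0] < 0:            # fix the sign for a consistent orientation
        direction = -direction
    return centroid, direction

def relative_change(v1, v2):
    """Relative translation and heading change between two road vectors,
    usable as an odometry-like constraint in a factor graph."""
    (c1, d1), (c2, d2) = v1, v2
    dtheta = np.arctan2(d2[1], d2[0]) - np.arctan2(d1[1], d1[0])
    return c2 - c1, dtheta
```

In an area with few landmarks, successive `relative_change` outputs would play the role the abstract describes: constraining the pose chain where no landmark factors are available.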
352

Towards new sensing capabilities for legged locomotion using real-time state estimation with low-cost IMUs / Vers de nouvelles capacités de perception pour les robots à jambes à l'aide de l'estimation d'états temps réel avec des centrales inertielles à bas coût

Atchuthan, Dinesh 23 October 2018 (has links)
L'estimation en robotique est un sujet important affecté par les compromis entre certains critères majeurs parmi lesquels nous pouvons citer le temps de calcul et la précision. L'importance de ces deux critères dépend de l'application. Si le temps de calcul n'est pas important pour les méthodes hors ligne, il devient critique lorsque l'application doit s'exécuter en temps réel. De même, les exigences de précision dépendent des applications. Les estimateurs EKF sont largement utilisés pour satisfaire les contraintes en temps réel tout en obtenant une estimation avec des précisions acceptables. Les centrales inertielles (Inertial Measurement Unit - IMU) demeurent des capteurs répandus dans les problèmes d'estimation de trajectoire. Ces capteurs ont par ailleurs la particularité de fournir des données à une fréquence élevée. La principale contribution de cette thèse est une présentation claire de la méthode de préintégration donnant lieu à une meilleure utilisation des centrales inertielles. Nous appliquons cette méthode aux problèmes d'estimation dans les cas de la navigation piétonne et celle des robots humanoïdes. Nous souhaitons par ailleurs montrer que l'estimation en temps réel à l'aide d'une centrale inertielle à faible coût est possible avec des méthodes d'optimisation tout en formulant les problèmes à l'aide d'un modèle graphique bien que ces méthodes soient réputées pour leurs coûts élevés en termes de calcul. Nous étudions également la calibration des centrales inertielles, une étape qui demeure critique pour leur utilisation. Les travaux réalisés au cours de cette thèse ont été pensés en gardant comme perspective à moyen terme le SLAM visuel-inertiel. De plus, ce travail aborde une autre question concernant les robots à jambes. Contrairement à leur architecture habituelle, pourrions-nous utiliser plusieurs centrales inertielles à faible coût sur le robot pour obtenir des informations précieuses sur le mouvement en cours d'exécution ?
/ Estimation in robotics is an important subject affected by trade-offs between major criteria, among which we can cite computation time and accuracy. The importance of these two criteria is application-dependent. While computation time is not important for off-line methods, it becomes critical when the application has to run in real time. Similarly, accuracy requirements depend on the application. EKF estimators are widely used to satisfy real-time constraints while achieving acceptable accuracies. Inertial measurement units (IMUs), which provide data at a high rate, remain widely used sensors in trajectory estimation problems. The main contribution of this thesis is a clear presentation of the preintegration theory, yielding a better use of IMUs. We apply this method to estimation problems in both pedestrian and humanoid robot navigation to show that real-time estimation using a low-cost IMU is possible with smoothing methods while formulating the problems with a factor graph. We also investigate the calibration of IMUs, as it is a critical part of using those sensors. All the development made during this thesis was carried out with visual-inertial SLAM as a mid-term perspective. Furthermore, this work raises another question when it comes to legged robots. In opposition to their usual architecture, could we use multiple low-cost IMUs on the robot to get valuable information about the motion being executed?
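The preintegration theory the abstract refers to can be summarized in a minimal sketch: IMU samples between two keyframes are accumulated into deltas that do not depend on the initial pose or velocity, so they need not be re-integrated when the estimate changes. The sketch below omits gravity, biases, and noise propagation, which the full method handles; it is an illustration of the idea, not the thesis code.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ v == cross(w, v)."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_so3(w):
    """Rotation matrix from a rotation vector (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def preintegrate(gyro, accel, dt):
    """Accumulate IMU samples into frame-invariant deltas (dR, dv, dp),
    expressed in the body frame at the first sample. Gravity and bias
    are left out here; a real factor handles them at solve time."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * dR @ a * dt**2   # position delta first (uses old dv)
        dv = dv + dR @ a * dt
        dR = dR @ exp_so3(np.asarray(w, float) * dt)
    return dR, dv, dp
```

Because `(dR, dv, dp)` are relative quantities, a smoother can re-linearize the keyframe states freely and reuse the same preintegrated deltas, which is what makes high-rate IMUs tractable in factor-graph estimation.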
353

Registration and Localization of Unknown Moving Objects in Markerless Monocular SLAM

Blake Austin Troutman (15305962) 18 May 2023 (has links)
<p>Simultaneous localization and mapping (SLAM) is a general device localization technique that uses real-time sensor measurements to develop a virtualization of the sensor's environment while also using this growing virtualization to determine the position and orientation of the sensor. This is useful for augmented reality (AR), in which a user looks through a head-mounted display (HMD) or viewfinder to see virtual components integrated into the real world. Visual SLAM (i.e., SLAM in which the sensor is an optical camera) is used in AR to determine the exact device/headset movement so that the virtual components can be accurately redrawn to the screen, matching the perceived motion of the world around the user as the user moves the device/headset. However, many potential AR applications may need access to more than device localization data in order to be useful; they may need to leverage environment data as well. Additionally, most SLAM solutions make the naive assumption that the environment surrounding the system is completely static (non-moving). Given these circumstances, it is clear that AR may benefit substantially from utilizing a SLAM solution that detects objects that move in the scene and ultimately provides localization data for each of these objects. This problem is known as the dynamic SLAM problem. Current attempts to address the dynamic SLAM problem often use machine learning to develop models that identify the parts of the camera image that belong to one of many classes of potentially-moving objects. The limitation with these approaches is that it is impractical to train models to identify every possible object that moves; additionally, some potentially-moving objects may be static in the scene, which these approaches often do not account for. 
Some other attempts to address the dynamic SLAM problem also localize the moving objects they detect, but these systems almost always rely on depth sensors or stereo camera configurations, which have significant limitations in real-world use cases. This dissertation presents a novel approach for registering and localizing unknown moving objects in the context of markerless, monocular, keyframe-based SLAM with no required prior information about object structure, appearance, or existence. This work also details a novel deep learning solution for determining SLAM map initialization suitability in structure-from-motion-based initialization approaches. This dissertation goes on to validate these approaches by implementing them in a markerless, monocular SLAM system called LUMO-SLAM, which is built from the ground up to demonstrate this approach to unknown moving object registration and localization. Results are collected for the LUMO-SLAM system, which address the accuracy of its camera localization estimates, the accuracy of its moving object localization estimates, and the consistency with which it registers moving objects in the scene. These results show that this solution to the dynamic SLAM problem, though it does not act as a practical solution for all use cases, has an ability to accurately register and localize unknown moving objects in such a way that makes it useful for some applications of AR without thwarting the system's ability to also perform accurate camera localization.</p>
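One classical geometric cue for detecting independently moving points in monocular SLAM, not necessarily the exact test used in LUMO-SLAM, is the epipolar constraint: correspondences on the static background satisfy x2ᵀ F x1 ≈ 0 for the fundamental matrix F induced by the camera motion, while points on moving objects tend to violate it. A hedged sketch using the first-order (Sampson) error, with an illustrative threshold:

```python
import numpy as np

def sampson_distance(F, x1, x2):
    """First-order geometric error of correspondence (x1, x2) with respect
    to the epipolar constraint x2^T F x1 = 0. Large values suggest the
    point moved independently of the camera (a dynamic-point cue)."""
    x1h = np.append(x1, 1.0)
    x2h = np.append(x2, 1.0)
    Fx1 = F @ x1h
    Ftx2 = F.T @ x2h
    num = (x2h @ F @ x1h) ** 2
    den = Fx1[0]**2 + Fx1[1]**2 + Ftx2[0]**2 + Ftx2[1]**2
    return num / den

def flag_dynamic(F, pts1, pts2, thresh=1.0):
    """Boolean mask of correspondences whose Sampson distance exceeds thresh."""
    return np.array([sampson_distance(F, a, b) > thresh
                     for a, b in zip(pts1, pts2)])
```

Points in the degenerate case (an object moving along its own epipolar lines) pass this test, which is one reason practical systems combine several cues rather than relying on epipolar geometry alone.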
354

Analyzing different approaches to Visual SLAM in dynamic environments : A comparative study with focus on strengths and weaknesses / Analys av olika metoder för Visual SLAM i dynamisk miljö : En jämförande studie med fokus på styrkor och svagheter

Ólafsdóttir, Kristín Sól January 2023 (has links)
Simultaneous Localization and Mapping (SLAM) is a crucial capability for many autonomous systems operating in unknown environments. In recent years, SLAM development has focused on achieving robustness to the challenges the field still faces, e.g. dynamic environments. In this thesis work, different existing approaches to handling dynamics in Visual SLAM systems were analyzed by surveying the recent literature in the field. The goal was to identify the advantages and drawbacks of each approach to provide further insight into the field of dynamic SLAM. Furthermore, two methods representing different approaches were chosen for experiments, and their implementation was documented. Key conclusions from the literature survey and experiments are the following. Excluding dynamic objects from camera pose estimation shows promising results. Tracking of dynamic objects provides valuable information when combining SLAM with other tasks, e.g. path planning. Moreover, dynamic reconstruction with SLAM offers better scene understanding and analysis of objects' behavior within an environment. Many solutions rely on pre-processing and heavy hardware requirements due to the nature of the object detection methods used. Methods for motion confirmation of objects often fail to consider camera movement, resulting in static objects being excluded from feature extraction. Considerations for future work within the field include accounting for camera movement during motion confirmation and producing benchmarks that allow evaluation of both the SLAM result and the dynamic object detection, i.e. ground truth for both the camera and the objects within the scene. / Simultaneous Localization and Mapping (SLAM) är för många autonoma system avgörande för deras förmåga att kunna verka i tidigare outforskade miljöer. Under de senaste åren har SLAM-utvecklingen fokuserat på att uppnå robusthet när det gäller de utmaningar som fältet fortfarande står inför, t.ex. dynamiska miljöer. 
I detta examensarbete analyserades befintliga metoder för att hantera dynamik med visuella SLAM-system genom att kartlägga den senaste litteraturen inom området. Målet var att definiera för- och nackdelar hos de olika tillvägagångssätten för att bidra med insikter till området dynamisk SLAM. Dessutom valdes två metoder från olika tillvägagångssätt ut för experiment och deras implementering dokumenterades. De viktigaste slutsatserna från litteraturstudien och experimenten är följande. Uteslutningen av dynamiska objekt vid uppskattning av kamerans position ger lovande resultat. Spårning av dynamiska objekt ger värdefull information när SLAM kombineras med andra uppgifter, t.ex. path planning. Dessutom ger dynamisk rekonstruktion med SLAM bättre förståelse om omgivningen och analys av objekts beteende i den kringliggande miljön. Många lösningar är beroende av förbehandling samt ställer höga hårdvarumässiga krav till följd av objektdetekteringsmetodernas natur. Metoder för rörelsebekräftelse av objekt tar inte hänsyn till kamerarörelser, vilket leder till att statiska objekt utesluts från funktionsextraktion. Uppmaningar för framtida studier inom området inkluderar att ta hänsyn till kamerarörelser under rörelsebekräftelse samt att ta ändamålsenliga riktmärken för att möjliggöra tydligare utvärdering av SLAM-resultat såväl som för dynamisk objektdetektion, dvs. referensvärden för både kamerans position såväl som för objekt i scenen.
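The "exclusion of dynamic objects" strategy surveyed above is often implemented by discarding image features that fall inside detected object regions before pose estimation. A minimal sketch, assuming axis-aligned detection boxes; the function and parameter names are illustrative, not from any surveyed system:

```python
def filter_features(keypoints, boxes):
    """Keep only keypoints (x, y) that fall outside every detected bounding
    box (x_min, y_min, x_max, y_max) of a potentially-moving object, so that
    camera pose estimation uses only presumed-static points."""
    kept = []
    for (x, y) in keypoints:
        inside = any(x0 <= x <= x1 and y0 <= y <= y1
                     for (x0, y0, x1, y1) in boxes)
        if not inside:
            kept.append((x, y))
    return kept
```

This is also where the survey's criticism bites: a parked car's box removes perfectly good static features, which is why motion confirmation (rather than class-based exclusion alone) matters.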
355

Optical Navigation for Autonomous Approach of Unexplored Small Bodies / Autonomt visionsbaserat navigationssystem för att närma sig en outforskad liten himlakropp

Villa, Jacopo January 2020 (has links)
This thesis presents an autonomous vision-based navigation strategy applicable to the approach phase of a small body mission, developed within the Robotics Section at NASA Jet Propulsion Laboratory. Today, the operations performed to approach small planetary bodies depend largely on ground support and human decision-making, which adds operational complexity and restricts the spectrum of achievable activities throughout the mission. In contrast, the autonomous pipeline presented here could be run onboard, without ground intervention. Using optical data only, the pipeline estimates the target body's rotation, pole, and shape, and performs identification and tracking of surface landmarks for terrain-relative navigation. An end-to-end simulation is performed to validate the pipeline, starting from input synthetic images and ending with an orbit determination solution. As a case study, the approach phase of the Rosetta mission is reproduced, and it is concluded that navigation performance is in line with the ground-based state-of-the-art. Such results are presented in detail in the paper attached in the appendix, which presents the pipeline architecture and navigation analysis. This thesis manuscript aims to provide additional context to the appended paper, further describing some implementation details used for the approach simulations. / Detta examensarbete presenterar en strategi för ett autonomt visionsbaserat navigationssystem för att närma sig en liten himlakropp. Strategin har utvecklats av robotikavdelningen vid NASA Jet Propulsion Laboratory i USA. Nuvarande system som används för att närma sig en liten himlakropp bygger till största delen på markstationer och mänskligt beslutsfattande, vilka utgör komplexa rutiner och begränsar spektrumet av möjliga aktiviteter under rymduppdraget. I jämförelse, det autonoma system presenterat i denna rapport är utformat för att köras helt från rymdfarkosten och utan krav på kontakt med markstationer. 
Genom att använda enbart optisk information uppskattar systemet himlakroppens rotation, poler och form samt genomför en identifiering och spårning av landmärken på himlakroppens yta för relativ terrängnavigering. En simulering har genomförts för att validera det autonoma navigationssystemet. Simuleringen utgick ifrån bilder av himlakroppen och avslutades med en lösning på banbestämningsproblemet. Fasen då rymdfarkosten i ESA:s Rosetta-rymduppdrag närmar sig kometen valdes som fallstudie för simuleringen och slutsatsen från denna fallstudie var att systemets autonoma navigationsprestanda var i linje med toppmoderna system. Den detaljerade beskrivningen av det autonoma systemet och resultaten från studien har presenterats i ett konferensbidrag, som ingår som bilaga till rapporten. Inledningen av rapporten syftar till att förtydliga bakgrunden och implementering som komplement till innehållet i bilagan.
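A basic building block of optics-only navigation, shown here purely as illustration and not as the pipeline's actual code, is converting the target's image centroid into a line-of-sight direction in the camera frame with a pinhole model. Such bearing measurements are what an orbit determination filter consumes during approach.

```python
import numpy as np

def pixel_to_los(u, v, K):
    """Unit line-of-sight vector in the camera frame for pixel (u, v),
    given the 3x3 intrinsic matrix K (pinhole model, no distortion)."""
    ray = np.linalg.solve(K, np.array([u, v, 1.0]))  # back-project to a ray
    return ray / np.linalg.norm(ray)
```

With an assumed focal length of 500 px and principal point (320, 240), the image center maps to the boresight (0, 0, 1), and off-center detections tilt the ray accordingly.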
356

Local visual feature based localisation and mapping by mobile robots

Andreasson, Henrik January 2008 (has links)
This thesis addresses the problems of registration, localisation and simultaneous localisation and mapping (SLAM), relying particularly on local visual features extracted from camera images. These fundamental problems in mobile robot navigation are tightly coupled. Localisation requires a representation of the environment (a map) and registration methods to estimate the pose of the robot relative to the map given the robot’s sensory readings. To create a map, sensor data must be accumulated into a consistent representation and therefore the pose of the robot needs to be estimated, which is again the problem of localisation. The major contributions of this thesis are new methods proposed to address the registration, localisation and SLAM problems, considering two different sensor configurations. The first part of the thesis concerns a sensor configuration consisting of an omni-directional camera and odometry, while the second part assumes a standard camera together with a 3D laser range scanner. The main difference is that the former configuration allows for a very inexpensive set-up and (considering the possibility to include visual odometry) the realisation of purely visual navigation approaches. By contrast, the second configuration was chosen to study the usefulness of colour or intensity information in connection with 3D point clouds (“coloured point clouds”), both for improved 3D resolution (“super resolution”) and approaches to the fundamental problems of navigation that exploit the complementary strengths of visual and range information. Considering the omni-directional camera/odometry setup, the first part introduces a new registration method based on a measure of image similarity. This registration method is then used to develop a localisation method, which is robust to the changes in dynamic environments, and a visual approach to metric SLAM, which does not require position estimation of local image features and thus provides a very efficient approach. 
The second part, which considers a standard camera together with a 3D laser range scanner, starts with the proposal and evaluation of non-iterative interpolation methods. These methods use colour information from the camera to obtain range information at the resolution of the camera image, or even with sub-pixel accuracy, from the low resolution range information provided by the range scanner. Based on the ability to determine depth values for local visual features, a new registration method is then introduced, which combines the depth of local image features and variance estimates obtained from the 3D laser range scanner to realise a vision-aided 6D registration method, which does not require an initial pose estimate. This is possible because of the discriminative power of the local image features used to determine point correspondences (data association). The vision-aided registration method is further developed into a 6D SLAM approach where the optimisation constraint is based on distances of paired local visual features. Finally, the methods introduced in the second part are combined with a novel adaptive normal distribution transform (NDT) representation of coloured 3D point clouds into a robotic difference detection system.
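The colour-assisted range interpolation described above can be caricatured with a bilateral-style weighting scheme: the depth at a camera pixel is a weighted average of nearby sparse range samples, with weights that decay with both spatial distance and colour difference, so depth edges tend to follow colour edges. This sketch is a simplification of the thesis's non-iterative methods; the parameter names and values are illustrative.

```python
import numpy as np

def interpolate_depth(query_px, query_color, samples, sigma_d=5.0, sigma_c=10.0):
    """Estimate depth at a camera pixel from sparse range samples, each given
    as ((u, v), color, depth). Each sample is weighted by spatial closeness
    (sigma_d, pixels) and colour similarity (sigma_c, intensity units)."""
    q = np.asarray(query_px, float)
    qc = np.asarray(query_color, float)
    weights, depths = [], []
    for (px, color, depth) in samples:
        wd = np.exp(-np.sum((q - np.asarray(px, float))**2) / (2 * sigma_d**2))
        wc = np.exp(-np.sum((qc - np.asarray(color, float))**2) / (2 * sigma_c**2))
        weights.append(wd * wc)
        depths.append(depth)
    w = np.asarray(weights)
    return float(np.dot(w, depths) / w.sum())
```

The colour term is what buys sub-pixel behaviour at depth discontinuities: a sample across a strong colour edge contributes almost nothing even if it is spatially close.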
357

Cartographie de l'environnement et suivi simultané de cibles dynamiques par un robot mobile / Environment mapping and simultaneous tracking of dynamic targets by a mobile robot

Baba, Abdellatif 18 December 2007 (has links) (PDF)
An essential capability for a mobile robot to understand its environment is to build a map of it. Mobile entities in the robot's perception zone while it is building its map can cause serious localisation errors and inaccuracies in the map. The traces of mobile objects must be detected and removed from the model. We propose to represent the environment with an occupancy grid. This representation provides a model of the environment that can be updated at a high frequency, which has the advantage of allowing the detection and tracking of moving elements around the robot. Detection and tracking of dynamic targets are then carried out. A procedure that assigns the acquired measurements to the corresponding targets is also presented. To improve the quality of the robot's sensory information and increase the reliability of tracking, we use a catadioptric camera whose data are fused with the laser. A binary image is computed from a modified optical-flow approach, in which a new term compensating for illumination changes is computed and taken into account. Two strategies for determining the directions of detected targets, from images unwrapped onto a cylindrical surface and from raw panoramic images respectively, are presented and compared. Finally, we propose a probabilistic technique to couple measurements received by the two sensors and to associate them with the same event.
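The high-frequency occupancy-grid update mentioned in the abstract is conventionally done with per-cell log-odds, which makes each sensor update a cheap addition. This is the standard formulation, shown as a hedged sketch rather than the thesis's exact implementation; the evidence values are illustrative.

```python
import numpy as np

def update_cell(logodds, hit, l_occ=0.85, l_free=-0.4, l_min=-2.0, l_max=3.5):
    """Standard log-odds occupancy update for one grid cell: add evidence
    for a hit (beam endpoint) or a miss (beam passed through), clamped so
    a cell can still change state quickly when a mobile object moves away."""
    logodds += l_occ if hit else l_free
    return float(np.clip(logodds, l_min, l_max))

def probability(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + np.exp(-logodds))
```

Clamping the log-odds is what keeps the grid reactive: cells crossed by a moving person saturate neither fully occupied nor fully free, so their traces can be erased within a few scans.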
358

Benchmark of RELAP5 Check Valve Models against Experimental Data

Gardell, Jens January 2013 (has links)
The use of check valves in the nuclear industry is of great importance from a safety precaution point of view (McElhaney, 1995). Choosing check valves for these high-pressurized systems comes with a great challenge. The valves cause what are called check valve slams when closing, leading to a huge pressure wave traveling through the system. To prevent this from happening, calculations have to be done to see what kind of forces are generated during a check valve slam. When the forces are known, it is easier to design systems that will endure these slams. A commonly used software in the nuclear industry is RELAP5 (Reactor Excursion and Leak Analysis Program); its main purpose is to calculate transients in piping systems. This program can also be used when calculating a check valve slam. But how precise is the code compared to the real event? By doing an experiment measuring the pressures created by swing check valves during slams, the code was compared to real data and analyzed to decide what was of importance when modeling for these types of simulations. The RELAP5 code was not initially designed to calculate transients during a check valve slam. This is clearly shown when the code overestimates the pressure waves in the system when using the manufacturer data for the check valve model. Matching the data from the simulations in RELAP5 with the data recorded from the experiment is not easy. The parameters used for this have no connection to the specifications for the check valve, which means that transients are hard to estimate without experimental data.
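The order of magnitude of the pressure wave from a sudden valve closure can be bounded with the classical Joukowsky relation ΔP = ρ·a·Δv from water-hammer theory. This is standard hydraulics, independent of RELAP5, and gives a quick sanity check on simulated surge pressures:

```python
def joukowsky_surge(rho, wave_speed, delta_v):
    """Joukowsky estimate of the pressure rise (Pa) from an instantaneous
    velocity change delta_v (m/s) in a pipe, given the acoustic wave speed
    wave_speed (m/s) and fluid density rho (kg/m^3)."""
    return rho * wave_speed * delta_v
```

For water (ρ ≈ 1000 kg/m³, wave speed ≈ 1200 m/s), arresting a 2 m/s flow instantaneously gives a surge of about 2.4 MPa, which illustrates why slam loads can dominate the structural design of such piping; real closures are gradual, so this is an upper bound.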
359

Modélisation des environnements dynamiques pour la localisation / Modelling dynamic environments for localisation

Decrouez, Marion 07 May 2013 (has links) (PDF)
This thesis addresses environment modelling for monocular vision-based localisation, with a particular focus on modelling dynamic indoor environments. Indoor environments contain a multitude of objects that may be moved. These displacements notably change the structure and appearance of the environment and disturb current vision-based localisation methods. We present a new approach for modelling an environment and its evolution over time. In this approach, we define the scene explicitly as a static structure plus a set of dynamic objects. An object is defined as a rigid entity that a user can pick up and move, and that is visually identifiable. We first present how to automatically detect and learn the objects of a dynamic environment. Whereas current localisation methods filter out the inconsistencies caused by scene changes, we analyse these changes to extract additional information. Without any prior knowledge, an object is defined as a rigid structure whose motion is coherent relative to the static structure of the scene. By combining two vision-based localisation methods built on different paradigms, we compare multiple passes of a camera through the same environment. The comparison detects objects that have moved between two passes. For each detected object, we can then learn a geometric model and an appearance model, and record the positions the object occupied across the different explorations. Moreover, at each new pass, knowledge of the environment is enriched by updating the metric and topological maps of the static structure of the scene.
Object discovery by motion relies largely on a new algorithm, proposed in this work, for detecting multiple structures between two views. Given a set of correspondences between two similar views, the algorithm, based on RANSAC, segments the structures corresponding to different parameterizations of a mathematical model. The method is applied to the detection of multiple homographies, to find the planes of the scene, and to the detection of multiple fundamental matrices, to find rigid objects in motion. The proposed environment model is used in a new formulation of place recognition that takes into account knowledge of the dynamic objects likely to be present in the environment. The place model consists of the appearance of the static structure observed in that place. A database of objects is learned from previous observations of the environment with the motion-based discovery method. The proposed method both detects the mobile objects present in the place and rejects the detection errors caused by the presence of these objects. All the proposed approaches are evaluated on synthetic and real data. Qualitative and quantitative results are presented throughout the thesis.
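The multiple-structure detection can be illustrated, with 2D lines standing in for homographies and fundamental matrices, by a sequential RANSAC-style loop: fit the dominant model, remove its inliers, and repeat on the remaining correspondences. This is a generic sketch of that family of methods under simplifying assumptions, not the thesis's exact algorithm.

```python
import numpy as np

def fit_line(p, q):
    """Line through two distinct points as (a, b, c), with a*x + b*y + c = 0
    and (a, b) normalized so that |a*x + b*y + c| is a point-line distance."""
    a, b = p[1] - q[1], q[0] - p[0]
    c = -(a * p[0] + b * p[1])
    n = np.hypot(a, b)
    return np.array([a, b, c]) / n

def sequential_ransac(points, n_models, thresh=0.05, iters=300,
                      min_inliers=5, seed=0):
    """Detect several line models: run RANSAC, strip the best model's
    inliers, and repeat until n_models are found or support runs out."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    models = []
    for _ in range(n_models):
        if len(pts) < min_inliers:
            break
        best_mask, best_model = None, None
        for _ in range(iters):
            i, j = rng.choice(len(pts), size=2, replace=False)
            line = fit_line(pts[i], pts[j])
            mask = np.abs(pts @ line[:2] + line[2]) < thresh
            if best_mask is None or mask.sum() > best_mask.sum():
                best_mask, best_model = mask, line
        if best_mask.sum() < min_inliers:
            break
        models.append(best_model)
        pts = pts[~best_mask]      # remove this structure's inliers, then repeat
    return models
```

A known weakness of this greedy scheme, which motivates more elaborate multi-structure methods, is that an early model can steal inliers from an overlapping structure; the sketch ignores that subtlety.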
360

Fusion of IMU and Monocular-SLAM in a Loosely Coupled EKF

Fåhraeus, Henrik January 2017 (has links)
Camera-based navigation is becoming more and more popular and is often the cornerstone of augmented and virtual reality. However, navigation systems using a camera are less accurate during fast movements, and the systems are often resource-intensive in terms of CPU and battery consumption. Also, the image processing algorithms introduce latencies in the systems, causing the information about the current position to be delayed. This thesis investigates whether a camera and an IMU can be fused in a loosely coupled Extended Kalman Filter to reduce these problems. An IMU introduces negligible latencies, and its performance is not affected by fast movements. For accurate tracking using an IMU it is important to estimate the bias correctly. Thus, a new method was used in a calibration step to see if it could improve the result. A method to estimate the relative position and orientation between the camera and IMU is also evaluated. The filter shows promising results for estimating the orientation: it can estimate the orientation without latency and offers accurate tracking during fast rotations when the camera alone cannot estimate the orientation. The position, however, is much harder to estimate, and no performance gain could be seen. Some methods that are likely to improve the tracking are discussed and suggested as future work.
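The loosely coupled idea can be reduced to a one-axis toy filter: the state holds an angle and a gyro bias, the gyro drives the prediction, and occasional camera orientation estimates correct both the angle and, through their correlation, the bias. All tuning values below are illustrative, not taken from the thesis, and a real filter would use a full 3D attitude/position state.

```python
import numpy as np

class GyroBiasEKF:
    """1-axis loosely coupled sketch: state x = [angle, gyro_bias].
    Predict with gyro rates; update with camera angle measurements."""
    def __init__(self, q_angle=1e-4, q_bias=1e-6, r_cam=1e-2):
        self.x = np.zeros(2)                 # [angle (rad), gyro bias (rad/s)]
        self.P = np.eye(2)                   # state covariance
        self.Q = np.diag([q_angle, q_bias])  # process noise densities
        self.R = r_cam                       # camera measurement variance

    def predict(self, gyro_rate, dt):
        # Integrate the bias-corrected gyro rate; bias follows a random walk.
        self.x[0] += (gyro_rate - self.x[1]) * dt
        F = np.array([[1.0, -dt], [0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q * dt

    def update(self, cam_angle):
        # The camera observes the angle directly (loosely coupled: a pose
        # estimate from the vision pipeline, not raw image features).
        H = np.array([[1.0, 0.0]])
        S = H @ self.P @ H.T + self.R
        K = (self.P @ H.T) / S
        self.x += (K * (cam_angle - self.x[0])).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
```

Between camera frames the filter runs on gyro data alone, which is what removes the perceived latency; the camera updates keep the integrated angle from drifting and make the bias observable.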
