161

Multiple Mobile Robot SLAM for collaborative mapping and exploration

Dikoko, Boitumelo 26 January 2022 (has links)
Over the past five decades, Autonomous Mobile Robots (AMRs) have been an active research field. Maps of high accuracy are required for AMRs to operate successfully. In addition, AMRs need to localise themselves reliably relative to the map. Simultaneous Localisation and Mapping (SLAM) addresses the problems of both map building and robot localisation. When exploring large areas, Multi-Robot SLAM (MRSLAM) has the potential to be far more efficient and robust, while sharing the computational burden across robots. However, MRSLAM encounters issues such as the difficulty of fusing multi-resolution maps and the unknown relative positions of the robots. This thesis describes a distributed multi-resolution map merging algorithm for MRSLAM. Hector SLAM, one of many single-robot SLAM implementations, has demonstrated exceptional results and was selected as the basis for the MRSLAM implementation in this project. We consider the environment to be three-dimensional, with the maps constrained to a two-dimensional plane. Each robot is equipped with a laser range sensor for perception and has no information regarding the relative positioning of the other robots. The experiments were conducted both in simulation and in a real-world environment. Up to three robots were placed in the same environment with Hector SLAM running; the local maps and localisation estimates were then sent to a central node, which attempted to find map overlaps and merge the resulting maps. When evaluating the success of the map merging algorithm, the quality of the map from each robot was examined. Experiments conducted on up to three AMRs show the effectiveness of the proposed algorithms in an indoor environment.
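The central node described above must align occupancy grids whose relative poses are unknown before it can merge them. The following is only a minimal sketch of that idea, assuming single-resolution probability grids (0.5 = unknown), a brute-force search over rotations and translations, and hand-picked thresholds; the thesis's actual distributed multi-resolution merging algorithm is not reproduced here.

```python
import numpy as np
from scipy.ndimage import rotate

def best_alignment(map_a, map_b, angles_deg=range(0, 360, 5), search=20):
    """Brute-force search for the rotation/translation of map_b that best
    overlaps map_a. Grids hold occupancy probabilities in [0, 1], 0.5 = unknown."""
    best = (-np.inf, 0, (0, 0))
    known_a = np.abs(map_a - 0.5) > 0.1              # cells actually observed by robot A
    for ang in angles_deg:
        rot_b = rotate(map_b, ang, reshape=False, order=0, cval=0.5)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                # np.roll wraps around the border; acceptable for small shifts in a sketch
                shifted = np.roll(np.roll(rot_b, dy, axis=0), dx, axis=1)
                known = known_a & (np.abs(shifted - 0.5) > 0.1)
                if known.sum() < 50:                 # require a minimum overlap
                    continue
                # agreement score: negative mean absolute difference on shared cells
                score = -np.mean(np.abs(map_a[known] - shifted[known]))
                if score > best[0]:
                    best = (score, ang, (dy, dx))
    return best

def merge(map_a, map_b, ang, shift):
    """Fuse map_b into map_a given the alignment found above."""
    rot_b = rotate(map_b, ang, reshape=False, order=0, cval=0.5)
    aligned = np.roll(np.roll(rot_b, shift[0], axis=0), shift[1], axis=1)
    take_b = np.abs(aligned - 0.5) > np.abs(map_a - 0.5)   # keep the more informative cell
    return np.where(take_b, aligned, map_a)
```

A typical call would be `score, ang, shift = best_alignment(a, b)` followed by `merged = merge(a, b, ang, shift)`; a real implementation would replace the exhaustive search with feature-based or correlative matching.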
162

Autonomous Propulsion for a GPR-UGV / Autonom framdrivning för obemannat markradarfordon

Wall Eskilsson, Fredrik January 2022 (has links)
This thesis presents the research and development behind the integration of an autonomous propulsion system for a four-wheeled Ground Penetrating Radar (GPR) measurement unit that previously required manual operation. To ease the administration of the complex new system, Robot Operating System (ROS) 2 was used as middleware, and an implementation of Light Detection And Ranging (LiDAR) 3D SLAM (Simultaneous Localization And Mapping) provided precise localization of the Unmanned Ground Vehicle (UGV) and mapping of its environment. The ultimate goal is to enable accurate survey execution along paths optimized for various dynamic indoor and outdoor environments. From a more general point of view, this work can also act as a hardware and software selection guide for similar projects, especially where stricter physical limitations apply and the autonomous system is not the primary system, with most of the UGV's internal enclosed space reserved for higher-priority equipment or storage. In this first prototype iteration, the mapping accuracy of the autonomous system reached centimeter precision, and the execution of surveys in grid and spiral patterns yielded position accuracies of 5(2) cm and 6(4) cm, respectively. These results are promising and provide the proof of concept needed to enter the next development phase.
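For readers unfamiliar with how a ROS 2 stack like this is wired together, the sketch below shows a minimal rclpy node that consumes a pose from a SLAM backend and drives the UGV toward survey waypoints. The topic names (`/slam/odometry`, `/cmd_vel`), gains, and tolerances are assumptions for illustration; the thesis's actual SLAM package, planner, and vehicle interface are not reproduced.

```python
import math
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Twist

class SurveyController(Node):
    """Drives the UGV through a list of survey waypoints using the pose
    published by a SLAM backend (topic names are assumptions)."""
    def __init__(self, waypoints):
        super().__init__('survey_controller')
        self.waypoints = list(waypoints)
        self.pose = None
        self.create_subscription(Odometry, '/slam/odometry', self.on_pose, 10)
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.create_timer(0.1, self.step)                 # 10 Hz control loop

    def on_pose(self, msg):
        q = msg.pose.pose.orientation
        yaw = math.atan2(2 * (q.w * q.z + q.x * q.y), 1 - 2 * (q.y ** 2 + q.z ** 2))
        p = msg.pose.pose.position
        self.pose = (p.x, p.y, yaw)

    def step(self):
        if self.pose is None or not self.waypoints:
            return
        x, y, yaw = self.pose
        gx, gy = self.waypoints[0]
        if math.hypot(gx - x, gy - y) < 0.05:             # 5 cm waypoint tolerance (assumed)
            self.waypoints.pop(0)
            return
        err = math.atan2(gy - y, gx - x) - yaw
        err = math.atan2(math.sin(err), math.cos(err))    # wrap heading error to [-pi, pi]
        cmd = Twist()
        cmd.linear.x = 0.2                                # slow, survey-friendly speed (assumed)
        cmd.angular.z = 1.0 * err                         # proportional heading control
        self.cmd_pub.publish(cmd)

def main():
    rclpy.init()
    node = SurveyController([(0.0, 0.0), (2.0, 0.0), (2.0, 0.5), (0.0, 0.5)])
    rclpy.spin(node)
    rclpy.shutdown()
```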
163

Mapping and Visualizing Ancient Water Storage Systems with an ROV – An Approach Based on Fusing Stationary Scans Within a Particle Filter

McVicker, William D 01 December 2012 (has links) (PDF)
This paper presents a new method for constructing 2D maps of enclosed underwater structures using an underwater robot equipped with only a 2D scanning sonar, compass and depth sensor. In particular, no motion model or odometry is used. To accomplish this, a two-step offline SLAM method is applied to a set of stationary sonar scans. In the first step, the change in position of the robot between each consecutive pair of stationary sonar scans is estimated using a particle filter. This set of pairwise relative scan positions is then used to estimate each scan's position within a global coordinate frame using a weighted least squares fit that optimizes consistency between the relative positions of the entire set of scans. In the second step of the method, the scans and their estimated positions act as inputs to a mapping algorithm that constructs 2D octree-based evidence grid maps of the site. This work is motivated by a multi-year archaeological project that aims to construct maps of ancient water storage systems, i.e., cisterns, on the islands of Malta and Gozo. Cisterns, wells, and water galleries within fortresses, churches and homes operated as water storage systems as far back as 2000 B.C. Using a Remotely Operated Vehicle (ROV), these water storage systems located around the islands were explored while collecting video, still images, sonar, depth, and compass measurements. Data gathered from 5 different expeditions has produced maps of over 90 sites. Presented are results from applying the new mapping method to both a swimming pool of known size and to several of the previously unexplored water storage systems.
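The weighted least squares step described above — recovering global scan positions from pairwise relative displacements — can be written as a small linear problem. The sketch below assumes 2D positions only (no orientation) and a simple `(i, j, dxy, weight)` measurement format, which are simplifications of the paper's formulation.

```python
import numpy as np

def global_scan_positions(n_scans, rel_measurements):
    """Estimate global 2D scan positions from pairwise relative displacements.

    rel_measurements: list of (i, j, dxy, weight), meaning scan j is displaced
    by dxy (length-2 array) from scan i, with a scalar confidence weight.
    Scan 0 is fixed at the origin to remove the gauge freedom.
    """
    n_unknowns = 2 * (n_scans - 1)              # x, y for every scan except scan 0
    rows, rhs, w = [], [], []
    for i, j, dxy, weight in rel_measurements:
        for axis in (0, 1):                     # one residual row per axis
            a = np.zeros(n_unknowns)
            if j > 0:
                a[2 * (j - 1) + axis] = 1.0
            if i > 0:
                a[2 * (i - 1) + axis] = -1.0
            rows.append(a)
            rhs.append(dxy[axis])
            w.append(weight)
    sw = np.sqrt(np.array(w))[:, None]
    A = np.array(rows) * sw                     # weighted design matrix
    b = np.array(rhs) * sw.ravel()
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.vstack([np.zeros(2), x.reshape(-1, 2)])    # prepend fixed scan 0
```

For example, three scans with measurements `(0, 1, [1.0, 0.0], 1.0)` and `(1, 2, [0.0, 2.0], 0.5)` yield positions near (0,0), (1,0) and (1,2); with redundant, conflicting measurements the weights arbitrate the consistency as in the paper's optimization.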
164

Dynamic Path Planning, Mapping, and Navigation for Autonomous GPR Survey Robots

Hjartarson, Ketill January 2023 (has links)
To map the subsurface, Ground Penetrating Radar (GPR) can be used in a non-invasive way. Surveying is currently done manually by pushing a wheeled device on a handlebar. This thesis suggests an alternative method using an integrated autonomous solution. To accomplish that, several sensors were fused to give the robot perception of the world, the ability to localize itself within it, and to plan a path to reach the goal. Detection algorithms were implemented and tested to ensure the robot could handle a dynamic and complicated world. The results showed that the robot could independently navigate in a grid pattern conducting GPR surveys while avoiding obstacles and finding a safe route. This allows GPR data to be collected with precise localization measurements and along paths more detailed than a human operator could follow. In addition, it enables the operator to stay at a safe distance in dangerous environments and to search large areas.
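A grid-pattern survey of the kind described above is typically generated as a boustrophedon ("lawn-mower") sequence of waypoints. The helper below is a minimal sketch under the assumption of a rectangular, obstacle-free survey area; the thesis's dynamic re-planning around obstacles is not shown.

```python
def grid_survey_waypoints(width, height, spacing, origin=(0.0, 0.0)):
    """Generate a boustrophedon waypoint sequence covering a width x height
    rectangle with parallel passes `spacing` metres apart, starting at `origin`."""
    ox, oy = origin
    waypoints = []
    y, forward = 0.0, True
    while y <= height + 1e-9:
        xs = (0.0, width) if forward else (width, 0.0)
        waypoints.append((ox + xs[0], oy + y))   # start of this pass
        waypoints.append((ox + xs[1], oy + y))   # end of this pass
        forward = not forward
        y += spacing
    return waypoints

# Example: a 10 m x 4 m area with 0.5 m pass spacing
path = grid_survey_waypoints(10.0, 4.0, 0.5)
```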
165

Towards Visual-Inertial SLAM for Dynamic Environments Using Instance Segmentation and Dense Optical Flow

Sarmiento Gonzalez, Luis Alejandro January 2021 (has links)
Dynamic environments pose an open problem for the performance of visual SLAM systems in real-life scenarios. Such environments involve dynamic objects that can cause pose estimation errors. Recently, deep learning semantic segmentation networks have been employed to identify potentially moving objects in visual SLAM; however, semantic information is subject to misclassifications and does not yield motion information on its own. This thesis presents a hybrid method that employs semantic information and dense optical flow to determine moving objects through a motion likelihood. The proposed approach builds on stereo-inertial ORBSLAM 3, adding dynamic object detection to allow more robust performance in dynamic scenarios. The system is evaluated on the OpenLORIS dataset, which provides stereo-inertial information in challenging scenes. The impact of dynamic objects on the system's performance is studied using the ATE, RPE and Correctness Rate metrics. A comparison is made between the original ORBSLAM 3, ORBSLAM 3 considering only semantic information, and the hybrid approach. The comparison helps identify the benefits and limitations of the proposed method. Results suggest an improvement in ATE for the hybrid approach with respect to the original ORBSLAM 3 in dynamic scenes.
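One plausible way to combine segmentation masks with dense optical flow into a motion likelihood, as the abstract describes, is to compare the measured flow against the flow predicted from camera ego-motion inside each instance mask. The sketch below is an assumption-laden simplification (the residual threshold and the per-mask scoring rule are mine), not the thesis's exact formulation.

```python
import numpy as np

def object_motion_likelihood(flow, ego_flow, masks, residual_thresh=1.5):
    """Score how likely each segmented object is to be moving.

    flow:      HxWx2 measured dense optical flow (pixels/frame)
    ego_flow:  HxWx2 flow predicted from camera ego-motion and scene depth
    masks:     list of HxW boolean instance-segmentation masks
    Returns one likelihood in [0, 1] per mask: the fraction of the object's
    pixels whose measured flow disagrees with the ego-motion prediction.
    """
    residual = np.linalg.norm(flow - ego_flow, axis=2)     # per-pixel disagreement
    likelihoods = []
    for mask in masks:
        if mask.sum() == 0:
            likelihoods.append(0.0)
            continue
        likelihoods.append(float((residual[mask] > residual_thresh).mean()))
    return likelihoods

# Features falling inside masks with a high likelihood would then be excluded
# from pose estimation, mirroring the dynamic-object handling described above.
```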
166

Mätosäkerhet vid deformationsmätning med bärbar laserskanner / Measurement uncertainty in deformation surveying with a wearable laser scanner

Sundqvist, Filiph, Tannebo, Peter January 2017 (has links)
Mobile wearable laser scanning systems, also called personal laser scanning systems (PLS), have the potential to combine the strengths of mobile laser scanning (MLS) with usability indoors and in harsh terrain. The mobility makes surveying possible where terrestrial laser scanning (TLS) is difficult or not resource-efficient to use. This may render PLS both a suitable and a favorable alternative for certain deformation surveying. However, what measurement uncertainties are achievable, and thus how small deformations can be measured, is yet to be clarified. The purpose of this study is therefore to investigate these questions using a rucksack-mounted PLS. A literature study outlines the fundamentals of deformation surveying and thereby possible ways of controlling measurement uncertainties. Ways of georeferencing point clouds are described, including the new technology Simultaneous Localization and Mapping (SLAM). The study concludes with an overview of earlier work on measurement uncertainties regarding MLS, PLS and SLAM, focusing on methods and results. A rucksack-mounted PLS (Leica Pegasus: Backpack) is used to survey simulated deformations both outdoors and indoors, with and without control points. Rotational, horizontal and vertical displacements are tested (at intervals of 5° between 5° and 20° and of 0.050 m between 0.050 m and 0.200 m, respectively) together with a non-moving object. By optimizing the trajectory with SLAM and analyzing geometric planes fitted to the point clouds, conclusions can be drawn both about how small deformations can be measured and about the variability of the surveys. The results indicate the possibility of detecting rotations of 5° outdoors, but the substantially fluctuating measurement uncertainties indoors show that rotations of 20° or smaller cannot be detected there. Horizontal and vertical deformations of 0.050 m can be surveyed outdoors, but the measurement uncertainties indoors exceed even the largest tested deformations in all cases except the vertical deformations with control points, which may be surveyed down to 0.050 m. The analysis of the non-moving object reveals a combined 3D uncertainty of 0.001 m outdoors, 1.49 m indoors without control points and 0.490 m indoors with control points. The results show that several factors have to be taken into account, but also that there are potential areas of use outdoors within disaster-area analysis, geomorphological changes in landforms, forestry and urban change detection. Indoors, the results may improve with more advanced SLAM algorithms together with control points, but the measurement uncertainties still imply that the main application is coarse change detection.
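The study's deformation analysis rests on fitting geometric planes to point-cloud segments and comparing the fits between epochs. The following sketch shows one standard way to do this with a least-squares (SVD) plane fit; the comparison metrics (rotation between normals and centroid offset along the first normal) are assumptions chosen to mirror the tested rotational and translational displacements.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an Nx3 point-cloud segment.
    Returns (unit normal, centroid); the plane is n . (x - c) = 0."""
    centroid = points.mean(axis=0)
    # The normal is the right singular vector with the smallest singular
    # value, i.e. the direction of least variance of the centred points.
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

def plane_deformation(points_before, points_after):
    """Compare two epochs of the same planar surface.
    Returns the rotation between fitted normals (degrees) and the offset of
    the second centroid from the first plane (metres)."""
    n1, c1 = fit_plane(points_before)
    n2, c2 = fit_plane(points_after)
    rotation_deg = np.degrees(np.arccos(np.clip(abs(n1 @ n2), -1.0, 1.0)))
    offset_m = abs(n1 @ (c2 - c1))
    return rotation_deg, offset_m
```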
167

Integración de SLAM y planificación para el control de vehículos terrestres / Integration of SLAM and planning for the control of ground vehicles

Pinna Cortiñas, Juan Martín 09 April 2012 (has links)
In this thesis, the SLAM (Simultaneous Localization and Mapping) algorithm has been integrated with planning strategies in order to obtain trajectories for land vehicles in outdoor environments. The SLAM algorithm allows the vehicle to obtain a representation of the environment in a time-varying map, which can contain multiple layers of data, such as the presence of obstacles, the slope or height of the terrain, the type of ground, etc. The map is divided into regions with high internal correlation using computational geometry techniques that are highly efficient from a computational point of view; data structures and abstractions that can handle large amounts of information are defined for this purpose. These regions are in turn subdivided, yielding two levels of resolution: a global level, which partitions the whole known map, and a local level, which partitions each region. Either once this representation of the environment is available, or simultaneously with its acquisition, trajectories can be found using planning techniques. Criteria are defined to make use of the multiple layers of information at the planning stage, allowing any existing planning algorithm to be used for this purpose. This approach yields an algorithm that is more efficient than existing planning algorithms for environments under exploration, as encountered in SLAM applications. Finally, to exemplify the proposed strategy, trajectories are generated from a kinematic model of a car-like vehicle together with a control strategy. The desired linear velocity of the vehicle is chosen and the limits on the steering angle are observed, and the performance of the whole scheme is assessed through numerical simulations and experimental data.
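The two-level (global region / local cell) representation described above lends itself to hierarchical planning: find a coarse route over regions first, then plan finely only inside the regions on that route. The sketch below illustrates that general idea with uniform step costs and a plain A* search; the thesis's multi-layer cost criteria and its specific region decomposition are not reproduced, and the data layouts here are assumptions.

```python
import heapq

def astar(start, goal, neighbors_fn, heuristic=lambda c: 0.0):
    """Generic A* search; neighbors_fn(cell) yields (next_cell, step_cost)."""
    frontier = [(heuristic(start), start)]
    came_from, cost = {start: None}, {start: 0.0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for nxt, step in neighbors_fn(cur):
            new_cost = cost[cur] + step
            if nxt not in cost or new_cost < cost[nxt]:
                cost[nxt], came_from[nxt] = new_cost, cur
                heapq.heappush(frontier, (new_cost + heuristic(nxt), nxt))
    return None

def two_level_plan(region_of, free, region_adjacency, start, goal):
    """Plan a coarse route over regions first (zero heuristic = Dijkstra),
    then restrict the fine-grid A* to cells whose region lies on that route."""
    coarse = astar(region_of[start], region_of[goal],
                   lambda r: ((n, 1.0) for n in region_adjacency[r]))
    if coarse is None:
        return None
    allowed = set(coarse)
    manhattan = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    def neighbors(c):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (c[0] + dx, c[1] + dy)
            if free.get(n, False) and region_of.get(n) in allowed:
                yield n, 1.0
    return astar(start, goal, neighbors, manhattan)
```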
168

Feature Extraction and Feasibility Study on CT Image Guided Colonoscopy

Shen, Yuan 14 May 2010 (has links)
Computed tomographic colonography (CTC), also called virtual colonoscopy, uses CT scanning and computer post-processing to create two-dimensional images and three-dimensional virtual views of the inside of the colon. Computer-aided polyp detection (CAPD) automatically detects colonic polyps and presents them to the user in either a first- or second-reader paradigm, with the goal of reducing examination time while increasing detection sensitivity. During colonoscopy, the endoscopists use the colonoscope inside a patient's colon to target potential polyps and validate those found by CAPD. However, there is no direct link between the CT images and the real-time optical colonoscopy (OC) video provided during the operation, so endoscopists need to rely largely on their past experience to locate and remove polyps. The goal of this research project is to study the feasibility of developing an image-guided colonoscopy (IGC) system that combines CTC images, real-time colonoscope position measurements, and the video stream to validate and guide the removal of polyps found by CAPD. Such a system would ease polyp-level validation of CTC and improve the accuracy and efficiency of guiding the endoscopist to the target polyps. In this research project, a centerline-based matching algorithm has been designed to estimate, in real time, the relative location of the colonoscope in the virtual colonoscopy environment. Furthermore, the feasibility of applying online simultaneous localization and mapping (SLAM) to CT image guided colonoscopy has been evaluated to further improve the performance of localizing and removing the pre-defined target polyps. A colon phantom is used to provide a testing setup for assessing the performance of the proposed algorithms. / Master of Science
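The abstract does not spell out the centerline-based matching algorithm, so the following is only an assumed, minimal version of the core lookup it implies: mapping a measured colonoscope insertion depth to a point on the CT-derived colon centerline by cumulative arc length. The input format and the use of insertion depth as the position measurement are assumptions for illustration.

```python
import numpy as np

def locate_on_centerline(centerline, insertion_depth):
    """Map a measured insertion depth to a 3D point on the CT-derived colon
    centerline, given as an Nx3 polyline ordered from the insertion point.
    Uses cumulative arc length and linear interpolation along the segment."""
    seg = np.diff(centerline, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg_len)])     # arc length at each vertex
    d = np.clip(insertion_depth, 0.0, arc[-1])
    i = np.clip(np.searchsorted(arc, d) - 1, 0, len(seg) - 1)
    t = (d - arc[i]) / seg_len[i] if seg_len[i] > 0 else 0.0
    return centerline[i] + t * seg[i]
```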
169

Scalable online decentralized smoothing and mapping

Cunningham, Alexander G. 22 May 2014 (has links)
Many applications for field robots can benefit from large numbers of robots, especially applications where the objective is for the robots to cover or explore a region. A key enabling technology for robust autonomy in these teams of small and cheap robots is the development of collaborative perception to account for the shortcomings of the small and cheap sensors on the robots. In this dissertation, I present DDF-SAM to address the decentralized data fusion (DDF) inference problem with a smoothing and mapping (SAM) approach to single-robot mapping that is online, scalable and consistent while supporting a variety of sensing modalities. The DDF-SAM approach performs fully decentralized simultaneous localization and mapping in which robots choose a relevant subset of variables from their local map to share with neighbors. Each robot summarizes their local map to yield a density on exactly this chosen set of variables, and then distributes this summarized map to neighboring robots, allowing map information to propagate throughout the network. Each robot fuses summarized maps it receives to yield a map solution with an extended sensor horizon. I introduce two primary variations on DDF-SAM, one that uses a batch nonlinear constrained optimization procedure to combine maps, DDF-SAM 1.0, and one that uses an incremental solving approach for substantially faster performance, DDF-SAM 2.0. I validate these systems using a combination of real-world and simulated experiments. In addition, I evaluate design trade-offs for operations within DDF-SAM, with a focus on efficient approximate map summarization to minimize communication costs.
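The map-summarization step described above can be pictured, in the linear-Gaussian case, as marginalizing the non-shared variables out of the local map density in information form via a Schur complement. The sketch below shows only that linearized analogue under assumed inputs (a dense information matrix and vector); DDF-SAM itself summarizes a nonlinear factor graph through optimization, which is not reproduced here.

```python
import numpy as np

def summarize_map(Lambda, eta, shared_idx):
    """Marginalise a Gaussian map density in information form onto the
    variables chosen for sharing.

    Lambda:     full information (precision) matrix of the local map
    eta:        full information vector
    shared_idx: indices of the variables to keep (e.g., landmarks also seen
                by neighbours); all other variables are marginalised out.
    Returns (Lambda_s, eta_s), the summarized density sent to neighbours.
    """
    n = Lambda.shape[0]
    keep = np.asarray(shared_idx)
    drop = np.setdiff1d(np.arange(n), keep)
    Laa = Lambda[np.ix_(keep, keep)]
    Lab = Lambda[np.ix_(keep, drop)]
    Lbb = Lambda[np.ix_(drop, drop)]
    Lbb_inv = np.linalg.inv(Lbb)
    Lambda_s = Laa - Lab @ Lbb_inv @ Lab.T     # Schur complement
    eta_s = eta[keep] - Lab @ Lbb_inv @ eta[drop]
    return Lambda_s, eta_s
```

Each robot would broadcast its summarized density and fuse those it receives into its own graph, which is how map information propagates through the network in the dissertation's scheme.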
170

Cartographie RGB-D dense pour la localisation visuelle temps-réel et la navigation autonome / Dense RGB-D mapping for real-time localisation and autonomous navigation

Meilland, Maxime 28 March 2012 (has links)
In an autonomous navigation context, a precise localisation of the vehicle is important to ensure reliable navigation. Low-cost sensors such as GPS systems are inaccurate and inefficient in urban areas, and therefore the use of such sensors alone is not well suited for autonomous navigation. On the other hand, camera sensors provide a dense photometric measurement that can be processed to obtain both localisation and mapping information. In the robotics community, this problem is well known as Simultaneous Localisation and Mapping (SLAM) and has been studied for the last thirty years. In general, SLAM algorithms are incremental and prone to drift, so such methods may not be efficient for real-time localisation in large-scale environments. Clearly, an a priori 3D model simplifies the localisation and navigation tasks, since it allows the structure and motion estimation problems to be decoupled: the map can be computed beforehand during a learning phase, whilst the localisation is handled in real time using a single camera and the pre-computed model. Classic global 3D model representations are usually inaccurate and photometrically inconsistent. Alternatively, it is proposed to use an ego-centric model that represents, as closely as possible, the real sensor measurements. This representation is composed of a graph of locally accurate spherical panoramas augmented with dense depth information. These augmented panoramas allow varying viewpoints to be generated through novel view synthesis. To localise a camera navigating locally inside the graph, the panoramas are used together with a direct registration technique. The proposed localisation method is accurate, robust to outliers and can handle large illumination changes. Finally, autonomous navigation in urban environments is performed using the learnt model, with only a single camera used to compute the localisation.
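Direct registration of the kind described above boils down to warping a depth-augmented reference keyframe into the current view under a candidate pose and measuring the photometric disagreement. The sketch below evaluates that cost for a standard pinhole camera with nearest-pixel sampling; it is a simplification (the thesis uses spherical panoramas, robust weighting, illumination handling and an efficient minimisation rather than a plain cost evaluation), and all variable names are assumptions.

```python
import numpy as np

def photometric_cost(I_ref, D_ref, I_cur, K, R, t):
    """Direct-alignment cost of a candidate pose (R, t) of the current camera
    relative to a reference keyframe with intensity I_ref and depth D_ref.

    Every reference pixel is back-projected with its depth, transformed by
    (R, t), re-projected with intrinsics K, and compared photometrically."""
    h, w = I_ref.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = D_ref.ravel()
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
    P_ref = np.linalg.inv(K) @ pix * z                 # 3xN points in the reference frame
    P_cur = R @ P_ref + t[:, None]                     # points in the current frame
    proj = K @ P_cur
    x, y = proj[0] / proj[2], proj[1] / proj[2]
    valid = (P_cur[2] > 0) & (x >= 0) & (x < w - 1) & (y >= 0) & (y < h - 1)
    xi, yi = x[valid].astype(int), y[valid].astype(int)  # floor-pixel lookup for brevity
    residual = I_cur[yi, xi] - I_ref.ravel()[valid]
    return float(np.mean(residual ** 2))
```

In a real system this cost would be minimised over the pose (typically with Gauss–Newton on a coarse-to-fine pyramid) rather than merely evaluated.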
