31

Navigation of Mobile Robots in Unknown Environments Using Range Measurements / Navigace mobilních robotů v neznámém prostředí s využitím měření vzdáleností

Jež, Ondřej January 2008 (has links)
The ability of a robot to navigate itself in the environment is a crucial step towards its autonomy. Navigation, as a subtask of the development of autonomous robots, is the subject of this thesis, focusing on the development of a method for simultaneous localization and mapping (SLAM) of mobile robots in six degrees of freedom (DOF). As part of this research, a platform for 3D range data acquisition based on a continuously inclined laser rangefinder was developed. This platform is presented, evaluating the measurements and also presenting the robotic equipment on which the platform can be fitted. The localization and mapping task is equivalent to the registration of multiple 3D images into a common frame of reference. For this purpose, a method based on the Iterative Closest Point (ICP) algorithm was developed. First, the originally implemented SLAM method is presented, focusing on the time-wise performance and the registration quality issues introduced by the implemented algorithms. In order to accelerate and improve the quality of the time-demanding 6DOF image registration, an extended method was developed. The major extension is the introduction of a factorized registration, extracting 2D representations of vertical objects called leveled maps from the 3D point sets, ensuring these representations are 3DOF invariant. The extracted representations are registered in 3DOF using the ICP algorithm, allowing pre-alignment of the 3D data for the subsequent robust 6DOF ICP-based registration. The extended method is presented, showing all important modifications to the original method. The developed registration method was evaluated using real 3D data acquired in different indoor environments, examining the benefits of the factorization and other extensions as well as the performance of the original ICP-based method. The factorization gives promising results compared to a single-phase 6DOF registration in vertically structured environments. The disadvantages of the method are also discussed and possible solutions proposed. Finally, the future prospects of the research are presented.
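The registration primitive the abstract builds on, aligning two 3D point sets by iterating nearest-neighbour association and a closed-form rigid fit, can be sketched as follows. This is a minimal point-to-point ICP illustration rather than the author's implementation; the factorized variant described above would first run a similar loop in 3DOF on the extracted leveled maps to pre-align the data:

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Closed-form (Kabsch/SVD) rigid transform mapping src onto dst."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        H = (src - mu_s).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_d - R @ mu_s
        return R, t

    def icp(source, target, iters=30, tol=1e-6):
        """Point-to-point ICP: returns R, t aligning source onto target."""
        tree = cKDTree(target)
        R, t = np.eye(3), np.zeros(3)
        prev_err = np.inf
        for _ in range(iters):
            moved = source @ R.T + t
            dist, idx = tree.query(moved)           # nearest-neighbour association
            R_step, t_step = best_rigid_transform(moved, target[idx])
            R, t = R_step @ R, R_step @ t + t_step  # compose incremental update
            err = dist.mean()
            if abs(prev_err - err) < tol:
                break
            prev_err = err
        return R, t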
32

Simultánní lokalizace, mapování a vytváření modelu prostředí pro autonomní robotiku / SLAM methods and tools for building world models in autonomous robotics

Pilch, Tomasz January 2016 (has links)
This master's thesis explains what SLAM is and presents possible implementations. The main goal is to implement a chosen algorithm to create a map and to process measured data from a LIDAR sensor.
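The abstract does not name the chosen algorithm, so purely as an illustration of turning LIDAR range data into a map, here is a minimal log-odds occupancy-grid update for a single 2D scan; the cell size, log-odds increments, and ray-marching step are assumptions:

    import numpy as np

    L_OCC, L_FREE = 0.85, -0.4     # assumed log-odds increments
    RES = 0.05                     # assumed grid resolution [m/cell]

    def update_grid(grid, pose, ranges, angles, max_range=10.0):
        """Integrate one 2D LIDAR scan (ranges/angles) into a log-odds grid.
        pose = (x, y, theta) in world coordinates; grid is a 2D float array."""
        x0, y0, th = pose
        for r, a in zip(ranges, angles):
            hit = r < max_range
            r = min(r, max_range)
            # march along the beam, marking traversed cells as free
            for d in np.arange(0.0, r, RES):
                cx = int((x0 + d * np.cos(th + a)) / RES)
                cy = int((y0 + d * np.sin(th + a)) / RES)
                if 0 <= cx < grid.shape[0] and 0 <= cy < grid.shape[1]:
                    grid[cx, cy] += L_FREE
            if hit:  # mark the endpoint cell as occupied
                ex = int((x0 + r * np.cos(th + a)) / RES)
                ey = int((y0 + r * np.sin(th + a)) / RES)
                if 0 <= ex < grid.shape[0] and 0 <= ey < grid.shape[1]:
                    grid[ex, ey] += L_OCC
        return grid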
33

Simultaneous recognition, localization and mapping for wearable visual robots

Castle, Robert Oliver January 2009 (has links)
With the advent of ever smaller and more powerful portable computing devices, and ever smaller cameras, wearable computing is becoming more feasible. The ever-increasing number of augmented reality applications allows users to view additional data overlaid on their view of the world using portable computing devices. The main aim of this research is to enable the user of a wearable robot to explore large environments while automatically viewing augmented reality at locations and on objects of interest. To implement this research a wearable visual robotic assistant is designed and constructed. Evaluation of the different technologies results in a final design that combines a shoulder-mounted self-stabilizing active camera and a hand-held magic lens into a single portable system. To enable the wearable assistant to locate known objects, a system is designed that combines an established method for appearance-based recognition with one for simultaneous localization and mapping using a single camera. As well as identifying planar objects, the system locates the objects relative to the camera in 3D by computing the image-to-database homography. The 3D positions of the objects are then used as additional measurements in the SLAM process, which routinely uses other point features to acquire and maintain a map of the surroundings, irrespective of whether objects are present or not. The monocular SLAM system is then replaced with a new method for building maps and tracking. Instead of tracking and mapping in a linear, frame-rate-driven manner, this adopted method separates the mapping from the tracking. This allows higher-density maps to be constructed, and provides more robust tracking. The flexible framework provided by this method is extended to support multiple independent cameras and multiple independent maps, allowing the user of the wearable two-camera robot to escape the confines of the desktop and explore arbitrarily sized environments. The final part of the work brings together the parallel tracking and multiple mapping system with the recognition and localization of planar objects from a database. The method is able to build multiple feature-rich maps of the world and simultaneously recognize, reconstruct and localize objects within these maps. The object reconstruction process uses the spatially separated keyframes from the tracking and mapping processes to recognize and localize known objects in the world. These are then used for augmented reality overlays related to the objects.
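The recognition-and-localization step described above, matching image features against a database view of a planar object and recovering the image-to-database homography, can be illustrated with standard OpenCV calls; the feature type (ORB) and the thresholds here are illustrative choices rather than those used in the thesis:

    import cv2
    import numpy as np

    def locate_planar_object(db_img, frame):
        """Match features between a database image of a planar object and the
        current frame, and estimate the image-to-database homography."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(db_img, None)
        kp2, des2 = orb.detectAndCompute(frame, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        if len(matches) < 10:
            return None
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            return None
        # project the database image outline into the frame to localize the object
        h, w = db_img.shape[:2]
        corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
        return cv2.perspectiveTransform(corners, H)

Given the camera intrinsics, such a homography can be decomposed into a relative camera-object pose, which is the form in which the abstract says object positions enter the SLAM filter as additional measurements.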
34

Segbot: a multipurpose robotic platform for multi-floor navigation

Unwala, Ali Ishaq 17 February 2015 (has links)
The goal of this work is to describe a robotics platform called the Building Wide Intelligence Segbot (segbot). The segbot is a two-wheeled robot that can robustly navigate our building, perform obstacle avoidance, and reason about the world. This work has two main goals. First, we introduce the segbot platform to anyone who may use it in the future. We begin by examining the off-the-shelf components we used and how to build a robot that is able to navigate a complex multi-floor building environment with moving obstacles. Then we explain the software from a top-down viewpoint, with a three-layer abstraction model for segmenting code on any robotics platform. The second part of this document describes current work on the segbot platform, which is able, though not yet robustly, to take requests for coffee and navigate to a coffee shop while moving across multiple floors of a building. My contribution to this work is building an infrastructure for multi-floor navigation. The multi-floor infrastructure built is not yet robust but has helped identify several issues that will need to be tackled in future iterations of the segbot.
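The multi-floor infrastructure is described only at a high level, so the following is a hypothetical sketch of how such navigation is commonly structured as a small state machine (drive to the elevator, request the target floor, switch maps and relocalize, resume navigation); the state names and the navigate_to/call_elevator/set_map helpers are assumptions for illustration, not the segbot API:

    from enum import Enum, auto

    class State(Enum):
        PLAN = auto()
        GO_TO_ELEVATOR = auto()
        RIDE_ELEVATOR = auto()
        RELOCALIZE = auto()
        GO_TO_GOAL = auto()
        DONE = auto()

    def multi_floor_navigate(robot, goal, maps):
        """Drive `robot` to `goal` (which carries a floor id), switching between
        per-floor maps. `robot` is assumed to expose navigate_to(), call_elevator(),
        set_map() and current_floor -- hypothetical helpers for illustration."""
        state = State.PLAN
        while state != State.DONE:
            if state == State.PLAN:
                state = (State.GO_TO_GOAL if robot.current_floor == goal.floor
                         else State.GO_TO_ELEVATOR)
            elif state == State.GO_TO_ELEVATOR:
                robot.navigate_to(maps[robot.current_floor].elevator_pose)
                state = State.RIDE_ELEVATOR
            elif state == State.RIDE_ELEVATOR:
                robot.call_elevator(goal.floor)   # may require human assistance
                state = State.RELOCALIZE
            elif state == State.RELOCALIZE:
                robot.set_map(maps[goal.floor])   # switch map, re-estimate pose
                state = State.GO_TO_GOAL
            elif state == State.GO_TO_GOAL:
                robot.navigate_to(goal.pose)
                state = State.DONE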
35

Matching Feature Points in 3D World

Avdiu, Blerta January 2012 (has links)
This thesis deals with a highly topical problem in the computer vision field, scene understanding, approached through the matching of 3D feature point images. The objective is to use Saab's latest advances in the extraction of 3D feature points to identify the best alignment of at least two 3D feature point images. The thesis gives a theoretical overview of recent algorithms used for feature detection, description and matching. The work continues with a brief description of the simultaneous localization and mapping (SLAM) technique, ending with a case study evaluating a newly developed software solution for SLAM called slam6d. Slam6d is a tool that registers point clouds into a common coordinate system, performing automatic, highly accurate registration of laser scans. In the case study the use of slam6d is extended to registering 3D feature point images extracted from a stereo camera, and the registration results are analyzed. We start with the registration of a single 3D feature point image captured by a stationary image sensor and continue with the registration of multiple images following a trail. The conclusion from the case study is that slam6d can register feature point images not derived from laser scans with high accuracy in the single-image case, but it introduces some overlap errors when registering multiple images along a trail.
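Registering a sequence of images "following a trail" amounts to chaining pairwise rigid transforms into a common coordinate system, which is also where small pairwise errors accumulate into the overlap problems noted above. A minimal sketch is given below; the pairwise registration itself (e.g. the ICP used inside slam6d) is abstracted behind register_pair, an assumed callable:

    import numpy as np

    def to_homogeneous(R, t):
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T

    def chain_registrations(clouds, register_pair):
        """Place every cloud in the frame of clouds[0] by composing pairwise
        transforms. register_pair(src, dst) -> (R, t) is an assumed callable
        (e.g. an ICP implementation) aligning src onto dst."""
        poses = [np.eye(4)]                  # pose of cloud i in the common frame
        for i in range(1, len(clouds)):
            R, t = register_pair(clouds[i], clouds[i - 1])
            poses.append(poses[i - 1] @ to_homogeneous(R, t))
        merged = [
            (np.c_[c, np.ones(len(c))] @ P.T)[:, :3]   # apply pose to each point
            for c, P in zip(clouds, poses)
        ]
        return poses, np.vstack(merged)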
36

Incremental smoothing and mapping

Kaess, Michael 17 November 2008 (has links)
Incremental smoothing and mapping (iSAM) is presented, a novel approach to the simultaneous localization and mapping (SLAM) problem. SLAM is the problem of estimating an observer's position from local measurements only, while creating a consistent map of the environment. The problem is difficult because even very small errors in the local measurements accumulate over time and lead to large global errors. iSAM provides an exact and efficient solution to the SLAM estimation problem while also addressing data association. For the estimation problem, iSAM provides an exact solution by performing smoothing, which keeps all previous poses as part of the estimation problem, and therefore avoids linearization errors. iSAM uses methods from sparse linear algebra to provide an efficient incremental solution. In particular, iSAM deploys a direct equation solver based on QR matrix factorization of the naturally sparse smoothing information matrix. Instead of refactoring the matrix whenever new measurements arrive, only the entries of the factor matrix that actually change are calculated. iSAM is efficient even for robot trajectories with many loops as it performs periodic variable reordering to avoid unnecessary fill-in in the factor matrix. For the data association problem, I present state-of-the-art data association techniques in the context of iSAM and present an efficient algorithm to obtain the necessary estimation uncertainties in real time based on the factored information matrix. I systematically evaluate the components of iSAM as well as the overall algorithm using various simulated and real-world data sets.
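The linear-algebra core that iSAM builds on, solving each linearized smoothing step by QR factorization of the stacked measurement Jacobian rather than by forming the normal equations, can be illustrated in dense toy form; the incremental Givens-rotation updates and the variable reordering that make iSAM efficient are not shown:

    import numpy as np

    def solve_smoothing_step(A, b):
        """One Gauss-Newton step of the smoothing problem min ||A * dx - b||^2.
        A is the stacked measurement Jacobian (one block row per measurement),
        b the stacked residual vector. QR avoids forming A^T A explicitly."""
        Q, R = np.linalg.qr(A)        # A = Q R, R upper triangular ("factor matrix")
        d = Q.T @ b
        dx = np.linalg.solve(R, d)    # back-substitution
        return dx, R                  # R can be reused for uncertainty recovery

    # Toy usage: 3 scalar unknowns observed by 5 noisy linear measurements.
    rng = np.random.default_rng(0)
    x_true = np.array([1.0, -2.0, 0.5])
    A = rng.normal(size=(5, 3))
    b = A @ x_true + 0.01 * rng.normal(size=5)
    dx, R = solve_smoothing_step(A, b)
    print(dx)   # close to x_true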
37

Stereo vision for simultaneous localization and mapping

Brink, Wikus December 2012 (has links)
Thesis (MScEng)--Stellenbosch University, 2012. Simultaneous localization and mapping (SLAM) is vital for autonomous robot navigation. The robot must build a map of its environment while tracking its own motion through that map. Although many solutions to this intricate problem have been proposed, one of the most prominent issues that still needs to be resolved is to accurately measure and track landmarks over time. In this thesis we investigate the use of stereo vision for this purpose. In order to find landmarks in images we explore the use of two feature detectors: the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF). Both these algorithms find salient points in images and calculate a descriptor for each point that is invariant to scale, rotation and illumination. By using the descriptors we match these image features between stereo images and use the geometry of the system to calculate a set of 3D landmark measurements. A Taylor approximation of this transformation is used to derive a Gaussian noise model for the measurements. The measured landmarks are matched to landmarks in a map to find correspondences. We find that this process often incorrectly matches ambiguous landmarks. To find these mismatches we develop a novel outlier detection scheme based on the random sample consensus (RANSAC) framework. We use a similarity transformation for the RANSAC model and derive a probabilistic consensus measure that takes the uncertainties of landmark locations into account. Through simulation and practical tests we find that this method is a significant improvement on the standard approach of using the fundamental matrix. With accurately identified landmarks we are able to perform SLAM. We investigate the use of three popular SLAM algorithms: EKF SLAM, FastSLAM and FastSLAM 2. EKF SLAM uses a Gaussian distribution to describe the system's states and linearizes the motion and measurement equations with Taylor approximations. The two FastSLAM algorithms are based on the Rao-Blackwellized particle filter that uses particles to describe the robot states, and EKFs to estimate the landmark states. FastSLAM 2 uses a refinement process to decrease the size of the proposal distribution and in doing so decreases the number of particles needed for accurate SLAM. We test the three SLAM algorithms extensively in a simulation environment and find that all three are capable of very accurate results under the right circumstances. EKF SLAM displays extreme sensitivity to landmark mismatches. FastSLAM, on the other hand, is considerably more robust against landmark mismatches but is unable to describe the six-dimensional state vector required for 3D SLAM. FastSLAM 2 offers a good compromise between efficiency and accuracy, and performs well overall. In order to evaluate the complete system we test it with real-world data. We find that our outlier detection algorithm is very effective and greatly increases the accuracy of the SLAM systems. We compare results obtained by all three SLAM systems, with both feature detection algorithms, against DGPS ground truth data and achieve accuracies comparable to other state-of-the-art systems. From our results we conclude that stereo vision is viable as a sensor for SLAM.
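The landmark measurement step described above, triangulating a matched feature from its pixel coordinates and disparity and propagating pixel noise through a first-order (Taylor) approximation to a Gaussian covariance, can be sketched as follows for an ideal rectified stereo pair; the focal length, baseline, principal point, and pixel noise are placeholder values:

    import numpy as np

    F, B, CX, CY = 700.0, 0.12, 320.0, 240.0   # assumed intrinsics: focal [px], baseline [m]
    SIGMA_PX = 0.5                             # assumed pixel measurement noise [px]

    def triangulate(uL, v, d):
        """3D landmark position from left-image pixel (uL, v) and disparity d."""
        Z = F * B / d
        X = (uL - CX) * Z / F
        Y = (v - CY) * Z / F
        return np.array([X, Y, Z])

    def landmark_covariance(uL, v, d):
        """First-order propagation of pixel noise: cov = J diag(sigma^2) J^T,
        where J is the Jacobian of triangulate() w.r.t. (uL, v, d)."""
        Z = F * B / d
        J = np.array([
            [Z / F, 0.0,   -(uL - CX) * B / d**2],
            [0.0,   Z / F, -(v  - CY) * B / d**2],
            [0.0,   0.0,   -F * B / d**2],
        ])
        return J @ (SIGMA_PX**2 * np.eye(3)) @ J.T

    p = triangulate(350.0, 250.0, 20.0)        # example matched feature
    P = landmark_covariance(350.0, 250.0, 20.0)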
38

From images to point clouds: practical considerations for three-dimensional computer vision

Herrera Castro, D. (Daniel) 04 August 2015 (has links)
Three-dimensional scene reconstruction has been an important area of research for many decades. It has a myriad of applications ranging from entertainment to medicine. This thesis explores the 3D reconstruction pipeline and proposes novel methods to improve many of the steps necessary to achieve a high-quality reconstruction. It proposes novel methods in the areas of depth sensor calibration, simultaneous localization and mapping, depth map inpainting, point cloud simplification, and free-viewpoint rendering. Geometric camera calibration is necessary in every 3D reconstruction pipeline. This thesis focuses on the calibration of depth sensors. It presents a review of sensor models and how they can be calibrated. It then examines the case of the well-known Kinect sensor and proposes a novel calibration method using only planar targets. Reconstructing a scene using only color cameras entails different challenges than using depth sensors. Moreover, online applications require real-time response and must update the model as new frames are received. The thesis looks at these challenges and presents a novel simultaneous localization and mapping system using only color cameras. It adaptively triangulates points based on the detected baseline while still utilizing non-triangulated features for pose estimation. The thesis then addresses extrapolating missing information in depth maps. It presents three novel methods for depth map inpainting. The first utilizes random sampling to fit planes in the missing regions. The second method utilizes a second-order prior aligned with intensity edges. The third method learns natural filters to apply a Markov random field on a joint intensity and depth prior. This thesis also looks at the issue of reducing the quantity of 3D information to a manageable size. It looks at how to merge depth maps from multiple views without storing redundant information. It presents a method to discard this redundant information while still maintaining the naturally variable resolution. Finally, transparency estimation is examined in the context of free-viewpoint rendering. A procedure to estimate transparency maps for the foreground layers of a multi-view scene is presented. The results obtained reinforce the need for a high-accuracy 3D reconstruction pipeline including all the previously presented steps.
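The first inpainting method mentioned above, filling a hole by fitting a plane to the surrounding valid depth through random sampling, can be sketched roughly as follows. For brevity the plane is fitted directly in image-space depth coordinates, which is a simplification of fitting a 3D plane to back-projected points; thresholds and iteration counts are placeholders:

    import numpy as np

    def fit_depth_plane_ransac(us, vs, ds, iters=200, thresh=0.02, rng=None):
        """RANSAC-style fit of d = a*u + b*v + c over valid pixels (us, vs, ds)."""
        rng = rng or np.random.default_rng()
        best, best_inliers = None, 0
        pts = np.c_[us, vs, np.ones_like(us)]
        for _ in range(iters):
            idx = rng.choice(len(us), 3, replace=False)
            try:
                coeffs = np.linalg.solve(pts[idx], ds[idx])   # (a, b, c)
            except np.linalg.LinAlgError:
                continue                                      # degenerate sample
            inliers = np.sum(np.abs(pts @ coeffs - ds) < thresh)
            if inliers > best_inliers:
                best, best_inliers = coeffs, inliers
        return best

    def inpaint_hole(depth, hole_mask):
        """Fill the masked pixels of a float depth map with the plane fitted to
        the surrounding valid pixels by random sampling."""
        valid = (~hole_mask) & (depth > 0)
        vs, us = np.nonzero(valid)
        coeffs = fit_depth_plane_ransac(us.astype(float), vs.astype(float), depth[valid])
        hv, hu = np.nonzero(hole_mask)
        depth[hv, hu] = coeffs[0] * hu + coeffs[1] * hv + coeffs[2]
        return depth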
39

Průběžná lokalizace a mapování pomocí mobilního robotu / Simultaneous Localization and Mapping Using Mobile Robot

Neužil, Tomáš January 2008 (has links)
This work presents an overview of the simultaneous localization and mapping (SLAM) problem in mobile robotics. An Extended Kalman filter (EKF) based algorithm for localization and mapping is proposed. For the EKF algorithm, models of the skid-steering mobile robot and the laser scanner are presented. The EKF algorithm is feature-based; therefore, a method for determining landmark positions was developed. This segmentation method is based on clustering of the Radon transform space. The proposed SLAM algorithm was tested with real data measured with the UTAR mobile platform. Achievements of the work are summarized in the conclusion, and possible improvements of the components are suggested.
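An EKF-based, feature-oriented update of the kind described above typically fuses each range-bearing landmark observation as follows. This is a generic textbook-style sketch (2D pose, one already-initialized landmark), not the thesis code, and the measurement noise values are placeholders:

    import numpy as np

    R_MEAS = np.diag([0.05**2, np.deg2rad(1.0)**2])   # assumed range/bearing noise

    def ekf_landmark_update(x, P, z, lm_idx):
        """EKF measurement update for one range-bearing observation z = (r, phi)
        of landmark lm_idx. State x = [xr, yr, theta, lx1, ly1, ...], covariance P."""
        xr, yr, th = x[0], x[1], x[2]
        lx, ly = x[3 + 2 * lm_idx], x[4 + 2 * lm_idx]
        dx, dy = lx - xr, ly - yr
        q = dx**2 + dy**2
        sq = np.sqrt(q)
        z_hat = np.array([sq, np.arctan2(dy, dx) - th])   # predicted measurement
        # Jacobian of the measurement w.r.t. the full state (sparse: pose + one landmark)
        H = np.zeros((2, len(x)))
        H[:, 0:3] = np.array([[-dx / sq, -dy / sq,  0.0],
                              [ dy / q,  -dx / q,  -1.0]])
        H[:, 3 + 2 * lm_idx:5 + 2 * lm_idx] = np.array([[ dx / sq,  dy / sq],
                                                        [-dy / q,   dx / q]])
        y = z - z_hat
        y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi       # wrap bearing residual
        S = H @ P @ H.T + R_MEAS
        K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
        x = x + K @ y
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P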
40

ROOM CATEGORIZATION USING SIMULTANEOUS LOCALIZATION AND MAPPING AND CONVOLUTIONAL NEURAL NETWORK

Iman Yazdansepas (9001001) 23 June 2020 (has links)
Robotic industries are growing faster than in any other era with the demand for and rise of in-home or assistive robots. Such a robot should be able to navigate between different rooms in a house autonomously. For autonomous navigation, the robot needs to build a map of the surrounding unknown environment and localize itself within the map. For home robots, distinguishing between different rooms improves the functionality of the robot. In this research, Simultaneous Localization And Mapping (SLAM) utilizing a LiDAR sensor is used to construct the environment map. LiDAR is more accurate and less sensitive to light intensity than vision. The SLAM method used to create the map of the environment is Gmapping, one of the robust and user-friendly packages in the Robot Operating System (ROS), which creates an accurate map and requires comparatively little computational power. The constructed map is then used for room categorization with a Convolutional Neural Network (CNN), a powerful technique for classifying rooms based on the generated 2D map images. To demonstrate the applicability of the approach, simulations and experiments are designed and performed on campus and in an apartment environment. The results indicate that Gmapping provides an accurate map. The Convolutional Neural Network is trained with a data set of different apartment maps and then used to classify each room mapped with Gmapping in the experimental design. The room categorization results are compared with other approaches in the literature using the same data set to indicate the performance. The classification results show the applicability of using a CNN for room categorization in applications such as assistive robots.
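A minimal sketch of the kind of classifier described, a small CNN that takes a 2D occupancy-grid map image and predicts a room category, is given below; the architecture, input size, and class count are illustrative assumptions rather than the network used in this thesis:

    import torch
    import torch.nn as nn

    NUM_CLASSES = 5          # assumed room categories, e.g. kitchen, bedroom, ...

    class RoomCNN(nn.Module):
        """Tiny CNN over 1-channel 64x64 occupancy-grid map crops."""
        def __init__(self, num_classes=NUM_CLASSES):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            )
            self.classifier = nn.Sequential(
                nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
                nn.Linear(128, num_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    model = RoomCNN()
    maps = torch.rand(4, 1, 64, 64)           # batch of map crops in [0, 1]
    labels = torch.randint(0, NUM_CLASSES, (4,))
    loss = nn.CrossEntropyLoss()(model(maps), labels)
    loss.backward()                           # one illustrative training step (no optimizer shown)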
