About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Life-long mapping of objects and places in domestic environments

Rogers, John Gilbert 10 January 2013 (has links)
In the future, robots will expand from industrial and research applications to the home. Domestic service robots will work in the home to perform useful tasks such as object retrieval, cleaning, organization, and security. The tireless support of these systems will not only enable able-bodied people to avoid mundane chores; it will also enable the elderly to remain independent of institutional care by providing service, safety, and companionship. Robots will need to understand the relationship between objects and their environments to perform some of these tasks. Structured indoor environments are organized according to architectural guidelines and convenience for their residents. Utilizing this information makes it possible to predict the location of objects. Conversely, one can also predict the function of a room from the detection of a few objects within a given space. This thesis introduces a framework for combining object permanence and context called the probabilistic cognitive model. This framework combines reasoning about the spatial extent of places with reasoning about the identity of objects and their relationships to one another and to the locations where they appear. This type of reasoning takes into account the context in which objects appear to determine their identity and purpose. The probabilistic cognitive model combines a mapping system called OmniMapper with a conditional random field probabilistic model for context representation. The conditional random field models the dependencies between location and identity in a real-world domestic environment. This model is used by mobile robot systems to predict the effects of their actions during autonomous object search tasks in unknown environments.
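The object-place reasoning this abstract describes can be illustrated with a toy sketch. This is not the thesis's OmniMapper or its conditional random field; the object-room compatibility weights below are invented purely for illustration of how detected objects can vote for a room's function:

```python
import math

# Hypothetical compatibility scores between objects and room types; in the
# thesis these dependencies are learned by a conditional random field.
COMPAT = {
    ("mug", "kitchen"): 2.0, ("mug", "office"): 0.5,
    ("monitor", "office"): 2.5, ("monitor", "kitchen"): 0.1,
    ("toaster", "kitchen"): 3.0, ("toaster", "office"): 0.0,
}
ROOMS = ("kitchen", "office")

def room_posterior(objects):
    """Score each room type by summed object compatibilities, then
    normalize the exponentiated scores into a probability distribution."""
    scores = {r: sum(COMPAT.get((o, r), 0.0) for o in objects) for r in ROOMS}
    z = sum(math.exp(s) for s in scores.values())
    return {r: math.exp(s) / z for r, s in scores.items()}

# Seeing a mug and a toaster should make "kitchen" the most likely label.
post = room_posterior(["mug", "toaster"])
```

The same table read in the other direction (room label predicting likely objects) is the "conversely" direction mentioned in the abstract.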
132

Multi-frame Measurement Fusion for State Estimation

Kroetsch, David January 2007 (has links)
Simultaneous Localization and Mapping (SLAM) is a technique used by robots and autonomous vehicles to build up a map while operating in an unknown environment, while at the same time keeping track of a current position estimate. This process is not as straightforward as it may sound. Due to inherent uncertainties in tracking the robot's motion and observing the world around it, errors accumulate. Incorporating External Localization (EL) sensors, such as Global Positioning System (GPS) receivers, which are not subject to the same type of drift over time, allows a system to eliminate errors from the SLAM process. This, however, requires the system to work with measurements from both the SLAM and EL coordinate frames. Since the SLAM coordinate origin depends on the vehicle's starting location, which is unknown, the relationship between it and the EL frame is unknown and must be estimated. The main contributions of this thesis arise from a novel approach to integrating EL with SLAM by estimating a transformation between the SLAM and external reference frames. The Constrained Relative Submap Filter (CRSF) SLAM is a computationally efficient SLAM filter that operates on a series of local submaps instead of one large map, as with Extended Kalman Filter (EKF) SLAM. By integrating the transformation estimation process with CRSF SLAM, a method to correct submap locations with EL is presented. This eliminates long-term estimation drift, aids in loop closing, and allows for accurate map generation with a reference to an external frame.
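The core idea of the abstract — estimating the rigid transformation between the SLAM frame and the EL frame — can be sketched in closed form for the 2-D case, given matched position pairs. This is a generic Procrustes-style alignment for illustration, not the thesis's CRSF machinery:

```python
import math

def estimate_frame_transform(slam_pts, el_pts):
    """Closed-form 2-D rigid transform (theta, tx, ty) mapping SLAM-frame
    points onto externally localized (EL, e.g. GPS) points, given matched
    pairs. A toy stand-in for the frame-alignment estimate in the thesis."""
    n = len(slam_pts)
    csx = sum(p[0] for p in slam_pts) / n
    csy = sum(p[1] for p in slam_pts) / n
    cex = sum(p[0] for p in el_pts) / n
    cey = sum(p[1] for p in el_pts) / n
    num = den = 0.0
    for (sx, sy), (ex, ey) in zip(slam_pts, el_pts):
        ax, ay = sx - csx, sy - csy          # centred SLAM point
        bx, by = ex - cex, ey - cey          # centred EL point
        num += ax * by - ay * bx             # cross products -> sin(theta)
        den += ax * bx + ay * by             # dot products   -> cos(theta)
    theta = math.atan2(num, den)
    tx = cex - (math.cos(theta) * csx - math.sin(theta) * csy)
    ty = cey - (math.sin(theta) * csx + math.cos(theta) * csy)
    return theta, tx, ty

# EL points are the SLAM points rotated 90 degrees and shifted by (1, 2):
theta, tx, ty = estimate_frame_transform(
    [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
    [(1.0, 2.0), (1.0, 3.0), (0.0, 2.0)])
```

Once such a transform is available, any submap anchored in the SLAM frame can be re-expressed in the external frame, which is what enables the drift correction described above.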
133

Pose estimation of a VTOL UAV using IMU, Camera and GPS / Position- och orienteringsskattning av en VTOL UAV med IMU, Kamera och GPS

Bodesund, Fredrik January 2010 (has links)
When an autonomous vehicle has a mission to perform, it is of high importance that the robot has good knowledge of its position. Without it, the vehicle will not be able to navigate properly, and the data that it gathers, which could be of importance for the mission, might not be usable. A helicopter could, for example, be used to collect laser data from the terrain beneath it to produce a 3D map of that terrain. If the knowledge of the position and orientation of the helicopter is poor, the collected laser data will be useless, since it is unknown what the laser actually measures. A successful solution to position and orientation (pose) estimation of an autonomous helicopter, using an inertial measurement unit (IMU), a camera and a GPS, is proposed in this thesis. The problem is to estimate the unknown pose using sensors that measure different physical quantities and give readings containing noise. An extended Kalman filter solution to the simultaneous localisation and mapping (SLAM) problem is used to fuse data from the different sensors and estimate the pose of the robot. The scale invariant feature transform (SIFT) is used for feature extraction, and the unified inverse depth parametrisation (UIDP) model is used to parametrise the landmarks. The orientation of the robot is described by quaternions. To be able to evaluate the performance of the filter, an ABB industrial robot has been used as reference. The pose of the robot's end tool is known with high accuracy and provides a good ground truth against which the estimates can be evaluated. The results show that the algorithm performs well and that the pose is estimated with good accuracy.
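As a small aside on the quaternion orientation representation mentioned in the abstract: composing two rotations is a Hamilton product, and filter implementations typically renormalize after each update to keep the estimate on the unit sphere. This is a generic sketch, not code from the thesis:

```python
import math

def quat_mult(q, r):
    """Hamilton product of quaternions (w, x, y, z); composing two
    rotations, as a filter does when propagating orientation."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2)

def quat_normalize(q):
    """Renormalize (e.g. after an EKF update) back onto the unit sphere."""
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

# Two 90-degree rotations about z compose to a 180-degree rotation about z.
q90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
q180 = quat_normalize(quat_mult(q90, q90))
```

Quaternions avoid the gimbal-lock singularities of Euler angles, which is the usual reason they are chosen for full 3-D attitude estimation.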
134

Image Database for Pose Hypotheses Generation

Nyqvist, Hanna January 2012 (has links)
The presence of autonomous systems is becoming more and more common in today's society. The contexts in which these kinds of systems appear are numerous and the variations are large, from large and complex systems like autonomous mining platforms to smaller, more everyday useful systems like the self-guided vacuum cleaner. It is essential for a completely self-supported mobile robot placed in unknown, dynamic or unstructured environments to be able to localise itself and find its way through maps. This localisation problem is still not completely solved, although the idea of completely autonomous systems arose in human society centuries ago. Its complexity makes it a wide-spread field of research even in the present day. In this work, the localisation problem is approached with an appearance-based method for place recognition. The objective is to develop an algorithm for fast pose hypothesis generation from a map. A database containing very low resolution images from urban environments is built, and very short image retrieval times are made possible by applying image dimension reduction. The evaluation of the database shows that it has real-time potential, because a set of pose hypotheses can be generated in 3-25 hundredths of a second depending on the tuning of the database. The probability of finding a correct pose suggestion among the generated hypotheses is as high as 87%, even when only a few hypotheses are retrieved from the database.
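The retrieval pipeline described above — compress low-resolution images to short vectors, then match a query against the compressed database — can be sketched as follows. Synthetic random "images" and SVD-based PCA stand in for the thesis's actual database and its particular reduction method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a database of very low resolution images: each row is one
# flattened 8x8 image with pixel values in [0, 1].
images = rng.random((200, 64))

# Reduce dimension with PCA (via SVD) so lookups compare short vectors;
# the thesis likewise compresses images to keep retrieval times low.
mean = images.mean(axis=0)
_, _, vt = np.linalg.svd(images - mean, full_matrices=False)
basis = vt[:8]                       # keep the top 8 principal components
db = (images - mean) @ basis.T       # compressed database, shape (200, 8)

def retrieve(query, k=5):
    """Return indices of the k database images closest to `query`
    in the reduced space -- these act as pose hypotheses."""
    code = (query - mean) @ basis.T
    dists = np.linalg.norm(db - code, axis=1)
    return np.argsort(dists)[:k]

# Querying with a database image should return that image first.
hits = retrieve(images[42])
```

Comparing 8-dimensional codes instead of 64-pixel images is what makes hypothesis generation fast enough to be a real-time candidate, at the cost of some discrimination.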
136

Coordinated Landing and Mapping with Aerial and Ground Vehicle Teams

Ma, Yan 17 September 2012 (has links)
Micro Unmanned Aerial Vehicle (UAV) and Unmanned Ground Vehicle (UGV) teams present tremendous opportunities for expanding the range of operations of these vehicles. Effective coordination can exploit the strengths of both while mitigating each other's weaknesses. In particular, a micro UAV typically has limited flight time due to its small payload capacity. To take advantage of the mobility and sensor coverage of a micro UAV in long-range, long-duration surveillance missions, a UGV can act as a mobile station for recharging or battery swaps, and the ability to perform autonomous docking is a prerequisite for such operations. This work presents an approach to coordinate autonomous docking between a quadrotor UAV and a skid-steered UGV. A joint controller is designed to eliminate the relative position error between the vehicles. The controller is validated in simulations, and successful landings are achieved in indoor environments as well as in outdoor settings with standard sensors and real disturbances. Another goal of this work is to improve the autonomy of UAV-UGV teams in positioning-denied environments, a very common scenario for many robotics applications. In such environments, Simultaneous Localization and Mapping (SLAM) capability is the foundation for all autonomous operations. A successful SLAM algorithm generates maps for path planning and object recognition while providing localization information for position tracking. This work proposes a SLAM algorithm that is capable of generating a high-fidelity surface model of the surroundings while accurately estimating the camera pose in real time. This algorithm improves on a clear deficiency of its predecessor: it performs dense reconstruction without a strict volume limitation, enabling practical deployment on robotic systems.
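The idea of a controller that drives the relative position error between the two vehicles to zero can be illustrated with a toy proportional velocity law. The thesis's joint controller is far more elaborate; the gain, time step, and first-order dynamics here are invented for illustration:

```python
def simulate_docking(rel0, kp=0.8, dt=0.1, steps=100):
    """Integrate the relative UAV-UGV position under the velocity
    command v = -kp * error, a simple proportional law that shrinks
    the error geometrically at each discrete time step."""
    rel = list(rel0)
    for _ in range(steps):
        rel = [r + (-kp * r) * dt for r in rel]
    return rel

# Starting 2 m east, 1.5 m south, 3 m above the docking target,
# the relative error decays toward zero over 10 simulated seconds.
final = simulate_docking([2.0, -1.5, 3.0])
```

Each step multiplies the error by (1 - kp*dt) = 0.92, so after 100 steps the residual is a fraction of a millimetre of the initial offset, which is the "error elimination" behaviour the abstract refers to, in its simplest possible form.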
137

Variabilitätsmodellierung in Kartographierungs- und Lokalisierungsverfahren / Variability Modelling in Localization and Mapping Algorithms

Werner, Sebastian 24 July 2015 (has links) (PDF)
Automation plays an ever more important role today; in robotics in particular, new fields of application keep emerging in which humans are replaced by autonomous vehicles. Most deployed robots orient themselves by track markings installed in their operating environments. Such systems, however, involve a high installation effort, which drives the development of robot systems that orient themselves using their on-board sensors. Although a large number of robots exist that could be used for this, the development of the control software is still a subject of research. The controller requires an environment map that the robot can orient itself by. SLAM methods, which perform simultaneous localisation and mapping, are particularly well suited for this: as the robot moves through the space, it builds an environment map from its sensor data and localises itself against it in order to determine its position on the map exactly. In the course of this work, more than 30 different SLAM implementations that solve the SLAM problem were found. These are, however, for the most part stand-alone implementations adapted to specific system or environment requirements. There is no publicly accessible overview that compares all, or even most, of the methods, e.g. with respect to their mode of operation, system requirements (sensors, robot platform), environment requirements (indoor, outdoor, ...), accuracy, or speed. Many of these SLAMs come with implementations and documentation describing their application areas, test conditions, or improvements over other SLAM methods, but given the large number of publications this does not make finding a suitable SLAM method any easier.
Given such a number of SLAM methods and implementations, two questions arise from a software-engineering perspective: 1. Is it possible to reuse individual parts of a SLAM? 2. Is it possible to exchange individual parts of a SLAM dynamically? The goal of this work is to answer these two questions. To this end, an overview of all SLAMs found is first built up in order to distinguish them by their basic properties. From the multitude of methods, the grid-based methods that use laser scanners or depth cameras as sensors are selected as the set to be investigated. This subset of SLAM methods is examined more closely with regard to its non-functional properties and decomposed, where possible, into components that can be reused across several different implementations. From the extracted components, a feature tree is to be built that gives the user an overview and the ability to compose SLAM methods according to specific criteria (system requirements, environments, ...) or to adapt them at runtime. For this, the available SLAM implementations and the accompanying documentation must be analysed with respect to their commonalities and differences.
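The two software-engineering questions above — reuse and dynamic exchange of SLAM parts — can be illustrated by a pipeline whose stages are swappable components. The component names and trivial behaviours below are invented for illustration; this is not one of the surveyed implementations:

```python
# Minimal sketch of decomposing a grid-based SLAM step into exchangeable
# components that can be recombined, even at runtime.
class SlamPipeline:
    def __init__(self, scan_matcher, map_updater):
        self.scan_matcher = scan_matcher    # component 1: pose correction
        self.map_updater = map_updater      # component 2: map maintenance

    def step(self, pose, scan, grid):
        pose = self.scan_matcher(pose, scan, grid)
        self.map_updater(grid, pose, scan)
        return pose

# Two interchangeable scan matchers (deliberately trivial):
def identity_matcher(pose, scan, grid):
    return pose                             # trust odometry as-is

def snap_matcher(pose, scan, grid):
    return tuple(round(p) for p in pose)    # snap pose onto grid cells

def count_updater(grid, pose, scan):
    grid[pose] = grid.get(pose, 0) + 1      # mark the cell as observed

# Components are plugged in at construction and could be swapped at runtime
# by assigning a different callable to pipeline.scan_matcher.
pipeline = SlamPipeline(snap_matcher, count_updater)
grid = {}
pose = pipeline.step((1.2, 0.8), scan=None, grid=grid)
```

A feature tree as proposed in the thesis would sit one level above this: each feature selects which concrete component fills each slot, subject to constraints such as the available sensors.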
138

Laser-Based 3D Mapping and Navigation in Planetary Worksite Environments

Tong, Chi Hay 14 January 2014 (has links)
For robotic deployments in planetary worksite environments, map construction and navigation are essential for tasks such as base construction, scientific investigation, and in-situ resource utilization. However, operation in a planetary environment imposes sensing restrictions, as well as challenges due to the terrain. In this thesis, we develop enabling technologies for autonomous mapping and navigation by employing a panning laser rangefinder as our primary sensor on a rover platform. The mapping task is addressed as a three-dimensional Simultaneous Localization and Mapping (3D SLAM) problem. During operation, long-range 360 degree scans are obtained at infrequent stops. These scans are aligned using a combination of sparse features and odometry measurements in a batch alignment framework, resulting in accurate maps of planetary worksite terrain. For navigation, the panning laser rangefinder is configured to perform short, continuous sweeps while the rover is in motion. An appearance-based approach is taken, where laser intensity images are used to compute Visual Odometry (VO) estimates. We overcome the motion distortion issues by formulating the estimation problem in continuous time. This is facilitated by the introduction of Gaussian Process Gauss-Newton (GPGN), a novel algorithm for nonparametric, continuous-time, nonlinear, batch state estimation. Extensive experimental validation is provided for both mapping and navigation components using data gathered at multiple planetary analogue test sites.
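The continuous-time idea behind GPGN — treating the trajectory as a Gaussian process that can be queried at arbitrary times, so that measurements taken mid-sweep can be handled — can be illustrated with plain GP regression on a one-dimensional pose signal. The kernel choice and all parameters are illustrative, not taken from the thesis:

```python
import numpy as np

def kernel(a, b, length=0.5, var=1.0):
    """Squared-exponential covariance between two vectors of times."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Synthetic 1-D pose measurements at discrete stop times.
t_meas = np.array([0.0, 1.0, 2.0, 3.0])
x_meas = np.sin(t_meas)
noise = 1e-6                                  # small measurement noise

K = kernel(t_meas, t_meas) + noise * np.eye(len(t_meas))

def query(t_query):
    """Posterior mean of the pose at arbitrary query times -- the
    continuous-time property that lets mid-motion laser returns be
    associated with an interpolated pose."""
    Ks = kernel(np.asarray(t_query, dtype=float), t_meas)
    return Ks @ np.linalg.solve(K, x_meas)

x_hat = query([1.5])                          # pose between two measurements
```

Querying at a measurement time reproduces the measurement almost exactly, while queries between measurements yield a smooth interpolation; GPGN embeds this idea inside a Gauss-Newton batch estimator rather than using plain regression.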
140

Simultaneous Localisation and Mapping using Autonomous Target Detection and Recognition / Simultan lokalisering och kartering med användning av automatisk måligenkänning

Sinivaara, Kristian January 2014 (has links)
Simultaneous localisation and mapping (SLAM) is an often used positioning approach in GPS-denied indoor environments. This thesis presents a novel method of combining SLAM with autonomous/aided target detection and recognition (ATD/R), which is beneficial for both methods. The method uses physical objects that are recognisable by ATR as unambiguous features in SLAM, while SLAM provides the ATR with better position estimates. The intended application is to improve the positioning of a first responder moving through an indoor environment, where the map offers localisation and simultaneously helps locate people, furniture and potentially dangerous objects like gas canisters. The developed algorithm, dubbed ATR-SLAM, uses existing methods from different fields, such as EKF-SLAM and ATR based on rectangle estimation. Landmarks in the form of 3D point features based on NARF are used in conjunction with identified objects, and 3D object models replace landmarks that encode the same information. This leads to a more compact map representation with fewer landmarks, which partly compensates for the cost introduced by the ATR. Experiments performed using point clouds generated from a time-of-flight laser scanner show that ATR-SLAM produces more consistent maps and more robust loop closures than EKF-SLAM using only NARF landmarks.
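How re-observing a recognised object corrects the map can be illustrated with a single EKF measurement update for one 2-D landmark. This is a stripped-down analogue: the thesis's ATR-SLAM state and measurement models are richer, and all noise values here are invented:

```python
import numpy as np

def ekf_update(x, P, z, R):
    """One EKF measurement update. State x = [robot_x, robot_y, lm_x, lm_y];
    z is the observed (dx, dy) offset from robot to landmark."""
    H = np.array([[-1.0, 0.0, 1.0, 0.0],
                  [0.0, -1.0, 0.0, 1.0]])    # Jacobian of h(x) = lm - robot
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

x = np.array([0.0, 0.0, 2.0, 1.0])           # prior: landmark at (2, 1)
P = np.diag([0.01, 0.01, 1.0, 1.0])          # robot well known, landmark not
R = 0.01 * np.eye(2)                         # measurement noise
z = np.array([2.5, 1.4])                     # observed offset to the object
x_new, P_new = ekf_update(x, P, z, R)
```

Because the robot pose is much better known than the landmark, the update moves the landmark estimate almost all the way toward robot + z and sharply reduces its covariance; an unambiguous object identity from ATR is what licenses associating the measurement with that landmark in the first place.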
