1

Large-scale urban localisation with a pushbroom LIDAR

Baldwin, Ian Alan. January 2013
Truly autonomous operation for any field robot relies on a well-defined pyramid of technical competencies. Consider the case of an autonomous car – we require the vehicle to perceive its environment through noisy sensors, robustly fuse this information into an accurate representation of the world, and use this representation to plan and execute complex tasks – all the while dealing with the uncertainties inherent in real-world operation. Of fundamental importance to all these capabilities is localisation – we always need to know where we are if we are to plan where we are going (or how to get there). As road vehicles push towards becoming truly autonomous, the system's ability to stay accurately localised over its operating lifetime is of crucial importance – this is the core issue of lifelong localisation. The goals in this thesis are threefold – to develop the hardware needed to reliably acquire data over large scales, to build a localisation framework that is robust enough to be used over the long term, and to establish a method of adapting our framework when necessary, so that we can accommodate the inevitable difficulties of operating at city scale. We begin by developing the physical means to make large-scale localisation achievable and affordable. This takes the form of a stand-alone, rugged sensor payload – incorporating a number of sensing modalities – that can be deployed in either a mapping or localisation role. We then present a new technique for localisation in a prior map using an information-theoretic framework. The core idea is to build a dense retrospective sensor history, which is then matched statistically within the prior map, leveraging the persistent structure in the environment; we show that by doing so it is possible to stay localised over the course of many months and kilometres. The developed system relies on orthogonally-oriented ranging sensors to infer both velocity and pose. However, operating in a complex, dynamic setting (like a town centre) can often induce velocity errors, distorting our sensor history and resulting in localisation failure. The insight into dealing with this failure is to learn from sensor context – we learn a place-dependent sensor model and show that doing so is vital to prevent such failures. The integration of these three competencies gives us the means to make inexpensive, lifelong localisation an achievable goal.
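To make the statistical-matching idea concrete, here is a minimal 2D toy sketch of scoring a retrospective sensor swathe against a prior map with an information-theoretic measure. The grid rasterisation, the mutual-information score, the exhaustive offset search, and all function names are illustrative assumptions, not the thesis's actual dense pushbroom-LIDAR matching:

```python
import numpy as np

def rasterise(points_xy, origin, resolution, shape):
    """Bin 2D swathe points into a grid aligned with the prior map."""
    idx = np.floor((points_xy - origin) / resolution).astype(int)
    grid = np.zeros(shape)
    ok = ((idx[:, 0] >= 0) & (idx[:, 0] < shape[0]) &
          (idx[:, 1] >= 0) & (idx[:, 1] < shape[1]))
    np.add.at(grid, (idx[ok, 0], idx[ok, 1]), 1.0)
    return grid

def mutual_information(a, b, bins=16):
    """Mutual information (in nats) between two co-registered grids,
    estimated from the joint histogram of their cell values."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def localise(swathe_xy, prior_grid, origin, resolution,
             search=2.0, step=0.5):
    """Score candidate (dx, dy) offsets of the swathe against the prior
    map and return the offset with the highest statistical agreement."""
    best, best_score = None, -np.inf
    for dx in np.arange(-search, search + step, step):
        for dy in np.arange(-search, search + step, step):
            grid = rasterise(swathe_xy + np.array([dx, dy]),
                             origin, resolution, prior_grid.shape)
            score = mutual_information(grid, prior_grid)
            if score > best_score:
                best, best_score = (dx, dy), score
    return best, best_score
```

The appeal of a statistical score like this is the one the abstract leans on: it rewards agreement with the persistent structure shared by swathe and map, rather than requiring exact point-to-point correspondence.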
2

Robust lifelong visual navigation and mapping

Pascoe, Geoffrey. January 2017
The ability to precisely determine one's location within the world (localisation) is a key requirement for any robot wishing to navigate through it. For long-term operation, such a localisation system must be robust to changes in the environment, both short-term (e.g. traffic, weather) and long-term (e.g. seasons). This thesis presents two methods for performing such localisation using cameras – small, cheap, lightweight sensors that are universally available. Whilst many image-based localisation systems have been proposed in the past, they generally rely either on feature matching, which fails under many degradations such as motion blur, or on photometric consistency, which fails under changing illumination. The methods we propose here directly align images with a dense prior map. The first method uses maps synthesised from a combination of LIDAR scanners to generate geometry and cameras to generate appearance, whilst the second uses vision for both mapping and localisation. Both make use of an information-theoretic metric, the Normalised Information Distance (NID), for image alignment, relaxing the appearance-constancy assumption inherent in photometric methods. Our methods require significant computational resources, but through the use of commodity GPUs we are able to run them at 8-10 Hz. Our GPU implementations make use of low-level OpenGL, enabling compatibility across almost any GPU hardware. We also present a method for calibrating multi-sensor systems, enabling the joint use of cameras and LIDAR for mapping. Through experiments on both synthetic data and real-world data from over 100 km of driving outdoors, we demonstrate the robustness of our localisation system to large variations in appearance. Comparisons with state-of-the-art feature-based and direct methods show that ours is significantly more robust, whilst maintaining similar precision.
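The NID metric named in the abstract is straightforward to estimate from a joint intensity histogram. A minimal sketch follows; the bin count, function name, and greyscale assumption are illustrative choices rather than details taken from the thesis:

```python
import numpy as np

def nid(img_a, img_b, bins=32):
    """Normalised Information Distance between two equally-sized greyscale
    images, estimated from their joint intensity histogram.

    NID(A, B) = (H(A, B) - I(A; B)) / H(A, B), which lies in [0, 1]:
    0 for identical images, 1 for statistically independent ones.
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    h_xy = -(pxy[nz] * np.log(pxy[nz])).sum()        # joint entropy H(A,B)
    h_x = -(px[px > 0] * np.log(px[px > 0])).sum()   # marginal entropy H(A)
    h_y = -(py[py > 0] * np.log(py[py > 0])).sum()   # marginal entropy H(B)
    mi = h_x + h_y - h_xy                            # mutual information
    return (h_xy - mi) / h_xy
```

Because NID depends only on the statistical co-occurrence of intensities, not on their raw values matching, it tolerates the illumination changes that break direct photometric error – the relaxation the abstract describes.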
3

Vision-only localisation under extreme appearance change

Linegar, Chris. January 2016
Robust localisation is a key requirement for autonomous vehicles. However, in order to achieve widespread adoption of this technology, we also require this function to be performed using low-cost hardware. Cameras are appealing due to their information-rich image content and low cost; however, camera-based localisation is difficult because of the problem of appearance change. For example, in outdoor environments the appearance of the world can change dramatically and unpredictably with variations in lighting, weather, season and scene structure. We require autonomous vehicles to be robust under these challenging environmental conditions. This thesis presents Dub4, a vision-only localisation system for autonomous vehicles. The system is founded on the concept of experiences, where an "experience" is a visual memory which models the world under particular conditions. By allowing the system to build up and curate a map of these experiences, we are able to handle cyclic appearance change (lighting, weather and season) as well as adapt to slow structural change. We present a probabilistic framework for predicting which experiences are most likely to match successfully with the live image at run-time, conditioned on the robot's prior use of the map. In addition, we describe an unsupervised algorithm for detecting and modelling higher-level visual features in the environment for localisation. These features are trained on a per-experience basis and are robust to extreme changes in appearance, for example between rain and sun, or day and night. The system is tested on over 1500 km of data from urban and off-road environments, through sun, rain, snow and harsh lighting, at different times of the day and night, and through all seasons. In addition to this extensive offline testing, Dub4 has served as the primary localisation source on a number of autonomous vehicles, including Oxford University's RobotCar, the 2016 Shell Eco-Marathon, the LUTZ PathFinder Project in Milton Keynes, and the GATEway Project in Greenwich, London.
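A minimal sketch of the kind of probabilistic experience ranking the abstract describes, using a Beta-Bernoulli model over each experience's past match outcomes at a given place. The class, its parameters, and the per-place bookkeeping are hypothetical simplifications, not Dub4's actual framework:

```python
from collections import defaultdict

class ExperienceRanker:
    """Rank stored experiences by their probability of matching the live
    image at the current place, conditioned on past localisation outcomes."""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta   # Beta(1, 1) = uniform prior
        self.successes = defaultdict(int)     # (place, experience) -> count
        self.failures = defaultdict(int)

    def record(self, place, experience, matched):
        """Log whether an experience matched the live image at a place."""
        if matched:
            self.successes[(place, experience)] += 1
        else:
            self.failures[(place, experience)] += 1

    def match_probability(self, place, experience):
        """Posterior mean probability of a successful match."""
        s = self.successes[(place, experience)]
        f = self.failures[(place, experience)]
        return (s + self.alpha) / (s + f + self.alpha + self.beta)

    def rank(self, place, experiences, budget):
        """Return the `budget` experiences most likely to localise here."""
        return sorted(experiences,
                      key=lambda e: self.match_probability(place, e),
                      reverse=True)[:budget]
```

The design point this illustrates is the one the abstract makes: with a fixed compute budget, the robot should spend its matching effort only on the few experiences that history suggests will work under the current conditions.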
4

Vision & laser for road based navigation

Napier, Ashley A. January 2014
This thesis presents novel solutions for two fundamental problems associated with autonomous road driving. The first is accurate and persistent localisation, and the second is automatic extrinsic sensor calibration. We start by describing a stereo Visual Odometry (VO) system, which forms the basis of later chapters. This sparse approach to ego-motion estimation leverages the efficacy and speed of the BRIEF descriptor to measure frame-to-frame correspondences and infer subsequent motion. The system is able to output locally metric trajectory estimates, as demonstrated on many kilometres of data. We then present a robust vision-only localisation system based on a two-stage approach. First, we gather a representative survey in ideal weather and lighting conditions, leveraging locally accurate VO trajectories to synthesise a high-resolution orthographic image strip of the road surface. This road image provides a highly descriptive and stable template against which to match subsequent traversals. During the second phase, localisation, we use the VO to provide high-frequency pose updates, but correct for the drift inherent in all locally derived pose estimates with low-frequency updates from a dense image-matching technique. Here a live image stream is registered against synthesised views of the road image generated from the survey. We use an information-theoretic measure, Mutual Information, to determine the alignment of live images and synthesised views. Using this measure we are able to successfully localise subsequent traversals of surveyed routes under even the most intense lighting changes expected in outdoor applications. We demonstrate our system localising in multiple environments with accuracy commensurate with that of an Inertial Navigation System. Finally, we present a technique for automatically determining the extrinsic calibration between a camera and a Light Detection And Ranging (LIDAR) sensor in natural scenes. Rather than requiring a stationary platform as in prior art, we exploit platform motion, allowing us to aggregate data and adopt a retrospective approach to calibration. Coupled with accurate timing, this retrospective approach allows sensors with non-overlapping fields of view to be calibrated, as long as the observed workspaces overlap at some point. We then show how we can improve the accuracy of our calibration estimates by treating each single-shot estimate as a noisy measurement and fusing them together with a recursive Bayes filter. We evaluate the calibration algorithm in multiple environments and demonstrate millimetre precision in translation and deci-degree precision in rotation.
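The final fusion step admits a compact linear-Gaussian sketch: each single-shot calibration estimate is treated as a noisy measurement of a static 6-DoF state and folded in with a Kalman-style update. The state parametrisation and function signature are assumptions, and treating the rotation angles as Euclidean is a small-angle simplification:

```python
import numpy as np

def fuse_calibration(estimates, covariances, prior_mean, prior_cov):
    """Recursively fuse noisy single-shot extrinsic calibration estimates.

    Each estimate is a 6-vector (x, y, z, roll, pitch, yaw) with a 6x6
    measurement covariance. The extrinsic is static, so each step reduces
    to a pure Kalman measurement update with an identity observation
    model; angles are treated as Euclidean (small-angle simplification).
    """
    mean = np.asarray(prior_mean, dtype=float).copy()
    cov = np.asarray(prior_cov, dtype=float).copy()
    eye = np.eye(len(mean))
    for z, r in zip(estimates, covariances):
        gain = cov @ np.linalg.inv(cov + r)        # Kalman gain
        mean = mean + gain @ (np.asarray(z) - mean)  # posterior mean
        cov = (eye - gain) @ cov                   # posterior covariance
    return mean, cov
```

The attraction of this formulation is that uncertain single-shot estimates are down-weighted automatically by their covariances, so the fused calibration tightens as more driving data is aggregated.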
