1. Image processing & robot positioning. Fung, Hong Chee. January 1990.
No description available.
2. Visual navigation for mobile robots using the Bag-of-Words algorithm. Botterill, Tom. January 2011.
Robust long-term positioning for autonomous mobile robots is essential for many applications. In many
environments this task is challenging, as errors accumulate in the robot’s position estimate over time. The
robot must also build a map so that these errors can be corrected when mapped regions are re-visited; this
is known as Simultaneous Localisation and Mapping, or SLAM.
Successful SLAM schemes have been demonstrated that accurately map tracks of tens of kilometres; however,
these schemes rely on expensive sensors such as laser scanners and inertial measurement units. A more
attractive, low-cost sensor is a digital camera, which captures images that can be used to recognise where
the robot is, and to incrementally position the robot as it moves. SLAM using a single camera is challenging
however, and many contemporary schemes suffer complete failure in dynamic or featureless environments, or
during erratic camera motion. An additional problem, known as scale drift, is that cameras do not directly
measure the scale of the environment, and errors in relative scale accumulate over time, introducing errors
into the robot’s speed and position estimates.
Key to a successful visual SLAM system is the ability to continue operation despite these difficulties, and
to recover from positioning failure when it occurs. This thesis describes the development of such a scheme,
which is known as BoWSLAM. BoWSLAM enables a robot to reliably navigate and map previously unknown
environments, in real-time, using only a single camera.
In order to position a camera in visually challenging environments, BoWSLAM combines contemporary visual
SLAM techniques with four new components. Firstly, a new Bag-of-Words (BoW) scheme is developed, which
allows a robot to recognise places it has visited previously, without any prior knowledge of its environment.
This BoW scheme is also used to select the best set of frames to reconstruct positions from, and to find
efficient wide-baseline correspondences between many pairs of frames. Secondly, BaySAC, a new outlier-
robust relative pose estimation scheme based on the popular RANSAC framework, is developed. BaySAC
allows the efficient computation of multiple position hypotheses for each frame. Thirdly, a graph-based
representation of these position hypotheses is proposed, which enables the selection of only reliable position
estimates in the presence of gross outliers. Fourthly, as the robot explores, objects in the world are recognised
and measured. These measurements enable scale drift to be corrected. BoWSLAM is demonstrated mapping
a 25-minute, 2.5 km trajectory through a challenging and dynamic outdoor environment in real time,
without any other sensor input; this is considerably further than previous single-camera SLAM schemes.
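A minimal sketch of the Bag-of-Words place-recognition idea referred to above, assuming NumPy and a pre-built visual vocabulary. BoWSLAM itself builds its vocabulary online without prior training and uses an inverted index for speed, so the function names and the brute-force search below are illustrative assumptions only, not the thesis's implementation.

    import numpy as np

    def bow_histogram(descriptors, vocabulary):
        # Quantise local feature descriptors (N x D) against a visual vocabulary
        # (K x D) and return an L2-normalised word-frequency histogram.
        dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
        words = np.argmin(dists, axis=1)          # nearest visual word per descriptor
        hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
        return hist / (np.linalg.norm(hist) + 1e-12)

    def most_similar_frame(query_hist, map_hists):
        # Cosine similarity between unit-length histograms; the highest-scoring
        # previously seen frame is the relocalisation / loop-closure candidate.
        scores = np.array([float(query_hist @ h) for h in map_hists])
        return int(np.argmax(scores)), scores

As the abstract notes, the same histogram comparisons can also rank candidate frames for reconstruction and for wide-baseline feature matching.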
3. Mobile Robot Localization Using Sonar. Drumheller, Michael. 01 January 1985.
This paper describes a method by which range data from a sonar or other type of rangefinder can be used to determine the two-dimensional position and orientation of a mobile robot inside a room. The plan of the room is modeled as a list of segments indicating the positions of walls. The method works by extracting straight segments from the range data and examining all hypotheses about pairings between the segments and walls in the model of the room. Inconsistent pairings are discarded efficiently by using local constraints based on distances between walls, angles between walls, and ranges between walls along their normal vectors. These constraints are used to obtain a small set of possible positions, which is further pruned using a test for physical consistency. The approach is extremely tolerant of noise and clutter. Transient objects such as furniture and people need not be included in the room model, and very noisy, low-resolution sensors can be used. The algorithm's performance is demonstrated using a Polaroid Ultrasonic Rangefinder, a low-resolution, high-noise sensor.
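A minimal sketch of the kind of local constraint used to discard inconsistent segment-to-wall pairings, assuming NumPy. Only the angle constraint is shown, and the exhaustive enumeration is purely for illustration; the paper prunes pairings far more efficiently, and the distance, range, and physical-consistency checks are omitted here.

    import numpy as np
    from itertools import product

    def angle_between(u, v):
        # Unsigned angle between two 2-D direction vectors (lines are undirected).
        c = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(c, 0.0, 1.0))

    def consistent_pairings(segment_dirs, wall_dirs, tol=np.radians(5.0)):
        # Keep only assignments of extracted range segments to model walls whose
        # pairwise angles agree with the angles between the corresponding walls.
        survivors = []
        for assign in product(range(len(wall_dirs)), repeat=len(segment_dirs)):
            ok = True
            for i in range(len(segment_dirs)):
                for j in range(i + 1, len(segment_dirs)):
                    a_data = angle_between(segment_dirs[i], segment_dirs[j])
                    a_model = angle_between(wall_dirs[assign[i]], wall_dirs[assign[j]])
                    if abs(a_data - a_model) > tol:
                        ok = False
                        break
                if not ok:
                    break
            if ok:
                survivors.append(assign)
        return survivors

Each surviving pairing would still have to pass the remaining constraints and the physical-consistency test described in the abstract before a position estimate is accepted.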
4. Navigation and Automatic Ground Mapping by Rover Robot. Wang, Xuerui; Zhao, Li. January 2010.
This project is based mainly on image mosaicing and on similarity measurements computed with different methods. A map of a floor is created from a database of small images captured by a camera mounted on a robot as it scans the wooden floor of a living room; we call this ground mapping. After the ground mapping, the robot can position itself on the map using new small images captured as it moves across the floor. Similarity measurements based on the Schwartz inequality are used both to build the ground map and to position the robot once the map is available. Because natural lighting affects the gray values of the images, this effect must be accounted for in the similarity measurements. A new approach to mosaicing is suggested: it uses local texture orientation, instead of the original gray values, both in ground mapping and in positioning. Additionally, we report ground-mapping results that use the original gray values as features. Using the novel approach and the Schwartz-inequality-based similarity measurements, the robot can find its position with an error of only a few pixels.
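A minimal sketch of a similarity measurement derived from the Schwartz (Cauchy-Schwarz) inequality, assuming NumPy. Removing each patch's mean is one simple way of compensating for uniform lighting changes, and the exhaustive search over the ground map is an illustrative assumption rather than the positioning method used in the thesis.

    import numpy as np

    def schwarz_similarity(patch, template):
        # |<a, b>| <= ||a|| ||b|| (Cauchy-Schwarz), so this score lies in [0, 1]
        # and reaches 1 only when the two patches are proportional to each other.
        a = patch.astype(float).ravel()
        b = template.astype(float).ravel()
        a -= a.mean()                      # crude compensation for lighting offsets
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return 0.0 if denom == 0.0 else abs(a @ b) / denom

    def localise(query, ground_map, step=4):
        # Slide the query patch over the mosaicked ground map and return the
        # (row, col) offset with the highest similarity score.
        h, w = query.shape
        best_score, best_pos = -1.0, (0, 0)
        for r in range(0, ground_map.shape[0] - h + 1, step):
            for c in range(0, ground_map.shape[1] - w + 1, step):
                s = schwarz_similarity(ground_map[r:r + h, c:c + w], query)
                if s > best_score:
                    best_score, best_pos = s, (r, c)
        return best_pos, best_score

In the thesis the same kind of measure is applied to local texture-orientation images as well as to raw gray values.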