1.
Optical ranging and feature extraction
Taylor, Robert January 2001 (has links)
No description available.
2.
An investigation into architectures for autonomous agents
Downs, Joseph January 1994 (has links)
No description available.
3.
Reinforcement learning and knowledge transformation in mobile robotics
Pipe, Anthony Graham January 1997 (has links)
No description available.
4.
Visually guided autonomous robot navigation : an insect based approach.
Weber, Keven January 1998 (has links)
Giving robots the ability to move around autonomously in various real-world environments has long been a major challenge for Artificial Intelligence. New approaches to the design and control of autonomous robots have shown the value of drawing inspiration from the natural world. Animals navigate, perceive and interact with various uncontrolled environments with seemingly little effort. Flying insects, in particular, are quite adept at manoeuvring in complex, unpredictable and possibly hostile environments.

Inspired by the miniature machine view of insects, this thesis contributes to the autonomous control of mobile robots through the application of insect-based visual cues and behaviours. The parsimonious, yet robust, solutions offered by insects are directly applicable to the computationally restrictive world of autonomous mobile robots. To this end, two main navigational domains are focussed on: corridor guidance and visual homing.

Within a corridor environment, safe navigation is achieved through the application of simple and intuitive behaviours observed in insect visual navigation. By observing and responding to apparent motion in a reactive, yet intelligent way, the robot is able to exhibit useful corridor guidance behaviours at modest expense. Through a combination of simulation and real-world robot experiments, the feasibility of equipping a mobile robot with the ability to navigate safely in various environments is demonstrated. It is further shown that the reactive nature of the robot can be augmented with a map building method that allows previously encountered corridors to be recognised through the observation of landmarks en route. This allows for a more globally directed navigational goal.

Many animals, including insects such as bees and ants, successfully engage in visual homing. This is achieved through the association of visual landmarks with a specific location. In this way, the insect is able to 'home in' on a previously visited site simply by moving in such a way as to maximise the match between the currently observed environment and the memorised 'snapshot' of the panorama as seen from the goal. A mobile robot can exploit the very same strategy to return simply and reliably to a previously visited location.

This thesis describes a system that allows a mobile robot to home successfully. Specifically, a simple, yet robust, homing scheme is proposed that relies only upon the observation of the bearings of visible landmarks. It is also shown that this strategy can easily be extended to incorporate other visual cues which may improve overall performance. The homing algorithm allows a mobile robot to home incrementally by moving in such a way as to gradually reduce the discrepancy between the current view and the view obtained from the home position. Both simulation and mobile robot experiments are again used to demonstrate the feasibility of the approach.
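The bearing-only scheme can be made concrete with a short sketch. The Python fragment below is an illustration of the general snapshot idea, not Weber's implementation; it assumes that landmark correspondences between the current view and the stored snapshot are already given, and it uses only tangential corrections derived from bearing errors.

```python
import numpy as np

def home_vector(current_bearings, snapshot_bearings):
    """Bearing-only homing sketch.  For each matched landmark, the robot
    should move perpendicular to the landmark's direction, with the sign
    chosen so that the landmark's current bearing rotates towards the
    bearing memorised in the snapshot.  Summing these per-landmark
    corrections gives an approximate direction towards home."""
    v = np.zeros(2)
    for cur, snap in zip(current_bearings, snapshot_bearings):
        # Bearing error wrapped to [-pi, pi].
        err = np.arctan2(np.sin(snap - cur), np.cos(snap - cur))
        # Tangential correction: perpendicular to the landmark direction,
        # signed so a positive error increases the observed bearing.
        v += err * np.array([np.sin(cur), -np.cos(cur)])
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# Taking a small step along home_vector(...), re-observing, and repeating
# gradually reduces the mismatch between the current view and the snapshot.
```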
5.
Building safety maps using vision for safe local mobile robot navigation
Murarka, Aniket 18 March 2011 (has links)
In this work we focus on building local maps to enable wheeled mobile robots to navigate safely and autonomously in urban environments. Urban environments present a variety of hazards that mobile robots have to detect and represent in their maps to navigate safely. Examples of hazards include obstacles such as furniture, drop-offs such as at downward stairs, and inclined surfaces such as wheelchair ramps.

We address two shortcomings perceived in the literature on mapping. The first is the extensive use of expensive laser-based sensors for mapping, and the second is the focus on only detecting obstacles when clearly other hazards such as drop-offs need to be detected to ensure safety. Therefore, in this work we develop algorithms for building maps using only relatively inexpensive stereo cameras that allow safe local navigation by detecting and modeling hazards such as overhangs, drop-offs, and ramps in addition to static obstacles. The hazards are represented using 2D annotated grid maps called local safety maps. Each cell in the map is annotated with one of several labels: Level, Inclined, Non-ground, or Unknown. Level cells are safe for travel whereas Inclined cells require caution. Non-ground cells are unsafe for travel and represent obstacles, overhangs, or regions lower than safe ground. Level and Inclined cells can be further annotated as being Drop-off Edges.

The process of building safety maps consists of three main steps: (i) computing a stereo depth map; (ii) building a 3D model using the stereo depths; and (iii) analyzing the 3D model for safety to construct the safety map. We make significant contributions to each of the three steps: we develop global stereo methods for computing disparity maps that use edge and color information; we introduce a probabilistic data association method for building 3D models using stereo range points; and we devise a novel method for segmenting and fitting planes to 3D models allowing for a precise safety analysis. In addition, we also develop a stand-alone method for detecting drop-offs in front of the robot that uses motion and occlusion cues and only relies on monocular images.

We introduce an evaluation framework for evaluating (and comparing) our algorithms on real world data sets, collected by driving a robot in various environments. Accuracy is measured by comparing the constructed safety maps against ground truth safety maps and computing error rates. The ground truth maps are obtained by manually annotating maps built using laser data. As part of the framework we also estimate latencies introduced by our algorithms and the accuracy of the plane fitting process. We believe this framework can be used for comparing the performance of a variety of vision-based mapping systems and for this purpose we make our datasets, ground truth maps, and evaluation code publicly available. We also implement a real-time version of one of the safety map algorithms on a wheelchair robot and demonstrate it working in various environments. The constructed safety maps allow safe local motion planning and also support the extraction of local topological structures that can be used to build global maps.
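As an illustration of the third step, the safety analysis, a minimal sketch of labelling a single grid cell is given below. The least-squares plane fit and the thresholds are assumptions made for illustration; the thesis develops its own segmentation and plane-fitting method, and a full analysis would also compare a cell's height against neighbouring ground to catch drop-offs.

```python
import numpy as np

# Hypothetical thresholds, for illustration only.
MAX_LEVEL_SLOPE_DEG = 5.0   # steeper than this -> Inclined
MAX_SAFE_SLOPE_DEG = 15.0   # steeper than this -> Non-ground
MAX_RESIDUAL_M = 0.05       # rough / discontinuous surface -> Non-ground

def label_cell(points):
    """Assign a safety-map label to a grid cell from the 3D points (N x 3
    array) falling inside it, mirroring the Level / Inclined / Non-ground /
    Unknown annotation scheme."""
    points = np.asarray(points)
    if points.ndim != 2 or len(points) < 3:
        return "Unknown"
    # Fit the plane z = ax + by + c by least squares.
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, _ = coeffs
    slope_deg = np.degrees(np.arctan(np.hypot(a, b)))
    residual = np.max(np.abs(A @ coeffs - points[:, 2]))
    if residual > MAX_RESIDUAL_M or slope_deg > MAX_SAFE_SLOPE_DEG:
        return "Non-ground"   # obstacle, overhang, or non-planar surface
    if slope_deg > MAX_LEVEL_SLOPE_DEG:
        return "Inclined"     # e.g. a wheelchair ramp: traversable with caution
    return "Level"
```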
6.
GPS Based Waypoint Navigation for an Autonomous Guided Vehicle – Bearcat III
Sethuramasamyraja, Balaji 02 September 2003 (has links)
No description available.
7.
On Fundamental Elements of Visual Navigation Systems
Siddiqui, Rafid January 2014 (has links)
Visual navigation is a ubiquitous yet complex task which is performed by many species for the purpose of survival. Although visual navigation is actively being studied within the robotics community, determining the elemental constituents of a robust visual navigation system remains a challenge. Motion estimation is often mistakenly considered the sole ingredient of a robust autonomous visual navigation system, and efforts are therefore concentrated on improving the accuracy of motion estimates. On the contrary, there are other factors which are as important as motion and whose absence could result in an inability to perform the kind of seamless visual navigation exhibited by humans. A general model is therefore needed that describes a visual navigation system in terms of a set of elemental units. In this regard, a set of visual navigation elements (i.e. spatial memory, motion memory, scene geometry, context and scene semantics) is suggested in this thesis as the building blocks of a visual navigation system. A set of methods is proposed to investigate the existence and role of these elements, and a quantitative research methodology in the form of a series of systematic experiments is applied to them. The thesis formulates, implements and analyzes the proposed methods in the context of the visual navigation elements, arranged into three major groupings: (a) spatial memory, (b) motion memory, and (c) Manhattan structure, context and scene semantics. The investigations are carried out on multiple image datasets obtained by robot-mounted cameras (2D/3D) moving in different environments.

Spatial memory is investigated by evaluating the proposed place recognition methods. The recognized places and inter-place associations are then used to represent a visited set of places in the form of a topological map. Such a representation of places and their spatial associations models the concept of spatial memory and resembles the human ability to represent and map places in large environments (e.g. cities).

Motion memory is analyzed through a thorough investigation of various motion estimation methods. This leads to proposals of direct motion estimation methods which compute accurate motion estimates by basing the estimation process on dominant surfaces. In the everyday world, planar surfaces, especially ground planes, are ubiquitous, so the motion models are built upon this constraint.

Manhattan structure provides geometrical cues which are helpful in solving navigation problems. A small set of geometric primitives (e.g. planes) makes up an indoor environment, and a plane detection method is therefore proposed as a result of investigations into scene structure. The method uses supervised learning to classify segmented clusters in 3D point-cloud datasets.

In addition to geometry, the context of a scene also plays an important role in the robustness of a visual navigation system. The context in which navigation is performed imposes a set of constraints on objects and sections of the scene; enforcing such constraints enables the observer to robustly segment the scene and to classify various objects in it. A contextually aware scene segmentation method is proposed which classifies the image of a scene into a set of geometric classes sufficient for most navigation tasks.

However, in order to facilitate the cognitive visual decision-making process, the scene ought to be semantically segmented as well. The semantics of indoor and outdoor scenes are dealt with separately, and separate methods are proposed for visual mapping of each type of environment. An indoor scene consists of a corridor structure, which is modeled as a cubic space in order to build a map of the environment; a "flash-n-extend" strategy is proposed to control the map update frequency. For outdoor scenes, a scene classification method is proposed that employs a Markov Random Field (MRF) based classification framework to generate a set of semantic maps.
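The spatial memory element (recognised places as nodes, inter-place associations as edges) can be sketched in a few lines. The fragment below is a minimal illustration, not the thesis code; it assumes a separate place recognition method supplies the identity of the currently observed place.

```python
from collections import defaultdict

class TopologicalMap:
    """Sketch of spatial memory as a topological map: nodes are recognised
    places, edges are transitions observed between consecutive places."""

    def __init__(self):
        self.edges = defaultdict(set)
        self.last_place = None

    def observe(self, place_id):
        """Call with the identity of the currently recognised place; an
        edge is recorded between consecutively visited distinct places."""
        if self.last_place is not None and place_id != self.last_place:
            self.edges[self.last_place].add(place_id)
            self.edges[place_id].add(self.last_place)
        self.last_place = place_id

    def neighbours(self, place_id):
        return sorted(self.edges[place_id])

# Feeding the stream of recognised places "corridor-A", "lab", "corridor-A"
# records that the lab is reachable from corridor-A and vice versa.
```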
8.
Visual Navigation: Constructing and Utilizing Simple Maps of an Indoor Environment
Sarachik, Karen Beth 01 March 1989 (has links)
The goal of this work is to navigate through an office environment using only visual information gathered from four cameras placed on board a mobile robot. The method is insensitive to physical changes within the room it is inspecting, such as moving objects. Forward and rotational motion vision are used to find doors and rooms, and these can be used to build topological maps. The map is built without the use of odometry or trajectory integration. The long term goal of the project described here is for the robot to build simple maps of its environment and to localize itself within this framework.
9.
Autonomous navigation of a wheeled mobile robot in farm settings
2014 February 1900 (has links)
This research is mainly about the autonomous navigation of an agricultural wheeled mobile robot in an unstructured outdoor setting. The project has four distinct phases: (i) navigation and control of a wheeled mobile robot for point-to-point motion; (ii) navigation and control of a wheeled mobile robot following a given path (the path following problem); (iii) navigation and control of a mobile robot keeping a constant proximity distance from given paths or plant rows (proximity-following); and (iv) navigation of the mobile robot in rut following in farm fields. A rut is a long deep track formed by the repeated passage of wheeled vehicles in soft terrains such as mud, sand, and snow.
To develop reliable navigation approaches for each part of this project, three main steps are carried out: a literature review, modeling and computer simulation of wheeled mobile robots, and experimental tests in outdoor settings. First, point-to-point motion planning of a mobile robot is studied, and a fuzzy-logic based (FLB) approach is proposed for real-time autonomous path planning of the robot in unstructured environments. Simulation and experimental evaluations show that the FLB approach is able to cope with different dynamic and unforeseen situations by tuning a safety margin. Comparison of FLB results with the vector field histogram (VFH) and preference-based fuzzy (PBF) approaches reveals that FLB produces shorter and smoother paths toward the goal in almost all of the test cases examined. Then, a novel human-inspired method (HIM) is introduced, inspired by human behavior in navigating from one point to a specified goal point. HIM gives the robot a human-like ability to reason about situations in order to reach a predefined goal point while avoiding static, moving and unforeseen obstacles. Comparison of HIM results with FLB suggests that HIM is more efficient and effective than FLB.
Afterward, navigation strategies are built up for path following, rut following, and proximity-following control of a wheeled mobile robot in outdoor (farm) settings and off-road terrains. The proposed system is composed of several modules: sensor data analysis, obstacle detection, obstacle avoidance, goal seeking, and path tracking. The capabilities of the proposed navigation strategies are evaluated in a variety of field experiments; the results show that the proposed approach is able to detect and follow rows of bushes robustly. This capability is used for spraying plant rows in farm fields.
Finally, obstacle detection and obstacle avoidance modules are developed for the navigation system. These modules enable the robot to detect, in real time and at a safe distance, holes or ground depressions (negative obstacles) that are inherent parts of farm settings, as well as above-ground obstacles (positive obstacles). Experimental tests are carried out on two mobile robots (PowerBot and Grizzly) in outdoor settings and real farm fields. Grizzly uses a 3D laser range-finder to detect objects and perceive the environment, and an RTK-DGPS unit for localization; PowerBot uses sonar sensors and a laser range-finder for obstacle detection. The experiments demonstrate the capability of the proposed technique in successfully detecting and avoiding different types of obstacles, both positive and negative, in a variety of scenarios.
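One simple geometric cue for separating positive from negative obstacles with a downward-angled range beam can be sketched as follows. This illustrates the general idea under a flat-ground assumption; it is not the detection method implemented on PowerBot or Grizzly, and the sensor height, pitch and tolerance are hypothetical parameters.

```python
import math

def classify_return(measured_range, sensor_height, beam_pitch_deg, tol=0.15):
    """On flat ground, a beam pitched down by beam_pitch_deg from a sensor
    mounted sensor_height metres up should return at a predictable range.
    A much shorter return suggests something sticking up (positive
    obstacle); a much longer return suggests the ground falls away
    (negative obstacle, e.g. a hole or ditch)."""
    pitch = math.radians(beam_pitch_deg)
    expected = sensor_height / math.sin(pitch)  # range to flat ground
    if measured_range < expected * (1.0 - tol):
        return "positive obstacle"
    if measured_range > expected * (1.0 + tol):
        return "negative obstacle"
    return "ground"

# Example: sensor 0.8 m high, beam pitched 20 degrees down, so the
# expected ground return is about 2.34 m.
print(classify_return(2.3, 0.8, 20.0))  # ground
print(classify_return(1.2, 0.8, 20.0))  # positive obstacle
print(classify_return(4.0, 0.8, 20.0))  # negative obstacle
```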
10.
A layered control architecture for mobile robot navigation
Qiu, Jiancheng January 1998 (has links)
This thesis addresses the problem of how to control the navigation of an autonomous mobile robot in indoor environments, in the face of sensor noise, imprecise information, uncertainty and limited response time. The thesis argues that effective control of autonomous mobile robots can be achieved by organising low level and higher level control activities into a layered architecture. The low level reactive control allows the robot to respond to contingencies quickly, while the higher level control allows the robot to make longer term decisions and to arrange appropriate action sequences for task execution.

The thesis describes the design and implementation of a two-layer control architecture: a task-template based sequencing layer and a fuzzy-behaviour based low level control layer. The sequencing layer works at a higher level of abstraction; it interprets a task plan, and mediates and monitors the control activities, while the low level performs fast computation in response to dynamic changes in the real world and carries out robust control under uncertainty. The organisation and fusion of fuzzy behaviours are described extensively for the construction of the low level control system. A learning methodology is also developed to systematically learn fuzzy behaviours and the behaviour selection network, thereby solving the difficulties of configuring the low level control layer.

A two-layer control system has been implemented and used to control a simulated mobile robot performing two tasks in simulated indoor environments. The effectiveness of the layered control and learning methodology is demonstrated through traces of the controlling activities at the two different levels. The results also support a general design methodology in which the high level guides the robot's actions while the low level takes care of detailed control, in real time, in the face of sensor noise and environmental uncertainty.
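The fusion of fuzzy behaviours can be illustrated with a small sketch. The fragment below shows one common scheme, blending behaviour commands by normalised activation levels; it is an illustration of the idea rather than the thesis implementation, and the two example behaviours and their membership shapes are hypothetical.

```python
def fuse_behaviours(behaviours, sensor_state):
    """Blend (speed, turn) commands from several fuzzy behaviours.  Each
    behaviour returns a command and an activation level in [0, 1] derived
    from its rule base; commands are combined by normalised activation, so
    several behaviours can contribute at once instead of one winner taking
    all."""
    total, speed, turn = 0.0, 0.0, 0.0
    for behaviour in behaviours:
        (v, w), activation = behaviour(sensor_state)
        speed += activation * v
        turn += activation * w
        total += activation
    if total == 0.0:
        return 0.0, 0.0  # no behaviour active: stop
    return speed / total, turn / total

# Hypothetical behaviours for illustration:
def avoid_obstacle(state):
    d = state["front_range_m"]
    activation = max(0.0, min(1.0, 1.0 - d))  # ramps up inside 1 m
    return (0.0, 0.8), activation             # slow down and turn away

def go_to_goal(state):
    return (0.4, 0.5 * state["goal_bearing_rad"]), 1.0

cmd = fuse_behaviours([avoid_obstacle, go_to_goal],
                      {"front_range_m": 0.6, "goal_bearing_rad": 0.2})
```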