1. Visual homing in field crickets and desert ants: a comparative behavioural and modelling study. Mangan, Michael (2011)
Visually guided navigation is a long-standing goal in robotics. Insights may be drawn from insect species for which visual information has been shown to be sufficient for navigation in complex environments; however, the generality of visual homing abilities across insect species remains unclear. Furthermore, various models have been proposed as the strategies employed by navigating insects, yet comparative studies across models and species are lacking. This work addresses these questions in two insect species not previously studied: the field cricket Gryllus bimaculatus, for which almost no navigational data is available, and the European desert ant Cataglyphis velox, a relative of the African desert ant Cataglyphis bicolor, which has become a model species for insect navigation studies.

The ability of crickets to return to a hidden target using surrounding visual cues was tested using an analogue of the Morris water maze, a standard paradigm for spatial memory testing in rodents. Crickets learned to relocate the hidden target using the provided visual cues, with the best performance recorded when a natural image, rather than clearly identifiable landmarks, was provided as the stimulus. The role of vision in navigation was also observed for desert ants within their natural habitat. Foraging ants formed individual, idiosyncratic, visually guided routes through their cluttered surroundings, as has been reported in other ant species inhabiting similar environments. In the absence of other cues, ants recalled their route even when displaced along their path, indicating that ants recall previously visited places rather than a sequence of manoeuvres.

Image databases were collected within the environments experienced by the insects using custom panoramic cameras that approximated the insect's-eye view of the world. Six biologically plausible visual homing models were implemented and their performance assessed across experimental conditions. The models were first assessed on their ability to replicate the relative performance across the various visual surrounds in which the crickets were tested: that is, best performance was sought with the natural scene, followed by the blank walls and then the distinct landmarks. Only two models reproduced the pattern of results observed in crickets: pixel-wise image difference with RunDown, and the centre-of-mass average landmark vector. The efficacy of the models was then assessed across locations in the ant habitat. A 3D world was generated from the captured images, providing noise-free, high-spatial-resolution images as model input. Best performance was found for the optic-flow- and image-difference-based models, whereas in many locations the centre-of-mass average landmark vector failed to provide reliable guidance.

This work shows that two previously unstudied insect species can navigate using surrounding visual cues alone. Moreover, of the six biologically plausible models of visual navigation assessed in the same environments as the insects, only an image-difference-based model succeeded in all experimental conditions.
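As a rough illustration of the pixel-wise image-difference strategy named above, the sketch below descends the root-mean-square difference between the current panoramic view and a stored snapshot, in the spirit of a RunDown scheme. It is not the thesis's implementation: `render_view`, the step size, and the turn rule are illustrative assumptions.

```python
import numpy as np

def image_difference(view, snapshot):
    """Root-mean-square pixel difference between two panoramic images."""
    diff = view.astype(float) - snapshot.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def run_down_step(position, heading, snapshot, render_view, step=0.05):
    """One step of a run-down descent on the image difference.
    `render_view(p)` is an assumed oracle returning the panoramic view
    at ground position p (e.g. rendered from a 3D model of the arena).
    Move forward while the difference falls; otherwise turn and retry."""
    current = image_difference(render_view(position), snapshot)
    candidate = position + step * np.array([np.cos(heading), np.sin(heading)])
    if image_difference(render_view(candidate), snapshot) < current:
        return candidate, heading          # difference decreased: keep moving
    return position, heading + np.pi / 2   # difference rose: try a new heading
```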
2. Resilient visual perception for multiagent systems. Karimian, Arman (2021)
There has been increasing interest in visual sensors and vision-based solutions for single- and multi-robot systems. Vision-based sensors, e.g., traditional RGB cameras, provide rich semantic information and accurate directional measurements at relatively low cost; however, such sensors have two major drawbacks: they do not generally provide reliable depth estimates, and they typically have a limited field of view. These limitations considerably increase the complexity of controlling multiagent systems. This thesis studies some of the underlying problems in vision-based multiagent control and mapping.
The first contribution of this thesis is a method for restoring bearing rigidity in non-rigid networks of robots. We introduce means to determine which bearing measurements can improve bearing rigidity in non-rigid graphs and provide a greedy algorithm that restores rigidity in 2D with a minimum number of added edges.
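A hedged sketch of the rank test underlying such methods: in the plane, a framework is infinitesimally bearing rigid when its bearing rigidity matrix has rank 2n - 3 (translations and uniform scaling being the trivial motions). The greedy loop below simply adds whichever candidate edge raises the rank most; it illustrates the idea but is not guaranteed to reproduce the thesis's minimum-cardinality algorithm.

```python
import numpy as np
from itertools import combinations

def bearing_rigidity_matrix(positions, edges):
    """Stack, per edge, the Jacobian of the unit bearing g_ij with
    respect to all node positions. `positions` is an (n, 2) array."""
    n = len(positions)
    rows = []
    for i, j in edges:
        e = positions[j] - positions[i]
        d = np.linalg.norm(e)
        g = e / d
        P = np.eye(2) - np.outer(g, g)     # projection orthogonal to g
        row = np.zeros((2, 2 * n))
        row[:, 2 * i:2 * i + 2] = -P / d
        row[:, 2 * j:2 * j + 2] = P / d
        rows.append(row)
    return np.vstack(rows)

def greedy_restore(positions, edges):
    """Add edges until the framework is infinitesimally bearing rigid,
    i.e. the rigidity matrix reaches rank 2n - 3 in the plane."""
    edges, n = list(edges), len(positions)
    target = 2 * n - 3
    while np.linalg.matrix_rank(bearing_rigidity_matrix(positions, edges)) < target:
        candidates = [e for e in combinations(range(n), 2)
                      if e not in edges and (e[1], e[0]) not in edges]
        # pick the candidate that raises the rank the most (greedy heuristic)
        edges.append(max(candidates, key=lambda e: np.linalg.matrix_rank(
            bearing_rigidity_matrix(positions, edges + [e]))))
    return edges
```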
The second part focuses on the formation control problem using only bearing measurements. We address consensus and formation control through non-smooth Lyapunov functions and differential inclusions. We provide a stability analysis for undirected graphs and investigate the derived controllers for directed graphs. We also introduce a new notion of bearing persistence for pure bearing-based control in directed graphs.
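For context, a minimal sketch of the standard smooth bearing-only formation law that analyses of this kind build on: each agent moves along the component of its desired bearings that it cannot currently realise. The non-smooth and directed cases studied in the thesis are not captured here.

```python
import numpy as np

def ortho_proj(g):
    """P(g) = I - g g^T: projection onto the subspace orthogonal to g."""
    return np.eye(len(g)) - np.outer(g, g)

def bearing_formation_step(p, neighbors, g_star, dt=0.01):
    """One Euler step of the smooth bearing-only law
    u_i = -sum_j P(g_ij) g*_ij, where g_ij is the measured unit bearing
    from agent i to neighbour j and g*_ij the desired one. `p` is an
    (n, 2) position array; `g_star[(i, j)]` holds desired unit bearings."""
    u = np.zeros_like(p)
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            e = p[j] - p[i]
            g = e / np.linalg.norm(e)          # measured unit bearing
            u[i] -= ortho_proj(g) @ g_star[(i, j)]
    return p + dt * u
```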
The third part concerns the bearing-only visual homing problem with a limited field-of-view sensor. In essence, this problem is a special case of the formation control problem in which there is a single moving agent with fixed neighbors. We introduce a navigational vector field, composed of two orthogonal vector fields, that converges to the goal position without violating the field-of-view constraints. Our method does not require the landmarks' locations and is robust to loss of landmark tracking.
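A hypothetical sketch of the orthogonal-decomposition idea: a radial component pulls toward the goal, and the orthogonal component is followed instead whenever heading straight at the goal would push the landmarks out of view. The `fov_violation` predicate and the hard switch between components are placeholders, not the thesis's construction.

```python
import numpy as np

def homing_field(p, goal, fov_violation):
    """Direction to follow at position p: head radially toward the goal
    unless doing so would violate the camera's field-of-view constraint,
    in which case follow the orthogonal (tangential) component and circle
    the goal instead. `fov_violation(p, direction)` is a hypothetical
    predicate standing in for the thesis's FOV test."""
    r = goal - p
    r = r / np.linalg.norm(r)          # radial component, toward the goal
    t = np.array([-r[1], r[0]])        # orthogonal (tangential) component
    return t if fov_violation(p, r) else r
```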
The last part of this dissertation considers outlier detection in pose graphs for Structure from Motion (SfM) and Simultaneous Localization and Mapping (SLAM) problems. We propose a method for detecting incorrect orientation measurements before pose graph optimization by checking their geometric consistency over cycles. We use Expectation-Maximization to fine-tune the parameters of the noise distribution, and propose a new approximate graph inference procedure, specifically designed to exploit evidence on cycles, that outperforms standard approaches.
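A minimal sketch of the cycle-consistency check: composing the measured relative rotations around a cycle should give (approximately) the identity, so the angle of the residual rotation flags cycles that contain an outlier. The fixed threshold below is a placeholder for the thesis's EM-fitted noise model.

```python
import numpy as np

def cycle_error(R_edges, cycle):
    """Angle of the rotation obtained by composing the measured relative
    rotations around `cycle`; near zero when the cycle is outlier-free.
    `R_edges[(i, j)]` is the measured 3x3 rotation from frame i to j."""
    R = np.eye(3)
    for i, j in zip(cycle, cycle[1:] + cycle[:1]):
        Rij = R_edges[(i, j)] if (i, j) in R_edges else R_edges[(j, i)].T
        R = R @ Rij
    # residual angle: arccos((trace(R) - 1) / 2), clipped for safety
    return float(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))

def flag_cycles(R_edges, cycles, tol=np.deg2rad(5.0)):
    """Cycles whose residual exceeds a fixed tolerance; the threshold is
    a stand-in for the EM-fitted noise model used in the thesis."""
    return [c for c in cycles if cycle_error(R_edges, c) > tol]
```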
Together, these contributions help multi-robot systems overcome the limitations of visual sensors in collaborative tasks such as navigation and mapping.
3. Visual homing for a car-like vehicle. Usher, Kane (2005)
This thesis addresses the pose stabilization of a car-like vehicle using omnidirectional visual feedback. The presented method allows a vehicle to servo to a pre-learnt target pose based on feature bearing-angle and range discrepancies between the vehicle's current view of the environment and that seen at the learnt location. The best example of such a task is the use of visual feedback for the autonomous parallel parking of an automobile.

Much of the existing work in pose stabilization is highly theoretical in nature, with few examples of implementations on 'real' vehicles, let alone vehicles representative of those found in industry. The work in this thesis develops a suitable test platform and implements vision-based pose stabilization techniques. Many of the existing techniques were found to fail due to vehicle steering and velocity loop dynamics and, more significantly, steering input saturation. A technique which does cope with the characteristics of 'real' vehicles is to divide the task into predefined stages, essentially dividing the state space into sub-manifolds. For a car-like vehicle, the strategy used is to first stabilize the vehicle to the line which has the correct orientation and contains the target location; once on the line, the vehicle then servos to the desired pose. This strategy accommodates velocity and steering loop dynamics as well as input saturation, and it allows the use of linear control techniques for system analysis and the tuning of control gains.

To perform pose stabilization, good estimates of vehicle pose are required. A simple yet robust method derived from the visual homing literature is to sum the range vectors to all the landmarks in the workspace and divide by the total number of landmarks: the Improved Average Landmark Vector (IALV). By subtracting the IALV at the target location from the currently calculated IALV, an estimate of vehicle pose is obtained.

In this work, views of the world are provided by an omnidirectional camera, while a magnetic compass provides a reference direction. The landmarks used are red road cones, segmented from the omnidirectional colour images using a pre-learnt, two-dimensional lookup table of their colour profile. Range to each landmark is estimated using a model of the optics of the system, based on a flat-Earth assumption. A linked-list based method is used to filter the landmarks over time, and complementary filtering techniques, which combine the vision data with vehicle odometry, are used to improve the quality of the measurements.
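A minimal sketch of the IALV computation described above, assuming landmark bearings have already been rotated into the compass-aligned global frame; the function names are illustrative.

```python
import numpy as np

def ialv(ranges, bearings):
    """Improved Average Landmark Vector: the mean of the range vectors to
    all visible landmarks, with bearings in a compass-aligned frame."""
    vecs = np.stack([r * np.array([np.cos(b), np.sin(b)])
                     for r, b in zip(ranges, bearings)])
    return vecs.mean(axis=0)

def homing_vector(current_ranges, current_bearings, target_ialv):
    """Estimate of the vehicle's offset from the target pose: the IALV at
    the target subtracted from the currently computed IALV. With fixed
    landmarks this difference equals the ground vector from the current
    position to the target, up to measurement error."""
    return ialv(current_ranges, current_bearings) - target_ialv
```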
4. Robot navigation in sensor space. Keeratipranon, Narongdech (2009)
This thesis investigates the problem of robot navigation using only landmark bearings. The proposed system allows a robot to move to a ground target location specified by the sensor values observed at that target position. The control actions are computed based on the difference between the current landmark bearings and the target landmark bearings. No Cartesian coordinates with respect to the ground are computed by the control system; the robot navigates using solely information from the bearing sensor space.

Most existing robot navigation systems require a ground frame (a 2D Cartesian coordinate system) in order to navigate from a ground point A to a ground point B. The commonly used sensors, such as laser range scanners, sonar, infrared, and vision, do not directly provide the 2D ground coordinates of the robot. Existing systems use the sensor measurements to localise the robot with respect to a map, a set of 2D coordinates of the objects of interest. It is more natural to navigate between the points in the sensor space corresponding to A and B without requiring the Cartesian map and the localisation process.

Research on animals has revealed how insects are able to exploit very limited computational and memory resources to successfully navigate to a desired destination without computing Cartesian positions. For example, a honeybee balances the left and right optical flows to navigate in a narrow corridor. Unlike many other ants, Cataglyphis bicolor does not secrete pheromone trails in order to find its way home but instead uses the sun as a compass to keep track of its home direction vector. The home vector can be inaccurate, so the ant also uses landmark recognition: more precisely, it takes snapshots and compass headings of some landmarks, and to return home it tries to line up the landmarks exactly as they were before it started wandering.

This thesis introduces a navigation method based on reflex actions in sensor space. The sensor vector is made of the bearings of some landmarks, and the reflex action is a gradient descent with respect to the distance in sensor space between the current sensor vector and the target sensor vector. Our theoretical analysis shows that, except for some fully characterized pathological cases, any point is reachable from any other point by reflex action in the bearing sensor space, provided the environment contains three landmarks and is free of obstacles.

The trajectories of a robot using reflex navigation, like those of other image-based visual control strategies, do not necessarily correspond to the shortest paths on the ground, because it is the sensor error that is minimized, not the distance moved on the ground. However, we show that the use of a sequence of waypoints in sensor space can address this problem. In order to identify relevant waypoints, we train a Self-Organising Map (SOM) from a set of observations uniformly distributed with respect to the ground. This SOM provides a sense of location to the robot and allows a form of path planning in sensor space. The proposed navigation system is analysed theoretically, and evaluated both in simulation and in experiments on a real robot.
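A minimal sketch of one reflex action, assuming an `observe(p)` oracle that returns the landmark bearings visible from ground position p (in practice, the robot's bearing sensor): the robot probes small moves and keeps the one that most reduces the sensor-space error, i.e. numerical gradient descent in bearing space.

```python
import numpy as np

def bearing_error(current, target):
    """Squared distance in sensor space between the current and target
    landmark-bearing vectors (angle differences wrapped to [-pi, pi])."""
    d = np.angle(np.exp(1j * (current - target)))
    return float(d @ d)

def reflex_step(position, target_bearings, observe, step=0.05):
    """One reflex action: try a small move in each of several directions
    and keep the one that most reduces the sensor-space error; staying
    put wins if no probe improves it (a local minimum)."""
    best = position
    best_err = bearing_error(observe(position), target_bearings)
    for theta in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False):
        cand = position + step * np.array([np.cos(theta), np.sin(theta)])
        err = bearing_error(observe(cand), target_bearings)
        if err < best_err:
            best, best_err = cand, err
    return best
```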