1. Visually guided autonomous robot navigation: an insect-based approach. Weber, Keven. January 1998.
Giving robots the ability to move around autonomously in various real-world environments has long been a major challenge for Artificial Intelligence. New approaches to the design and control of autonomous robots have shown the value of drawing inspiration from the natural world. Animals navigate, perceive and interact with various uncontrolled environments with seemingly little effort. Flying insects, in particular, are quite adept at manoeuvring in complex, unpredictable and possibly hostile environments.

Inspired by the view of insects as miniature machines, this thesis contributes to the autonomous control of mobile robots through the application of insect-based visual cues and behaviours. The parsimonious, yet robust, solutions offered by insects are directly applicable to the computationally restrictive world of autonomous mobile robots. To this end, two main navigational domains are focussed on: corridor guidance and visual homing.

Within a corridor environment, safe navigation is achieved through the application of simple and intuitive behaviours observed in insect visual navigation. By responding to observed apparent motion in a reactive, yet intelligent, way, the robot is able to exhibit useful corridor guidance behaviours at modest expense. Through a combination of simulation and real-world robot experiments, the feasibility of equipping a mobile robot with the ability to safely navigate in various environments is demonstrated. It is further shown that the reactive nature of the robot can be augmented with a map-building method that allows previously encountered corridors to be recognised through the observation of landmarks en route. This allows for a more globally directed navigational goal.

Many animals, including insects such as bees and ants, successfully engage in visual homing. This is achieved through the association of visual landmarks with a specific location. In this way, the insect is able to 'home in' on a previously visited site by simply moving in such a way as to maximise the match between the currently observed environment and the memorised 'snapshot' of the panorama as seen from the goal. A mobile robot can exploit the very same strategy to simply and reliably return to a previously visited location.

This thesis describes a system that allows a mobile robot to home successfully. Specifically, a simple, yet robust, homing scheme that relies only upon the observation of the bearings of visible landmarks is proposed. It is also shown that this strategy can easily be extended to incorporate other visual cues, which may improve overall performance. The homing algorithm described allows a mobile robot to home incrementally by moving in such a way as to gradually reduce the discrepancy between the current view and the view obtained from the home position. Both simulation and mobile robot experiments are again used to demonstrate the feasibility of the approach.
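As a hedged illustration of bearing-only homing in this spirit, the sketch below uses the average-landmark-vector (ALV) simplification: each visible landmark contributes a unit vector in its observed bearing, and the difference between the current and stored averages yields an approximate home direction. The landmark layout, step gain, and the ALV formulation itself are illustrative assumptions, not the thesis's exact landmark-pairing scheme.

```python
import numpy as np

def alv(position, landmarks):
    """Average landmark vector: mean of unit bearings from position to each landmark."""
    vecs = landmarks - position
    units = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    return units.mean(axis=0)

# Toy world: three landmarks surrounding an assumed home position.
landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
home = np.array([5.0, 3.0])
alv_home = alv(home, landmarks)             # the stored 'snapshot' (bearings only)

agent = np.array([8.0, 6.0])
for _ in range(300):
    h = alv(agent, landmarks) - alv_home    # approximate home vector
    agent = agent + 2.0 * h                 # proportional step (gain is a toy choice)

print("final position:", agent, "distance to home:", np.linalg.norm(agent - home))
```

With landmarks spread around the goal, iterating this rule moves the agent until the current bearings match the snapshot, i.e. until it is back home, without any range measurements.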

2. A Visual Return-to-Home System for GPS-Denied Flight. Lewis, Benjamin Paul. 1 August 2016.
Unmanned aerial vehicle technology is rapidly maturing. In recent years, the sight of hobbyist aircraft has become more common. Corporations and governments are also interested in using drone aircraft for applications such as package delivery, surveillance and communications. These autonomous UAV technologies demand robust systems that perform under any circumstances. Many UAV applications rely on GPS to obtain information about their location and velocity. However, the GPS system has known vulnerabilities, including environmental signal degradation, terrestrial or solar weather, and malicious attacks such as GPS spoofing. These conditions occur with enough frequency to cause concern. Without a GPS signal, the state estimation in many autopilots quickly degrades. In the absence of a reliable backup navigation scheme, this loss of state will cause the aircraft to drift off course, and in many cases the aircraft will lose power or crash. While no single approach can solve all of the issues with GPS signal degradation, individual events can be addressed and solved.

In this thesis, we present a system which will return an aircraft to its launch point upon the loss of GPS. This functionality is advantageous because it allows recovery of the UAV in circumstances which the lack of GPS information would otherwise make difficult. The system accomplishes the return of the aircraft by means of onboard visual navigation, which removes the dependence of the aircraft on external sensors and systems. It uses a downward-facing onboard camera and computer to capture a string of overlapping images (keyframes) of the ground as the aircraft travels on its outbound journey. When the return-to-home signal is triggered, for example on loss of GPS, the aircraft switches into return-to-home mode. The system uses the homography matrix and other vision processing techniques to produce information about the location of the current keyframe relative to the aircraft. This information is used to navigate the aircraft to the location of each saved keyframe in reverse order. As each keyframe is reached, the system programmatically loads the next target keyframe. By following the chain of keyframes in reverse, the system reaches the launch location.

Contributions in this thesis include the return-to-home visual flight system for UAVs, which has been tested in simulation and with flight tests. Features of this system include methods for determining new keyframes and switching keyframes on the inbound flight, extracting data between images, and flight navigation based on this information. This system is a piece of the wider GPS-denied framework under development in the BYU MAGICC lab.
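A hypothetical sketch of the keyframe-relative guidance step follows: features are matched between the current downward-facing image and the target keyframe, a RANSAC homography is estimated, and the displacement of the keyframe's centre in the current view serves as a steering cue. The ORB detector, matcher, and switching threshold are assumptions for illustration; the thesis's exact pipeline may differ.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def keyframe_offset(current_gray, keyframe_gray):
    """Estimate where the keyframe's centre lies in the current image (pixels)."""
    k1, d1 = orb.detectAndCompute(keyframe_gray, None)
    k2, d2 = orb.detectAndCompute(current_gray, None)
    if d1 is None or d2 is None:
        return None
    matches = matcher.match(d1, d2)
    if len(matches) < 8:
        return None                               # too few matches to trust
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # keyframe -> current
    if H is None:
        return None
    h, w = keyframe_gray.shape
    centre = np.float32([[[w / 2.0, h / 2.0]]])
    projected = cv2.perspectiveTransform(centre, H)[0, 0]
    return projected - np.array([w / 2.0, h / 2.0])  # pixel offset from image centre

# Guidance loop (sketch): steer to drive the offset toward zero; once it falls
# below an assumed threshold, load the previous keyframe in the chain.
```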

3. Monocular vision-aided inertial navigation for unmanned aerial vehicles. Magree, Daniel Paul. 21 September 2015.
The reliance of unmanned aerial vehicles (UAVs) on GPS and other external navigation aids has become a limiting factor for many missions. UAVs are now physically able to fly in many enclosed or obstructed environments, due to the shrinking size and weight of electronics and other systems. These environments, such as urban canyons or enclosed areas, often degrade or deny external signals. Furthermore, many of the most valuable potential missions for UAVs are in hostile or disaster areas, where navigation infrastructure could be damaged, denied, or actively used against the vehicle. It is clear that developing alternative, independent, navigation techniques will increase the operating envelope of UAVs and make them more useful.
This thesis presents work in the development of reliable monocular vision-aided inertial navigation for UAVs, focusing on a stable and accurate navigation solution in a variety of realistic conditions. First, a vision-aided inertial navigation algorithm is developed which assumes uncorrelated feature and vehicle states. Flight test results on an 80 kg UAV are presented, demonstrating that vision aiding can bound the horizontal drift. Additionally, a novel implementation method is developed for integration with a variety of navigation systems. Finally, a vision-aided navigation algorithm is derived within a Bierman-Thornton factored extended Kalman filter (BTEKF) framework, using fully correlated vehicle and feature states. This algorithm improves consistency and accuracy by 2 to 3 orders of magnitude over the previous implementation, both in simulation and in flight testing. Flight tests of the BTEKF on large (80 kg) and small (600 g) vehicles show accurate navigation over numerous trials.
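For readers unfamiliar with factored filtering, the sketch below shows a scalar-measurement Bierman UD update, the building block of such a filter: the covariance is kept as P = U D Uᵀ and updated without ever forming P, which is what gives the factored EKF its numerical robustness. This is a generic textbook routine under assumed notation, not the thesis's implementation.

```python
import numpy as np

def bierman_update(x, U, d, h, R, z):
    """Scalar-measurement Kalman update on the factors of P = U @ diag(d) @ U.T.

    x: state (n,); U: unit upper-triangular (n, n); d: diagonal of D (n,);
    h: measurement row (n,); R: measurement variance; z: measurement.
    """
    n = x.size
    U = U.copy(); d = d.copy()
    w = U.T @ h                        # transformed measurement row
    v = d * w                          # D @ w
    sigma = R + w[0] * v[0]            # running innovation variance
    d[0] = d[0] * R / sigma
    K = np.zeros(n); K[0] = v[0]       # unscaled gain, built column by column
    for j in range(1, n):
        sigma_old = sigma
        sigma += w[j] * v[j]
        d[j] *= sigma_old / sigma
        Uj_old = U[:, j].copy()
        U[:, j] -= (w[j] / sigma_old) * K
        K += v[j] * Uj_old
    # K equals U @ v = P @ h, so K / sigma is the Kalman gain P h / (h P h + R)
    x = x + K * ((z - h @ x) / sigma)
    return x, U, d
```

A vector measurement (e.g. a feature's pixel coordinates) is processed by applying this update once per scalar component after decorrelating the measurement noise.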

4. Biologically Inspired Algorithms for Visual Navigation and Object Perception in Mobile Robotics. Northcutt, Brandon D. January 2016.
There is a large gap between the visual capabilities of biological organisms and those of autonomous robots. Even the simplest flying insects can fly within complex environments, locate food, avoid obstacles and elude predators with seeming ease. This stands in stark contrast to even the most advanced modern ground-based or flying autonomous robots, which are capable of autonomous navigation only within simple environments and fail spectacularly if the expected environment is modified even slightly. This dissertation provides a narrative of the author's graduate research into biologically inspired algorithms for visual perception and navigation with autonomous robotics applications. This research led to several novel algorithms and neural network implementations, which provide improved visual sensing capabilities with exceedingly light computational requirements. A new computationally minimal approach to visual motion detection was developed and demonstrated to provide obstacle avoidance without the need for directional specificity. In addition, a novel method of calculating sparse range estimates to visual object boundaries was demonstrated for localization, navigation and mapping using one-dimensional image arrays. Lastly, an assembly of recurrent inhibitory neural networks was developed to provide multiple concurrent object detection, visual feature binding, and internal neural representation of visual objects. These algorithms are promising avenues for future research and are likely to lead to more general, robust and computationally minimal systems of passive visual sensation for a wide variety of autonomous robotics applications.
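As a loose illustration of motion detection without directional specificity on one-dimensional image arrays, the sketch below rectifies the temporal difference of a 1-D intensity array and steers away from the half-field with more motion energy. This is a toy reconstruction of the idea, not the dissertation's actual networks.

```python
import numpy as np

def steering_from_motion(prev_row, curr_row):
    """Non-directional motion cue from two consecutive 1-D image arrays.

    Rectified temporal differencing responds to image motion regardless of its
    direction; comparing left and right half-fields yields a steering sign.
    """
    motion = np.abs(curr_row.astype(float) - prev_row.astype(float))
    half = motion.size // 2
    left, right = motion[:half].sum(), motion[half:].sum()
    # Steer away from the side with more apparent motion: during forward
    # translation, nearer obstacles generate larger image motion.
    return -1.0 if left > right else 1.0   # -1: turn right, +1: turn left

# toy usage: a bright edge appearing on the left half triggers a right turn
prev_row = np.zeros(64); curr_row = np.zeros(64); curr_row[10:14] = 1.0
print(steering_from_motion(prev_row, curr_row))   # -> -1.0
```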

5. How is an ant navigation algorithm affected by visual parameters and ego-motion? Ardin, Paul Björn. January 2017.
Ants typically use path integration and vision for navigation when the environment precludes the use of pheromone trails. Recent simulations have accurately mimicked the retinotopic navigation behaviour of these ants using simple models of movement and memory of unprocessed visual images. It is naturally interesting to test these navigation algorithms in more realistic circumstances: with actual route data from the ant, in an accurate facsimile of the ant world, and with visual input that draws on the characteristics of the animal. While increasing the complexity of the visual processing to include skyline extraction, inhomogeneous sampling and motion processing was conjectured to improve the performance of the simulations, the reverse appears to be the case. Examining the assumptions about motion closely, analysis of ants in the field shows that they experience considerable displacement of the head, which, when applied to the simulation, leads to significant degradation in performance. This family of simulations relies upon continuous visual monitoring of the scene to determine heading, so we tested whether the animals are similarly dependent on this input. A field study demonstrated that ants with only visual navigation cues can return to the nest while largely facing away from the direction of travel (moving backwards), so it appears that ant visual navigation is not a process of continuous retinotopic image matching. We conclude that ants may use vision to determine an initial heading by image matching and then continue to follow this direction using their celestial compass, or they may use a rotationally invariant form of the visual world for continuous course correction.
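The retinotopic image matching that these simulations assume can be made concrete with a small sketch: a panoramic view is rotated against a memorised snapshot and the azimuth minimising the pixel difference is taken as the heading. A minimal illustration under assumed 1-D panoramic input follows; the actual models use richer imagery.

```python
import numpy as np

def best_heading(snapshot, current):
    """Rotational image matching: find the azimuthal shift (in columns) that
    best aligns the current panoramic view with the memorised snapshot."""
    n = current.size
    errors = [np.mean((np.roll(current, s) - snapshot) ** 2) for s in range(n)]
    shift = int(np.argmin(errors))            # columns of rotation
    return 360.0 * shift / n                  # degrees to turn

# toy panorama: the current view is the snapshot rotated by 40 columns
rng = np.random.default_rng(0)
snapshot = rng.random(360)
current = np.roll(snapshot, -40)
print(best_heading(snapshot, current))        # -> 40.0 degrees
```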

6. Biomimetic Visual Navigation Architectures for Autonomous Intelligent Systems. Pant, Vivek. January 2007.
Intelligent systems with even the bare minimum of sophistication require extensive computational power and complex processing units. At the same time, small insects like flies are adept at visual navigation, target pursuit, motionless hovering flight, and obstacle avoidance. Biology thus offers engineers an unconventional approach to complicated design problems: computational models of the neuronal architecture of the insect brain can provide algorithms for software and hardware that accomplish sophisticated visual navigation tasks. In this research, we investigate biologically inspired collision avoidance models based primarily on visual motion. We first present a comparative analysis of two leading collision avoidance models hypothesized in the insect brain. The models are simulated and mathematically analyzed for collision and non-collision scenarios. Based on this analysis, we propose that, along with motion information, an estimate of distance from the obstacle is also required to reliably avoid collisions. We present models with tracking capability as solutions to this problem and show that tracking indirectly computes a measure of distance. We present a camera-based implementation of the collision avoidance models with tracking; this system was tested in collision and non-collision scenarios, verifying our simulation result that tracking improves collision avoidance. Next, we present a direct approach to estimating the distance from an obstacle by utilizing non-directional speed. We describe two simplified non-directional speed estimation models, the non-directional multiplication (ND-M) sensor and the non-directional summation (ND-S) sensor, and analyze the mathematical basis of their speed sensitivity. An analog VLSI chip was designed and fabricated to implement these models in silicon. The chip was fabricated in a 0.18 µm process and its characterization results are reported here. As future work, the tracking algorithm and the collision avoidance models may be implemented as a sensor chip and used for autonomous navigation by intelligent systems.
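To ground the motion-sensor discussion, below is a hedged sketch of the classic Hassenstein-Reichardt elementary motion detector, with the usual opponent subtraction of its two mirror-symmetric subunits optionally swapped for a summation to remove directional selectivity: one plausible reading of the non-directional idea. The exact ND-M/ND-S structures in the dissertation may differ.

```python
import numpy as np

def lowpass(signal, alpha=0.2):
    """First-order low-pass filter acting as the correlator's temporal delay."""
    out = np.zeros_like(signal)
    for t in range(1, signal.size):
        out[t] = out[t - 1] + alpha * (signal[t] - out[t - 1])
    return out

def motion_detector(p1, p2, directional=True):
    """Correlate each photoreceptor with the delayed neighbour.

    p1, p2: time series from two adjacent photoreceptors.
    directional=True : Hassenstein-Reichardt opponent subtraction (signed output).
    directional=False: summation of the subunits (non-directional speed cue).
    """
    sub_a = lowpass(p1) * p2      # delayed p1 correlated with p2
    sub_b = lowpass(p2) * p1      # mirror-symmetric subunit
    return sub_a - sub_b if directional else sub_a + sub_b

# toy stimulus: a sinusoid arriving at p2 slightly after p1 (rightward motion)
t = np.arange(200)
p1 = np.sin(2 * np.pi * t / 40)
p2 = np.sin(2 * np.pi * (t - 4) / 40)
print(motion_detector(p1, p2).mean())   # positive for this direction
print(motion_detector(p2, p1).mean())   # sign flips with the opposite direction
```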

7. Visual Navigation: A Study of Guiding Visual Elements in Games. Birgersson, William; Johansson, Moa. January 2013.
Visual guiding elements can make or break a game experience. We have therefore conducted a study of how to achieve intuitive visual navigation using four well-established visual guiding elements: lighting, colour hue, colour saturation and object placement. These elements were used to create a visual language for a game level, which a number of respondents played while commenting aloud. The results were as expected: lighting acts as the strongest guiding element, and the respondents reacted to, and were guided by, the named elements.

8. Quantitative performance evaluation of autonomous visual navigation. Tian, Jingduo. January 2017.
Autonomous visual navigation algorithms for ground mobile robotic systems working in unstructured environments have been studied extensively for decades. In this body of work, performance evaluations between different design configurations mainly rely on benchmark datasets with a limited number of real-world trials. Such evaluations, however, struggle to provide sufficient statistical power for performance quantification. In addition, they cannot independently assess an algorithm's robustness to individual realistic uncertainty sources, including environment variations and processing errors. This research presents a quantitative approach to the performance and robustness evaluation and optimisation of autonomous visual navigation algorithms, using large-scale Monte-Carlo analyses. The Monte-Carlo analyses are supported by a simulation environment designed to represent a real-world level of visual information, using perturbations drawn from realistic visual uncertainties and processing errors. With the proposed evaluation method, a stereo-vision-based autonomous visual navigation algorithm is designed and iteratively optimised. This algorithm encodes edge-based 3D patterns into a topological map and uses them for subsequent global localisation and navigation. An evaluation of the performance perturbations from individual uncertainty sources indicates that stereo match error significantly limits the current system design. An optimisation approach is therefore proposed to mitigate this error: it maximises the Fisher information available in stereo image pairs by manipulating the stereo geometry. Moreover, the simulation environment is further updated alongside the algorithm design, including quantitative modelling and simulation of the effect of localisation error on subsequent navigation behaviour. Over a long-term Monte-Carlo evaluation and optimisation, the algorithm's performance improved significantly. Simulation experiments demonstrate that a 3-DoF robotic system can navigate in an unstructured environment while remaining sufficiently robust to realistic visual uncertainty sources and systematic processing errors.
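The stereo-match-error limitation has a simple geometric root: depth from a stereo pair is Z = f·b/d, so a small disparity error grows quadratically with depth (σ_Z ≈ Z²·σ_d/(f·b)), and a wider baseline b directly buys information. A hedged Monte-Carlo sketch of that sensitivity, with assumed camera numbers, follows.

```python
import numpy as np

rng = np.random.default_rng(1)
f_px, baseline_m = 700.0, 0.12   # assumed focal length (px) and baseline (m)
sigma_d = 0.5                    # assumed stereo match error, pixels (1-sigma)

for depth_m in (2.0, 5.0, 10.0, 20.0):
    d_true = f_px * baseline_m / depth_m               # true disparity, pixels
    d_noisy = d_true + sigma_d * rng.standard_normal(100_000)
    z_noisy = f_px * baseline_m / d_noisy              # depths from noisy matches
    emp = z_noisy.std()
    pred = depth_m**2 * sigma_d / (f_px * baseline_m)  # first-order prediction
    print(f"Z={depth_m:5.1f} m  empirical sigma_Z={emp:6.2f} m  predicted={pred:6.2f} m")
```

The quadratic blow-up at range is exactly why manipulating the stereo geometry (a longer baseline, hence larger disparities) raises the Fisher information available per match.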

9. Robust light source detection for AUV docking. Edlund, Joar. January 2023.
For Autonomous Underwater Vehicles (AUVs) to be able to conduct long-term surveys, the ability to return to a docking station for maintenance and recharging is crucial. A dynamic docking system, in which a slowly moving submarine acts as the docking station, provides increased hydrodynamic control and reduces the impact of environmental disturbances. A vision-based relative positioning system using a camera mounted on the AUV and light sources mounted on the docking station is investigated as a suitable high-resolution, high-frequency solution for short-range relative positioning. Detection and identification of the true light sources in the presence of reflections, ambient light, and other luminaries requires a robust tracking pipeline that can reject false positives. In this thesis, we present a complete tracking pipeline, from image processing to pose estimation, specifically for a soft docking scenario. We highlight the shortcomings of light source detectors based on finding a single global threshold and of detectors based on gradient information, and propose a novel method based on using a suitable threshold for each light source. Rejection of false positives is handled systematically by rejecting pose estimates that result in large re-projection errors, and a configuration of the light sources is proposed that enhances pose estimation performance. The performance of the proposed light source detector is evaluated on the D-recovery dataset. Results show that the proposed method outperforms other methods in identifying the light sources. The tracking pipeline is evaluated in experiments as well as in a simulation based on the Stonefish simulator.
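The false-positive rejection step can be illustrated with a small hypothetical sketch: candidate light detections are fed to a PnP solver against the known beacon geometry, and a pose hypothesis is kept only if its mean re-projection error is small. The camera intrinsics, beacon layout, and threshold below are placeholders, not values from the thesis.

```python
import cv2
import numpy as np

# Assumed docking-station beacon layout (metres, station frame) and intrinsics.
BEACONS = np.array([[-0.4, -0.3, 0.0], [0.4, -0.3, 0.0],
                    [0.4, 0.3, 0.0], [-0.4, 0.3, 0.0]], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
DIST = np.zeros(5)                  # assume undistorted images
MAX_REPROJ_PX = 2.0                 # rejection threshold (assumption)

def pose_if_consistent(detections_px):
    """Return (rvec, tvec) only if the 4 detected lights fit the beacon geometry."""
    pts = np.ascontiguousarray(detections_px, dtype=np.float64).reshape(-1, 1, 2)
    ok, rvec, tvec = cv2.solvePnP(BEACONS, pts, K, DIST)
    if not ok:
        return None
    reproj, _ = cv2.projectPoints(BEACONS, rvec, tvec, K, DIST)
    err = np.linalg.norm(reproj - pts, axis=2).mean()
    return (rvec, tvec) if err < MAX_REPROJ_PX else None   # reject false positives
```

A reflection or stray luminary substituted for a true light generally cannot satisfy the rigid beacon geometry, so its hypothesis produces a large re-projection error and is discarded.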

10. Reconfigurable hardware system for autonomous vehicles visual navigation. Dias, Mauricio Acconcia. 4 October 2016.
The number of vehicular accidents has increased worldwide, and the leading associated cause is human failure. Autonomous vehicle design is gathering attention in industry and universities throughout the world; several research groups are designing autonomous vehicles or driving assistance systems with the main goal of avoiding these accidents. Autonomous vehicle navigation systems need to be reliable and deliver real-time performance, which requires the design of specific solutions. Due to their low cost and the high amount of information they collect, cameras are among the most used sensors for autonomous navigation (and for driving assistance systems). Information about the environment is extracted from the captured images and then used by the navigation system. The main goal of this thesis is the design, implementation, testing and optimization, in hardware, of an Artificial Neural Network ensemble used in an autonomous vehicle navigation system (specifically, the navigation system proposed and designed in the Mobile Robotics Lab (LRM)), in order to accelerate its execution for use in image classification for robot visual navigation.

The main contributions of this work are: a reconfigurable (FPGA-based) hardware design that performs fast signal propagation through a neural network ensemble while consuming less energy than a general-purpose computer; practical results on the trade-off between precision, hardware consumption and timing for this class of applications using fixed-point representation; an automatic generator of the look-up tables widely used in hardware neural networks to replace the exact calculation of activation functions; a hardware/software co-design that achieved significant results for the implementation of the backpropagation training algorithm; and, considering all presented results, a structure that enables a considerable number of future works on hardware image processing for robotics by implementing a functional image-processing hardware system.
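The two hardware tricks named above, fixed-point arithmetic and a look-up table in place of the exact activation function, can be sketched in a few lines. The Q-format, table size, and layer shapes below are illustrative assumptions rather than the thesis's actual parameters.

```python
import numpy as np

FRAC_BITS = 8                      # assumed Q8 fixed-point format
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    return np.round(np.asarray(x) * SCALE).astype(np.int32)

# Precomputed sigmoid LUT over [-8, 8), indexed from the fixed-point
# pre-activation -- the kind of table an FPGA would hold in ROM.
LUT_SIZE = 256
lut_inputs = np.linspace(-8.0, 8.0, LUT_SIZE, endpoint=False)
SIGMOID_LUT = to_fixed(1.0 / (1.0 + np.exp(-lut_inputs)))

def lut_sigmoid(acc_fixed):
    """Map a fixed-point pre-activation to a fixed-point activation via the LUT."""
    idx = ((acc_fixed / SCALE + 8.0) * (LUT_SIZE / 16.0)).astype(np.int64)
    return SIGMOID_LUT[np.clip(idx, 0, LUT_SIZE - 1)]

def layer_forward(x_fixed, W_fixed, b_fixed):
    """One MLP layer in integer arithmetic: multiply, rescale, activate."""
    acc = (W_fixed @ x_fixed) // SCALE + b_fixed   # Q8 * Q8 product rescaled to Q8
    return lut_sigmoid(acc)

# toy usage with random weights standing in for a trained network
rng = np.random.default_rng(2)
W1, b1 = to_fixed(rng.normal(0, 1, (4, 3))), to_fixed(rng.normal(0, 1, 4))
x = to_fixed([0.5, -0.25, 0.75])
print(layer_forward(x, W1, b1) / SCALE)   # activations back in floating point
```

Trading the exact sigmoid for a small ROM table and integer multiplies is what lets an ensemble of such layers fit the timing and resource budget of an FPGA.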