81

Apprentissage Intelligent des Robots Mobiles dans la Navigation Autonome / Intelligent Mobile Robot Learning in Autonomous Navigation

Xia, Chen 24 November 2015 (has links)
Les robots modernes sont appelés à effectuer des opérations ou tâches complexes et la capacité de navigation autonome dans un environnement dynamique est un besoin essentiel pour les robots mobiles. Dans l'objectif de soulager de la fastidieuse tâche de préprogrammer un robot manuellement, cette thèse contribue à la conception d'une commande intelligente afin de réaliser l'apprentissage des robots mobiles durant la navigation autonome. D'abord, nous considérons l'apprentissage des robots via des démonstrations d'experts. Nous proposons d'utiliser un réseau de neurones pour apprendre hors ligne une politique de commande à partir de données utiles extraites d'expertises. Ensuite, nous nous intéressons à l'apprentissage sans démonstrations d'experts. Nous utilisons l'apprentissage par renforcement afin que le robot puisse optimiser une stratégie de commande pendant le processus d'interaction avec l'environnement inconnu. Un réseau de neurones est également incorporé et une généralisation rapide permet à l'apprentissage de converger en un nombre d'épisodes inférieur à celui de la littérature. Enfin, nous étudions l'apprentissage de fonctions de récompenses potentielles compte tenu des démonstrations d'experts optimaux ou non optimaux. Nous proposons un algorithme basé sur l'apprentissage par renforcement inverse. Une représentation non linéaire de la politique est conçue et la méthode du max-margin est appliquée, permettant d'affiner les récompenses et de générer la politique de commande. Les trois méthodes proposées sont évaluées sur des robots mobiles afin de leur permettre d'acquérir les compétences de navigation autonome dans des environnements dynamiques et inconnus / Modern robots are designed for assisting or replacing human beings to perform complicated planning and control operations, and the capability of autonomous navigation in a dynamic environment is an essential requirement for mobile robots.
In order to alleviate the tedious task of manually programming a robot, this dissertation contributes to the design of intelligent robot control to endow mobile robots with a learning ability in autonomous navigation tasks. First, we consider robot learning from expert demonstrations. A neural network framework is proposed as the inference mechanism to learn a policy offline from a dataset extracted from expert demonstrations. Then we turn to the robot's ability to learn on its own, without expert demonstrations. We apply reinforcement learning techniques to acquire and optimize a control strategy during the interaction between the learning robot and the unknown environment. A neural network is also incorporated to allow fast generalization, and it helps the learning converge in a number of episodes much smaller than with traditional methods. Finally, we study the learning of the potential rewards underlying the states from optimal or suboptimal expert demonstrations. We propose an algorithm based on inverse reinforcement learning. A nonlinear policy representation is designed, and the max-margin method is applied to refine the rewards and generate an optimal control policy. The three proposed methods have been successfully implemented on autonomous navigation tasks for mobile robots in unknown and dynamic environments.
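The offline policy learning from demonstrations described in this abstract can be sketched in miniature. Everything below is invented for illustration (the sensor layout, network size, and "expert" data are hypothetical, not the thesis's actual setup): a one-hidden-layer MLP is fit by gradient descent to imitate a synthetic expert's steering commands.

```python
import numpy as np

# Toy behavioural-cloning sketch: a one-hidden-layer MLP learns a steering
# command from synthetic "expert" sensor readings. All names and dimensions
# here are illustrative assumptions.
rng = np.random.default_rng(0)

# Hypothetical demonstrations: 3 range readings (left, front, right) -> steering.
X = rng.uniform(0.1, 1.0, size=(200, 3))
y = (X[:, 2] - X[:, 0]).reshape(-1, 1)  # "expert" steers toward the freer side

W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

def mse(p, y):
    return float(np.mean((p - y) ** 2))

loss_before = mse(forward(X)[1], y)
lr = 0.1
for _ in range(1000):  # plain batch gradient descent on the MSE loss
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)
    gh = (g @ W2.T) * (1.0 - h ** 2)   # backprop through the tanh layer
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)
loss_after = mse(forward(X)[1], y)
```

The imitation loss drops steadily; in the thesis the learned policy is then run on the robot, whereas here the sketch stops at the fitting step.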
82

Autonomous Orbit Estimation For Near Earth Satellites Using Horizon Scanners

Nagarajan, N 07 1900 (has links)
Autonomous navigation is the determination of a satellite's position and velocity vectors onboard the satellite, using the measurements available onboard. The orbital information of a satellite is needed to support various housekeeping operations such as routine tracking for health monitoring, payload data processing and annotation, orbit manoeuvre planning, and prediction of intrusion into the various sensors' fields of view by celestial bodies like the Sun and Moon. Determination of the satellite's orbital parameters is done in a number of ways using a variety of measurements. These measurements may originate from ground-based systems as range and range-rate measurements, from another satellite as in the case of GPS (Global Positioning System) and TDRSS (Tracking and Data Relay Satellite System), or from the same satellite by using sensors like horizon sensors, sun sensors, star trackers, landmark trackers, etc. Depending upon the measurement errors, sampling rates, and adequacy of the estimation scheme, the navigation accuracy can be anywhere in the range of 10 m to 10 km in absolute location. A wide variety of tracking sensors have been proposed in the literature for autonomous navigation. They are broadly classified as (1) satellite-satellite tracking, (2) ground-satellite tracking, and (3) fully autonomous tracking. Of the various navigation sensors, it may be cost effective to use existing onboard sensors which are well proven in space. Hence, in the current thesis, the horizon scanner is employed as the primary navigation sensor. It has been shown in the literature that by using horizon sensors and gyros, a high pointing accuracy of the order of 0.01 to 0.03 deg can be achieved in the case of low Earth orbits. Motivated by this fact, the current thesis deals with autonomous orbit determination using measurements from the horizon sensors, with the assumption that the attitude is known to the above-quoted accuracies.
The horizon scanners are mounted on either side of the yaw axis in the pitch-yaw plane, at an angle of 70 deg with respect to the yaw axis. The field of view (FOV) moves about the scanner axis on a cone of 45 deg half-cone angle. During each scan, the FOV generates two horizon points, one at the space-Earth entry and the other at the Earth-space exit. The horizon points therefore lie on the edge of the Earth disc seen by the satellite. For a spherical Earth, a minimum of three such horizon points is needed to estimate the angular radius and the centre of the circular horizon disc. Since a total of four horizon points are available from a pair of scanners, they can be used to extract the satellite-Earth distance and direction. These horizon points are corrupted by noise due to uncertainties in the Earth's radiation pattern and the detector mechanism, and by the truncation and round-off errors due to digitisation of the measurements. Owing to the finite spin rate of the scanning mechanism, the measurements are available at discrete time intervals. Thus a filtering algorithm with appropriate state dynamics becomes essential to handle the noise in the measurements, to obtain the best estimate, and to propagate the state between the measurements. The orbit of a low Earth satellite can be represented either by a state vector (position and velocity vectors in the inertial frame) or by Keplerian elements. The choice depends upon the available processors, functions, and the end use of the estimated orbit information. It is shown in the thesis that position and velocity vectors in the inertial frame, or the position vector in the local reference frame, result in a simplified state representation. By using the f and g series method for inertial position and velocity, the state propagation is achieved in linear form, i.e. X(k+1) = A X(k), where X is the state (position, velocity) and A the state transition matrix derived from the f and g series.
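The linear-form propagation X(k+1) = A X(k) can be illustrated with a textbook truncation of the f and g series (the truncation order and the 7000 km circular test orbit below are assumptions for the sketch; the thesis's actual series order is not stated in the abstract):

```python
import numpy as np

mu = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def fg_transition(r_vec, dt):
    # Truncated f and g series evaluated at the current radius. The 6x6
    # matrix A maps the state [r, v] at step k to step k+1, which is the
    # linear form X(k+1) = A X(k) described in the abstract.
    r = np.linalg.norm(r_vec)
    u = mu / r**3
    f = 1.0 - 0.5 * u * dt**2
    g = dt - u * dt**3 / 6.0
    fdot = -u * dt
    gdot = 1.0 - 0.5 * u * dt**2
    I = np.eye(3)
    return np.block([[f * I, g * I], [fdot * I, gdot * I]])

# Circular 7000 km orbit, propagated for 100 s in 1 s steps.
r0 = np.array([7000.0, 0.0, 0.0])
v0 = np.array([0.0, np.sqrt(mu / 7000.0), 0.0])
X = np.concatenate([r0, v0])
for _ in range(100):
    X = fg_transition(X[:3], 1.0) @ X
radius = np.linalg.norm(X[:3])
```

For a circular orbit the propagated radius should stay essentially constant over this short arc, while the satellite itself moves several hundred kilometres along track.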
The configuration of a 3-axis stabilised spacecraft with two horizon scanners is used to simulate the measurements. As a step towards establishing the feasibility of extracting the orbital parameters, the governing equations are formulated to compute the satellite-Earth vector from the four horizon points generated by a pair of horizon scanners in the presence of measurement noise. Using these derived satellite-Earth vectors as measurements, Kalman filter equations are developed, where both the state and measurement equations are linear. Based on simulations, it is shown that a position accuracy of about 2 km can be achieved. Additionally, the effect of sudden disturbances, such as substantial slewing of the solar panels before and after payload operations, is also analysed. It is shown that a relatively simple low-pass filter (LPF) in the measurement loop, with a cut-off frequency of 10 Wo (Wo = orbital frequency), effectively suppresses the high-frequency effects of sudden disturbances which would otherwise camouflage the navigational information content of the signal. The Kalman filter can then continue to estimate the orbit with the same accuracy as before, without recourse to re-tuning of the covariance matrices. Having established the feasibility of extracting the orbit information, the next step is to treat the measurements in their original, non-linear form. The entry or exit timing pulses generated by the scanner, when multiplied by the scan rate, yield entry or exit azimuth angles in the scanner frame of reference, which in turn represent the effective measurement variables. These azimuth angles are obtained as inverse trigonometric functions of the satellite-Earth vector. Thus the horizon scanner measurements are non-linear functions of the orbital state. The analytical equations for the horizon points as seen in the body frame are derived, first for the spherical Earth case.
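The linear Kalman filter machinery used at this stage can be sketched on a reduced toy model. The 1-D position/velocity system and all noise levels below are illustrative stand-ins, not the thesis's full orbital filter:

```python
import numpy as np

# Minimal linear Kalman filter (predict/update) on a 1-D constant-velocity
# model with noisy position measurements. In the thesis, the measurement is
# the derived satellite-Earth vector; here it is reduced to a scalar toy.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
H = np.array([[1.0, 0.0]])              # only position is measured
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[4.0]])                   # measurement noise variance

rng = np.random.default_rng(1)
x_true = np.array([0.0, 1.0])
x_est = np.zeros(2)
P = 10.0 * np.eye(2)

for _ in range(200):
    x_true = A @ x_true
    z = H @ x_true + rng.normal(0.0, 2.0, 1)   # noisy position measurement
    # predict
    x_est = A @ x_est
    P = A @ P @ A.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
```

After the update step the position variance always falls below the measurement variance (P_post = P_pred·R/(P_pred+R) < R), which is the sense in which filtering beats using raw measurements directly.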
To account for the oblate shape of the Earth, a simple one-step correction algorithm is developed to calculate the horizon points. The horizon points calculated from this simple algorithm match those from the accurate model within a bound of 5%. Since the horizon points (measurements) are non-linear functions of the state, an extended Kalman filter (EKF) is employed for state estimation. Through various simulation runs, it is observed that the along-track state has poor observability when the four horizon points are treated as measurements in their original form, as against the derived satellite-Earth vector in the earlier strategy. This is also substantiated by means of the condition number of the observability matrix. In order to examine this problem in detail, the observability of the three modes, namely the along-track, radial, and cross-track components (i.e. the local orbit frame of reference), is analysed. This difficulty in observability is obviated when an additional sensor is used in the roll-yaw plane. Subsequently, the simulation studies are carried out with two scanners in the pitch-yaw plane and one scanner in the roll-yaw plane (i.e. a total of 6 horizon points at each time). Based on the simulations, it is shown that the achievable accuracy in absolute position is about 2 km. Since the scanner in the roll-yaw plane is susceptible to dazzling by the Sun, the effect of data breaks due to sensor inhibition is also analysed. It is established that such data breaks degrade the accuracy of the estimates of the along-track component during the transient phase; the filter, however, does not diverge during this period. Following the analysis of the filter performance, the influence of the Earth's oblateness on the measurement model is studied. It is observed that the error in the horizon points due to the spherical-Earth approximation behaves like a sinusoid at twice the orbital frequency, along with a bias of about 0.21 deg, in the case of a 900 km sun-synchronous orbit.
The error in the 6 horizon points is shown to give rise to 6 sinusoids. Since the measurement model for a spherical Earth is the simplest one, the feasibility of estimating these sinusoids along with the orbital state forms the next part of the thesis. Each sinusoid, along with the bias, is represented as a 3-state recursive equation, where i refers to the i-th sinusoid and T to the sampling interval; the augmented or composite state variable X consists of the bias and the sine and cosine components of the sinusoids. The 6 sinusoids, together with the three-dimensional orbital position vector in the local coordinate frame, then lead to a 21-state augmented Kalman filter. With the 21-state filter, observability problems are experienced. Hence the magnetic field strength, which is a function of the radial distance and is measured by an onboard magnetometer, is proposed as an additional measurement. Subsequently, on using the 6 horizon-point measurements and the radial distance measurement obtained from a magnetometer, and taking advantage of the relationships between the sinusoids, it is shown that a ten-state filter (i.e. 3 local orbital states, one bias, and 3 zero-mean sinusoids) can effectively function as an onboard orbit filter. The filter performance is investigated for circular as well as low-eccentricity orbits. The 10-state filter is shown to exhibit a lag while following the radial component in the case of low-eccentricity orbits. This deficiency is overcome by introducing two more states, namely the radial velocity and acceleration, resulting in a 12-state filter. Simulation studies reveal that the 12-state filter performance is very good for low-eccentricity orbits: the lag observed in the 10-state filter is totally removed. Besides, the 12-state filter is able to follow the changes in orbit due to orbital manoeuvres, which are part of the orbit acquisition plans for any mission.
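The abstract does not reproduce the recursion matrix itself. One standard 3-state form consistent with the description — a constant bias plus a (sine, cosine) pair rotated by w·T each sample — is sketched below; this form, and the frequency and sampling interval, are assumptions, not the thesis's exact matrix.

```python
import numpy as np

# Assumed 3-state recursion for "bias + sinusoid at frequency w": the bias
# stays constant while the (sine, cosine) pair rotates by w*T per sample.
def sinusoid_step(x, w, T):
    c, s = np.cos(w * T), np.sin(w * T)
    M = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,   s],
                  [0.0,  -s,   c]])
    return M @ x

w = 2.0 * np.pi / 2700.0   # twice a 5400 s orbital frequency (illustrative)
T = 13.5                   # sampling interval in seconds (illustrative)
x = np.array([0.21, 0.0, 1.0])   # state: [bias, sine, cosine]
for _ in range(50):              # 50 * 13.5 s = one quarter of the 2700 s period
    x = sinusoid_step(x, w, T)
```

The recursion preserves the bias and the sinusoid amplitude exactly, which is what lets a Kalman filter estimate these components alongside the orbital state.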
83

Evolução de redes imunológicas para coordenação automática de comportamentos elementares em navegação autônoma de robôs / Evolution of immune networks for automatic coordination of elementary behaviors on robot autonomous navigation

Michelan, Roberto 20 April 2006 (has links)
Orientadores: Fernando Jose Von Zuben, Mauricio Fernandes Figueiredo / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação / Resumo: A concepção de sistemas autônomos de navegação para robôs móveis, havendo múltiplos objetivos simultâneos a serem atendidos, como a coleta de lixo com manutenção da integridade, requer a adoção de técnicas refinadas de coordenação de módulos de comportamento elementar. Modelos de redes imunológicas artificiais podem então ser empregados na proposição de um controlador concebido com base em um processo de mapeamento dinâmico. Os anticorpos da rede são responsáveis pelos módulos de comportamento elementar, na forma de regras do tipo <condição>-<ação>, e as conexões são responsáveis pelos mecanismos de estímulo e supressão entre os anticorpos. A rede iniciará uma resposta imunológica sempre que lhe forem apresentados os antígenos. Estes antígenos representam a situação atual capturada pelos sensores do robô. A dinâmica da rede é baseada no nível de concentração dos anticorpos, definida com base na interação dos anticorpos e dos anticorpos com os antígenos. De acordo com o nível de concentração, um anticorpo é escolhido para definir a ação do robô. Um processo evolutivo é então responsável por definir um padrão de conexões para a rede imunológica, a partir de uma população de redes candidatas, capaz de maximizar o atendimento dos objetivos durante a navegação. Resulta então um sistema híbrido que tem a rede imunológica como responsável por introduzir um processo dinâmico de tomada de decisão e tem agora a computação evolutiva como responsável por definir a estrutura da rede.
Para que fosse possível avaliar os controladores (redes imunológicas) a cada geração do processo evolutivo, um ambiente virtual foi desenvolvido para simulação computacional, com base nas características do problema de navegação. As redes imunológicas obtidas através do processo evolutivo foram analisadas e testadas em novas situações, apresentando capacidade de coordenação em tarefas simples e complexas. Os experimentos preliminares com um robô real do tipo Khepera II indicaram a eficácia da ferramenta de navegação / Abstract: The design of an autonomous navigation system for mobile robots, with simultaneous objectives to be satisfied, as garbage collection with maintenance of integrity, requires refined coordination mechanisms to deal with modules of elementary behavior. Models of artificial immune networks can then be applied to produce a controller based on dynamic mapping. The antibodies of the immune network are responsible for the modules of elementary behavior, in the form of <condition>-<action> rules, and the connections are responsible for the mechanisms of stimulation and suppression of antibodies. The network will always start an immune response when antigens are presented. These antigens represent the current output of the robot sensors. The network dynamics is based on the levels of antibody concentration, provided by interaction among antibodies, and among antibodies and antigens. Based on its concentration level, an antibody is chosen to define the robot action. An evolutionary process is then used to define the connection pattern of the immune network, from a population of candidate networks, capable of maximizing the objectives during navigation. As a consequence, a hybrid system is conceived, with an immune network implementing a dynamic process of decision-making, and an evolutionary algorithm defining the network structure. 
To be able to evaluate the controllers (immune networks) at each iteration of the evolutionary process, a virtual environment was developed for computer simulation, based on the characteristics of the navigation problem. The immune networks obtained by evolution were analyzed and tested in new situations and presented coordination capability in simple and complex tasks. The preliminary experiments on a real Khepera II robot indicated the efficacy of the navigation tool / Mestrado / Engenharia de Computação / Mestre em Engenharia Elétrica
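The concentration-based action selection described above can be sketched minimally. The antibodies, antigen affinities, and update constants below are invented for illustration; the stimulation/suppression matrices are left at zero precisely because tuning those connection weights is what the dissertation's evolutionary process does.

```python
import numpy as np

# Hypothetical antibodies (<condition>-<action> rules) competing to control
# the robot. Concentrations follow Farmer-style dynamics: growth from
# stimulation and antigen match, decay from suppression and a death rate.
antibodies = ["avoid_left", "avoid_right", "collect_garbage", "recharge"]
stim = np.zeros((4, 4))   # connection weights: the object of the evolutionary search
supp = np.zeros((4, 4))
antigen = np.array([0.9, 0.1, 0.2, 0.1])   # sensors report an obstacle on the left

a = np.full(4, 0.25)                        # initial antibody concentrations
for _ in range(50):
    da = (stim @ a - supp @ a + antigen - 0.5) * a
    a = np.clip(a + 0.1 * da, 1e-6, None)
    a = a / a.sum()                          # keep concentrations normalised

# Winner-take-all: the most concentrated antibody defines the robot action.
action = antibodies[int(np.argmax(a))]
```

With the obstacle-on-the-left antigen, the matching antibody's concentration grows until its rule is selected; evolving non-zero `stim`/`supp` weights is what lets the network arbitrate more subtle multi-objective situations.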
84

Amélioration de performance de la navigation basée vision pour la robotique autonome : une approche par couplage vision/commande / Performance improvement of vision-based navigation for autonomous robotics: a vision and control coupling approach

Roggeman, Hélène 13 December 2017 (has links)
L'objectif de cette thèse est de réaliser des missions diverses de navigation autonome en environnement intérieur et encombré avec des robots terrestres. La perception de l'environnement est assurée par un banc stéréo embarqué sur le robot et permet entre autres de calculer la localisation de l'engin grâce à un algorithme d'odométrie visuelle. Mais quand la qualité de la scène perçue par les caméras est faible, la localisation visuelle ne peut pas être calculée de façon précise. Deux solutions sont proposées pour remédier à ce problème. La première solution est l'utilisation d'une méthode de fusion de données multi-capteurs pour obtenir un calcul robuste de la localisation. La deuxième solution est la prédiction de la qualité de scène future afin d'adapter la trajectoire du robot pour s'assurer que la localisation reste précise. Dans les deux cas, la boucle de commande est basée sur l'utilisation de la commande prédictive afin de prendre en compte les différents objectifs de la mission : ralliement de points, exploration, évitement d'obstacles. Une deuxième problématique étudiée est la navigation par points de passage avec évitement d'obstacles mobiles à partir des informations visuelles uniquement. Les obstacles mobiles sont détectés dans les images puis leur position et vitesse sont estimées afin de prédire leur trajectoire future et ainsi de pouvoir anticiper leur déplacement dans la stratégie de commande. De nombreuses expériences ont été réalisées en situation réelle et ont permis de montrer l'efficacité des solutions proposées. / The aim of this thesis is to perform various autonomous navigation missions in indoor and cluttered environments with mobile robots. The environment perception is ensured by an embedded stereo-rig and a visual odometry algorithm which computes the localization of the robot. However, when the quality of the scene perceived by the cameras is poor, the visual localization cannot be computed with a high precision. 
Two solutions are proposed to tackle this problem. The first one is the fusion of data from multiple sensors to perform a robust computation of the localization. The second is the prediction of future scene quality in order to adapt the robot's trajectory and ensure that the localization remains accurate. In both cases, the control loop is based on model predictive control, which makes it possible to consider simultaneously the different objectives of the mission: waypoint navigation, exploration, obstacle avoidance. A second issue studied is waypoint navigation with avoidance of mobile obstacles using only visual information. The mobile obstacles are detected in the images, and their position and velocity are estimated in order to predict their future trajectory and take it into account in the control strategy. Numerous experiments were carried out and demonstrated the effectiveness of the proposed solutions.
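The receding-horizon idea behind model predictive control can be sketched on a toy problem. The point-robot scenario, cost terms, and brute-force search over a velocity grid below are all invented stand-ins (a real MPC solver and the thesis's perception-quality objectives are far richer):

```python
import numpy as np
from itertools import product

# Toy receding-horizon control: a point robot reaches a waypoint while
# avoiding a disc obstacle. At every step, candidate constant velocities are
# rolled out over the horizon and the cheapest predicted trajectory wins.
dt, H = 0.5, 5                              # timestep and prediction horizon
goal = np.array([5.0, 0.0])
obstacle, r_obs = np.array([2.5, 0.0]), 0.8

def cost(traj):
    c = sum(np.linalg.norm(p - goal) for p in traj)
    for p in traj:                          # hard penalty inside the obstacle disc
        if np.linalg.norm(p - obstacle) < r_obs:
            c += 1e3
    return c

# Candidate inputs: constant velocities on a 3x3 grid (a crude stand-in for
# a real optimiser).
controls = [np.array(u, dtype=float) for u in product((-1.0, 0.0, 1.0), repeat=2)]

pos = np.array([0.0, 0.0])
clearance = []
for _ in range(30):
    best = min(controls,
               key=lambda u: cost([pos + u * dt * k for k in range(1, H + 1)]))
    pos = pos + best * dt                   # apply only the first step, then re-plan
    clearance.append(np.linalg.norm(pos - obstacle))
```

Because only the first control of each plan is executed before re-planning, the robot detours around the obstacle disc and settles near the waypoint.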
85

Navigace pomocí hlubokých konvolučních sítí / Navigation Using Deep Convolutional Networks

Skácel, Dalibor January 2018 (has links)
In this thesis I deal with the problem of navigation and autonomous driving using convolutional neural networks. I focus on the main approaches utilizing sensory inputs described in the literature, and on the theory of neural networks, imitation learning, and reinforcement learning. I also discuss the tools and methods applicable to driving systems. I created two deep learning models for autonomous driving in a simulated environment. These models use the Dataset Aggregation (DAgger) and Deep Deterministic Policy Gradient (DDPG) algorithms. I tested the created models in the TORCS car racing simulator and compared the results with available published work.
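Of the two algorithms named, Dataset Aggregation (DAgger) is the simpler to sketch. The 1-D lane-keeping task, proportional "expert", and linear learner below are invented for illustration; only the aggregate-and-refit loop reflects the actual algorithm.

```python
import numpy as np

# Toy DAgger loop: a linear learner imitates a proportional "expert" on a
# hypothetical 1-D lane-keeping task (state = lateral offset).
rng = np.random.default_rng(0)

def expert(offset):
    return -0.5 * offset            # expert steers back toward the lane centre

def fit(states, actions):
    s, a = np.array(states), np.array(actions)
    w = float(s @ a / (s @ s))      # least-squares fit of action = w * state
    return lambda x: w * x

states, actions = [], []

# Round 0: plain behavioural cloning on an expert rollout.
x = 2.0
for _ in range(20):
    states.append(x); actions.append(expert(x))
    x = x + expert(x)
policy = fit(states, actions)

# DAgger rounds: the *learner* drives, the expert labels the states the
# learner actually visits, and the aggregated dataset is refit.
for _ in range(3):
    x = 2.0
    for _ in range(20):
        states.append(x); actions.append(expert(x))
        x = x + policy(x) + rng.normal(0.0, 0.01)
    policy = fit(states, actions)
```

The key difference from plain cloning is that later rounds collect labels on the learner's own state distribution, which is what prevents compounding errors in closed-loop driving.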
86

Navigace pomocí hlubokých konvolučních sítí / Navigation Using Deep Convolutional Networks

Skácel, Dalibor January 2018 (has links)
This thesis studies navigation and autonomous driving using convolutional neural networks. It presents the main approaches to this problem used in the literature, and describes the theory of neural networks, imitation learning, and reinforcement learning, as well as tools and methods suitable for a driving system. Two simulated driving models were created using the learning algorithms DAGGER and DDPG, and then tested in the TORCS car racing simulator.
87

Aplikace geometrických algeber / Geometric algebra applications

Machálek, Lukáš January 2021 (has links)
This master's thesis deals with the use of the geometric algebra for conics (GAC) in autonomous navigation, demonstrated on the motion of a robot inside a pipe. First, the theoretical notions of geometric algebras are introduced. Conics in GAC are then presented. Next, an engine is implemented that can perform the basic operations in GAC, including the rendering of conics specified in the GAC context. Finally, an algorithm is shown that estimates the pipe axis from points placed in space using the centres of ellipses located in the image, obtained with an image filter and a fitting algorithm.
88

Détection d’obstacles par stéréovision en environnement non structuré / Obstacle detection by stereovision in unstructured environments

Dujardin, Aymeric 03 July 2018 (has links)
Les robots et véhicules autonomes représentent le futur des modes de déplacement et de production. Les enjeux de l’avenir reposent sur la robustesse de leur perception et sur leur flexibilité face aux environnements changeants et aux situations inattendues. Les capteurs stéréoscopiques sont des capteurs passifs qui permettent d'obtenir à la fois image et information 3D de la scène à la manière de la vision humaine. Dans ces travaux nous avons développé un système de localisation par odométrie visuelle, permettant de déterminer la position dans l'espace du capteur de façon efficace et performante en tirant parti de la carte de profondeur dense, également associé à un système de SLAM, rendant la localisation robuste aux perturbations et aux décalages potentiels. Nous avons également développé plusieurs solutions de cartographie et d’interprétation d’obstacles, à la fois pour le véhicule aérien et terrestre. Ces travaux sont en partie intégrés dans des produits commerciaux. / Autonomous vehicles and robots represent the future of the transportation and production industries. The challenge ahead will come from the robustness of perception and from flexibility in the face of unexpected situations and changing environments. Stereoscopic cameras are passive sensors that provide colour images and depth information of the scene by correlating two images, much like human vision. In this work, we developed a localization system based on visual odometry that can efficiently determine the position of the sensor in space by exploiting the dense depth map. It is also combined with a SLAM system that makes the localization robust to disturbances and potential drifts. Additionally, we developed several mapping and obstacle-detection solutions, for both aerial and terrestrial vehicles. These algorithms are now partly integrated into commercial products.
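One building block of stereo visual odometry like that described above is recovering the sensor motion (R, t) from matched 3-D points triangulated in two successive frames. The sketch below uses the classical Kabsch/Umeyama least-squares alignment on synthetic, noise-free data (the thesis's full pipeline, with feature matching and outlier rejection, is of course richer):

```python
import numpy as np

def rigid_transform(P, Q):
    # Least-squares R, t with Q ≈ R @ P + t (Kabsch/Umeyama, no scale).
    # P and Q are 3xN arrays of matched 3-D points.
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(0)
P = rng.uniform(-1.0, 1.0, (3, 30))                 # points in the previous frame
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[0.5], [-0.2], [0.1]])
Q = R_true @ P + t_true                             # points in the current frame
R_est, t_est = rigid_transform(P, Q)
```

On noise-free correspondences the recovered motion is exact up to floating-point error; chaining such frame-to-frame estimates is what accumulates into the odometry trajectory (and why a SLAM back-end is needed to contain drift).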
89

Localisation visuelle multimodale visible/infrarouge pour la navigation autonome / Multimodal visible/infrared visual localisation for autonomous navigation

Bonardi, Fabien 23 November 2017 (has links)
On regroupe sous l’expression navigation autonome l’ensemble des méthodes visant à automatiser les déplacements d’un robot mobile. Les travaux présentés se concentrent sur la problématique de la localisation en milieu extérieur, urbain et périurbain, et approchent la problématique de la localisation visuelle soumise à la fois à un changement de capteurs (géométrie et modalité) ainsi qu’aux changements de l’environnement à long terme, contraintes combinées encore très peu étudiées dans l’état de l’art. Les recherches menées dans le cadre de cette thèse ont porté sur l’utilisation exclusive de capteurs de vision. La contribution majeure de cette thèse porte sur la phase de description et compression des données issues des images sous la forme d’un histogramme de mots visuels que nous avons nommée PHROG (Plural Histograms of Restricted Oriented Gradients). Les expériences menées ont été réalisées sur plusieurs bases d’images avec différentes modalités visibles et infrarouges. Les résultats obtenus démontrent une amélioration des performances de reconnaissance de scènes comparées aux méthodes de l’état de l’art. Par la suite, nous nous intéressons à la nature séquentielle des images acquises dans un contexte de navigation afin de filtrer et supprimer des estimations de localisation aberrantes. Les concepts d’un cadre probabiliste bayésien permettent deux applications de filtrage probabiliste appliquées à notre problématique : une première solution définit un modèle de déplacement simple du robot avec un filtre d’histogrammes et la deuxième met en place un modèle plus évolué faisant appel à l’odométrie visuelle au sein d’un filtre particulaire. / The field of autonomous navigation gathers the set of methods which automate the moves of a mobile robot.
The case study of this thesis focuses on the outdoor localisation issue with additional constraints: the use of visual sensors only, with variable specifications (geometry, modality, etc.), and long-term appearance changes of the surrounding environment. Both types of constraints are still rarely studied in the state of the art. Our main contribution concerns the description and compression steps of the data extracted from images. We developed a method called PHROG which represents the data as a visual-words histogram. Results obtained on several image datasets show an improvement in scene recognition performance compared to state-of-the-art methods. In a navigation context, acquired images are sequential, so we can envision a filtering method to discard faulty localisation estimates. Two probabilistic filtering approaches are proposed: the first defines a simple movement model with a histogram filter, and the second sets up a more complex model using visual odometry and a particle filter.
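PHROG's exact descriptor is not specified in this abstract, but the general idea of representing an image as a histogram over a visual vocabulary can be sketched generically. Everything below (random vocabulary, Gaussian "descriptors", L1 comparison) is an illustrative assumption, not the PHROG method itself:

```python
import numpy as np

# Generic bag-of-visual-words sketch: local descriptors are quantised to the
# nearest "visual word" and the image is summarised as a word histogram.
rng = np.random.default_rng(0)
vocab = rng.normal(size=(8, 16))   # 8 visual words, 16-D descriptors (toy sizes)

def bovw_histogram(descriptors):
    # Assign each local descriptor to its nearest visual word, then count.
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

# Three "images": a scene, a re-observation of it (same descriptors plus a
# little noise), and an unrelated scene.
scene = rng.normal(size=(100, 16))
same = scene + rng.normal(scale=0.01, size=scene.shape)
other = rng.normal(size=(100, 16))

h0, h1, h2 = map(bovw_histogram, (scene, same, other))
d_same = np.abs(h0 - h1).sum()     # L1 distance between histograms
d_other = np.abs(h0 - h2).sum()
```

Re-observations of the same place produce nearly identical histograms while different places do not, which is the property scene-recognition methods in this family exploit.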
90

Calibrage de caméra fisheye et estimation de la profondeur pour la navigation autonome

Brousseau, Pierre-André 08 1900 (has links)
Ce mémoire s’intéresse aux problématiques du calibrage de caméras grand angle et de l’estimation de la profondeur à partir d’une caméra unique, immobile ou en mouvement. Les travaux effectués se situent à l’intersection entre la vision 3D classique et les nouvelles méthodes par apprentissage profond dans le domaine de la navigation autonome. Ils visent à permettre la détection d’obstacles par un drone en mouvement muni d’une seule caméra à très grand angle de vue. D’abord, une nouvelle méthode de calibrage est proposée pour les caméras fisheyes à très grand angle de vue, par calibrage planaire à correspondances denses obtenues par lumière structurée, qui peuvent être modélisées par un ensemble de caméras génériques virtuelles centrales. Nous démontrons que cette approche permet de modéliser directement des caméras axiales, et la validons sur des données synthétiques et réelles. Ensuite, une méthode est proposée pour estimer la profondeur à partir d’une seule image, à partir uniquement des indices de profondeur forts, les jonctions en T. Nous démontrons que les méthodes par apprentissage profond sont susceptibles d’apprendre les biais de leurs ensembles de données et présentent des lacunes d’invariance. Finalement, nous proposons une méthode pour estimer la profondeur à partir d’une caméra en mouvement libre à 6 degrés de liberté. Ceci passe par le calibrage de la caméra fisheye sur le drone, l’odométrie visuelle et la résolution de la profondeur. Les méthodes proposées permettent la détection d’obstacles pour un drone. / This thesis focuses on the problems of calibrating wide-angle cameras and estimating depth from a single camera, stationary or in motion. The work carried out is at the intersection between traditional 3D vision and new deep learning methods in the field of autonomous navigation. It is designed to allow the detection of obstacles by a moving drone equipped with a single camera with a very wide field of view.
First, a new calibration method is proposed for fisheye cameras with a very large field of view, by planar calibration with dense correspondences obtained by structured light, which can be modelled by a set of central virtual generic cameras. We demonstrate that this approach allows direct modelling of axial cameras, and validate it on synthetic and real data. Then, a method is proposed to estimate depth from a single image, using only strong depth cues, namely T-junctions. We demonstrate that deep learning methods are prone to learning the biases of their datasets and show gaps in invariance. Finally, we propose a method to estimate depth from a camera in free motion with 6 degrees of freedom. This involves calibrating the fisheye camera on the drone, visual odometry, and depth resolution. The proposed methods enable obstacle detection for a drone.
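Depth from a moving camera ultimately rests on triangulation: once the camera poses are known from calibration and odometry, a scene point is recovered where the viewing rays from two positions (nearly) intersect. The midpoint-style least-squares sketch below uses synthetic cameras and a synthetic point, purely to illustrate the geometry:

```python
import numpy as np

def triangulate(centers, dirs):
    # Least-squares 3-D point closest to a set of viewing rays c + t*d:
    # minimise the sum of squared distances to each ray.
    S = np.zeros((3, 3))
    rhs = np.zeros(3)
    for c, d in zip(centers, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        S += P
        rhs += P @ c
    return np.linalg.solve(S, rhs)

# Synthetic scene point and two camera positions (the camera moved 0.5 m
# sideways between views, as visual odometry would report).
X_true = np.array([1.0, 2.0, 8.0])
c1, c2 = np.zeros(3), np.array([0.5, 0.0, 0.0])
x = triangulate([c1, c2], [X_true - c1, X_true - c2])
```

With exact rays the point is recovered exactly; in practice the accuracy degrades as the baseline shrinks relative to the depth, which is why obstacle detection from a single moving camera depends so strongly on good pose estimates.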
