1

Vision-based Navigation for Mobile Robots on Ill-structured Roads

Lee, Hyun Nam 16 January 2010 (has links)
Autonomous robots can replace humans in exploring hostile areas, such as Mars and other inhospitable regions. A fundamental task for an autonomous robot is navigation. Due to the inherent difficulty of understanding natural objects and changing environments, navigation in unstructured environments, such as natural terrain, remains largely unsolved. However, studying navigation in ill-structured environments [1], where roads do not disappear completely, improves our understanding of these difficulties. We develop algorithms for robot navigation on ill-structured roads with monocular vision based on two elements: appearance information and geometric information. The fundamental problem of appearance-based navigation is road representation. We propose a new type of road description, a vision vector space (V2-Space), which is a set of local collision-free directions in image space. We report how the V2-Space is constructed and how it can be used to incorporate vehicle kinematic, dynamic, and time-delay constraints in motion planning. Failures occur due to the limitations of appearance-based navigation, such as a lack of geometric information, so we expand the research to consider geometric information and present a vision-based navigation system that uses it. To compute depth with monocular vision, we use images obtained from different camera perspectives during robot navigation. For any given image pair, the depth error in regions close to the camera baseline can be excessively large. This degenerate region is called the untrusted area, and it can lead to collisions. We analyze how untrusted areas are distributed on the road plane and predict them before the robot makes its move. We propose an algorithm that helps the robot avoid the untrusted area by selecting optimal locations from which to take frames while navigating. Experiments show that the algorithm significantly reduces the depth error and hence the risk of collisions. Although this approach is developed for monocular vision, it can be applied to multiple cameras to control the depth error. The concept of an untrusted area also applies to 3D reconstruction with a two-view approach.
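To make the untrusted-area idea concrete, the sketch below triangulates a point from two camera positions observed with noisy bearings and shows how the estimate degrades for points lying close to the baseline direction. The 2-D bearing-only setup, baseline length, and noise level are illustrative assumptions, not the algorithm developed in the thesis.

```python
# Minimal 2-D illustration (assumed setup, not the thesis's algorithm): two
# camera positions on a baseline observe a point through noisy bearing
# angles.  Points lying close to the baseline direction give nearly parallel
# rays, so the triangulated position becomes very sensitive to noise
# (the "untrusted area").
import numpy as np

def triangulate(c1, c2, theta1, theta2):
    """Intersect two bearing rays (angles w.r.t. the x-axis) from c1 and c2."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve c1 + t1*d1 = c2 + t2*d2 for (t1, t2).
    t = np.linalg.solve(np.column_stack((d1, -d2)), c2 - c1)
    return c1 + t[0] * d1

c1, c2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])   # 1 m baseline (assumed)
sigma = np.deg2rad(0.2)                               # assumed bearing noise

for point in (np.array([5.0, 4.0]),     # well away from the baseline
              np.array([5.0, 0.3])):    # close to the baseline direction
    th1 = np.arctan2(point[1] - c1[1], point[0] - c1[0])
    th2 = np.arctan2(point[1] - c2[1], point[0] - c2[0])
    errs = [np.linalg.norm(triangulate(c1, c2,
                                       th1 + sigma * np.random.randn(),
                                       th2 + sigma * np.random.randn()) - point)
            for _ in range(2000)]
    print(f"point {point}: median triangulation error {np.median(errs):.2f} m")
```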
2

Indoor navigation of mobile robots based on visual memory and image-based visual servoing

Bista, Suman Raj 20 December 2016 (has links)
This thesis presents a method for appearance-based navigation from an image memory by Image-Based Visual Servoing (IBVS). The entire navigation process is based on 2D image information without using any 3D information at all. The environment is represented by a set of reference images with overlapping landmarks, which are selected automatically during a prior learning phase. These reference images define the path to follow during navigation. The switching of reference images during navigation is done by comparing the currently acquired image with nearby reference images. Based on the current image and the two succeeding key images, the rotational velocity of a mobile robot is computed under an IBVS control law. First, we used the entire image as a feature, where the mutual information between reference images and the current view is exploited. Then, we used line segments for indoor navigation, where we showed that line segments are better features for structured indoor environments. Finally, we combined line segments with point-based features to extend the method to a wide range of indoor scenarios with smooth motion. Real-time navigation with a Pioneer 3DX equipped with an on-board perspective camera has been performed in an indoor environment. The obtained results confirm the viability of our approach and verify that accurate mapping and localization are not mandatory for useful indoor navigation.
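As a rough illustration of computing a rotational velocity from image information alone, the following sketch applies a classical single-feature, IBVS-style proportional law. The single-feature simplification, gain, and sign convention are assumptions; the thesis's mutual-information and line-segment formulations are not reproduced here.

```python
# Illustrative single-feature IBVS-style yaw-rate command (an assumed
# simplification, not the thesis's mutual-information or line-segment
# formulations).  For a normalized image abscissa x, a rotation about the
# camera's vertical axis changes x at a rate proportional to (1 + x^2), so a
# proportional law regulating x toward its value in the next key image is
#     wy = -gain * (x - x_key) / (1 + x^2)
# The sign depends on the chosen camera/robot frame convention.

def yaw_rate_command(x_current: float, x_key: float, gain: float = 0.5) -> float:
    """Rotational velocity (rad/s) that drives the feature toward the key image."""
    error = x_current - x_key
    return -gain * error / (1.0 + x_current ** 2)

# Example: the feature appears 0.2 normalized units to the right of its
# position in the key image, so a corrective rotation is commanded.
print(yaw_rate_command(x_current=0.3, x_key=0.1))
```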
3

Autonomous Goal-Based Mapping and Navigation Using a Ground Robot

Ferrin, Jeffrey L. 01 December 2016 (has links)
Ground robotic vehicles are used in many different applications. Many of these uses include tele-operation of the robot. This allows the robot to be deployed in locations that are too difficult or are unsafe for human access. The ability of a ground robot to autonomously navigate to a desired location without a-priori map information and without using GPS would allow robotic vehicles to be used in many of these situations and would free the operator to focus on other more important tasks. The purpose of this research is to develop algorithms that enable a ground robot to autonomously navigate to a user-selected location. The goal is selected from a video feed from the robot and the robot drives to the goal location while avoiding obstacles. The method uses a monocular camera for measuring the locations of the goal and landmarks. The method is validated in simulation and through experiments on an iRobot Packbot platform. A novel goal-based robocentric mapping algorithm is derived in Chapter 3. This map is created using an extended Kalman filter (EKF) by tracking the position of the goal along with other available landmarks surrounding the robot as it drives towards the goal. The mapping is robocentric, meaning that the map is a local map created in the robot-body frame. A unique state definition of the goal states and additional landmarks is presented that improves the estimate of the goal location. An improved 3D model is derived and used to allow the robot to drive on non-flat terrain while calculating the position of the goal and other landmarks. The observability and consistency of the proposed method are shown in Chapter 4. The visual tracking algorithm is explained in Chapter 5. This tracker is used with the EKF to improve tracking performance and to allow the objects to be tracked even after leaving the camera field of view for significant periods of time. This problem presents a difficult challenge for visual tracking because of the drastic change in size of the goal object as the robot approaches the goal. The tracking method is validated through experiments in real-world scenarios. The method of planning and control is derived in Chapter 6. A Model Predictive Control (MPC) formulation is designed that explicitly handles the sensor constraints of a monocular camera that is rigidly mounted to the vehicle. The MPC uses an observability-based cost function to drive the robot along a path that minimizes the position error of the goal in the robot-body frame. The MPC algorithm also avoids obstacles while driving to the goal. The conditions are explained that guarantee the robot will arrive within some specified distance of the goal. The entire system is implemented on an iRobot Packbot and experiments are conducted and presented in Chapter 7. The methods described in this work are shown to work on actual hardware allowing the robot to arrive at a user-selected goal in real-world scenarios.
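A minimal planar sketch of the robocentric EKF idea follows. The 2-D goal state, bearing-only measurement model, and noise values are illustrative assumptions rather than the goal-state parameterization derived in Chapter 3.

```python
# Minimal robocentric EKF sketch (an assumed 2-D planar version, not the
# thesis's full goal-state parameterization): the state is the goal position
# expressed in the robot-body frame, propagated with odometry and corrected
# with a monocular bearing measurement.
import numpy as np

def predict(p, P, v, w, dt, Q):
    """Robot motion (v forward, w yaw rate) moves a static goal in the body frame."""
    c, s = np.cos(-w * dt), np.sin(-w * dt)
    R = np.array([[c, -s], [s, c]])
    p_new = R @ (p - np.array([v * dt, 0.0]))
    return p_new, R @ P @ R.T + Q          # R is also the Jacobian w.r.t. p

def update(p, P, bearing, r_var):
    """A monocular camera measures only the bearing to the goal."""
    x, y = p
    h = np.arctan2(y, x)
    H = np.array([[-y, x]]) / (x ** 2 + y ** 2)     # Jacobian of atan2(y, x)
    S = H @ P @ H.T + r_var
    K = P @ H.T / S
    innov = np.arctan2(np.sin(bearing - h), np.cos(bearing - h))  # wrap angle
    return p + (K * innov).ravel(), (np.eye(2) - K @ H) @ P

# Example: goal believed to be 5 m ahead and 1 m to the left.
p, P = np.array([5.0, 1.0]), np.eye(2) * 0.5
p, P = predict(p, P, v=0.5, w=0.1, dt=0.1, Q=np.eye(2) * 1e-3)
p, P = update(p, P, bearing=0.20, r_var=np.deg2rad(1.0) ** 2)
print(p)
```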
4

Policy Hyperparameter Exploration for Behavioral Learning of Smartphone Robots

Wang, Jiexin 23 March 2017 (has links)
Kyoto University / 0048 / New system, course-based doctorate / Doctor of Informatics / Dissertation No. 20519 (Kou) / Informatics Doctorate No. 647 / 新制||情||112 (University Library) / Department of Systems Science, Graduate School of Informatics, Kyoto University / Examining committee: Prof. Shin Ishii (chair), Prof. Toshiharu Sugie, Prof. Toshiyuki Ohtsuka, Kenji Doya / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
5

Vision-Based Precision Landings of a Tailsitter UAV

Millet, Paul Travis 24 November 2009 (has links) (PDF)
We present a method of performing precision landings of a vertical take-off and landing (VTOL) unmanned air vehicle (UAV) with the use of an onboard vision sensor and information about the aircraft's orientation and altitude above ground level (AGL). A method for calculating the 3-dimensional location of the UAV relative to a ground target of interest is presented, as well as a navigational controller to position the UAV above the target. A method is also presented to prevent the UAV from moving in a way that will cause the ground target of interest to go out of view of the UAV's onboard camera. These methods are tested in simulation and in hardware, and the resulting data are shown. Hardware flight testing yielded an average position-estimation error of 22 centimeters. The method presented is capable of performing precision landings of VTOL UAVs with sub-meter accuracy.
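The sketch below illustrates a flat-ground version of locating a ground target from a single camera, a known attitude, and AGL altitude, by intersecting the back-projected pixel ray with the ground plane. The pinhole intrinsics, frame conventions, and numbers are assumptions, not the thesis's implementation.

```python
# Minimal flat-ground sketch (assumed pinhole camera and frame conventions,
# not the thesis's formulation): back-project the target pixel to a ray,
# rotate it into the inertial frame using the known attitude, and intersect
# it with the ground plane at the known AGL altitude.
import numpy as np

def target_offset(pixel, K, R_body_to_inertial, R_cam_to_body, agl):
    """Target position relative to the vehicle, inertial frame with z pointing down."""
    u, v = pixel
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])        # camera-frame ray
    ray_in = R_body_to_inertial @ (R_cam_to_body @ ray_cam)   # inertial-frame ray
    scale = agl / ray_in[2]           # stretch the ray until it reaches the ground
    return scale * ray_in             # [x, y, down] offset to the target

# Assumed numbers: a 640x480 camera looking straight down from 10 m AGL.
K = np.array([[400.0,   0.0, 320.0],
              [  0.0, 400.0, 240.0],
              [  0.0,   0.0,   1.0]])
R_cam_to_body = np.eye(3)             # camera aligned with the body (assumed)
R_body_to_inertial = np.eye(3)        # level hover (assumed)
print(target_offset((400, 240), K, R_body_to_inertial, R_cam_to_body, agl=10.0))
```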
6

Vision-Based Localization and Guidance for Unmanned Aerial Vehicles

Conte, Gianpaolo January 2009 (has links)
The thesis has been developed as part of the requirements for a PhD degree at the Artificial Intelligence and Integrated Computer Systems division (AIICS) in the Department of Computer and Information Science at Linköping University. The work focuses on issues related to Unmanned Aerial Vehicle (UAV) navigation, in particular in the areas of guidance and vision-based autonomous flight in situations of short- and long-term GPS outage. The thesis is divided into two parts. The first part presents a helicopter simulator and a path-following control mode developed and implemented on an experimental helicopter platform. The second part presents an approach to the problem of vision-based state estimation for autonomous aerial platforms which makes use of geo-referenced images for localization purposes. The problem of vision-based landing is also addressed, with emphasis on fusion between inertial sensors and a video camera using an artificial landing pad as a reference pattern. In the last chapter, a solution to a vision-based ground-object geo-location problem using a fixed-wing micro aerial vehicle platform is presented. The helicopter guidance and vision-based navigation methods developed in the thesis have been implemented and tested in real flight tests using a Yamaha RMAX helicopter. Extensive experimental flight-test results are presented. / WITAS
7

Vision-Based Emergency Landing of Small Unmanned Aircraft Systems

Lusk, Parker Chase 01 November 2018 (has links)
Emergency landing is a critical safety mechanism for aerial vehicles. Commercial aircraft have triply-redundant systems that greatly increase the probability that the pilot will be able to land the aircraft at a designated airfield in the event of an emergency. In general aviation, the chances of always reaching a designated airfield are lower, but the successful pilot might use landmarks and other visual information to safely land in unprepared locations. For small unmanned aircraft systems (sUAS), triply- or even doubly-redundant systems are unlikely due to size, weight, and power constraints. Additionally, there is a growing demand for beyond visual line of sight (BVLOS) operations, where an sUAS operator would be unable to guide the vehicle safely to the ground. This thesis presents a machine vision-based approach to emergency landing for small unmanned aircraft systems. In the event of an emergency, the vehicle uses a pre-compiled database of potential landing sites to select the most accessible location to land based on vehicle health. Because it is impossible to know the current state of any ground environment, a camera is used for real-time visual feedback. Using the recently developed Recursive-RANSAC algorithm, an arbitrary number of moving ground obstacles can be visually detected and tracked. If obstacles are present in the selected ditch site, the emergency landing system chooses a new ditch site to mitigate risk. This system is called Safe2Ditch.
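A minimal sketch of the ditch-site selection step is given below. The site and obstacle data layout, the reachability test, and the clearance rule are assumptions for illustration; neither Safe2Ditch nor the Recursive-RANSAC tracker is reproduced.

```python
# Minimal sketch of the ditch-site selection step (assumed scoring and data
# layout, not the Safe2Ditch implementation): pick the nearest reachable
# pre-compiled site whose area is currently free of tracked obstacles.
from dataclasses import dataclass
import math

@dataclass
class Site:
    name: str
    north: float      # m, relative to the vehicle
    east: float
    radius: float     # usable landing radius, m

def select_site(sites, obstacles, max_range):
    """Closest site within reach with no tracked obstacle inside its radius."""
    def dist(s):
        return math.hypot(s.north, s.east)

    def is_clear(s):
        return all(math.hypot(ox - s.north, oy - s.east) > s.radius
                   for ox, oy in obstacles)

    candidates = [s for s in sites if dist(s) <= max_range and is_clear(s)]
    return min(candidates, key=dist, default=None)

sites = [Site("field_A", 120.0, 40.0, 15.0), Site("lot_B", 80.0, -60.0, 10.0)]
obstacles = [(82.0, -58.0)]           # a tracked mover inside lot_B (assumed)
print(select_site(sites, obstacles, max_range=500.0))    # -> field_A
```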
8

Autonomous vision-based terrain-relative navigation for planetary exploration

Simard Bilodeau, Vincent January 2015 (has links)
Abstract: The interest of major space agencies in vision sensors for their mission designs has been increasing over the years. Indeed, cameras offer an efficient solution to address the ever-increasing requirements in performance. In addition, these sensors are multipurpose, lightweight, proven, and low-cost. Several researchers in vision sensing for space applications currently focus on the navigation system for autonomous pinpoint planetary landing and for sample-and-return missions to small bodies. In fact, without a Global Positioning System (GPS) or radio beacons around celestial bodies, high-accuracy navigation around them is a complex task. Most navigation systems are based only on accurate initialization of the states and on the integration of the acceleration and angular-rate measurements from an Inertial Measurement Unit (IMU). This strategy can track sudden motions of short duration very accurately, but the estimate diverges over time and normally leads to large landing errors. In order to improve navigation accuracy, many authors have proposed to fuse these IMU measurements with vision measurements using state estimators, such as Kalman filters. The first proposed vision-based navigation approach relies on feature tracking between sequences of images taken in real time during orbiting and/or landing operations. In that case, image features are image pixels that have a high probability of being recognized between images taken from different camera locations. By detecting and tracking these features through a sequence of images, the relative motion of the spacecraft can be determined. This technique, referred to as Terrain-Relative Relative Navigation (TRRN), relies on relatively simple, robust and well-developed image processing techniques. It allows the determination of the relative motion (velocity) of the spacecraft. Although this technology has been demonstrated with space-qualified hardware, its gain in accuracy remains limited since the spacecraft's absolute position is not observable from the vision measurements. The vision-based navigation techniques currently studied consist in identifying features and mapping them into an on-board cartographic database indexed by an absolute coordinate system, thereby providing absolute position determination. This technique, referred to as Terrain-Relative Absolute Navigation (TRAN), relies on very complex Image Processing Software (IPS) that often lacks robustness. In fact, this software often depends on the spacecraft attitude and position; it is sensitive to illumination conditions (the elevation and azimuth of the Sun when the geo-referenced database is built must be similar to those present during the mission); it is strongly affected by image noise; and it struggles with the multiple varieties of terrain seen during the same mission (the spacecraft can fly over plains as well as mountainous regions, and the images may contain old craters with noisy rims as well as young craters with clean rims, and so on). To date, no real-time hardware-in-the-loop experiment has been conducted to demonstrate the applicability of this technology to space missions. The main objective of the current study is to develop autonomous vision-based navigation algorithms that provide absolute position and surface-relative velocity during the proximity operations of a planetary mission (orbiting phase and landing phase) using a combined approach of TRRN and TRAN technologies.
The contributions of the study are: (1) reference mission definition, (2) advancements in the TRAN theory (image processing as well as state estimation), and (3) practical implementation of vision-based navigation.
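To illustrate why fusing sparse absolute vision fixes with IMU integration bounds the drift described above, here is a deliberately simplified one-dimensional Kalman-filter sketch. The along-track model driven by a biased accelerometer, the noise values, and the fix rate are assumptions, not the navigation filter developed in the thesis.

```python
# Deliberately simplified 1-D fusion sketch (assumed model and noise values,
# not the thesis's navigation filter): integrating a biased accelerometer
# alone makes the position estimate drift, while sparse absolute position
# fixes from terrain-relative matching keep the error bounded.
import numpy as np

dt, n_steps = 0.01, 2000
rng = np.random.default_rng(0)

x = np.zeros(2)                         # estimated [position, velocity]
P = np.eye(2) * 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt ** 2, dt])
Q = np.eye(2) * 1e-6
R_vision = 4.0                          # assumed 2 m (1-sigma) absolute fix

true_pos = true_vel = 0.0
for k in range(n_steps):
    accel_true = np.sin(0.01 * k)                        # arbitrary true motion
    true_vel += accel_true * dt
    true_pos += true_vel * dt
    accel_meas = accel_true + rng.normal(0.0, 0.05) + 0.02   # noise plus bias

    x = F @ x + B * accel_meas                           # IMU-driven prediction
    P = F @ P @ F.T + Q

    if k % 500 == 499:                                   # sparse absolute fixes
        z = true_pos + rng.normal(0.0, np.sqrt(R_vision))
        H = np.array([[1.0, 0.0]])
        S = H @ P @ H.T + R_vision
        K = (P @ H.T) / S
        x = x + (K * (z - x[0])).ravel()
        P = (np.eye(2) - K @ H) @ P

print(f"final position error: {abs(x[0] - true_pos):.2f} m")
```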
9

Photometric study of Jupiter's moons

Belgacem, Ines 15 November 2019 (has links)
Jupiter's icy moons are of great interest in the search for habitability in our Solar System. They probably all harbor a liquid water ocean underneath their icy crust. Their surfaces present different stages of evolution: Callisto is heavily cratered and the oldest, Ganymede shows a combination of dark cratered terrain and younger bright plains, and Europa's surface is the youngest, with signs of recent and perhaps current activity. This work focuses on photometry, i.e. the study of the light scattered by a surface in relation to the illumination and observation geometry. Photometric studies give us insight into the physical state and microtexture of the surface (compaction, internal structure, grain shape, roughness, transparency, and so on). Good photometric knowledge is also of crucial importance for correcting datasets for any mapping or spectroscopic study, as well as for the future missions of this decade: NASA's Europa Clipper and ESA's JUpiter ICy moons Explorer. Two pieces of information are necessary to conduct a photometric study: reflectance data and geometric information (illumination and viewing conditions). For the former, we have used and calibrated images from past space missions (Voyager, New Horizons, and Galileo). For the latter, we have developed tools to correct the metadata of these images (e.g. spacecraft position and orientation) to derive precise geometric information. Moreover, we have developed a Bayesian inversion tool to estimate Hapke's photometric parameters on regions of Europa, Ganymede, and Callisto, estimating all parameters on our entire dataset at once. Finally, we discuss the possible links between the photometric parameters, the surface microtexture, and endogenic/exogenic processes.
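As a toy illustration of the inversion step, the sketch below fits a deliberately simplified reflectance model (a Lommel-Seeliger disk term multiplied by an exponential phase function) to synthetic data with non-linear least squares. It is neither the Hapke model nor the Bayesian framework used in the thesis, and all data and parameter values are synthetic assumptions.

```python
# Toy inversion sketch with a deliberately simplified photometric model (a
# Lommel-Seeliger disk term times an exponential phase function), NOT the
# Hapke model or the Bayesian framework used in the thesis; all data are
# synthetic and the parameter values are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def reflectance(X, albedo, beta):
    """r(i, e, alpha) = albedo * exp(-beta * alpha) * mu0 / (mu0 + mu)."""
    inc, emi, phase = X
    mu0, mu = np.cos(inc), np.cos(emi)
    return albedo * np.exp(-beta * phase) * mu0 / (mu0 + mu)

rng = np.random.default_rng(1)
inc = rng.uniform(0.0, 1.2, 300)        # incidence angles (rad), synthetic
emi = rng.uniform(0.0, 1.2, 300)        # emission angles (rad), synthetic
phase = rng.uniform(0.1, 2.0, 300)      # phase angles (rad), synthetic
truth = (0.6, 0.8)                      # "true" albedo and phase coefficient
obs = reflectance((inc, emi, phase), *truth) * (1 + rng.normal(0, 0.03, 300))

params, cov = curve_fit(reflectance, (inc, emi, phase), obs, p0=(0.3, 0.5))
print("estimated albedo, beta:", params, "+/-", np.sqrt(np.diag(cov)))
```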
10

Autonomous Navigation in Partially-Known Environment using Nano Drones with AI-based Obstacle Avoidance: A Vision-based Reactive Planning Approach for Autonomous Navigation of Nano Drones

Sartori, Mattia January 2023 (has links)
The adoption of small-size Unmanned Aerial Vehicles (UAVs) in the commercial and professional sectors is rapidly growing. The miniaturisation of sensors and processors, the advancements in connected edge intelligence and the exponential interest in Artificial Intelligence (AI) are driving the emergence of autonomous nano-size drones in the Internet of Things (IoT) ecosystem. However, achieving safe autonomous navigation and high-level tasks like exploration and surveillance with these tiny platforms is extremely challenging due to their limited resources. Lightweight and reliable solutions to this challenge are the subject of ongoing research. This work focuses on enabling the autonomous flight of a pocket-size, 30-gram platform called Crazyflie in a partially known environment. We implement a modular pipeline for the safe navigation of the nano drone between waypoints. In particular, we propose an AI-aided, vision-based reactive planning method for obstacle avoidance. We deal with the constraints of the nano drone by splitting the navigation task into two parts: a deep learning-based object detector runs on external hardware while the planning algorithm is executed onboard. For designing the reactive approach, we take inspiration from existing sensor-based navigation solutions and obtain a novel method for obstacle avoidance that does not rely on distance information. In the study, we also analyse the communication aspect and the latencies involved in edge offloading. Moreover, we share insights into the fine-tuning of an SSD MobileNet V2 object detector on a custom dataset of low-resolution, grayscale images acquired with the drone. The results show the ability to command the drone at ∼ 8 FPS and a model performance reaching a COCO mAP of 60.8. Field experiments demonstrate the feasibility of the solution, with the drone flying at a top speed of 1 m/s while steering away from an obstacle placed in an unknown position and reaching the target destination. Additionally, we study the impact of a parameter determining the strength of the avoidance action and its influence on total path length, traversal time and task completion. The outcome demonstrates the compatibility of the communication delay and the model performance with the requirements of the real-time navigation task, and a successful obstacle-avoidance rate reaching 100% in the best-case scenario. By exploiting the modularity of the proposed pipeline, future work could target the improvement of the individual parts and aim at a fully onboard implementation of the navigation task, pushing the boundaries of autonomous exploration with nano drones.
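The distance-free reactive idea can be sketched as follows. The bounding-box format, the central safety window, and the scaling by an avoidance-strength parameter are illustrative assumptions rather than the planner implemented in the thesis.

```python
# Minimal sketch of a distance-free reactive avoidance rule (assumed box
# format and thresholds, not the thesis's planner): a detection that
# intrudes into a central safety window of the image triggers a lateral
# steering command away from it, scaled by an avoidance-strength parameter.
def avoidance_command(boxes, img_width=320, window=0.4, strength=1.0):
    """boxes: list of (x_min, x_max) pixel extents of detected obstacles.
    Returns a steering command in [-strength, strength]; the sign convention
    (here negative = steer left) is an assumption."""
    center = img_width / 2.0
    half_window = window * img_width / 2.0
    command = 0.0
    for x_min, x_max in boxes:
        # React only if the box overlaps the central safety window.
        if x_max > center - half_window and x_min < center + half_window:
            side = -1.0 if 0.5 * (x_min + x_max) >= center else 1.0
            size = (x_max - x_min) / img_width    # larger box, stronger reaction
            command += side * strength * size
    return max(-strength, min(strength, command))

# Example: a detection right of the image centre yields a negative command,
# i.e. steer left, away from the obstacle.
print(avoidance_command([(150, 230)]))
```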
