11 |
A Vision-Based Relative Navigation Approach for Autonomous Multirotor Aircraft. Leishman, Robert C., 29 April 2013.
Autonomous flight in unstructured, confined, and unknown GPS-denied environments is a challenging problem. Solutions could be tremendously beneficial for scenarios that require information about areas that are difficult to access and that present a great amount of risk. The goal of this research is to develop a new framework that enables improved solutions to this problem and to validate the approach with experiments using a hardware prototype. In Chapter 2 we examine the consequences and practical aspects of using an improved dynamic model for multirotor state estimation, using only IMU measurements. The improved model correctly explains the measurements available from the accelerometers on a multirotor. We provide hardware results demonstrating the improved attitude, velocity, and even position estimates that can be achieved through the use of this model. We propose a new architecture to simplify some of the challenges that constrain GPS-denied aerial flight in Chapter 3. At the core, the approach combines visual graph-SLAM with a multiplicative extended Kalman filter (MEKF). More importantly, we depart from the common practice of estimating global states and instead keep the position and yaw states of the MEKF relative to the current node in the map. This relative navigation approach provides a tremendous benefit compared to maintaining estimates with respect to a single global coordinate frame. We discuss the architecture of this new system and provide important details for each component. We verify the approach with goal-directed autonomous flight-test results. The MEKF is the basis of the new relative navigation approach and is detailed in Chapter 4. We derive the relative filter and show how the states must be augmented and marginalized each time a new node is declared. The relative estimation approach is verified using hardware flight-test results accompanied by comparisons to motion capture truth. Additionally, flight results with estimates in the control loop are provided. We believe that the relative, vision-based framework described in this work is an important step in furthering the capabilities of indoor aerial navigation in confined, unknown environments. Current approaches incur challenging problems by requiring globally referenced states. Utilizing a relative approach allows more flexibility, as the critical, real-time processes of localization and control do not depend on computationally demanding optimization and loop-closure processes.
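To make the augment-and-marginalize step concrete, the following is a minimal sketch of the node-declaration bookkeeping in a relative filter. It is not the thesis implementation; the state ordering, which states are node-relative, and the reset prior are assumptions made for illustration.

```python
# Sketch of the "new node" step in a relative-navigation filter: the current
# node-relative position/yaw estimate becomes a graph edge for the back end,
# and the front-end relative states are reset for the next node frame.
import numpy as np

def declare_new_node(x, P, rel_idx=(0, 1, 2, 3)):
    """x: filter state with [px, py, pz, yaw] relative to the current node at
    indices rel_idx (remaining states, e.g. velocity and biases, are kept).
    P: state covariance.  Returns the edge (relative transform + marginal
    covariance) handed to graph-SLAM and the reset state/covariance."""
    idx = np.array(rel_idx)
    edge_mean = x[idx].copy()                 # node_k -> node_{k+1} transform
    edge_cov = P[np.ix_(idx, idx)].copy()     # its marginal covariance

    x_new, P_new = x.copy(), P.copy()
    x_new[idx] = 0.0                          # pose now measured from the new node
    P_new[idx, :] = 0.0                       # drop cross-correlations with the old frame
    P_new[:, idx] = 0.0
    P_new[np.ix_(idx, idx)] = np.eye(len(idx)) * 1e-6  # small prior on fresh relative states
    return edge_mean, edge_cov, x_new, P_new

# toy usage: 10-dim state with the relative pose in the first four slots
x = np.random.randn(10)
P = np.eye(10) * 0.05
edge, edge_cov, x, P = declare_new_node(x, P)
```

The key design point is that the back-end graph optimization can consume the edge asynchronously, while the front-end filter keeps running in the new node frame without waiting on loop-closure processing.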
|
12 |
UAV Navigation and Radar Odometry. Quist, Eric Blaine, 01 March 2015.
Prior to the wide deployment of robotic systems, they must be able to navigate autonomously. These systems cannot rely on good weather or daytime navigation, and they must also be able to navigate in unknown environments. All of this must take place without human interaction. A majority of modern autonomous systems rely on GPS for position estimation. While GPS solutions are readily available, GPS is often lost and may even be jammed. To this end, a significant amount of research has focused on GPS-denied navigation. Many GPS-denied solutions rely on known environmental features for navigation. Others use vision sensors, which often perform poorly at high altitudes and are limited in poor weather. In contrast, radar systems accurately measure range at high and low altitudes. Additionally, these systems remain unaffected by inclement weather. This dissertation develops the use of radar odometry for GPS-denied navigation. Using the range progression of unknown environmental features, the aircraft's motion is estimated. Results are presented for both simulated and real radar data. In Chapter 2 a greedy radar odometry algorithm is presented. It uses the Hough transform to identify the range progression of ground point-scatterers. A global nearest neighbor approach is implemented to perform data association. Under a piecewise-constant heading assumption, as the aircraft passes pairs of scatterers, the locations of the scatterers are triangulated and the motion of the aircraft is estimated. Real flight data is used to validate the approach. Simulated flight data explores the robustness of the approach when the heading assumption is violated. Chapter 3 explores a more robust radar odometry technique, in which the constant-heading assumption is removed. This chapter uses the recursive random sample consensus (R-RANSAC) algorithm to identify, associate, and track the point scatterers. Using the measured ranges to the tracked scatterers, an extended Kalman filter (EKF) iteratively estimates the aircraft's position in addition to the relative locations of each reflector. Real flight data is used to validate the accuracy of this approach. Chapter 4 performs an observability analysis of a range-only sensor. An observable radar odometry approach is proposed. It improves on the previous approaches by adding a more robust R-RANSAC above ground level (AGL) tracking algorithm to further improve navigational accuracy. Real flight results are presented, comparing this approach to the techniques presented in previous chapters.
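The core idea can be illustrated with the range-only measurement model such a filter linearizes: the measured range to a stationary ground scatterer constrains the aircraft position along the line of sight. The sketch below is illustrative only; the state vector, noise values, and the fact that the scatterer position is held fixed here are assumptions, whereas the dissertation's EKF also estimates the reflector locations.

```python
# Sketch of an EKF range update against one tracked ground scatterer.
import numpy as np

def range_update(x, P, z, p_scatterer, sigma_r=2.0):
    """x = [px, py, pz, vx, vy, vz]; z = measured range to a scatterer at
    p_scatterer (held fixed in this sketch)."""
    d = x[:3] - p_scatterer
    r_hat = np.linalg.norm(d)                 # predicted range
    H = np.zeros((1, x.size))
    H[0, :3] = d / r_hat                      # d(range)/d(position): unit line-of-sight vector
    R = np.array([[sigma_r**2]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + (K * (z - r_hat)).ravel()
    P = (np.eye(x.size) - K @ H) @ P
    return x, P

# toy usage: aircraft at 100 m AGL, scatterer 300 m ahead on the ground
x = np.array([0.0, 0.0, 100.0, 30.0, 0.0, 0.0])
P = np.eye(6) * 10.0
x, P = range_update(x, P, z=317.0, p_scatterer=np.array([300.0, 0.0, 0.0]))
```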
|
13 |
Infrared Light-Based Data Association and Pose Estimation for Aircraft Landing in Urban Environments. Akagi, David, 10 June 2024.
In this thesis we explore an infrared light-based approach to the problem of pose estimation during aircraft landing in urban environments where GPS is unreliable or unavailable. We introduce a novel fiducial constellation composed of sparse infrared lights that incorporates projective invariant properties in its design to allow for robust recognition and association from arbitrary camera perspectives. We propose a pose estimation pipeline capable of producing high accuracy pose measurements at real-time rates from monocular infrared camera views of the fiducial constellation, and present as part of that pipeline a data association method that is able to robustly identify and associate individual constellation points in the presence of clutter and occlusions. We demonstrate the accuracy and efficiency of this pose estimation approach on real-world data obtained from multiple flight tests, and show that we can obtain decimeter level accuracy from distances of over 100 m from the constellation. To achieve greater robustness to the potentially large number of outlier infrared detections that can arise in urban environments, we also explore learning-based approaches to the outlier rejection and data association problems. By formulating the problem of camera image data association as a 2D point cloud analysis, we can apply deep learning methods designed for 3D point cloud segmentation to achieve robust, high-accuracy associations at constant real-time speeds on infrared images with high outlier-to-inlier ratios. We again demonstrate the efficiency of our learning-based approach on both synthetic and real-world data, and compare the results and limitations of this method to our first-principles-based data association approach.
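As a rough illustration of the pose-recovery step in such a pipeline, the sketch below runs OpenCV's solvePnP on a hypothetical constellation layout, with detections synthesized from a known pose so the example is self-consistent; the actual constellation geometry, camera intrinsics, and solver used in the thesis may differ.

```python
# Recover camera pose from associated infrared constellation detections (sketch).
import numpy as np
import cv2

# assumed constellation layout in the landing-site frame (metres); hypothetical values
constellation_3d = np.array([[0.0, 0.0, 0.0],
                             [4.0, 0.0, 0.0],
                             [4.0, 3.0, 0.0],
                             [0.0, 3.0, 0.0],
                             [2.0, 1.5, 0.0]], dtype=np.float64)
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])                  # assumed pinhole intrinsics

# synthesize image detections from a known camera pose (~60 m from the pad)
rvec_true = np.array([0.1, -0.05, 0.02])
tvec_true = np.array([[-2.0], [-1.5], [60.0]])
detections_2d, _ = cv2.projectPoints(constellation_3d, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(constellation_3d, detections_2d, K, None)
R, _ = cv2.Rodrigues(rvec)                       # rotation: constellation frame -> camera
camera_position = (-R.T @ tvec).ravel()          # camera position in the pad frame
print("recovered camera position [m]:", camera_position)
```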
|
14 |
Exploration, Mapping and Scalar Field Estimation using a Swarm of Resource-Constrained Robots. January 2018.
Robotic swarms can potentially perform complicated tasks such as exploration and mapping at large space and time scales in a parallel and robust fashion. This thesis presents strategies for mapping environmental features of interest – specifically obstacles, collision-free paths, metric maps, and scalar density fields – in an unknown domain using data obtained by a swarm of resource-constrained robots. First, an approach was developed for mapping a single obstacle using a swarm of point-mass robots with both directed and random motion. The swarm population dynamics are modeled by a set of advection-diffusion-reaction partial differential equations (PDEs) in which a spatially-dependent indicator function marks the presence or absence of the obstacle in the domain. The indicator function is estimated by solving an optimization problem with PDEs as constraints. Second, a methodology was proposed for constructing a topological map of an unknown environment, which indicates collision-free paths for navigation, from data collected by a swarm of finite-sized robots. As an initial step, the number of topological features in the domain was quantified by applying tools from algebraic topology to a probability function over the explored region that indicates the presence of obstacles. A topological map of the domain is then generated using a graph-based wave propagation algorithm. This approach is further extended, enabling the technique to construct a metric map of an unknown domain with obstacles using uncertain position data collected by a swarm of resource-constrained robots, filtered using intensity measurements of an external signal. Next, a distributed method was developed to construct the occupancy grid map of an unknown environment using a swarm of inexpensive robots or mobile sensors with limited communication. In addition, an exploration strategy that combines information-theoretic ideas with Lévy walks was proposed. Finally, the problem of reconstructing a two-dimensional scalar field was addressed using observations from a subset of a sensor network in which each node communicates its local measurements to its neighboring nodes. This problem reduces to estimating the initial condition of a large interconnected system with first-order linear dynamics, which can be solved as an optimization problem. / Dissertation/Thesis / Doctoral Dissertation Mechanical Engineering 2018
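As one concrete ingredient, the Lévy-walk component can be sketched in a few lines: headings are drawn uniformly while step lengths follow a heavy-tailed power law, yielding many short moves punctuated by occasional long relocations. The exponent and step bounds below are illustrative assumptions, not values from the thesis.

```python
# Minimal Lévy-walk trajectory generator for one exploring robot (sketch).
import numpy as np

def levy_walk(n_steps, mu=2.0, l_min=0.5, l_max=50.0, rng=None):
    """Return an (n_steps+1, 2) array of 2D positions."""
    rng = np.random.default_rng(rng)
    pos = np.zeros((n_steps + 1, 2))
    for k in range(n_steps):
        # inverse-CDF sample of a truncated power law p(l) ~ l^(-mu)
        u = rng.random()
        a, b = l_min ** (1 - mu), l_max ** (1 - mu)
        length = (a + u * (b - a)) ** (1.0 / (1 - mu))
        theta = rng.uniform(0.0, 2.0 * np.pi)
        pos[k + 1] = pos[k] + length * np.array([np.cos(theta), np.sin(theta)])
    return pos

trajectory = levy_walk(200, rng=1)   # one resource-constrained robot's exploratory path
```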
|
15 |
Goal-Aware Robocentric Mapping and Navigation of a Quadrotor Unmanned Aerial Vehicle. Biswas, Srijanee, 18 June 2019.
No description available.
|
16 |
Cooperative Navigation of Autonomous Vehicles in Challenging Environments. Forsgren, Brendon Peter, 18 September 2023.
As the capabilities of autonomous systems have increased, so has interest in utilizing teams of autonomous systems to accomplish tasks more efficiently. This dissertation takes steps toward enabling the cooperation of unmanned systems in challenging scenarios, such as GPS-denied or perceptually aliased environments. This work begins by developing a cooperative navigation framework that is scalable in the number of agents, robust against communication latency or dropout, and requires little a priori information. Additionally, this framework is designed to be easily adopted by existing single-agent systems with minimal changes to existing software and software architectures. All systems in the framework are validated through Monte Carlo simulations. The second part of this dissertation focuses on making cooperative navigation robust in challenging environments. This work first focuses on enabling a more robust version of pose graph SLAM, called cycle-based pose graph optimization, to run in real time by implementing and validating an algorithm to incrementally approximate a minimum cycle basis. A new algorithm is proposed that is tailored to multi-agent systems by approximating the cycle basis of two graphs that have been joined. These algorithms are validated through extensive simulation and hardware experiments. The last part of this dissertation focuses on scenarios where perceptual aliasing and incorrect or unknown data association are present. This work presents a unification of the framework of consistency maximization and extends the concept of pairwise consistency to group consistency. It shows that by using group consistency, low-degree-of-freedom measurements can be rejected in high-outlier regimes if they do not fit the distribution of other measurements. The efficacy of this method is verified extensively using both simulation and hardware experiments.
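The pairwise-consistency idea that group consistency generalizes can be sketched as follows: two candidate loop closures are mutually consistent if composing them with the odometry between their endpoints yields (approximately) the identity transform. The SE(2) formulation, covariance, and threshold below are illustrative assumptions, not the thesis formulation.

```python
# Pairwise consistency test between two loop-closure measurements (sketch).
import numpy as np

def se2_compose(a, b):
    """Compose two SE(2) poses expressed as [x, y, theta]."""
    x, y, t = a
    c, s = np.cos(t), np.sin(t)
    return np.array([x + c * b[0] - s * b[1],
                     y + s * b[0] + c * b[1],
                     t + b[2]])

def se2_inverse(a):
    x, y, t = a
    c, s = np.cos(t), np.sin(t)
    return np.array([-(c * x + s * y), -(-s * x + c * y), -t])

def pairwise_consistent(z_ik, z_jl, odom_ij, odom_lk, cov, thresh=7.81):
    """z_ik, z_jl: candidate loop closures; odom_ij, odom_lk: odometry chains
    linking their endpoints.  Consistent if the composed loop is near identity."""
    predicted = se2_compose(se2_compose(odom_ij, z_jl), odom_lk)  # i -> j -> l -> k
    residual = se2_compose(se2_inverse(predicted), z_ik)
    residual[2] = (residual[2] + np.pi) % (2 * np.pi) - np.pi     # wrap angle residual
    mahalanobis = residual @ np.linalg.inv(cov) @ residual
    return mahalanobis < thresh                                   # chi-square(3), ~95%

# toy check: two closures that agree with the odometry are accepted
cov = np.diag([0.1, 0.1, 0.02])
print(pairwise_consistent(z_ik=np.array([5.0, 0.0, 0.0]),
                          z_jl=np.array([5.0, 0.0, 0.0]),
                          odom_ij=np.array([0.0, 1.0, 0.0]),
                          odom_lk=np.array([0.0, -1.0, 0.0]),
                          cov=cov))
```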
|
17 |
Beyond LiDAR for Unmanned Aerial Event-Based Localization in GPS Denied Environments. Mayalu Jr, Alfred Kulua, 23 June 2021.
Finding lost persons, collecting information in disturbed communities, and efficiently traversing urban areas after a blast or similar catastrophic event have motivated researchers to develop intelligent sensor frameworks to aid law enforcement, first responders, and military personnel with situational awareness. This dissertation consists of a two-part framework for providing situational awareness using both acoustic ground sensors and aerial sensing modalities. Ground sensors for data-driven detection and classification typically rely on computationally expensive inputs such as image- or video-based methods [6, 91]. However, the information given by an acoustic signal offers several advantages, such as low computational needs and the possible classification of occluded events, including gunshots or explosions. Once an event is identified, responding to real-time events in urban areas is difficult using an Unmanned Aerial Vehicle (UAV), especially when GPS is unreliable due to coverage blackouts and/or GPS degradation [10].
Furthermore, if multiple in-situ static intelligent acoustic autonomous sensors can be deployed to identify anomalous sounds given context, then those sensors can communicate with an autonomous UAV that navigates a GPS-denied urban environment to investigate the event; this could provide the time-critical, precise, and localized response information necessary for life-saving decision-making.
Thus, in order to implement a complete intelligent sensor framework, the need for both intelligent static ground acoustic autonomous unattended sensors (AAUS) and improvements to GPS-degraded localization has become apparent for applications such as anomaly detection, public safety, and intelligence, surveillance, and reconnaissance (ISR) operations. Distributed AAUS networks could provide end-users with near real-time actionable information for large urban environments with limited resources. Complete ISR mission profiles require a UAV to fly in GPS-challenged or GPS-denied environments, such as natural or urban canyons, for at least part of a mission.
This dissertation addresses 1) the development of an intelligent sensor framework through a static ground AAUS capable of machine learning for audio feature classification, and 2) GPS-impaired localization through a formal framework for trajectory-based flight navigation for unmanned aircraft systems (UAS) operating BVLOS in low-altitude urban airspace. Our AAUS sensor method utilizes monophonic sound event detection, in which the sensor detects, records, and classifies each event using supervised machine learning techniques [90]. We propose a simulated framework to enhance the performance of localization in GPS-denied environments. We do this by using a new representation of 3D geospatial data based on planar features that efficiently capture the amount of information required for sensor-based navigation in obstacle-rich, GPS-denied environments. The results from this dissertation would impact both military and civilian areas of research with the ability to react to events and navigate in an urban environment. / Doctor of Philosophy / Emergency scenarios such as missing persons or catastrophic events in urban areas require first responders to gain situational awareness, motivating researchers to investigate intelligent sensor frameworks that utilize drones for observation and prompting questions such as: How can responders detect and classify acoustic anomalies using unattended sensors? And how do they remotely navigate in GPS-denied urban environments using drones to investigate such an event?
This dissertation addresses the first question through the development of intelligent wireless sensor network (WSN) systems that can provide the time-critical, precise, and localized environmental information necessary for decision-making. At Virginia Tech, we have developed a static ground Acoustic Autonomous Unattended Sensor (AAUS) capable of machine learning for audio feature classification. Prior intelligent AAUS and network architectures do not account for network failure, jamming, or remote scenarios in which cellular data and wifi coverage are unavailable [78, 90]. The lack of a framework for such scenarios exposes a vulnerability in the operational integrity of proposed solutions for homeland security applications. We address this through data ferrying, a communication method in which a mobile node, such as a drone, physically carries data as it moves through the environment to communicate with other sensor nodes on the ground. When examining the second question of navigation and investigation, safety concerns arise for drones in urban areas because GPS signal loss is one of the first problems that can occur when a drone flies into a city (such as New York City). If this happens, potential crashes, injuries, and damage to property are imminent because the drone does not know where it is in space. In these GPS-denied situations, traditional methods localize the drone using point clouds (sets of data points in space (X, Y, Z) representing a 3D object [107]) constructed from laser radar scanners (often seen in a Microsoft Xbox Kinect sensor). The main drawbacks of such methods are the accumulation of error and the computational complexity of large data sets such as those covering New York City. An advantage of cities is that their surfaces are largely flat; thus, if a building can be represented with a plane instead of 10,000 points, the data volume is greatly reduced and algorithm performance improves.
This dissertation addresses both the need for an intelligent sensor framework, through the development of a static ground AAUS capable of machine learning for audio feature classification, and GPS-impaired localization, through a formal framework for trajectory-based flight navigation for UAS operating BVLOS in low-altitude urban and suburban environments.
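To illustrate the audio-classification side of the framework, the sketch below pools frame-level MFCCs into a clip-level feature vector and trains a conventional classifier on synthetic stand-in clips. The features, classifier, and data are assumptions made for illustration and are not the dissertation's model.

```python
# Supervised classification of monophonic acoustic events (illustrative sketch).
import numpy as np
import librosa
from sklearn.svm import SVC

SR = 16000

def clip_features(y, sr=SR):
    """Mean and std of MFCCs over a clip -> fixed-length feature vector."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def synth_clip(kind, rng):
    """Stand-in clips: an impulsive (gunshot-like) burst vs. ambient noise."""
    t = np.linspace(0.0, 1.0, SR, endpoint=False)
    noise = rng.standard_normal(SR) * 0.05
    if kind == "impulsive":
        burst = np.exp(-40.0 * t) * rng.standard_normal(SR)
        return noise + burst
    return noise

rng = np.random.default_rng(0)
X = np.array([clip_features(synth_clip(k, rng))
              for k in ["impulsive"] * 20 + ["ambient"] * 20])
y = np.array([1] * 20 + [0] * 20)

clf = SVC(kernel="rbf").fit(X, y)     # event classifier on clip-level features
print("training accuracy:", clf.score(X, y))
```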
|
18 |
GPS-Denied Localization of Landing eVTOL Aircraft. Brown, Aaron C., 16 April 2024.
This thesis presents a dedicated GPS-denied landing system designed for electric vertical takeoff and landing (eVTOL) aircraft. The system employs active fiducial light pattern localization (AFLPL), which provides highly accurate and reliable navigation during critical landing phases. AFLPL utilizes images of a constellation of modulating infrared lights strategically positioned on the landing site to determine the aircraft pose through a perspective-n-point (PnP) solver. The AFLPL system underwent thorough development, enhancement, and implementation to demonstrate its potential for navigation and to characterize its inherent limitations. A proposed method addresses these limitations by using an extended Kalman filter (EKF) to fuse PnP camera pose estimates with sensor measurements from an inertial measurement unit (IMU), an attitude heading reference system (AHRS), and an optional global positioning system (GPS) receiver. The EKF estimation is shown to significantly enhance the accuracy, reliability, and update frequency of the aircraft state estimate. To refine and validate the AFLPL and EKF algorithms, a simulation was developed in which an eVTOL executes a glideslope landing trajectory. Furthermore, a hardware system consisting of a multirotor and infrared-light ground units was implemented to test these methods under real-world conditions. This research culminated in the successful demonstration of the AFLPL-based estimation system's efficacy through an autonomous, GPS-denied landing flight test, affirming its potential to improve the navigation and control of eVTOL aircraft lacking access to GPS information.
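A minimal sketch of the fusion idea, assuming a constant-velocity state driven by IMU acceleration and a direct position update from the PnP solution, is given below; the actual filter states, measurement models, and noise levels in the thesis differ.

```python
# IMU-driven EKF prediction with a PnP position update (illustrative sketch).
import numpy as np

dt = 0.01                                       # assumed IMU rate: 100 Hz
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])   # constant-velocity transition
Q = np.eye(6) * 1e-3                            # process noise (assumed)
H = np.hstack([np.eye(3), np.zeros((3, 3))])    # PnP measures position only here
R = np.eye(3) * 0.05**2                         # decimeter-level PnP position noise (assumed)

def predict(x, P, accel_world):
    x = F @ x
    x[3:] += accel_world * dt                   # integrate IMU acceleration into velocity
    return x, F @ P @ F.T + Q

def update_pnp(x, P, z_pos):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z_pos - H @ x)
    return x, (np.eye(6) - K @ H) @ P

x, P = np.zeros(6), np.eye(6)
for _ in range(100):                            # 1 s of IMU-only prediction ...
    x, P = predict(x, P, accel_world=np.zeros(3))
x, P = update_pnp(x, P, z_pos=np.array([0.2, -0.1, 30.0]))   # ... then one camera fix
```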
|
19 |
Localization of autonomous ground vehicles in dense urban environments. Himstedt, Marian, 03 March 2014.
The localization of autonomous ground vehicles in dense urban environments poses a challenge. Applications in classical outdoor robotics rely on the availability of GPS systems to estimate position. However, the presence of complex building structures in dense urban environments hampers reliable GPS-based localization. Alternative approaches have to be applied in order to tackle this problem. This thesis proposes an approach which combines observations of a single perspective camera and odometry in a probabilistic framework. In particular, localization in the space of appearance is addressed. First, a topological map of reference places in the environment is built. Each reference place is associated with a set of visual features. A feature selection is carried out in order to obtain distinctive reference places. The topological map is extended to a hybrid representation by the use of metric information from Geographic Information Systems (GIS) and satellite images. The localization is solved in terms of the recognition of reference places. A particle filter implementation incorporating this and the vehicle's odometry is presented. The proposed system is evaluated in multiple experiments in exemplary urban environments characterized by high building structures and a multitude of dynamic objects.
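A minimal sketch of the particle filter idea, under the assumption that place recognition returns the index of a metrically anchored reference place, is given below; the motion model, likelihood, and parameters are illustrative, not the thesis implementation.

```python
# Particle filter fusing odometry with appearance-based place recognition (sketch).
import numpy as np

rng = np.random.default_rng(0)
reference_places = np.array([[0.0, 0.0], [50.0, 0.0], [50.0, 40.0]])  # metric anchors (e.g. from GIS)

particles = rng.normal([0.0, 0.0], 5.0, size=(500, 2))
weights = np.full(500, 1.0 / 500)

def motion_update(particles, odom, sigma=0.5):
    """Propagate particles with noisy odometry."""
    return particles + odom + rng.normal(0.0, sigma, particles.shape)

def measurement_update(particles, weights, recognized_place, sigma=8.0):
    """Re-weight particles by proximity to the recognized reference place."""
    d = np.linalg.norm(particles - reference_places[recognized_place], axis=1)
    weights = weights * np.exp(-0.5 * (d / sigma) ** 2)
    return weights / weights.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# one cycle: drive ~50 m east, then recognize the second reference place
particles = motion_update(particles, odom=np.array([50.0, 0.0]))
weights = measurement_update(particles, weights, recognized_place=1)
particles, weights = resample(particles, weights)
print("estimated position:", particles.mean(axis=0))
```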
|
20 |
Localization of autonomous ground vehicles in dense urban environments. Himstedt, Marian, 25 January 2011.
The localization of autonomous ground vehicles in dense urban environments poses a challenge. Applications in classical outdoor robotics rely on the availability of GPS systems to estimate position. However, the presence of complex building structures in dense urban environments hampers reliable GPS-based localization. Alternative approaches have to be applied in order to tackle this problem. This thesis proposes an approach which combines observations of a single perspective camera and odometry in a probabilistic framework. In particular, localization in the space of appearance is addressed. First, a topological map of reference places in the environment is built. Each reference place is associated with a set of visual features. A feature selection is carried out in order to obtain distinctive reference places. The topological map is extended to a hybrid representation by the use of metric information from Geographic Information Systems (GIS) and satellite images. The localization is solved in terms of the recognition of reference places. A particle filter implementation incorporating this and the vehicle's odometry is presented. The proposed system is evaluated in multiple experiments in exemplary urban environments characterized by high building structures and a multitude of dynamic objects.
|