1

Taking Man Out of the Loop: Methods to Reduce Human Involvement in Search and Surveillance Applications

Brink, Kevin Michael December 2010 (has links)
There has always been a desire to apply technology to human endeavors to increase a person's capabilities and reduce the number or skill level of the people involved, or to replace the people altogether. Three fundamental areas are investigated where technology can enable the reduction or removal of humans in complex tasks. The first area of research is the rapid calibration of multiple-camera systems whose cameras share an overlapping field of view, allowing for 3D computer vision applications. A simple method for the rapid calibration of such systems is introduced. The second area of research is the autonomous exploration of hallways or other urban-canyon environments in the absence of a global positioning system (GPS), using only an inertial measurement unit (IMU) and a monocular camera. Desired paths that generate accurate vehicle state estimates for simple ground vehicles are identified, and the benefits of integrated estimation and control are investigated. It is demonstrated that considering estimation accuracy is essential to produce efficient guidance and control. The Schmidt-Kalman filter is applied to the vision-aided inertial navigation system in a novel manner, reducing the state vector size significantly. The final area of research is a decentralized, swarm-based approach to source localization that uses a high-fidelity environment model to directly provide vehicle updates. The approach is an extension of a standard quadratic model that provides linear updates. The new approach leverages information from the higher-order terms of the environment model, showing dramatic improvement over the standard method.
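
The Schmidt-Kalman (consider) filter mentioned above reduces the estimated state size by updating only a subset of the state while still accounting for the uncertainty of the remaining "consider" states. The following is a minimal, hypothetical sketch of a single consider-filter measurement update; the block partitioning and variable names are assumptions for illustration, not the author's implementation.

```python
import numpy as np

def schmidt_kalman_update(x, p, Pxx, Pxp, Ppp, z, Hx, Hp, R):
    """One Schmidt-Kalman (consider) measurement update.

    x, Pxx : estimated states and their covariance
    p, Ppp : 'consider' states (e.g. map/feature states) and covariance
    Pxp    : cross-covariance between estimated and consider states
    z      : measurement, modeled as z = Hx @ x + Hp @ p + noise(R)
    """
    # Innovation and its covariance include the consider-state uncertainty.
    y = z - Hx @ x - Hp @ p
    S = Hx @ Pxx @ Hx.T + Hx @ Pxp @ Hp.T + Hp @ Pxp.T @ Hx.T + Hp @ Ppp @ Hp.T + R
    # Gain is computed only for the estimated states; consider states keep zero gain.
    Kx = (Pxx @ Hx.T + Pxp @ Hp.T) @ np.linalg.inv(S)
    x_new = x + Kx @ y
    # Covariance update: the consider block Ppp is left untouched.
    Pxx_new = Pxx - Kx @ (Hx @ Pxx + Hp @ Pxp.T)
    Pxp_new = Pxp - Kx @ (Hx @ Pxp + Hp @ Ppp)
    return x_new, Pxx_new, Pxp_new
```

Because the consider states are never corrected, only their cross-covariance is maintained, which is what shrinks the effective state vector in a vision-aided inertial system with many map features.
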
2

Radio Resource Allocation and Beam Management under Location Uncertainty in 5G mmWave Networks

Yao, Yujie 16 June 2022 (has links)
Millimeter wave (mmWave) plays a critical role in the fifth-generation (5G) new radio due to the rich bandwidth it provides. However, one shortcoming of mmWave is the substantial path loss caused by poor diffraction at high frequencies, so highly directional beams are applied to mitigate this problem. A typical way of managing beams is to cluster users based on their locations. However, localization uncertainty is unavoidable due to measurement accuracy, system performance fluctuation, and so on. Meanwhile, traffic demand may change dynamically in wireless environments, which increases the complexity of network management. Therefore, a scheme that can handle both localization uncertainty and dynamic radio resource allocation is required. Moreover, since localization uncertainty influences network performance, more advanced localization methods, such as vision-aided localization, are expected to reduce the localization error. In this thesis, we propose two algorithms for joint radio resource allocation and beam management in 5G mmWave networks: UK-means-based clustering with deep reinforcement learning-based resource allocation (UK-DRL) and UK-medoids-based clustering with deep reinforcement learning-based resource allocation (UKM-DRL). Specifically, we deploy the UK-means and UK-medoids clustering methods in UK-DRL and UKM-DRL, respectively; both are designed to handle clustering under location uncertainty. Meanwhile, we apply deep reinforcement learning (DRL) for intra-beam radio resource allocation in UK-DRL and UKM-DRL. Moreover, to improve localization accuracy, we develop a vision-aided localization scheme in which pixel-characteristic-based features are extracted from satellite images as additional input features for location prediction. Simulations show that UK-DRL and UKM-DRL improve network performance in data rate and delay over baseline algorithms. When the traffic load is 4 Mbps, UK-DRL achieves a 172.4% improvement in sum rate and a 64.1% improvement in latency over K-means-based clustering with deep reinforcement learning-based resource allocation (K-DRL). UKM-DRL has 17.2% higher throughput and 7.7% lower latency than UK-DRL, and 231% higher throughput and 55.8% lower latency than K-DRL. In addition, the vision-aided localization scheme significantly reduces the localization error, from 17.11 meters to 3.6 meters.
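
UK-means-style clustering replaces the point-to-centroid distance of K-means with an expected distance over each user's location uncertainty. Below is a minimal sketch assuming each user's uncertainty is represented by a set of sampled candidate positions; the sampling model and names are illustrative and not taken from the thesis.

```python
import numpy as np

def uk_means(user_samples, k, iters=50, seed=0):
    """Cluster users with uncertain locations.

    user_samples: list of (n_i, 2) arrays of candidate positions per user.
    Assignment uses the expected squared distance to each centroid,
    averaged over that user's location samples.
    """
    rng = np.random.default_rng(seed)
    means = np.array([s.mean(axis=0) for s in user_samples])
    centroids = means[rng.choice(len(means), size=k, replace=False)]
    for _ in range(iters):
        # Expected squared distance from user i to centroid j.
        cost = np.array([[np.mean(np.sum((s - c) ** 2, axis=1)) for c in centroids]
                         for s in user_samples])
        labels = cost.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = means[labels == j].mean(axis=0)
    return labels, centroids
```

The resulting clusters can then define the beams, with a DRL agent allocating radio resources to the users inside each beam.
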
3

An Investigation in the Use of Hyperspectral Imagery Using Machine Learning for Vision-Aided Navigation

Ege, Isaac Thomas 15 May 2023 (has links)
No description available.
4

Integration of 3D and 2D Imaging Data for Assured Navigation in Unknown Environments

Dill, Evan T. 25 April 2011 (has links)
No description available.
5

Vision-Aided Inertial Navigation: low computational complexity algorithms with applications to Micro Aerial Vehicles

Troiani, Chiara 17 March 2014 (has links)
Accurate egomotion estimation is of utmost importance for any navigation system. Nowadays different sensors are adopted to localize and navigate in unknown environments, such as GPS, range sensors, cameras, magnetic field sensors, and inertial measurement units (IMUs). In order to obtain a robust egomotion estimate, the information from multiple sensors is fused. Despite technological improvements that provide ever more accurate sensors, and the efforts of the mobile robotics community in developing more performant navigation algorithms, there are still open challenges. Furthermore, the growing interest of the robotics community in micro robots and robot swarms pushes towards the use of low-weight, low-cost sensors and low computational complexity algorithms. In this context, inertial sensors and monocular cameras, thanks to their complementary characteristics, low weight, low cost, and widespread use, represent an interesting sensor suite. This dissertation is a contribution in the framework of vision-aided inertial navigation and tackles the problems of data association and pose estimation, aiming for low computational complexity algorithms applied to MAVs. Concerning data association, a novel method to estimate the relative motion between two consecutive camera views is proposed. It requires only the observation of a single feature in the scene and knowledge of the angular rates from an IMU, under the assumption that the local camera motion lies in a plane perpendicular to the gravity vector. Two very efficient algorithms to remove outliers of the feature-matching process are provided under the above motion assumption. In order to generalize the approach to 6DoF motion, two feature correspondences and gyroscopic data from IMU measurements are necessary. In this case, two algorithms are provided to remove wrong data associations in the feature-matching process. In the case of a monocular camera mounted on a quadrotor vehicle, motion priors from the IMU are used to discard wrong estimations. Concerning the pose estimation problem, this thesis provides a closed-form solution that gives the system pose from three natural features observed in a single camera image, once the roll and pitch angles are obtained from the inertial measurements under the planar-ground assumption. In order to tackle the pose estimation problem in dark or featureless environments, a system equipped with a monocular camera, inertial sensors, and a laser pointer is considered. The system moves in the surroundings of a planar surface, and the laser pointer produces a laser spot on that surface. The laser spot is observed by the monocular camera and represents the only point feature considered. Through an observability analysis it is demonstrated that the physical quantities that can be determined by exploiting the measurements provided by this sensor suite during a short time interval are: the distance of the system from the planar surface; the component of the system speed that is orthogonal to the planar surface; the relative orientation of the system with respect to the planar surface; and the orientation of the planar surface with respect to gravity. A simple recursive method to estimate all of these observable quantities is provided. All the contributions of this thesis are validated through experimental results using both simulated and real data. Thanks to their low computational complexity, the proposed algorithms are very suitable for real-time implementation on systems with limited on-board computation resources. The considered sensor suite is mounted on a quadrotor vehicle, but the contributions of this dissertation can be applied to any mobile device.
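
A recurring step in the approach above is using the roll and pitch supplied by the IMU to express camera bearing measurements in a gravity-aligned frame, leaving only yaw and translation to be resolved from the observed features. A small illustrative sketch follows; the rotation convention and names are assumptions, not the thesis code.

```python
import numpy as np

def roll_pitch_rotation(roll, pitch):
    """Rotation that maps body-frame vectors into a gravity-aligned frame,
    built from roll (about x) and pitch (about y) given by the IMU. Yaw is
    left unresolved, which is exactly what the vision solver estimates."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    return Ry @ Rx

def gravity_align_bearings(bearings, roll, pitch):
    """Derotate unit bearing vectors (N x 3) so the remaining unknowns are
    yaw and translation, as in the three-feature closed-form pose solution."""
    R = roll_pitch_rotation(roll, pitch)
    return bearings @ R.T
```
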
6

Ground Plane Feature Detection in Mobile Vision-Aided Inertial Navigation

Panahandeh, Ghazaleh, Mohammadiha, Nasser, Jansson, Magnus January 2012 (has links)
In this paper, a method for determining ground plane features in a sequence of images captured by a mobile camera is presented. The hardware of the mobile system consists of a monocular camera that is mounted on an inertial measurement unit (IMU). An image processing procedure is proposed, first to extract image features and match them across consecutive image frames, and second to detect the ground plane features using a two-step algorithm. In the first step, the planar homography of the ground plane is constructed using an IMU-camera motion estimation approach. The obtained homography constraints are used to detect the most likely ground features in the sequence of images. To reject the remaining outliers, as the second step, a new plane normal vector computation approach is proposed. To obtain the normal vector of the ground plane, only three pairs of corresponding features are used for a general camera transformation. The normal-based computation approach generalizes the existing methods that are developed for specific camera transformations. Experimental results on real data validate the reliability of the proposed method.
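
The first step described above keeps a matched feature as a ground candidate when it is consistent with the ground plane's homography between consecutive frames. A minimal sketch of that gating step, assuming the homography H and matched pixel coordinates are already available (the threshold and names are illustrative):

```python
import numpy as np

def ground_candidates(H, pts_prev, pts_curr, thresh_px=2.0):
    """Select matches consistent with the ground-plane homography H.

    pts_prev, pts_curr: (N, 2) matched pixel coordinates in two frames.
    A match is kept when the homography transfer error is below thresh_px.
    """
    ones = np.ones((len(pts_prev), 1))
    p_h = np.hstack([pts_prev, ones])           # homogeneous coordinates
    q_h = (H @ p_h.T).T
    q = q_h[:, :2] / q_h[:, 2:3]                # predicted positions in frame 2
    err = np.linalg.norm(q - pts_curr, axis=1)  # transfer (reprojection) error
    return err < thresh_px
```

The second-step normal-vector check can then be run only on the surviving candidates to reject the remaining outliers.
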
7

Relative Navigation of Micro Air Vehicles in GPS-Degraded Environments

Wheeler, David Orton 01 December 2017 (has links)
Most micro air vehicles (MAVs) rely heavily on reliable GPS measurements for proper estimation and control, and therefore struggle in GPS-degraded environments. When GPS is not available, the global position and heading of the vehicle are unobservable. This dissertation establishes the theoretical and practical advantages of a relative navigation framework for MAV navigation in GPS-degraded environments. It explores how the consistency, accuracy, and stability of current navigation approaches degrade during prolonged GPS dropout and in the presence of heading uncertainty. Relative navigation (RN) is presented as an alternative approach that maintains observability by working with respect to a local coordinate frame. RN is compared with several current estimation approaches in a simulation environment and in hardware experiments. While still subject to global drift, RN is shown to produce consistent state estimates and stable control. Estimating relative states requires unique modifications to current estimation approaches. This dissertation further provides a tutorial exposition of the relative multiplicative extended Kalman filter, presenting how to properly ensure observable state estimation while maintaining consistency. The filter is derived using both inertial and body-fixed state definitions and dynamics. Finally, this dissertation presents a series of prolonged flight tests demonstrating the effectiveness of the relative navigation approach for autonomous GPS-degraded MAV navigation in varied, unknown environments. The system is shown to utilize a variety of vision sensors, work indoors and outdoors, run in real time with onboard processing, and not require special tuning for particular sensors or environments. Despite leveraging off-the-shelf sensors and algorithms, the flight tests demonstrate stable front-end performance with low drift. The flight tests also demonstrate the onboard generation of a globally consistent, metric, and localized map by identifying and incorporating loop-closure constraints and intermittent GPS measurements. With this map, mission objectives are shown to be autonomously completed.
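
In a relative navigation framework, the filter estimates motion with respect to a local frame that is reset at each keyframe, and a global pose arises only by composing the keyframe-to-keyframe transforms. A toy 2D (SE(2)) sketch of that composition, purely for illustration:

```python
import numpy as np

def compose_se2(pose, delta):
    """Compose a global SE(2) pose with a relative keyframe-to-keyframe step.
    pose, delta: (x, y, theta); delta is expressed in the previous keyframe."""
    x, y, th = pose
    dx, dy, dth = delta
    c, s = np.cos(th), np.sin(th)
    return (x + c * dx - s * dy, y + s * dx + c * dy, th + dth)

# Global drift enters only through this composition; the front-end filter
# itself works relative to the latest keyframe and stays observable.
global_pose = (0.0, 0.0, 0.0)
for step in [(1.0, 0.0, 0.05), (1.0, 0.1, 0.02)]:   # odometry between keyframes
    global_pose = compose_se2(global_pose, step)
```
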
8

Vision-based navigation and mapping for flight in GPS-denied environments

Wu, Allen David 15 November 2010 (has links)
Traditionally, the task of determining aircraft position and attitude for automatic control has been handled by the combination of an inertial measurement unit (IMU) with a Global Positioning System (GPS) receiver. In this configuration, accelerations and angular rates from the IMU can be integrated forward in time, and position updates from the GPS can be used to bound the errors that result from this integration. However, reliance on the reception of GPS signals places artificial constraints on aircraft such as small unmanned aerial vehicles (UAVs) that are otherwise physically capable of operation in indoor, cluttered, or adversarial environments. Therefore, this work investigates methods for incorporating a monocular vision sensor into a standard avionics suite. Vision sensors possess the potential to extract information about the surrounding environment and determine the locations of features or points of interest. Having mapped out landmarks in an unknown environment, subsequent observations by the vision sensor can in turn be used to resolve aircraft position and orientation while continuing to map out new features. An extended Kalman filter framework for performing the tasks of vision-based mapping and navigation is presented. Feature points are detected in each image using a Harris corner detector, and these feature measurements are corresponded from frame to frame using a statistical Z-test. When GPS is available, sequential observations of a single landmark point allow the point's location in inertial space to be estimated. When GPS is not available, landmarks that have been sufficiently triangulated can be used for estimating vehicle position and attitude. Simulation and real-time flight test results for vision-based mapping and navigation are presented to demonstrate feasibility in real-time applications. These methods are then integrated into a practical framework for flight in GPS-denied environments and verified through the autonomous flight of a UAV during a loss-of-GPS scenario. The methodology is also extended to the application of vehicles equipped with stereo vision systems. This framework enables aircraft capable of hovering in place to maintain a bounded pose estimate indefinitely without drift during a GPS outage.
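
The frame-to-frame correspondence step above uses a statistical test on the innovation of each candidate match. One common way to realize such a gate, offered here only as a hedged stand-in for the thesis's Z-test, is a normalized-innovation (Mahalanobis) check against a chi-square threshold:

```python
import numpy as np

GATE_99 = 9.21  # chi-square 99% quantile for 2 degrees of freedom

def gate_match(z_meas, z_pred, S):
    """Accept a candidate feature correspondence if its normalized innovation
    squared lies inside the statistical gate.

    z_meas: measured pixel location (2,), z_pred: predicted location (2,),
    S: innovation covariance (2x2) from the EKF.
    """
    nu = z_meas - z_pred
    d2 = nu @ np.linalg.solve(S, nu)
    return d2 < GATE_99
```

Matches that pass the gate feed the EKF landmark updates; sufficiently triangulated landmarks then serve as position and attitude references when GPS drops out.
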
9

Cooperative Navigation of Fixed-Wing Micro Air Vehicles in GPS-Denied Environments

Ellingson, Gary James 05 November 2019 (has links)
Micro air vehicles have recently gained popularity due to their potential as autonomous systems. Their future impact, however, will depend in part on how well they can navigate in GPS-denied and GPS-degraded environments. In response to this need, this dissertation investigates a potential solution for GPS-denied operations called relative navigation. The method utilizes keyframe-to-keyframe odometry estimates and their covariances in a global back end that represents the global state as a pose graph. The back end is able to effectively represent nonlinear uncertainties and incorporate opportunistic global constraints. The GPS-denied research community has, for the most part, neglected to consider fixed-wing aircraft. This dissertation enables fixed-wing aircraft to utilize relative navigation by accounting for their sensing requirements. The development of an odometry-like, front-end, EKF-based estimator that utilizes only a monocular camera and an inertial measurement unit is presented. The filter uses the measurement model of the multi-state-constraint Kalman filter and regularly performs relative resets in coordination with keyframe declarations. In addition to the front-end development, a method is provided to account for front-end velocity bias in the back-end optimization. Finally, a method is presented for enabling multiple vehicles to improve navigational accuracy by cooperatively sharing information. Modifications to the relative navigation architecture are presented that enable decentralized, cooperative operations amidst temporary communication dropouts. The proposed framework also includes the ability to incorporate inter-vehicle measurements and utilizes a new concept called the coordinated reset, which is necessary for optimizing the cooperative odometry and improving localization. Each contribution is demonstrated through simulation and/or hardware flight testing. Simulation and Monte Carlo testing are used to show the expected quality of the results. Hardware flight-test results show the front-end estimator performance, several back-end optimization examples, and cooperative GPS-denied operations.
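
The back end described above stores keyframes as nodes and keyframe-to-keyframe odometry estimates, with their covariances, as edges, alongside opportunistic global constraints. A minimal data-structure sketch of that organization (names are illustrative, not the thesis code):

```python
from dataclasses import dataclass, field

@dataclass
class PoseGraph:
    """Keyframe pose graph for a relative-navigation back end."""
    nodes: list = field(default_factory=list)        # keyframe ids
    odom_edges: list = field(default_factory=list)   # (i, j, T_ij, cov_ij)
    global_edges: list = field(default_factory=list) # (i, T_world_i, cov)

    def add_keyframe(self, kf_id):
        self.nodes.append(kf_id)

    def add_odometry(self, i, j, T_ij, cov_ij):
        # Relative transform and covariance from the front-end estimator.
        self.odom_edges.append((i, j, T_ij, cov_ij))

    def add_global_constraint(self, i, T_world_i, cov):
        # Opportunistic GPS fix, loop closure, or inter-vehicle measurement.
        self.global_edges.append((i, T_world_i, cov))
```
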
10

Enabling Autonomous Operation of Micro Aerial Vehicles Through GPS to GPS-Denied Transitions

Jackson, James Scott 11 November 2019 (has links)
Micro aerial vehicles and other autonomous systems have the potential to truly transform life as we know it; however, much of the potential of autonomous systems remains unrealized because reliable navigation is still an unsolved problem with significant challenges. This dissertation presents solutions to many aspects of autonomous navigation. First, it presents ROSflight, a software and hardware architecture that allows for rapid prototyping and experimentation of autonomy algorithms on MAVs with lightweight, efficient flight control. Next, this dissertation presents improvements to the state of the art in optimal control of quadrotors by utilizing the error-state formulation frequently used in state estimation. It is shown that performing optimal control directly over the error state results in a vastly more computationally efficient system than competing methods while also dealing with the non-vector rotation components of the state in a principled way. In addition, real-time robust flight planning is considered with a method to navigate cluttered, potentially unknown scenarios with real-time obstacle avoidance. Robust state estimation is a critical component of reliable operation, and this dissertation focuses on improving the robustness of visual-inertial state estimation in a filtering framework by extending the state of the art to include better modeling and sensor fusion. Further, this dissertation takes concepts from the visual-inertial estimation community and applies them to tightly coupled GNSS, visual-inertial state estimation. This method is shown to demonstrate significantly more reliable state estimation than visual-inertial or GNSS-inertial state estimation alone in a hardware experiment through a GNSS to GNSS-denied transition, flying under a building and back out into open sky. Finally, this dissertation explores a novel method to combine measurements from multiple agents into a coherent map. Traditional approaches to this problem attempt to solve for the position of multiple agents at specific times in their trajectories. This dissertation instead attempts to solve this problem in a relative context, resulting in a much more robust approach that is able to handle much greater initial error than traditional approaches.
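
The error-state formulation referenced above represents attitude error as a small three-vector that is retracted onto the rotation manifold, which is what lets both the estimator and the optimal controller treat rotation errors like ordinary vector states. A small sketch of that retraction (the Rodrigues/exponential map), with names chosen purely for illustration:

```python
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def exp_so3(phi):
    """Map a 3-vector rotation error to a rotation matrix (Rodrigues formula)."""
    angle = np.linalg.norm(phi)
    if angle < 1e-9:
        return np.eye(3) + skew(phi)          # first-order approximation
    axis = phi / angle
    K = skew(axis)
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def retract(R_nominal, delta_phi):
    """Apply an error-state correction to the nominal attitude."""
    return R_nominal @ exp_so3(delta_phi)
```
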
