1

Taking Man Out of the Loop: Methods to Reduce Human Involvement in Search and Surveillance Applications

Brink, Kevin Michael, December 2010
There has always been a desire to apply technology to human endeavors to increase a person's capabilities, reduce the number or skill level of the people required, or replace the people altogether. Three fundamental areas are investigated where technology can enable the reduction or removal of humans from complex tasks. The first area of research is the rapid calibration of multiple-camera systems whose cameras share an overlapping field of view, enabling 3D computer vision applications; a simple method for the rapid calibration of such systems is introduced. The second area of research is the autonomous exploration of hallways and other urban-canyon environments in the absence of the Global Positioning System (GPS), using only an inertial measurement unit (IMU) and a monocular camera. Desired paths that generate accurate vehicle state estimates for simple ground vehicles are identified, and the benefits of integrated estimation and control are investigated. It is demonstrated that accounting for estimation accuracy is essential to producing efficient guidance and control. The Schmidt-Kalman filter is applied to the vision-aided inertial navigation system in a novel manner, reducing the state vector size significantly. The final area of research is a decentralized, swarm-based approach to source localization that uses a high-fidelity environment model to directly provide vehicle updates. The approach extends a standard quadratic model that provides linear updates; the new approach leverages information from the higher-order terms of the environment model, showing dramatic improvement over the standard method.
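The Schmidt-Kalman (consider) filter mentioned in this abstract keeps nuisance parameters out of the solved-for state vector while still accounting for their uncertainty in the gain and covariance. A minimal sketch of one measurement update follows; the partitioning, shapes, and variable names are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def schmidt_kalman_update(x_a, P, H_a, H_b, R, z, z_pred, n_a):
    """One Schmidt-Kalman ("consider") measurement update.

    The full error state is partitioned into n_a estimated states and the
    remaining "consider" (nuisance) states. The consider states influence the
    gain through the joint covariance P, but their gain rows are zeroed so
    they are never corrected -- the solved-for state vector stays small.
    """
    H = np.hstack([H_a, H_b])                  # full measurement Jacobian
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # standard Kalman gain
    K[n_a:, :] = 0.0                           # do not update consider states
    x_a = x_a + K[:n_a] @ (z - z_pred)         # correct only estimated states
    n = P.shape[0]
    I_KH = np.eye(n) - K @ H
    P = I_KH @ P @ I_KH.T + K @ R @ K.T        # Joseph form: valid for any gain
    return x_a, P
```

Because the Joseph-form covariance update is valid for an arbitrary (suboptimal) gain, zeroing the consider-state rows leaves the consider-block covariance untouched while the estimated-state covariance still shrinks.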
2

Vision-Aided Inertial Navigation: low computational complexity algorithms with applications to Micro Aerial Vehicles

Troiani, Chiara, 17 March 2014
Accurate egomotion estimation is of utmost importance for any navigation system. Nowadays, different sensors are adopted to localize and navigate in unknown environments: GPS, range sensors, cameras, magnetic field sensors, and inertial measurement units (IMUs). To obtain a robust egomotion estimate, the information from multiple sensors is fused. Despite technological improvements providing ever more accurate sensors, and the efforts of the mobile robotics community in developing higher-performance navigation algorithms, open challenges remain. Furthermore, the growing interest of the robotics community in micro robots and robot swarms pushes towards lightweight, low-cost sensors and low-computational-complexity algorithms. In this context, inertial sensors and monocular cameras, thanks to their complementary characteristics, low weight, low cost, and widespread use, represent an interesting sensor suite.

This dissertation is a contribution in the framework of vision-aided inertial navigation and tackles the problems of data association and pose estimation, aiming for low-computational-complexity algorithms applied to micro aerial vehicles (MAVs). Concerning data association, a novel method to estimate the relative motion between two consecutive camera views is proposed. It requires only the observation of a single feature in the scene and the knowledge of the angular rates from an IMU, under the assumption that the local camera motion lies in a plane perpendicular to the gravity vector. Two very efficient algorithms to remove outliers from the feature-matching process are provided under this motion assumption. To generalize the approach to six-degree-of-freedom motion, two feature correspondences and the corresponding gyroscopic data from IMU measurements are necessary; in this case as well, two algorithms are provided to remove wrong data associations in the feature-matching process. In the case of a monocular camera mounted on a quadrotor vehicle, motion priors from the IMU are used to discard wrong estimations.

Concerning the pose estimation problem, this thesis provides a closed-form solution that gives the system pose from three natural features observed in a single camera image, once the roll and pitch angles are obtained from the inertial measurements under a planar-ground assumption. To tackle pose estimation in dark or featureless environments, a system equipped with a monocular camera, inertial sensors, and a laser pointer is considered. The system moves in the surroundings of a planar surface, and the laser pointer produces a laser spot on that surface; the spot is observed by the monocular camera and is the only point feature considered. Through an observability analysis, it is demonstrated that the physical quantities that can be determined from the measurements of this sensor suite during a short time interval are: the distance of the system from the planar surface; the component of the system velocity orthogonal to the surface; the relative orientation of the system with respect to the surface; and the orientation of the surface with respect to gravity. A simple recursive method to estimate all of these observable quantities is provided.

All the contributions of this thesis are validated through experimental results using both simulated and real data. Thanks to their low computational complexity, the proposed algorithms are well suited to real-time implementation on systems with limited on-board computation resources. The considered sensor suite is mounted on a quadrotor vehicle, but the contributions of this dissertation can be applied to any mobile device.
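The single-feature relative-motion idea can be sketched as a 1-point outlier-rejection loop: with the relative rotation integrated from the gyroscopes, and the translation constrained to the horizontal plane, one correspondence fixes the single remaining translation angle via the epipolar constraint, and the epipolar residual scores every other match. This is an illustrative reconstruction under those stated assumptions, not the thesis's exact algorithm:

```python
import numpy as np

def one_point_outlier_rejection(f1, f2, R_gyro, n_iters=50, thresh=1e-6, seed=0):
    """1-point outlier rejection for planar motion with known rotation.

    f1, f2 : (N, 3) unit bearing vectors in two consecutive camera frames.
    R_gyro : relative rotation (frame 2 -> frame 1) integrated from gyro rates.
    Assumes the translation lies in the horizontal plane, t = (cos a, sin a, 0),
    so one correspondence determines the angle a from the epipolar constraint
    f1 . (t x (R_gyro f2)) = 0.
    """
    rng = np.random.default_rng(seed)
    g = (R_gyro @ f2.T).T                      # second-frame bearings in frame 1
    best = np.zeros(len(f1), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(f1))
        # closed-form translation angle from the single sampled correspondence
        a = np.arctan2(f1[i, 1] * g[i, 2] - f1[i, 2] * g[i, 1],
                       f1[i, 0] * g[i, 2] - f1[i, 2] * g[i, 0])
        t = np.array([np.cos(a), np.sin(a), 0.0])
        resid = np.abs((f1 * np.cross(t, g)).sum(axis=1))   # epipolar residuals
        inliers = resid < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

Because the motion model has a single unknown, each RANSAC hypothesis needs only one sample, which is what makes the rejection step so cheap compared with 5-point methods.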
3

Ground Plane Feature Detection in Mobile Vision-Aided Inertial Navigation

Panahandeh, Ghazaleh; Mohammadiha, Nasser; Jansson, Magnus. January 2012
In this paper, a method for detecting ground plane features in a sequence of images captured by a mobile camera is presented. The hardware of the mobile system consists of a monocular camera mounted on an inertial measurement unit (IMU). An image processing procedure is proposed: first, image features are extracted and matched across consecutive image frames; second, the ground plane features are detected using a two-step algorithm. In the first step, the planar homography of the ground plane is constructed using an IMU-camera motion estimation approach, and the resulting homography constraints are used to detect the most likely ground features in the sequence of images. In the second step, to reject the remaining outliers, a new plane normal vector computation approach is proposed: only three pairs of corresponding features are needed to obtain the normal vector of the ground plane for a general camera transformation. This normal-based computation generalizes existing methods that were developed for specific camera transformations. Experimental results on real data validate the reliability of the proposed method.
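The first step of the two-step algorithm, classifying matches through the plane-induced homography, can be sketched as follows. For a plane n·p = d expressed in the first camera frame and a relative motion (R, t), the induced homography in normalized coordinates is H = R + t nᵀ/d, and matches that transfer consistently through H are likely ground features. The variable names and threshold here are illustrative, not from the paper:

```python
import numpy as np

def ground_homography(R21, t21, n, d):
    """Homography induced by the plane n . p = d (frame-1 coordinates),
    mapping normalized points from frame 1 to frame 2 for motion (R21, t21)."""
    return R21 + np.outer(t21, n) / d

def ground_plane_inliers(x1, x2, H, thresh=1e-3):
    """Classify matched normalized image points as ground-plane features.

    x1, x2 : (N, 2) normalized (calibrated) image coordinates in frames 1, 2.
    A match is kept when transferring x1 through H lands on x2.
    """
    X1 = np.hstack([x1, np.ones((len(x1), 1))])   # homogeneous coordinates
    Xt = (H @ X1.T).T
    xt = Xt[:, :2] / Xt[:, 2:3]                   # dehomogenize the transfer
    err = np.linalg.norm(xt - x2, axis=1)         # transfer error per match
    return err < thresh
```

Points off the plane exhibit parallax relative to the plane-induced transfer, which is exactly what separates ground features from the rest.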
4

Vision-based navigation and mapping for flight in GPS-denied environments

Wu, Allen David, 15 November 2010
Traditionally, the task of determining aircraft position and attitude for automatic control has been handled by the combination of an inertial measurement unit (IMU) with a Global Positioning System (GPS) receiver. In this configuration, accelerations and angular rates from the IMU can be integrated forward in time, and position updates from the GPS can be used to bound the errors that result from this integration. However, reliance on the reception of GPS signals places artificial constraints on aircraft such as small unmanned aerial vehicles (UAVs) that are otherwise physically capable of operation in indoor, cluttered, or adversarial environments. Therefore, this work investigates methods for incorporating a monocular vision sensor into a standard avionics suite. Vision sensors possess the potential to extract information about the surrounding environment and determine the locations of features or points of interest. Having mapped out landmarks in an unknown environment, subsequent observations by the vision sensor can in turn be used to resolve aircraft position and orientation while continuing to map out new features. An extended Kalman filter framework for performing the tasks of vision-based mapping and navigation is presented. Feature points are detected in each image using a Harris corner detector, and these feature measurements are corresponded from frame to frame using a statistical Z-test. When GPS is available, sequential observations of a single landmark point allow the point's location in inertial space to be estimated. When GPS is not available, landmarks that have been sufficiently triangulated can be used for estimating vehicle position and attitude. Simulation and real-time flight test results for vision-based mapping and navigation are presented to demonstrate feasibility in real-time applications. 
These methods are then integrated into a practical framework for flight in GPS-denied environments and verified through the autonomous flight of a UAV during a loss-of-GPS scenario. The methodology is also extended to vehicles equipped with stereo vision systems. This framework enables aircraft capable of hovering in place to maintain a bounded pose estimate indefinitely, without drift, during a GPS outage.
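The statistical Z-test used above to correspond feature measurements from frame to frame can be illustrated as innovation gating in the EKF: a candidate match is accepted when the distance between the measured and predicted feature positions, weighted by the innovation covariance, falls below a chi-square threshold. A minimal sketch, with the threshold and shapes chosen for illustration rather than taken from the thesis:

```python
import numpy as np

def gate_match(z, z_pred, S, gate=9.21):
    """Statistical gating of one candidate feature correspondence.

    z      : measured 2-D feature position in the current image.
    z_pred : EKF-predicted position of the tracked landmark.
    S      : 2x2 innovation covariance (H P H^T + R).
    Accepts the match when the squared Mahalanobis distance of the
    innovation is below a chi-square threshold (9.21 ~ 99% for 2 DoF).
    """
    nu = z - z_pred                       # innovation
    d2 = nu @ np.linalg.solve(S, nu)      # squared Mahalanobis distance
    return d2 < gate
```

Weighting by S rather than using a fixed pixel radius makes the gate adapt automatically: uncertain landmarks get a wide search region, well-triangulated ones a tight one.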
