1

Accurate Localization Given Uncertain Sensors

Kramer, Jeffrey A 08 April 2010
The necessity of accurate localization in mobile robotics is obvious: if a robot does not know where it is, it cannot navigate accurately to reach goal locations. Robots learn about their environment via sensors. Small robots require small, efficient and, if they are to be deployed in large numbers, inexpensive sensors. The sensors robots use to perceive the world are inherently inaccurate, providing noisy or erroneous data, or no data at all. Combined with estimation error due to imperfect modeling of the robot, these shortcomings create many obstacles to successful localization. Sensor fusion is used to overcome these difficulties by combining the available sensor data to derive a more accurate pose estimate for the robot.

In this thesis, we dissect and analyze a wide variety of sensor fusion algorithms, with the goal of using a suite of inexpensive sensors to provide real-time localization for a robot despite unknown sensor errors and malfunctions. The algorithms fuse GPS, INS, compass and control inputs into a more accurate position estimate. The filters discussed include a SPKF-PF (Sigma-Point Kalman Filter / Particle Filter), a MHSPKF (Multi-Hypothesis Sigma-Point Kalman Filter), a FSPKF (Fuzzy Sigma-Point Kalman Filter), a DFSPKF (Double Fuzzy Sigma-Point Kalman Filter), an EKF (Extended Kalman Filter), a MHEKF (Multi-Hypothesis Extended Kalman Filter), a FEKF (Fuzzy Extended Kalman Filter), and a standard SIS PF (Sequential Importance Sampling Particle Filter). Our goal is to provide a concise toolbox of algorithms for researchers, and at the same time a solution to a difficult sensor fusion problem: an algorithm with low computational complexity (less than O(n³)) that runs in real time, is accurate (as accurate as or more accurate than differential GPS (DGPS) despite lower-quality sensors), and is robust, providing a useful localization solution even when sensors are faulty or inaccurate. The aim is to find the balance among power requirements, computational and chip requirements, and accuracy and robustness that is best suited to small robots with inaccurate sensors.

While other fusion algorithms work well, the Sigma-Point Kalman Filter solves this problem best, providing accurate localization and fast response; the Fuzzy EKF is a close second in the shorter trial with less error, and the Sigma-Point Kalman Particle Filter performs very well in the longer trial with more error. Fuzzy control is also discussed, in particular the reasons for its applicability and its use in sensor fusion.
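As an illustration of the sigma-point approach highlighted above, the following is a minimal sketch (not the thesis code) of an unscented/sigma-point Kalman filter fusing noisy GPS position fixes with a simple constant-velocity motion model; the state layout, noise levels and parameter values are illustrative assumptions.

```python
# Minimal sigma-point (unscented) Kalman filter sketch.
# State x = [px, py, vx, vy]; measurement z = [px, py] (a GPS fix).
import numpy as np

def sigma_points(x, P, alpha=1e-1, beta=2.0, kappa=0.0):
    """Generate 2n+1 sigma points and their mean/covariance weights."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)            # matrix square root
    pts = np.vstack([x, x + S.T, x - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return pts, wm, wc

def ukf_step(x, P, z, dt, q=0.5, r=3.0):
    """One predict/update cycle; q and r are assumed noise levels."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
    Q = q * np.eye(4)                                # process noise (assumed)
    R = r * np.eye(2)                                # GPS noise (assumed)

    # Predict: propagate sigma points through the motion model.
    pts, wm, wc = sigma_points(x, P)
    pts_f = pts @ F.T
    x_pred = wm @ pts_f
    P_pred = Q + sum(w * np.outer(p - x_pred, p - x_pred)
                     for w, p in zip(wc, pts_f))

    # Update: transform predicted sigma points into measurement space.
    pts2, wm2, wc2 = sigma_points(x_pred, P_pred)
    Z = pts2[:, :2]                                  # measure position only
    z_pred = wm2 @ Z
    Pzz = R + sum(w * np.outer(zi - z_pred, zi - z_pred)
                  for w, zi in zip(wc2, Z))
    Pxz = sum(w * np.outer(p - x_pred, zi - z_pred)
              for w, p, zi in zip(wc2, pts2, Z))
    K = Pxz @ np.linalg.inv(Pzz)
    return x_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T

# Example: one fusion step from a coarse prior and a single GPS fix.
x0, P0 = np.zeros(4), 10.0 * np.eye(4)
x1, P1 = ukf_step(x0, P0, z=np.array([2.0, 1.5]), dt=0.1)
```

Because the update relies only on propagated sigma points rather than analytic Jacobians, the same structure extends directly to nonlinear INS or compass measurement models.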
2

Contributions to space-time adaptive processing based on multichannel autoregressive modelling of interference to improve the detection of small, slow targets in heterogeneous Gaussian and non-Gaussian clutter

Petitjean, Julien 06 December 2010
This thesis deals with space-time adaptive processing in the radar domain. To improve detection performance, this approach maximizes the ratio between the power of the target and that of the interference, namely thermal noise and clutter. Many variants of this algorithm exist; one of them is based on multichannel (vector) autoregressive modelling of the interference. Its main difficulty lies in estimating the autoregressive matrices from training data, and this estimation problem is the focus of our work. Our contribution is twofold. On the one hand, when the thermal noise is assumed negligible compared with the non-Gaussian clutter, the autoregressive matrices are estimated with the fixed-point method, making the algorithm robust to the non-Gaussian distribution of the clutter. On the other hand, we propose a new model of the interference that distinguishes thermal noise from clutter: the clutter is treated as a Gaussian multichannel autoregressive process disturbed by white thermal noise. New techniques for estimating the autoregressive matrices are then proposed. The first is a blind block estimation based on the errors-in-variables approach, so that the estimation of the autoregressive matrices remains robust at low target-to-clutter power ratios (< 5 dB). Recursive methods are then developed, based on Kalman-type filtering (the extended Kalman filter and the sigma-point filters UKF and CDKF) as well as the H∞ filter. A comparative study on synthetic and real data, with Gaussian and non-Gaussian clutter, is carried out to assess the relevance of the different estimators in terms of probability of detection.
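The robustness argument above rests on reweighting each training snapshot by its own estimated power. As an illustration only (this is not the thesis implementation, which applies the idea to the autoregressive matrices), the sketch below shows a generic fixed-point covariance estimator of that kind for secondary radar data; the data model, sizes and iteration parameters are assumptions.

```python
# Fixed-point covariance estimator for heavy-tailed (compound-Gaussian) clutter.
import numpy as np

def fixed_point_covariance(X, n_iter=30, tol=1e-6):
    """X: (N, m) complex training snapshots, one secondary-data vector per row (N >= m)."""
    N, m = X.shape
    M = np.eye(m, dtype=complex)                     # initial guess
    for _ in range(n_iter):
        Minv = np.linalg.inv(M)
        # Quadratic form q_i = x_i^H M^{-1} x_i for every snapshot.
        q = np.real(np.einsum('ij,jk,ik->i', X.conj(), Minv, X))
        # Reweighted sample covariance: (m/N) * sum_i x_i x_i^H / q_i.
        M_new = (m / N) * (X.T * (1.0 / q)) @ X.conj()
        M_new *= m / np.real(np.trace(M_new))        # remove the scale ambiguity
        if np.linalg.norm(M_new - M) < tol * np.linalg.norm(M):
            M = M_new
            break
        M = M_new
    return M

# Example on synthetic heavy-tailed snapshots with random clutter power per sample.
rng = np.random.default_rng(0)
m, N = 4, 64
texture = rng.gamma(shape=1.0, scale=1.0, size=(N, 1))
X = np.sqrt(texture) * (rng.standard_normal((N, m)) + 1j * rng.standard_normal((N, m)))
M_hat = fixed_point_covariance(X)
```

Because each snapshot is normalized by its own quadratic form, strong clutter spikes no longer dominate the estimate, which is why this family of estimators behaves well in heavy-tailed, non-Gaussian clutter.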
