391 |
Fusing Laser and Radar Data for Enhanced Situation Awareness / Fusion av laser- och radardata för ökad omvärldsuppfattning. Eliasson, Emanuel January 2010 (has links)
With increasing traffic intensity, the demands on vehicular safety are higher than ever before. The active safety systems developed in recent years are a response to that. In this master's thesis, sensor fusion is used to combine information from a laser scanner and a microwave radar in order to obtain more information about the surroundings in front of a vehicle. The Extended Kalman Filter has been used to fuse the information from the sensors. The process model consists partly of a Constant Turn model describing the motion of the ego vehicle as well as of a tracked object; these individual motions are then put together in a framework for spatial relationships that describes the relationship between them. Two measurement models, derived from a general sensor model, have been used to describe the two sensors. This filter approach has been used to estimate the position and orientation of an object relative to the ego vehicle, along with its velocity, yaw rate and width. The filter has been implemented and simulated in Matlab. The data recorded and used in this work comes from a scenario in which the ego vehicle, a truck, follows an object, a bus, along a fairly straight path. One important conclusion from this work is that the filter is sensitive to the number of laser beams that hit the object of interest. No qualitative validation has been performed, however.
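A minimal sketch of the kind of EKF loop described above, assuming a simplified constant-turn state [x, y, v, psi, omega], a radar measuring range/bearing, a laser measuring a Cartesian point, and numerical Jacobians; these models are illustrative assumptions, not the thesis's exact process and measurement models.

```python
import numpy as np

def ct_predict(s, T):
    # Constant Turn motion model: state s = [x, y, v, psi, omega]
    x, y, v, psi, w = s
    if abs(w) > 1e-6:
        x += v / w * (np.sin(psi + w * T) - np.sin(psi))
        y += v / w * (-np.cos(psi + w * T) + np.cos(psi))
    else:  # straight-line limit
        x += v * T * np.cos(psi)
        y += v * T * np.sin(psi)
    return np.array([x, y, v, psi + w * T, w])

def jacobian(f, s, eps=1e-5):
    # Numerical Jacobian of f at s (keeps the sketch short and model-agnostic)
    J = np.zeros((len(f(s)), len(s)))
    for i in range(len(s)):
        d = np.zeros(len(s)); d[i] = eps
        J[:, i] = (f(s + d) - f(s - d)) / (2 * eps)
    return J

def ekf_step(s, P, z_radar, z_laser, T, Q, R_radar, R_laser):
    # Prediction with the Constant Turn model
    F = jacobian(lambda u: ct_predict(u, T), s)
    s = ct_predict(s, T)
    P = F @ P @ F.T + Q
    # Sequential updates: radar gives range/bearing, laser a Cartesian point
    for z, h, R in ((z_radar, lambda u: np.array([np.hypot(u[0], u[1]),
                                                  np.arctan2(u[1], u[0])]), R_radar),
                    (z_laser, lambda u: u[:2], R_laser)):
        H = jacobian(h, s)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        s = s + K @ (z - h(s))
        P = (np.eye(len(s)) - K @ H) @ P
    return s, P
```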
|
392 |
Real time estimation and prediction of ship motions using Kalman filtering techniques. January 1982 (has links)
Michael Triantafyllou, Marc Bodson, Michael Athans. / "July, 1982." / Bibliography: p. 118-120. / National Aeronautics and Space Administration and Langley Research Grant NGL-22-009-124
|
393 |
All learning is local: Multi-agent learning in global reward games. Chang, Yu-Han, Ho, Tracey, Kaelbling, Leslie P. 01 1900 (has links)
In large multiagent games, partial observability, coordination, and credit assignment persistently plague attempts to design good learning algorithms. We provide a simple and efficient algorithm that in part uses a linear system to model the world from a single agent’s limited perspective, and takes advantage of Kalman filtering to allow an agent to construct a good training signal and effectively learn a near-optimal policy in a wide variety of settings. A sequence of increasingly complex empirical tests verifies the efficacy of this technique. / Singapore-MIT Alliance (SMA)
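A scalar Kalman-filter sketch of the idea of recovering a personal training signal from a global reward; the random-walk model for the "rest of the world" term and the `own_estimates` input are illustrative assumptions, not the paper's exact formulation.

```python
def personal_signal(global_rewards, own_estimates, q=0.1, r=1.0):
    """Scalar Kalman filter sketch: the observed global reward is modelled as
    the agent's own contribution plus a slowly drifting 'rest of the world'
    term b_t (an assumed model, not necessarily the paper's):
        b_t = b_{t-1} + w_t,     w_t ~ N(0, q)
        g_t = r_t + b_t + v_t,   v_t ~ N(0, r)
    The filtered signal g_t - b_t can then drive an ordinary learner such as
    Q-learning in place of the raw global reward."""
    b, P = 0.0, 1.0
    signal = []
    for g, r_hat in zip(global_rewards, own_estimates):
        P += q                       # predict: random walk on the world term
        K = P / (P + r)              # Kalman gain
        b += K * (g - r_hat - b)     # innovation uses the agent's own estimate
        P *= (1.0 - K)
        signal.append(g - b)         # de-noised training signal for this agent
    return signal
```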
|
394 |
Dynamical Systems and Motion Vision. Heel, Joachim 01 April 1988 (has links)
In this paper we show how the theory of dynamical systems can be employed to solve problems in motion vision. In particular, we develop algorithms for the recovery of dense depth maps and motion parameters using state-space observers or filters. Four different dynamical models of the imaging situation are investigated and corresponding filters/observers derived. The most powerful of these algorithms recovers depth and motion of a general nature using a brightness change constraint assumption. No feature-matching preprocessor is required.
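For reference, a minimal form of the brightness change constraint that such observers build on, written here under the simplifying assumption of a purely translating camera and one common sign convention; the paper's models are more general.

```latex
% Brightness change constraint equation (BCCE) for image brightness E(x, y, t):
E_x u + E_y v + E_t = 0
% For a purely translating camera with velocity (U, V, W) and focal length f,
% a scene point at depth Z induces the image motion
u = \frac{xW - fU}{Z}, \qquad v = \frac{yW - fV}{Z},
% so the BCCE yields a direct per-pixel depth estimate that a filter/observer
% can refine over time:
Z = -\frac{E_x (xW - fU) + E_y (yW - fV)}{E_t}.
```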
|
395 |
Temporal Surface Reconstruction. Heel, Joachim 01 May 1991 (has links)
This thesis investigates the problem of estimating the three-dimensional structure of a scene from a sequence of images. Structure information is recovered from images continuously using shading, motion or other visual mechanisms. A Kalman filter represents structure in a dense depth map. With each new image, the filter first updates the current depth map by a minimum variance estimate that best fits the new image data and the previous estimate. Then the structure estimate is predicted for the next time step by a transformation that accounts for relative camera motion. Experimental evaluation shows the significant improvement in quality and computation time that can be achieved using this technique.
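A per-pixel sketch of the predict/update cycle described above, under strong simplifying assumptions (the prediction handles only translation along the optical axis; the thesis warps the full depth map to account for general camera motion).

```python
import numpy as np

def update_depth(depth, var, meas, meas_var):
    """Minimum-variance (per-pixel Kalman) blend of the predicted depth map
    with a new depth measurement; all arrays share the image shape."""
    gain = var / (var + meas_var)
    depth = depth + gain * (meas - depth)
    var = (1.0 - gain) * var
    return depth, var

def predict_depth(depth, var, tz, process_var=1e-3):
    """Toy prediction step: a forward camera translation tz along the optical
    axis simply decreases depth, and the uncertainty is inflated; a real
    implementation would also warp the map for full 6-DOF camera motion."""
    return depth - tz, var + process_var
```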
|
396 |
Applications and Development of New Algorithms for Displacement Analysis Using InSAR Time Series. Osmanoglu, Batuhan 19 July 2011 (has links)
Time series analysis of Synthetic Aperture Radar Interferometry (InSAR) data has become an important scientific tool for monitoring and measuring the displacement of Earth's surface due to a wide range of phenomena, including earthquakes, volcanoes, landslides, changes in ground water levels, and wetlands. Time series analysis is a product of interferometric phase measurements, which become ambiguous when the observed motion is larger than half of the radar wavelength. Thus, phase observations must first be unwrapped in order to obtain physically meaningful results. Persistent Scatterer Interferometry (PSI), Stanford Method for Persistent Scatterers (StaMPS), Short Baselines Interferometry (SBAS) and Small Temporal Baseline Subset (STBAS) algorithms solve for this ambiguity using a series of spatio-temporal unwrapping algorithms and filters. In this dissertation, I improve upon current phase unwrapping algorithms, and apply the PSI method to study subsidence in Mexico City. PSI was used to obtain unwrapped deformation rates in Mexico City (Chapter 3), where ground water withdrawal in excess of natural recharge causes subsurface, clay-rich sediments to compact. This study is based on 23 satellite SAR scenes acquired between January 2004 and July 2006. Time series analysis of the data reveals a maximum line-of-sight subsidence rate of 300 mm/yr at a high enough resolution that individual subsidence rates for large buildings can be determined. Differential motion and related structural damage along an elevated metro rail was evident from the results. Comparison of PSI subsidence rates with data from permanent GPS stations indicates root mean square (RMS) agreement of 6.9 mm/yr, about the level expected based on joint data uncertainty. The Mexico City results suggest negligible recharge, implying continuing degradation and loss of the aquifer in the third largest metropolitan area in the world. Chapters 4 and 5 illustrate the link between time series analysis and three-dimensional (3-D) phase unwrapping. Chapter 4 focuses on the unwrapping path. Unwrapping algorithms can be divided into two groups, path-dependent and path-independent algorithms. Path-dependent algorithms use local unwrapping functions applied pixel-by-pixel to the dataset. In contrast, path-independent algorithms use global optimization methods such as least squares, and return a unique solution. However, when aliasing and noise are present, path-independent algorithms can underestimate the signal in some areas due to global fitting criteria. Path-dependent algorithms do not underestimate the signal, but, as the name implies, the unwrapping path can affect the result. Comparison between existing path algorithms and a newly developed algorithm based on Fisher information theory was conducted. Results indicate that Fisher information theory does indeed produce lower misfit results for most tested cases. Chapter 5 presents a new time series analysis method based on 3-D unwrapping of SAR data using extended Kalman filters. Existing methods for time series generation using InSAR data employ special filters to combine two-dimensional (2-D) spatial unwrapping with one-dimensional (1-D) temporal unwrapping results. The new method, however, combines observations in azimuth, range and time for repeat pass interferometry. Due to the pixel-by-pixel characteristic of the filter, the unwrapping path is selected based on a quality map. This unwrapping algorithm is the first application of extended Kalman filters to the 3-D unwrapping problem.
Time series analyses of InSAR data are used in a variety of applications with different characteristics. Consequently, it is difficult to develop a single algorithm that can provide optimal results in all cases, given that different algorithms possess a unique set of strengths and weaknesses. Nonetheless, filter-based unwrapping algorithms such as the one presented in this dissertation have the capability of joining multiple observations into a uniform solution, which is becoming an important feature with continuously growing datasets.
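A small illustration of the half-wavelength ambiguity that makes phase unwrapping necessary; the wavelength value is a generic C-band assumption, not tied to the specific sensors used in the dissertation.

```python
import numpy as np

WAVELENGTH = 0.056  # assumed C-band radar wavelength in metres (~5.6 cm)

def wrap(phase):
    """Wrap phase into (-pi, pi], which is all an interferogram provides."""
    return (phase + np.pi) % (2 * np.pi) - np.pi

def los_phase(displacement):
    """Two-way line-of-sight displacement maps to phase via 4*pi*d/lambda,
    so displacements differing by lambda/2 give identical wrapped phase."""
    return wrap(4 * np.pi * displacement / WAVELENGTH)

# The ambiguity the unwrapping algorithms must resolve:
# 5 mm and 5 mm + lambda/2 produce the same wrapped phase.
print(np.isclose(los_phase(0.005), los_phase(0.005 + WAVELENGTH / 2)))
```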
|
397 |
Nonlinear estimation and modeling of noisy time-series by dual Kalman filtering methods. Nelson, Alex Tremain 09 1900 (links) (PDF)
Ph.D. / Electrical and Computer Engineering / Numerous applications require either the estimation or prediction of a noisy time-series. Examples include speech enhancement, economic forecasting, and geophysical modeling. A noisy time-series can be described in terms of a probabilistic model, which accounts for both the deterministic and stochastic components of the dynamics. Such a model can be used with a Kalman filter (or extended Kalman filter) to estimate and predict the time-series from noisy measurements. When the model is unknown, it must be estimated as well; dual estimation refers to the problem of estimating both the time-series, and its underlying probabilistic model, from noisy data. The majority of dual estimation techniques in the literature are for signals described by linear models, and many are restricted to off-line application domains. Using a probabilistic approach to dual estimation, this work unifies many of the approaches in the literature within a common theoretical and algorithmic framework, and extends their capabilities to include sequential dual estimation of both linear and nonlinear signals. The dual Kalman filtering method is developed as a method for minimizing a variety of dual estimation cost functions, and is shown to be an effective general method for estimating the signal, model parameters, and noise variances in both on-line and off-line environments.
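A minimal dual Kalman filter sketch in the spirit of the framework described above, under illustrative assumptions (a scalar AR(2) signal, random-walk parameter dynamics, simplified noise handling); it is not the thesis's exact algorithm.

```python
import numpy as np

def dual_kf(y, q_x=1e-2, q_w=1e-5, r=0.5):
    """One filter estimates an AR(2) signal state from noisy observations y;
    a second filter treats the AR coefficients as a slowly drifting state and
    updates them from the same prediction error (dual estimation)."""
    x = np.zeros(2); Px = np.eye(2)          # signal state [x_k, x_{k-1}]
    a = np.zeros(2); Pa = np.eye(2)          # parameter state [a1, a2]
    H = np.array([[1.0, 0.0]])               # we observe the current sample
    est = []
    for yk in y:
        # --- signal filter: time update with the current parameter estimate
        F = np.array([[a[0], a[1]], [1.0, 0.0]])
        x_prev = x.copy()
        x = F @ x
        Px = F @ Px @ F.T + q_x * np.eye(2)
        # --- parameter filter: random-walk prediction, update from innovation
        Pa = Pa + q_w * np.eye(2)
        Ha = x_prev.reshape(1, 2)            # d(prediction)/d(a) = previous state
        Ka = Pa @ Ha.T / (Ha @ Pa @ Ha.T + r)
        a = a + (Ka * (yk - x[0])).ravel()
        Pa = (np.eye(2) - Ka @ Ha) @ Pa
        # --- signal filter: measurement update
        K = Px @ H.T / (H @ Px @ H.T + r)
        x = x + (K * (yk - x[0])).ravel()
        Px = (np.eye(2) - K @ H) @ Px
        est.append(x[0])
    return np.array(est), a                  # cleaned signal and AR coefficients
```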
|
398 |
Vision based navigation system for autonomous proximity operations: an experimental and analytical study. Du, Ju-Young 17 February 2005 (has links)
This dissertation presents an experimental and analytical study of the Vision Based Navigation system (VisNav), a novel intelligent optical sensor system recently invented at Texas A&M University for autonomous proximity operations. The dissertation is focused on system calibration techniques and navigation algorithms, and is composed of four parts. First, the fundamental hardware and software design configuration of the VisNav system is introduced. Second, system calibration techniques are discussed that should enable accurate VisNav system application, as well as characterization of errors. Third, a new six degree-of-freedom navigation algorithm based on the Gaussian Least Squares Differential Correction is presented that provides geometrical best estimates of position and attitude through batch iterations. Finally, a dynamic state estimation algorithm utilizing the Extended Kalman Filter (EKF) is developed that recursively estimates position, attitude, linear velocities, and angular rates. Moreover, an approach for integrating VisNav measurements with those made by an Inertial Measuring Unit (IMU) is derived. This novel VisNav/IMU integration technique is shown to significantly improve the navigation accuracy and guarantee the robustness of the navigation system in the event of occasional dropout of VisNav data.
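A toy Gauss-Newton batch correction in the spirit of the Gaussian Least Squares Differential Correction, reduced for illustration to 2-D positioning from range measurements to known beacons; VisNav itself uses line-of-sight measurements and estimates full six degree-of-freedom position and attitude, so the measurement model here is an assumption, not the dissertation's.

```python
import numpy as np

def ranges(x, beacons):
    # Predicted ranges from position x to each beacon (simplified stand-in
    # for VisNav's line-of-sight measurement model)
    return np.linalg.norm(beacons - x, axis=1)

def glsdc(x0, beacons, z, iters=10):
    """Iteratively linearize the measurement model about the current estimate
    and solve the least-squares normal equations for a batch correction."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = z - ranges(x, beacons)                        # residuals
        H = (x - beacons) / ranges(x, beacons)[:, None]   # Jacobian d(range)/dx
        dx = np.linalg.lstsq(H, r, rcond=None)[0]
        x = x + dx
        if np.linalg.norm(dx) < 1e-9:
            break
    return x

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
z = ranges(np.array([3.0, 4.0]), beacons)   # noiseless synthetic measurements
print(glsdc([1.0, 1.0], beacons, z))        # converges to approximately [3, 4]
```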
|
399 |
Hull/Mooring/Riser coupled motion simulations of thruster-assisted moored platforms. Ryu, Sangsoo 17 February 2005 (has links)
To reduce the large motion responses of moored platforms in harsh, deepwater environments, a thruster-assisted position mooring system can be applied. By applying such a system, global dynamic responses can be improved in terms of the mooring line/riser top tensions, operational radii, and the top and bottom angles of the production risers. Kalman filtering, as an optimal observer and estimator for stochastic disturbances, is implemented in the developed control algorithm to filter out wave-frequency responses. The performance of thruster-assisted moored offshore platforms was investigated in terms of six-degree-of-freedom motions and mooring line/riser top tensions by means of a fully coupled hull/mooring/riser dynamic analysis program in the time domain and a spectral analysis. The two cases, motion analyses of the platform with and without thrusters, are extensively compared. The numerical examples illustrate that, for deepwater position-keeping, a thruster-assisted moored platform can be an effective solution compared to a conventionally moored platform.
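A sketch of the wave-filtering role a Kalman filter plays in thruster assistance, under a deliberately simple assumed model (scalar position, low-frequency motion as a random walk, wave-frequency motion as a damped oscillator); the dissertation's coupled hull/mooring/riser model is far richer.

```python
import numpy as np

def wave_filter_kf(meas, dt=0.1, w0=0.8, zeta=0.1, q_lf=1e-4, q_wf=1e-2, r=0.5):
    """Estimate the low-frequency (LF) part of a measured position that is the
    sum of LF motion and oscillatory wave-frequency (WF) motion; only the
    filtered LF estimate would be fed back to the thruster controller."""
    # State: [x_lf, x_wf, v_wf]; LF as random walk, WF as damped oscillator
    F = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, dt],
                  [0.0, -w0**2 * dt, 1.0 - 2 * zeta * w0 * dt]])
    H = np.array([[1.0, 1.0, 0.0]])          # sensor sees LF + WF position
    Q = np.diag([q_lf, 0.0, q_wf])
    x = np.zeros(3); P = np.eye(3)
    lf = []
    for z in meas:
        x = F @ x
        P = F @ P @ F.T + Q
        K = P @ H.T / (H @ P @ H.T + r)      # Kalman gain
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(3) - K @ H) @ P
        lf.append(x[0])                       # wave-filtered LF position
    return np.array(lf)
```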
|
400 |
Exploration of robust software sensor techniques with applications in vehicle positioning and bioprocess state estimation. Goffaux, Guillaume 05 February 2010 (has links)
Résumé :
The work carried out in this thesis deals with the development of robust state estimation methods, with two application domains in view.
The first concerns safe positioning in transport. The objective is to provide the vehicle position and velocity in the form of intervals with a high degree of confidence.
The second concerns the design of software sensors for bioprocesses, and in particular the reconstruction of the concentrations of reaction components from a limited number of measurements and a mathematical model describing the dynamic behavior of these components.
The main objective is to design algorithms that can provide acceptable estimates despite the uncertainties arising from imperfect knowledge of the system, such as uncertainties in the model parameters or measurement uncertainties.
In this context, several algorithms have been studied and developed. For vehicle positioning, the research focused on robust H-infinity methods and interval methods.
The H-infinity methods are linear methods that take modeling uncertainty into account and perform a min-max optimization, that is, they minimize a cost function representing the worst-case situation given the parameter uncertainties. The contribution of this work concerns the extension to weakly nonlinear models and the use of a sliding window to cope with asynchronous measurements.
The interval methods developed aim to compute confidence corridors for the position and velocity variables, based on the combination of intervals derived from the sensors on the one hand, and on the joint use of a dynamic and a kinematic model of the vehicle on the other.
For software sensors applied to bioprocesses, three families of methods were studied: particle filtering, interval methods and moving-horizon filtering.
Particle filtering relies on Monte Carlo methods to estimate the conditional probability density of the state given the measurements. One of its main drawbacks is its sensitivity to parameter errors. The method developed applies to bioprocesses and exploits the particular structure of the models to propose a version of the particle filter that is robust to uncertainties in the kinetic parameters.
Interval estimation methods are adapted to the situation where measurements are available at discrete times with a low sampling frequency, by developing appropriate predictors. The use of a bundle of predictors obtained through state transformations, and the coupling of the predictors with frequent reinitializations, improve the estimation results.
Finally, a method based on the moving-horizon filter is studied by performing a min-max optimization: the best initial condition is reconstructed for the worst model. Solutions are also proposed to reduce the computational load.
To conclude, the methods and results obtained constitute a set of improvements in the development of algorithms that are robust to uncertainties. Depending on the application and the chosen objectives, one family of methods or another will be preferred. However, for the sake of robustness, it is often useful to provide estimates in the form of intervals associated with a confidence level reflecting the conditions of the estimation. For this reason, confidence-interval methods are among those best suited to the robustness objectives, and their development will be a topic of future research.
__________________________________________
Abstract :
This thesis work is about the synthesis of robust state estimation methods applied to two different domains. The first area is dedicated to safe positioning in transport. The objective is to compute the vehicle position and velocity as intervals with a high confidence level. The second area is devoted to software sensor design in bioprocess applications. The component concentrations are estimated from a limited number of measurements and a mathematical model describing the dynamical behavior of the system.
The main interest is to design algorithms that achieve good estimation performance while taking into account uncertainties coming from the model parameters and the measurement errors.
In this context, several algorithms have been studied and designed. Concerning vehicle positioning, the research activities focused on robust H-infinity methods and interval estimation methods. The robust H-infinity methods use a linear model that takes model uncertainty into account and perform a min-max optimization, minimizing a cost function that describes the worst-case configuration. The contribution in this domain is an extension to some systems with a nonlinear model and the use of a receding time window to cope with asynchronous data. The developed interval algorithms compute confidence intervals of the vehicle velocity and position. They combine, through union and intersection operations, intervals obtained from the sensors along with kinematic and dynamic vehicle models.
In the context of bioprocesses, three families of state estimation methods have been investigated: particle filtering, interval methods and moving-horizon filtering. Particle filtering is based on Monte Carlo sampling to estimate the posterior probability density function of the state variables given the measurements. A major drawback is its sensitivity to model uncertainties. The proposed algorithm is dedicated to bioprocess applications and takes advantage of the characteristic structure of the models to design an alternative version of the particle filter that is robust to uncertainties in the kinetic terms. Moreover, interval observers are designed in the context of bioprocesses. The objective is to extend the existing methods to discrete-time measurements by developing interval predictors. The use of a bundle of interval predictors, obtained through state transformations, and the coupling of the predictors with reinitializations significantly improve the estimation performance. Finally, a moving-horizon filter is designed, based on a min-max optimization problem: the best initial conditions are generated from the model using the worst parameter configuration. Additional solutions are provided to reduce the computational cost.
To conclude, the developed algorithms and related results can be seen as improvements in the design of estimation methods that are robust to uncertainties. Depending on the application and the objectives, one family may be favored. However, in order to satisfy robustness criteria, it is often preferable to provide estimates as intervals together with a measure of the confidence level describing the conditions of the estimation. That is why the development of confidence interval observers represents an important topic for future investigation.
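A minimal sketch of the interval-predictor idea mentioned above, under strong simplifying assumptions (a scalar batch biomass balance with a bounded but unknown growth rate, Euler discretization); it is an illustration, not the thesis's bundle of coupled predictors.

```python
import numpy as np

def interval_predictor(x0_lo, x0_hi, mu_lo, mu_hi, dt, steps):
    """For dX/dt = mu * X with X >= 0 and an unknown rate bounded by
    mu_lo <= mu <= mu_hi, the trajectories obtained with the extreme rates
    bound every admissible trajectory (the system is cooperative)."""
    lo, hi = [x0_lo], [x0_hi]
    for _ in range(steps):
        lo.append(lo[-1] + dt * mu_lo * lo[-1])   # slowest admissible growth
        hi.append(hi[-1] + dt * mu_hi * hi[-1])   # fastest admissible growth
    return np.array(lo), np.array(hi)

# Infrequent discrete-time measurements (e.g. off-line samples) could then be
# used to reinitialize and tighten the interval, as with coupled predictors.
lo, hi = interval_predictor(0.9, 1.1, 0.15, 0.25, dt=0.1, steps=50)
print(lo[-1], hi[-1])   # guaranteed envelope of the biomass after 5 time units
```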
|