1

Aircraft Flight Data Processing And Parameter Identification With Iterative Extended Kalman Filter/Smoother And Two-Step Estimator

Yu, Qiuli 14 December 2001 (has links)
Aircraft flight test data are processed by optimal estimation programs to estimate the aircraft state trajectory (3 DOF) and to identify unknown parameters, including the constant biases and scale factors of the measurement instrumentation system. Two methods are applied: the iterative extended Kalman filter/smoother and fixed-point smoother (IEKFSFPS) and the two-step estimator (TSE). Models of the aircraft flight dynamics and the measurement instrumentation system are established, the principles of the IEKFSFPS and TSE methods are derived and summarized, and their algorithms are implemented in MATLAB. Several numerical experiments in flight data processing and parameter identification are carried out with the IEKFSFPS and TSE programs, and the simulation results of the two methods are compared and discussed. A combined TSE+IEKFSFPS method is presented and shown to be effective and practical. Figures and tables of the results are presented.
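For orientation only, a minimal sketch of one predict/update cycle of an extended Kalman filter is given below in Python (the thesis's own implementation is in MATLAB). The dynamics f, measurement model h, their Jacobians F_jac and H_jac, and the noise covariances Q and R are hypothetical placeholders, not the 3-DOF aircraft and instrumentation models of the thesis; an iterative filter/smoother of the kind described would repeat such cycles over the data record and then smooth backwards.

    # Hypothetical sketch: one extended Kalman filter predict/update cycle.
    # f, h, F_jac, H_jac, Q, R are placeholders, not the thesis's models.
    import numpy as np

    def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
        """Advance the state estimate x and covariance P by one measurement z."""
        # Predict: propagate through the nonlinear dynamics f.
        x_pred = f(x, u)
        F = F_jac(x, u)                          # Jacobian of f at the estimate
        P_pred = F @ P @ F.T + Q

        # Update: correct with the measurement z via the measurement model h.
        H = H_jac(x_pred)                        # Jacobian of h at the prediction
        S = H @ P_pred @ H.T + R                 # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x_pred + K @ (z - h(x_pred))
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new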
2

Numerical methods for least squares problems with application to data assimilation

Bergou, El Houcine 11 December 2014 (has links)
The Levenberg-Marquardt (LM) algorithm is one of the most popular algorithms for the solution of nonlinear least squares problems. Motivated by the problem structure in data assimilation, we consider in this thesis the extension of the LM algorithm to scenarios where the linearized least squares subproblems, of the form min ||Ax - b||^2, are solved inexactly and/or the gradient model is noisy and accurate only within a certain probability. Under appropriate assumptions, we show that the modified algorithm converges globally and almost surely to a first-order stationary point. Our approach is applied to an instance of variational data assimilation where stochastic models of the gradient are computed by the so-called ensemble Kalman smoother (EnKS). A proof of convergence in L^p of the EnKS to the Kalman smoother, as the ensemble size tends to infinity, is given. We also show the convergence of the LM-EnKS approach, a variant of the LM algorithm with the EnKS as linear solver, to the classical LM algorithm in which the linearized subproblem is solved exactly. The sensitivity of the truncated singular value decomposition method used to solve the linearized subproblems is studied, and we formulate an explicit expression for the condition number of the truncated least squares solution, given in terms of the singular values of A and the Fourier coefficients of b.
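To make the damped Gauss-Newton step concrete, here is a minimal Levenberg-Marquardt sketch in Python for min ||r(x)||^2 with the subproblem solved exactly; the residual r, its Jacobian jac, and the simple accept/reject damping rule are generic textbook placeholder choices, not the LM-EnKS variant or the inexact, probabilistic setting analysed in the thesis.

    # Hypothetical sketch of a basic Levenberg-Marquardt iteration; the damping
    # update is a simple textbook rule, not the thesis's inexact/noisy variant.
    import numpy as np

    def levenberg_marquardt(r, jac, x0, mu=1e-2, tol=1e-8, max_iter=100):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            J, res = jac(x), r(x)
            g = J.T @ res                            # gradient of 0.5 * ||r(x)||^2
            if np.linalg.norm(g) < tol:
                break
            # Damped normal equations: (J^T J + mu I) dx = -J^T r
            dx = np.linalg.solve(J.T @ J + mu * np.eye(x.size), -g)
            if np.sum(r(x + dx) ** 2) < np.sum(res ** 2):
                x, mu = x + dx, mu * 0.5             # accept step, relax damping
            else:
                mu *= 2.0                            # reject step, increase damping
        return x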
3

Robust spatio-temporal latent variable models

Christmas, Jacqueline January 2011 (has links)
Principal Component Analysis (PCA) and Canonical Correlation Analysis (CCA) are widely used mathematical models for decomposing multivariate data. They capture spatial relationships between variables, but ignore any temporal relationships that might exist between observations. Probabilistic PCA (PPCA) and Probabilistic CCA (ProbCCA) are versions of these two models that explain the statistical properties of the observed variables as linear mixtures of an alternative, hypothetical set of hidden, or latent, variables and explicitly model noise. Both the noise and the latent variables are assumed to be Gaussian distributed. This thesis introduces two new models, named PPCA-AR and ProbCCA-AR, that augment PPCA and ProbCCA respectively with autoregressive processes over the latent variables, so that temporal relationships between the observations are also captured. To make PPCA-AR and ProbCCA-AR robust to outliers and able to model leptokurtic data, the Gaussian assumptions are replaced with infinite scale mixtures of Gaussians, using the Student-t distribution. Bayesian inference calculates posterior probability distributions for each of the parameter variables, from which we obtain a measure of confidence in the inference. It avoids the pitfalls associated with the maximum likelihood method: integrating over all possible values of the parameter variables guards against overfitting. For these new models the integrals required for exact Bayesian inference are intractable; instead a method of approximation, the variational Bayesian approach, is used. This enables the use of automatic relevance determination to estimate the model orders. PPCA-AR and ProbCCA-AR can be viewed as linear dynamical systems, so the forward-backward algorithm (the recursion underlying the Baum-Welch algorithm for hidden Markov models) is used as an efficient method for inferring the posterior distributions of the latent variables. The exact algorithm is tractable because Gaussian assumptions are made regarding the distribution of the latent variables; this thesis introduces a variational Bayesian forward-backward algorithm based on Student-t assumptions. The new models are demonstrated on synthetic datasets and on real remote sensing and EEG data.
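As background for the Student-t forward-backward algorithm developed in the thesis, the following is a minimal sketch, under purely Gaussian assumptions, of the forward (Kalman filter) and backward (Rauch-Tung-Striebel smoother) passes for a linear dynamical system x_t = A x_{t-1} + w_t, y_t = C x_t + v_t. The parameters A, C, Q, R, m0 and P0 are assumed known here, whereas the thesis infers model parameters variationally and replaces the Gaussian assumptions with Student-t ones.

    # Hypothetical Gaussian forward-backward sketch (Kalman filter + RTS smoother);
    # A, C, Q, R, m0, P0 are assumed known, unlike in the thesis's setting.
    import numpy as np

    def rts_smoother(ys, A, C, Q, R, m0, P0):
        T = len(ys)
        m_f, P_f, m_p, P_p = [], [], [], []      # filtered and predicted moments
        m, P = m0, P0
        for y in ys:                             # forward (filtering) pass
            m_pred, P_pred = A @ m, A @ P @ A.T + Q
            S = C @ P_pred @ C.T + R             # innovation covariance
            K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
            m = m_pred + K @ (y - C @ m_pred)
            P = P_pred - K @ C @ P_pred
            m_f.append(m)
            P_f.append(P)
            m_p.append(m_pred)
            P_p.append(P_pred)

        m_s, P_s = [None] * T, [None] * T        # backward (smoothing) pass
        m_s[-1], P_s[-1] = m_f[-1], P_f[-1]
        for t in range(T - 2, -1, -1):
            G = P_f[t] @ A.T @ np.linalg.inv(P_p[t + 1])
            m_s[t] = m_f[t] + G @ (m_s[t + 1] - m_p[t + 1])
            P_s[t] = P_f[t] + G @ (P_s[t + 1] - P_p[t + 1]) @ G.T
        return m_s, P_s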
