41

Efficient formulation and implementation of ensemble based methods in data assimilation

Nino Ruiz, Elias David 11 January 2016 (has links)
Ensemble-based methods have gained widespread popularity in the field of data assimilation. An ensemble of model realizations encapsulates information about the error correlations driven by the physics and the dynamics of the numerical model. This information can be used to obtain improved estimates of the state of non-linear dynamical systems such as the atmosphere and/or the ocean. This work develops efficient ensemble-based methods for data assimilation.

A major bottleneck in ensemble Kalman filter (EnKF) implementations is the solution of a linear system at each analysis step. To alleviate it, an EnKF implementation based on an iterative Sherman-Morrison formula is proposed. The rank deficiency of the ensemble covariance matrix is exploited in order to efficiently compute the analysis increments during the assimilation process. The computational effort of the proposed method is comparable to that of the best EnKF implementations found in the current literature. The stability of the new algorithm is proven theoretically, based on the positive definiteness of the data error covariance matrix.

In order to improve the background error covariance matrices in ensemble-based data assimilation, we explore the use of shrinkage covariance matrix estimators computed from ensembles. The resulting filter has attractive features in terms of both memory usage and computational complexity. Numerical results show that it performs better than traditional EnKF formulations.

In geophysical applications the correlations between errors corresponding to distant model components decrease rapidly with distance. We propose a new and efficient implementation of the EnKF based on a modified Cholesky decomposition for inverse covariance matrix estimation. This approach exploits the conditional independence of background errors between distant model components, given a predefined radius of influence. Consequently, sparse estimators of the inverse background error covariance matrix can be obtained, which implies huge memory savings during the assimilation process under realistic weather forecast scenarios. Rigorous error bounds for the resulting estimator in the context of data assimilation are proved: the estimator converges to the true inverse background error covariance matrix when the ensemble size is of the order of the logarithm of the number of model components.

We also explore high-performance implementations of the proposed EnKF algorithms. When the observational operator can be locally approximated for different regions of the domain, efficient parallel implementations of the EnKF formulations presented in this dissertation can be obtained. The parallel computation of the analysis increments makes use of domain decomposition: local analysis increments are computed on (possibly) different processors and then mapped back onto the global domain to recover the global analysis. Tests performed with an atmospheric general circulation model at T-63 resolution, varying the number of processors from 96 to 2,048, reveal that the assimilation time can be reduced several-fold for all the proposed EnKF formulations.

Finally, ensemble-based methods can be used to reformulate strong-constraint four-dimensional variational data assimilation so as to avoid the construction of adjoint models, which can be complicated for operational models. We propose an ensemble-based trust region approach in which the analysis increments are computed in the space spanned by an ensemble of snapshots. The quality of the resulting increments in the ensemble space is compared against the gains in the full space, and decisions on whether to accept or reject solutions rely on trust region updating formulas. Results based on an atmospheric general circulation model at T-42 resolution reveal that this methodology can improve the analysis accuracy. / Ph. D.
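The analysis-step bottleneck described above reduces to solving a linear system whose matrix is the observation-error covariance plus a low-rank ensemble term. As a rough illustration of the iterative Sherman-Morrison idea, here is a minimal NumPy sketch under our own simplifying assumption of a diagonal R, with invented names; the dissertation's actual formulation may differ:

```python
import numpy as np

def sherman_morrison_solve(r_diag, U, B):
    """Solve (diag(r_diag) + U @ U.T) X = B by applying the Sherman-Morrison
    formula once per rank-one term, instead of factorizing the full matrix."""
    m, k = U.shape
    X = B / r_diag[:, None]        # W0^{-1} B, with W0 = R diagonal
    Z = U / r_diag[:, None]        # column j will hold W_j^{-1} u_j as we sweep
    for j in range(k):
        u, z = U[:, j], Z[:, j]
        gamma = 1.0 + u @ z        # Sherman-Morrison denominator
        X -= np.outer(z, (u @ X) / gamma)
        if j + 1 < k:
            Z[:, j + 1:] -= np.outer(z, (u @ Z[:, j + 1:]) / gamma)
    return X

# Sanity check against a dense solve
rng = np.random.default_rng(0)
m, k = 50, 10
r = rng.uniform(0.5, 2.0, m)
U = rng.normal(size=(m, k))
B = rng.normal(size=(m, 3))
X = sherman_morrison_solve(r, U, B)
assert np.allclose((np.diag(r) + U @ U.T) @ X, B)
```

Each pass costs a handful of vector operations per rank-one term, so the solve never forms or factorizes the full m-by-m matrix; this is the sense in which the rank deficiency of the ensemble covariance is exploited.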
42

Data assimilation and uncertainty quantification in cardiovascular biomechanics

Lal, Rajnesh 14 June 2017 (has links)
Cardiovascular blood flow simulations can fill several critical gaps in current clinical capabilities. They offer non-invasive ways to quantify hemodynamics in the heart and major blood vessels for patients with cardiovascular diseases, information that cannot be obtained directly from medical imaging. Patient-specific simulations (incorporating data unique to the individual) enable individualised risk prediction and provide key insights into disease progression and/or the detection of physiologic abnormalities. They also provide means to systematically design and test new medical devices, and serve as predictive tools for personalized surgical treatment planning, thus aiding clinical decision-making. Reliable patient-specific predictions, however, require effective assimilation of medical data. This is usually achieved by solving an inverse hemodynamic problem, where uncertain model parameters are estimated using data assimilation methods, i.e., techniques for merging data and numerical models.

In this thesis, the inverse problem is solved through a data assimilation method using an ensemble Kalman filter (EnKF) for parameter estimation. Given the uncertainties in the measurements, the EnKF solution also comes with a quantification of the uncertainties in the estimated parameters. An EnKF-based parameter estimation algorithm is proposed for patient-specific hemodynamic computations in a schematic arterial network from uncertain clinical measurements. Several in silico scenarios (using synthetic data) are considered to investigate the efficiency of the algorithm. Its usefulness is also assessed using experimental data from an in vitro test rig and real clinical data from a volunteer (patient-specific case). The proposed algorithm is evaluated on arterial networks that include single arteries, bifurcations, a simple human arterial network, and a complex arterial network including the circle of Willis.

The ultimate aim is to perform patient-specific hemodynamic analysis in the network of the circle of Willis. Common hemodynamic properties, such as arterial wall properties (Young's modulus, wall thickness, and viscoelastic coefficient) and terminal boundary parameters (reflection coefficient and Windkessel model parameters), are estimated as the solution to an inverse problem using time series of pressure values and blood flow rates as measurements. It is also demonstrated that a proper reduced-order zero-dimensional compartment model can lead to a simple and reliable estimation of blood flow features in the circle of Willis. The simulations with the estimated parameters capture the target pressure and flow rate waveforms at clinician-specified locations.
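At its core, such an EnKF parameter-estimation step updates an ensemble of parameter guesses using the cross-covariance between parameters and forward-model outputs. A hedged sketch of one analysis step (perturbed-observations variant, generic forward map; all names here are our own, not the thesis's):

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_parameter_update(theta, y_model, y_obs, R):
    """One perturbed-observations EnKF analysis step for parameters.
    theta   : (Ne, p) ensemble of parameter vectors
    y_model : (Ne, m) forward-model outputs for each ensemble member
    y_obs   : (m,)    measured data
    R       : (m, m)  measurement-error covariance
    """
    Ne, m = y_model.shape
    A = theta - theta.mean(axis=0)        # parameter anomalies
    Y = y_model - y_model.mean(axis=0)    # output anomalies
    C_ty = A.T @ Y / (Ne - 1)             # parameter-output cross-covariance
    C_yy = Y.T @ Y / (Ne - 1) + R         # innovation covariance
    K = np.linalg.solve(C_yy, C_ty.T).T   # Kalman gain, shape (p, m)
    d = y_obs + rng.multivariate_normal(np.zeros(m), R, Ne)  # perturbed obs
    return theta + (d - y_model) @ K.T
```

In the setting of the thesis, the forward map would be the hemodynamic model evaluated at the measurement locations (pressure and flow rate time series), and the step would be iterated as measurements are assimilated; the spread of the final ensemble provides the uncertainty quantification for the estimated parameters.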
43

Numerical methods for least squares problems with application to data assimilation

Bergou, El Houcine 11 December 2014 (has links)
The Levenberg-Marquardt (LM) algorithm is one of the most popular algorithms for the solution of nonlinear least squares problems. Motivated by the problem structure in data assimilation, we consider in this thesis the extension of the LM algorithm to scenarios where the linearized least squares subproblems, of the form min ||Ax - b||^2, are solved inexactly and/or the gradient model is noisy and accurate only within a certain probability. Under appropriate assumptions, we show that the modified algorithm converges globally and almost surely to a first-order stationary point. Our approach is applied to an instance in variational data assimilation where stochastic models of the gradient are computed by the so-called ensemble Kalman smoother (EnKS). A proof of convergence in L^p of the EnKS to the Kalman smoother, in the limit of large ensembles, is given. We also show the convergence of the LM-EnKS approach, a variant of the LM algorithm with the EnKS as linear solver, to the classical LM algorithm in which the linearized subproblem is solved exactly. The sensitivity of the truncated singular value decomposition method used to solve the linearized subproblems is studied. We formulate an explicit expression for the condition number of the truncated least squares solution, given in terms of the singular values of A and the Fourier coefficients of b.
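For orientation, the exact (noise-free) LM iteration that the thesis generalizes looks as follows: each step solves a damped Gauss-Newton subproblem and adapts the damping to the observed decrease. A minimal sketch with a simplistic damping rule of our own choosing, not the thesis's update scheme:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1.0, n_iter=50):
    """Basic LM loop for min ||r(x)||^2. The subproblem
    min ||J dx + r||^2 + lam ||dx||^2 is solved exactly here, whereas
    the thesis allows inexact and/or stochastic solves."""
    x = x0
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.linalg.norm(residual(x + dx)) < np.linalg.norm(r):
            x, lam = x + dx, lam * 0.5   # success: accept step, relax damping
        else:
            lam *= 2.0                   # failure: keep x, increase damping
    return x
```

Replacing the exact solve of the damped system with an approximate or ensemble-based one (such as the EnKS) is precisely the regime the convergence analysis in this thesis addresses.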
44

Development Of Deterministic And Stochastic Algorithms For Inverse Problems Of Optical Tomography

Gupta, Saurabh 07 1900 (has links) (PDF)
Stable and computationally efficient reconstruction methodologies are developed to solve two important medical imaging problems which use near-infrared (NIR) light as the source of interrogation, namely, diffuse optical tomography (DOT) and one of its variations, ultrasound-modulated optical tomography (UMOT). Since in both these imaging modalities the system matrices are ill-conditioned owing to insufficient and noisy data, the emphasis in this work is on developing robust stochastic filtering algorithms which can handle measurement noise and also account for inaccuracies in forward models through an appropriate assignment of a process noise.

However, we start with a demonstration of the speeding up of a Gauss-Newton (GN) algorithm for DOT, so that video-rate reconstruction from data recorded on a CCD camera is rendered feasible. Towards this, a computationally efficient linear iterative scheme is proposed to invert the normal equation of a Gauss-Newton scheme, in the context of recovering the absorption coefficient distribution from DOT data, which involves the singular value decomposition (SVD) of the Jacobian matrix appearing in the update equation. This has sped up the inversion enough that a video-rate recovery of a time-evolving absorption coefficient distribution is demonstrated from experimental data. The SVD-based algorithm reduces the number of operations in image reconstruction to O(NN^2) rather than O(NN^3).

The rest of the algorithms are based on different forms of stochastic filtering, wherein we arrive at a mean-square estimate of the parameters by computing their joint probability distributions conditioned on the measurements up to the current instant. The first algorithm developed under this framework uses a bootstrap (BS) particle filter that also employs a quasi-Newton direction within. Since keeping track of the Newton direction necessitates repeated computation of the Jacobian, for all particle locations and all time steps, we devised a faster update of the Jacobian to make the recovery computationally feasible. It is demonstrated, through analytical reasoning and numerical simulations, that the proposed scheme not only accelerates convergence but also yields substantially reduced sample variance in the estimates vis-à-vis the conventional BS filter. Both accelerated convergence and reduced sample variance in the estimates are demonstrated in DOT optical parameter recovery using simulated and experimental data.

In the next demonstration, a derivative-free variant of the pseudo-dynamic ensemble Kalman filter (PD-EnKF) is developed for DOT, wherein the size of the unknown parameter set is reduced by representing the inhomogeneities through simple geometrical shapes, and the optical parameter fields within the inhomogeneities are approximated via an expansion based on circular harmonics (CH) (Fourier basis functions). The EnKF is then used to recover the coefficients in the expansion with both simulated and experimentally obtained photon fluence data on phantoms with inhomogeneous inclusions. The process and measurement equations in the PD-EnKF presently yield a parsimonious representation of the filter variables, which consist of only the Fourier coefficients and the constant scalar parameter value within the inclusion. Using fictitious, low-intensity Wiener noise processes in suitably constructed 'measurement' equations, the filter variables are treated as pseudo-stochastic processes so that their recovery within a stochastic filtering framework is made possible. In our numerical simulations we have considered both elliptical inclusions (two inhomogeneities) and those with more complex shapes (such as an annular ring and a dumbbell) in 2-D objects which are cross-sections of a cylinder, with background absorption and (reduced) scattering coefficients chosen as μ_a = 0.01 mm⁻¹ and μ_s′ = 1.0 mm⁻¹, respectively. We also assume μ_a = 0.02 mm⁻¹ within the inhomogeneity (for the single-inhomogeneity case) and μ_a = 0.02 and 0.03 mm⁻¹ (for the two-inhomogeneities case). The reconstruction results by the PD-EnKF are shown to be consistently superior to those from a deterministic and explicitly regularized Gauss-Newton algorithm. We have also estimated the unknowns from experimentally gathered fluence data and verified the reconstruction by matching the experimental data with the computed data.

The superiority of a modified version of the PD-EnKF, which uses an ensemble square root filter, is also demonstrated in the context of UMOT by recovering the distribution of the mean-squared amplitude of vibration, related to the Young's modulus, in the ultrasound focal volume. Since the ability of a coherent light probe to pick up the overall optical path-length change is limited to changes modulo an optical wavelength, the individual displacements suffered owing to the ultrasound forcing should be very small, say within a few angstroms. The sensitivity of the modulation depth to changes in these small displacements can be very low, especially when the region of interest is far removed from the source and detector. The contrast recovery of the unknown distribution in such cases could be seriously impaired when using a quasi-Newton scheme (e.g., the GN scheme), which crucially makes use of derivative information. The derivative-free, gain-based Monte Carlo filter not only remedies this deficiency, but also provides a regularization-insensitive and computationally competitive alternative to the GN scheme. The inherent ability of a stochastic filter to accommodate the model error owing to a diffusion approximation of correlation transport may be cited as an added advantage in the context of the UMOT inverse problem.

Finally, to speed up the forward solve of the partial differential equation (PDE) modeling photon transport in the context of UMOT, for which the PDE has time as a parameter, a spectral decomposition of the PDE operator is demonstrated. This allows the computation of the time-dependent forward solution in terms of the eigenfunctions of the PDE operator, which speeds up the forward solution and in turn renders the UMOT parameter recovery computationally efficient.
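For reference, the bootstrap particle-filter cycle that the first algorithm builds upon has this generic textbook form (the quasi-Newton direction the thesis adds inside the proposal is not shown; the function names and interfaces are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf_step(particles, propagate, likelihood, y):
    """One bootstrap-filter cycle: push particles through the process model,
    weight them by the data likelihood p(y | x), and resample."""
    particles = propagate(particles)      # process model + process noise
    w = likelihood(y, particles)          # unnormalized importance weights
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]                 # equally weighted resampled cloud
```

The mean-square estimate mentioned above is simply the weighted (or post-resampling) particle mean, i.e., the Monte Carlo approximation of the conditional expectation given the measurements up to the current instant.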
45

Numerical Methods for Bayesian Inference in Hilbert Spaces

Sprungk, Björn 15 February 2018 (has links) (PDF)
Bayesian inference occurs when prior knowledge about uncertain parameters in mathematical models is merged with new observational data related to the model outcome. In this thesis we focus on models given by partial differential equations where the uncertain parameters are coefficient functions belonging to infinite-dimensional function spaces. The result of the Bayesian inference is then a well-defined posterior probability measure on a function space, describing the updated knowledge about the uncertain coefficient. For decision making and post-processing it is often required to sample or integrate with respect to the posterior measure. This calls for sampling or numerical methods which are suitable for infinite-dimensional spaces. In this work we focus on Kalman filter techniques based on ensembles or polynomial chaos expansions, as well as Markov chain Monte Carlo methods. We analyze the Kalman filters by proving convergence and discussing their applicability in the context of Bayesian inference. Moreover, we develop and study an improved dimension-independent Metropolis-Hastings algorithm. Here, we show geometric ergodicity of the new method by a spectral gap approach, using a novel comparison result for spectral gaps. Besides that, we observe and further analyze the robustness of the proposed algorithm with respect to decreasing observational noise; this robustness is another desirable property of numerical methods for Bayesian inference. The work concludes with the application of the discussed methods to a real-world groundwater flow problem, illustrating, in particular, the Bayesian approach to uncertainty quantification in practice.
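The best-known dimension-independent Metropolis-Hastings method is the preconditioned Crank-Nicolson (pCN) proposal, which the improved algorithm in this thesis builds upon. A minimal sketch of plain pCN after discretization (the thesis's improved variant is not reproduced here; names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def pcn_mh(log_likelihood, sample_prior, u0, beta=0.2, n_steps=5000):
    """Preconditioned Crank-Nicolson Metropolis-Hastings for a posterior
    with a Gaussian prior. The acceptance ratio involves only the
    likelihood, which is what makes the sampler well defined (and its
    acceptance rate stable) as the discretization dimension grows."""
    u, ll = u0, log_likelihood(u0)
    chain = []
    for _ in range(n_steps):
        v = np.sqrt(1.0 - beta**2) * u + beta * sample_prior()  # pCN proposal
        ll_v = log_likelihood(v)
        if np.log(rng.random()) < ll_v - ll:   # accept with prob min(1, e^{ll_v-ll})
            u, ll = v, ll_v
        chain.append(u)
    return np.array(chain)
```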
47

Langevinized Ensemble Kalman Filter for Large-Scale Dynamic Systems

Peiyi Zhang (11166777) 26 July 2021 (has links)
The Ensemble Kalman filter (EnKF) has achieved great successes in data assimilation in atmospheric and oceanic sciences, but its failure to converge to the right filtering distribution precludes its use for uncertainty quantification. Other existing methods, such as the particle filter or the sequential importance sampler, do not scale well to the dimension of the system and the sample size of the datasets. In this dissertation, we address these difficulties in a coherent way.

In the first part of the dissertation, we reformulate the EnKF under the framework of Langevin dynamics, which leads to a new particle filtering algorithm, the so-called Langevinized EnKF (LEnKF). The LEnKF algorithm inherits the forecast-analysis procedure from the EnKF and the use of mini-batch data from stochastic gradient Langevin-type algorithms, which makes it scalable with respect to both the dimension and the sample size. We prove that the LEnKF converges to the right filtering distribution in Wasserstein distance under the big-data scenario in which the dynamic system consists of a large number of stages and has a large number of samples observed at each stage, so it can be used for uncertainty quantification. We reformulate the Bayesian inverse problem as a dynamic state estimation problem based on the techniques of subsampling and the Langevin diffusion process. We illustrate the performance of the LEnKF using a variety of examples, including the Lorenz-96 model, high-dimensional variable selection, Bayesian deep learning, and Long Short-Term Memory (LSTM) network learning with dynamic data.

In the second part of the dissertation, we focus on two extensions of the LEnKF algorithm. Like the EnKF, the LEnKF algorithm was developed for Gaussian dynamic systems containing no unknown parameters. We propose the so-called stochastic approximation LEnKF (SA-LEnKF) for simultaneously estimating the states and parameters of dynamic systems, where the parameters are estimated on the fly, based on the state variables simulated by the LEnKF, under the framework of stochastic approximation. Under mild conditions, we prove the consistency of the resulting parameter estimator and the ergodicity of the SA-LEnKF. For non-Gaussian dynamic systems, we extend the LEnKF algorithm (Extended LEnKF) by introducing a latent Gaussian measurement variable to the dynamic system. These two extensions inherit the scalability of the LEnKF algorithm with respect to dimension and sample size. The numerical results indicate that they outperform other existing methods in both state/parameter estimation and uncertainty quantification.
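The Langevin ingredient grafted onto the EnKF forecast-analysis cycle is a stochastic-gradient Langevin update. A bare-bones sketch of that building block (generic SGLD on a mini-batch gradient estimate, not the full LEnKF; names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def sgld_step(theta, grad_log_post, step):
    """One stochastic gradient Langevin dynamics update: a noisy gradient
    ascent step on the log-posterior (the gradient is typically estimated
    from a mini-batch) plus injected Gaussian noise scaled so that, for a
    decreasing step size, the chain targets the posterior."""
    noise = rng.normal(size=theta.shape)
    return theta + 0.5 * step * grad_log_post(theta) + np.sqrt(step) * noise
```

The LEnKF replaces the plain gradient step with the EnKF-style forecast-analysis update while keeping the mini-batch subsampling, which is what gives it scalability in both dimension and sample size.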
