1 |
Covariance and Gramian matrices in control and systems theory. Fernando, Kurukulasuriya Vicenza. January 1983.
Covariance and Gramian matrices in control and systems theory and pattern recognition are studied in the context of reducing the dimensionality, and hence the complexity, of large-scale systems. This is achieved by removing redundant or 'almost' redundant information contained in the covariance and Gramian matrices. The Karhunen-Loève expansion (principal component analysis) and its extensions, together with the singular value decomposition of matrices, provide the framework for the work presented in the thesis. The results given for linear dynamical systems are based on controllability and observability Gramians, and some new developments in singular perturbational analysis are also presented.
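For a small linear system, the controllability and observability Gramians that underpin this kind of reduction can be computed directly. The sketch below (a minimal illustration with a hypothetical two-state system of my own choosing, not one from the thesis) solves the two Lyapunov equations via Kronecker products and reads off the Hankel singular values, whose decay flags the 'almost redundant' states a reduced model can discard.

```python
import numpy as np

def lyap(A, Q):
    """Solve A X + X A^T + Q = 0 via a Kronecker-product linear system."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    x = np.linalg.solve(K, -Q.reshape(-1, order="F"))
    return x.reshape((n, n), order="F")

# Hypothetical stable two-state system (illustrative values only)
A = np.array([[-1.0, 0.5],
              [ 0.0, -3.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

P = lyap(A, B @ B.T)      # controllability Gramian: A P + P A^T + B B^T = 0
Q = lyap(A.T, C.T @ C)    # observability Gramian:  A^T Q + Q A + C^T C = 0

# Hankel singular values; small values flag 'almost redundant' states
hsv = np.sqrt(np.sort(np.linalg.eigvals(P @ Q).real)[::-1])
```

The Kronecker-product route is only practical for small state dimensions, but it makes the structure of the Lyapunov equations explicit.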
2 |
Data assimilation for parameter estimation in coastal ocean hydrodynamics modeling. Mayo, Talea Lashea. 25 February 2014.
Coastal ocean models are used for a vast array of applications. These
applications include modeling tidal and coastal flows, waves, and extreme
events, such as tsunamis and hurricane storm surges. Tidal and coastal flows are the primary application of this work as they play a critical role in many practical research areas such as contaminant transport, navigation through intracoastal waterways, development of coastal structures (e.g. bridges, docks,
and breakwaters), commercial fishing, and planning and execution of military operations in marine environments, in addition to recreational aquatic activities. Coastal ocean models are used to determine tidal amplitudes, time intervals between low and high tide, and the extent of the ebb and flow of tidal waters, often at specific locations of interest. However, modeling tidal flows can be quite complex, as factors such as the configuration of the coastline,
water depth, ocean floor topography, and hydrographic and meteorological
impacts can have significant effects and must all be considered.
Water levels and currents in the coastal ocean can be modeled by solving the shallow water equations. The shallow water equations contain many
parameters, and the accurate estimation of both tides and storm surge is dependent on the accuracy of their specification. Of particular importance are the parameters used to define the bottom stress in the domain of interest [50]. These parameters are often heterogeneous across the seabed of the domain. Their values cannot be measured directly and relevant data can be expensive
and difficult to obtain. The parameter values must often be inferred and the
estimates are often inaccurate, or contain a high degree of uncertainty [28].
In addition, as is the case with many numerical models, coastal ocean
models have various other sources of uncertainty, including the approximate
physics, numerical discretization, and uncertain boundary and initial conditions. Quantifying and reducing these uncertainties is critical to providing more reliable and robust storm surge predictions. It is also important to reduce the resulting error in the forecast of the model state as much as possible.
The accuracy of coastal ocean models can be improved using data assimilation methods. In general, statistical data assimilation methods are used to estimate the state of a model given both the original model output and observed data. A major advantage of statistical data assimilation methods is
that they can often be implemented non-intrusively, making them relatively straightforward to apply. They also provide estimates of the uncertainty in the predicted model state. Unfortunately, with the exception of the estimation of initial conditions, they do not contribute to the information contained in the model. The model error that results from uncertain parameters is reduced, but information about the parameters themselves remains unknown.
Thus, the other commonly used approach to reducing model error is parameter estimation. Historically, model parameters such as the bottom stress terms have been estimated using variational methods. Variational methods formulate a cost functional that penalizes the difference between the modeled and observed state, and then minimize this functional over the unknown parameters. Though variational methods are an effective approach to solving inverse problems, they can be computationally intensive and difficult to code as they generally require the development of an adjoint model. They also are not formulated to estimate parameters in real time, e.g. as a hurricane approaches landfall. The goal of this research is to estimate parameters defining
the bottom stress terms using statistical data assimilation methods.
In this work, we use a novel approach to estimate the bottom stress
terms in the shallow water equations, which we solve numerically using the
Advanced Circulation (ADCIRC) model. In this model, a modified form of the 2-D shallow water equations is discretized in space by a continuous Galerkin finite element method, and in time by finite differencing. We use the Manning’s n formulation to represent the bottom stress terms in the model, and estimate various fields of Manning’s n coefficients by assimilating synthetic water elevation data using a square root Kalman filter. We estimate three types of fields
defined on both an idealized inlet and a more realistic spatial domain. For the
first field, a Manning’s n coefficient is given a constant value over the entire domain. For the second, we let the Manning’s n coefficient take two distinct values, letting one define the bottom stress in the deeper water of the domain and the other define the bottom stress in the shallower region. And finally, because bottom stress terms are generally spatially varying parameters, we consider the third field as a realization of a stochastic process. We represent a realization of the process using a Karhunen-Loève expansion, and then seek to estimate the coefficients of the expansion.
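A truncated Karhunen-Loève representation of a spatially varying field such as Manning's n can be sketched as follows. The exponential covariance, its magnitude, the correlation length, and the mean value below are illustrative assumptions, not values from the dissertation; in an assimilation setting, the standard-normal coefficients xi would be the quantities being estimated.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D grid and an assumed exponential covariance (magnitude and
# correlation length are hypothetical choices for illustration)
x = np.linspace(0.0, 1.0, 200)
sigma, corr_len = 0.01, 0.3
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Discrete KL expansion: eigen-decomposition of the covariance matrix
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# Truncated field: mean + sum_k sqrt(lambda_k) * xi_k * phi_k(x);
# the coefficients xi_k are what an estimator would target
n_modes = 10
mean_n = 0.025            # hypothetical mean Manning's n
xi = rng.standard_normal(n_modes)
field = mean_n + eigvec[:, :n_modes] @ (np.sqrt(eigval[:n_modes]) * xi)
```

Because the eigenvalues of a smooth covariance decay quickly, a handful of modes captures most of the field's variance, which is what makes estimating the xi coefficients tractable.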
We perform several observation system simulation experiments, and
find that we are able to accurately estimate the bottom stress terms in most of our test cases. Additionally, we are able to improve forecasts of the model state in every instance. The results of this study show that statistical data assimilation is a promising approach to parameter estimation.
3 |
Stochastic Modeling of the Equilibrium Speed-Density Relationship. Wang, Haizhong. 01 September 2010.
The fundamental diagram, a graphical representation of the relation among traffic flow, speed, and density, has been the foundation of traffic flow theory and transportation engineering for many years. For example, the analysis of traffic dynamics relies on input from this fundamental diagram to find when and where congestion builds up and how it dissipates; traffic engineers use a fundamental diagram to determine how well a highway facility serves its users and how to plan for new facilities in case of capacity expansion. Underlying a fundamental diagram is the relation between traffic speed and density which roughly corresponds to drivers’ speed choices under varying car-following distances. First rigorously documented by Greenshields some seventy-five years ago, such a relation has been explored in many follow-up studies, but these attempts are dominantly deterministic in nature, i.e. they model traffic speed as a function of traffic density. Though these functional speed-density models are able to coarsely explain how traffic slows down as more vehicles are crowded on highways, empirical observations show a wide scattering of traffic speeds around the values predicted by these models. In addition, functional speed-density models lead to deterministic predictions of traffic dynamics, which lack the power to address the uncertainty brought about by random factors in traffic flow. Therefore, it appears more appropriate to view the speed-density relation as a stochastic process, in which a certain density level gives rise not only to an average value of traffic speed but also to its variation because of the randomness of drivers’ speed choices. The objective of this dissertation is to develop such a stochastic speed-density model to better represent empirical observations and provide a basis for a probabilistic prediction of traffic dynamics. It would be ideal if such a model were formulated with both mathematical elegance and empirical accuracy.
Mathematical elegance requires a single equation (single-regime) with physically meaningful parameters that is easy to implement. The interpretation of empirical accuracy is twofold: on the one hand, the mean of the stochastic speed-density model should statistically match the average behavior of the empirical equilibrium speed-density observations. On the other hand, the magnitude of traffic speed variance is controlled by the variance function, which is dependent on the response. Ultimately, it is expected that the stochastic speed-density model is able to reproduce the wide-scattering speed-density relation observed at a highway segment after being calibrated by a set of local parameters and, in return, the model can be used to perform probabilistic prediction of traffic dynamics at this location. The emphasis of this dissertation is on the former (i.e. the development, calibration, and validation of the stochastic speed-density model) with a few numerical applications of the model to demonstrate the latter (i.e. probabilistic prediction). Following the seminal Greenshields model, a great variety of deterministic speed-density models have been proposed to mathematically represent the empirical speed-density observations which underlie the fundamental diagram. The existing speed-density models are deterministic, striving to balance two competing goals: mathematical elegance and empirical accuracy. As the latest development of such a pursuit, we show that a stochastic speed-density model can be developed by discretizing a random traffic speed process using the Karhunen-Loève expansion. The stochastic speed-density relationship model is largely motivated by the prevalent randomness exhibited in empirical observations that mainly comes from drivers, vehicles, roads, and environmental conditions.
In a general setting, the proposed stochastic speed-density model has two components: deterministic and stochastic. For the deterministic component, we propose to use a family of logistic speed-density models to track the average trend of empirical observations. In particular, the five-parameter logistic speed-density model arises as a natural candidate due to the following considerations: (1) The shape of the five-parameter logistic speed-density model can be adjusted by its physically meaningful parameters to match the average behavior of empirical observations. Statistically, the average behavior is modeled by the mean of empirical observations. (2) Three-parameter and four-parameter logistic speed-density models can be obtained by reducing the shape or scale parameter in the five-parameter model, but at the cost of empirical accuracy. (3) The five-parameter model yields the best accuracy compared to the three-parameter and four-parameter models. The magnitude of the stochastic component is dominated by the variance of traffic speeds indexed by traffic density. The empirical traffic speed variance increases as density increases to around 25-30 veh/km, then starts decreasing as traffic density gets larger. It has been verified by empirical evidence that traffic speed variation shows a parabolic shape, which makes the proposed variance function a suitable form for modeling it. The variance function is dependent on the logistic speed-density relationship with varying model parameters. A detailed analysis of empirical traffic speed variance can be found in Chapter 6. Modeling results show that, by taking care of second-order statistics (i.e., variance and correlation), the proposed stochastic speed-density model is suitable for describing the observed phenomenon as well as for matching the empirical data. Following these results, a stochastic fundamental diagram of traffic flow can be established.
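A sketch of the deterministic component is given below. The functional form follows the common five-parameter logistic family (a free-flow speed vf, a lower speed bound vb, a turning-point density kt, and scale/shape parameters theta1, theta2); both the exact parameterization and all numerical values are hypothetical illustrations rather than the calibrated model of the dissertation.

```python
import numpy as np

def logistic5(k, vf, vb, kt, theta1, theta2):
    """Five-parameter logistic mean speed as a function of density k."""
    return vb + (vf - vb) / (1.0 + np.exp((k - kt) / theta1)) ** theta2

k = np.linspace(0.0, 150.0, 301)   # density (veh/km)
v = logistic5(k, vf=110.0, vb=5.0, kt=25.0, theta1=10.0, theta2=1.0)
```

The curve decreases monotonically from near vf in free flow to vb in heavy congestion; calibration would fit the five parameters to the mean of local speed-density observations.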
On the application side, the stochastic speed-density relationship model can potentially be used for real-time on-line prediction and to explain phenomena in a similar manner. This enables dynamic control and management systems to anticipate problems before they occur rather than simply reacting to existing conditions. Finally, we will summarize our findings and discuss our future research directions.
4 |
Practical Analysis Tools for Structures Subjected to Flow-Induced and Non-Stationary Random Loads. Scott, Karen Mary Louise. 14 July 2011.
There is a need to investigate and improve upon existing methods to predict response of sensors due to flow-induced vibrations in a pipe flow. The aim was to develop a tool which would enable an engineer to quickly evaluate the suitability of a particular design for a certain pipe flow application, without sacrificing fidelity. The primary methods of simple sensor response prediction, found in guides published by the American Society of Mechanical Engineers (ASME), were found to be lacking in several key areas, which prompted development of the tool described herein. A particular limitation of the existing guidelines concerns complex stochastic stationary and non-stationary modeling and required much further study, thereby providing direction for the second portion of this body of work.
A tool for response prediction of fluid-induced vibrations of sensors was developed which allowed for analysis of low aspect ratio sensors. Results from the tool were compared to experimental lift and drag data, recorded for a range of flow velocities. The model was found to perform well over the majority of the velocity range showing superiority in prediction of response as compared to ASME guidelines. The tool was then applied to a design problem given by an industrial partner, showing several of their designs to be inadequate for the proposed flow regime. This immediate identification of unsuitable designs no doubt saved significant time in the product development process.
Work to investigate stochastic modeling in structural dynamics was undertaken to understand the reasons for the limitations found in fluid-structure interaction models. A particular weakness, non-stationary forcing, was found to be the most lacking in terms of use in the design stage of structures. A method was developed using the Karhunen-Loève expansion as its base to close the gap between prohibitively simple (stationary only) models and those which require too much computation time. Models were developed from SDOF through continuous systems and shown to perform well at each stage. Further work is needed in this area to bring this work full circle such that the lessons learned can improve design-level turbulent response calculations.
5 |
New Algorithms for Uncertainty Quantification and Nonlinear Estimation of Stochastic Dynamical Systems. Dutta, Parikshit. August 2011.
Recently there has been growing interest in characterizing and reducing uncertainty in stochastic dynamical systems. This drive arises out of the need to manage uncertainty in complex, high-dimensional physical systems. Traditional techniques of uncertainty quantification (UQ) use local linearization of dynamics and assume Gaussian probability evolution. But several difficulties arise when these UQ models are applied to real-world problems, which are generally nonlinear in nature. Hence, to improve performance, robust algorithms which can work efficiently in a nonlinear, non-Gaussian setting are desired.
The main focus of this dissertation is to develop UQ algorithms for nonlinear systems, where uncertainty evolves in a non-Gaussian manner. The algorithms developed
are then applied to state estimation of real-world systems. The first part of the dissertation focuses on using polynomial chaos (PC) for uncertainty propagation, and then achieving the estimation task by the use of higher order moment updates and Bayes' rule. The second part mainly deals with Frobenius-Perron (FP) operator theory, how it can be used to propagate uncertainty in dynamical systems, and then using it to estimate states by the use of Bayesian updates. Finally, a method to represent the process noise in a stochastic dynamical system using a finite-term Karhunen-Loève (KL) expansion is proposed. The uncertainty in the resulting approximated system is propagated using the FP operator.
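A finite-term KL expansion of a noise process can be illustrated on the best-known closed-form case, Brownian motion on [0, 1], whose KL eigenfunctions are sines; this is a textbook example, not the specific noise model of the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 501)
n_terms = 100

# KL expansion of Brownian motion on [0, 1]:
#   W(t) = sum_k xi_k * sqrt(2) * sin((k - 1/2) pi t) / ((k - 1/2) pi)
k = np.arange(1, n_terms + 1)
freq = (k - 0.5) * np.pi
phi = np.sqrt(2.0) * np.sin(np.outer(t, freq))   # eigenfunctions on the grid
xi = rng.standard_normal(n_terms)
W = phi @ (xi / freq)                            # one truncated sample path

# Variance of the truncated expansion; it converges to Var W(t) = t
var_t = (phi**2 / freq**2).sum(axis=1)
```

With 100 terms the truncated variance is already within about one percent of the exact value t, which is the sense in which a finite KL series can stand in for the full noise process.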
The performance of the PC-based estimation algorithms was compared with the extended Kalman filter (EKF) and unscented Kalman filter (UKF), and the FP operator based techniques were compared with particle filters, when applied to a Duffing oscillator system and hypersonic reentry of a vehicle in the atmosphere of Mars. It was found that the accuracy of the PC-based estimators is higher than EKF or UKF, and the FP operator based estimators were computationally superior to the particle
filtering algorithms.
6 |
Curve Estimation and Signal Discrimination in Spatial Problems. Rau, Christian (rau@maths.anu.edu.au). January 2003.
In many instances arising prominently, but not exclusively, in imaging problems, it is important to condense the salient information so as to obtain a low-dimensional approximant of the data. This thesis is concerned with two basic situations which call for such a dimension reduction. The first of these is the statistical recovery of smooth edges in regression and density surfaces. The edges are understood to be contiguous curves, although they are allowed to meander almost arbitrarily through the plane, and may even split at a finite number of points to yield an edge graph. A novel locally-parametric nonparametric method is proposed which enjoys the benefit of being relatively easy to implement via a `tracking' approach. These topics are discussed in Chapters 2 and 3, with pertaining background material being given in the Appendix. In Chapter 4 we construct concomitant confidence bands for this estimator, which have asymptotically correct coverage probability. The construction can be
likened to only a few existing approaches, and may thus be considered as our main contribution.
Chapter 5 discusses numerical issues pertaining to the edge and confidence band estimators of Chapters 2-4. Connections are drawn to popular topics which originated in the fields of computer vision and signal processing, and which surround edge detection. These connections are exploited so as to obtain greater robustness of the likelihood estimator, for example in the presence of sharp corners.
Chapter 6 addresses a dimension reduction problem for spatial data where the ultimate objective of the analysis is the discrimination of these data into one of a few pre-specified groups. In the dimension reduction step, an instrumental role is played by the recently
developed methodology of functional data analysis. Relatively standard non-linear image processing techniques, as well as wavelet shrinkage, are used prior to this step. A case study for remotely-sensed navigation radar data exemplifies the methodology of Chapter 6.
7 |
Bayesian Uncertainty Quantification for Large Scale Spatial Inverse Problems. Mondal, Anirban. August 2011.
We considered a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a high-dimensional spatial field. The Bayesian approach contains a
natural mechanism for regularization in the form of prior information, can incorporate information from heterogeneous sources and provides a quantitative assessment of uncertainty in the inverse solution. The Bayesian setting casts the inverse solution as a posterior probability distribution over the model parameters. The Karhunen-Loève expansion and the Discrete Cosine Transform were used for dimension reduction of the
random spatial field. Furthermore, we used a hierarchical Bayes model to inject multiscale data into the modeling framework. In this Bayesian framework, we have shown that this inverse problem is well-posed by proving that the posterior measure is Lipschitz continuous with respect to the data in the total variation norm. The need for multiple evaluations of the forward model on a high-dimensional spatial field (e.g. in the context of MCMC), together with the high dimensionality of the posterior, results in many computational challenges. We developed a two-stage reversible jump MCMC method which has the ability to screen out bad proposals in the first, inexpensive stage. Channelized spatial fields were represented by facies boundaries and
variogram-based spatial fields within each facies. Using a level-set based approach, the shape of the channel boundaries was updated with dynamic data using a Bayesian
hierarchical model where the number of points representing the channel boundaries is assumed to be unknown. Statistical emulators on a large scale spatial field were introduced to avoid the expensive likelihood calculation, which contains the forward simulator, at each iteration of the MCMC step. To build the emulator, the original spatial field was represented by a low dimensional parameterization using Discrete Cosine Transform (DCT), then the Bayesian approach to multivariate adaptive regression spline (BMARS) was used to emulate the simulator. Various numerical results were presented by analyzing simulated as well as real data.
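The DCT-based low-dimensional parameterization used before emulation can be sketched in one dimension: build an orthonormal DCT-II matrix, keep a handful of low-frequency coefficients of a smooth field, and reconstruct from them. The test field and the number of retained coefficients below are illustrative choices, not values from the dissertation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are the cosine basis vectors)."""
    j = np.arange(n)
    D = np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    D[0] *= np.sqrt(1.0 / n)
    D[1:] *= np.sqrt(2.0 / n)
    return D

n = 128
x = np.linspace(0.0, 1.0, n)
field = np.exp(-((x - 0.5) / 0.15) ** 2)    # smooth illustrative field

D = dct_matrix(n)
coeffs = D @ field                          # forward transform
kept = 16                                   # low-dimensional parameterization
coeffs_trunc = np.where(np.arange(n) < kept, coeffs, 0.0)
recon = D.T @ coeffs_trunc                  # reconstruction from 16 numbers
```

Because smooth fields have rapidly decaying DCT coefficients, the 16 retained numbers reconstruct the 128-point field almost exactly, which is what makes them a practical input space for an emulator.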
8 |
Uncertainty Quantification in Dynamic Problems With Large Uncertainties. Mulani, Sameer B. 13 September 2006.
This dissertation investigates uncertainty quantification in dynamic problems. The Advanced Mean Value (AMV) method is used to calculate probabilistic sound power and the sensitivity of elastically supported panels with small uncertainty (coefficient of variation). Sound power calculations are done using Finite Element Method (FEM) and Boundary Element Method (BEM). The sensitivities of the sound power are calculated through direct differentiation of the FEM/BEM/AMV equations. The results are compared with Monte Carlo simulation (MCS). An improved method is developed using AMV, metamodel, and MCS. This new technique is applied to calculate sound power of a composite panel using FEM and Rayleigh Integral. The proposed methodology shows considerable improvement both in terms of accuracy and computational efficiency.
In systems with large uncertainties, the above approach does not work. Two Spectral Stochastic Finite Element Method (SSFEM) algorithms are developed to solve stochastic eigenvalue problems using polynomial chaos. Presently, the approaches are restricted to problems with real and distinct eigenvalues. In both approaches, the system uncertainties are modeled by Wiener-Askey orthogonal polynomial functions. Galerkin projection is applied in the probability space to minimize the weighted residual of the error of the governing equation. The first algorithm is based on the inverse iteration method. A modification is suggested to calculate higher eigenvalues and eigenvectors. The above algorithm is applied to both discrete and continuous systems. In continuous systems, the uncertainties are modeled as Gaussian processes using the Karhunen-Loève (KL) expansion. The second algorithm is based on the implicit polynomial iteration method. This algorithm is found to be more efficient when applied to discrete systems. However, the application of the algorithm to continuous systems results in ill-conditioned system matrices, which seriously limit its application.
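The Wiener-Askey (Hermite) polynomial chaos machinery can be illustrated on a classical closed-form case: the lognormal variable Y = exp(xi) with xi ~ N(0, 1), whose probabilists'-Hermite PC coefficients are e^{1/2}/k!. This is a standard textbook example, not the stochastic eigenvalue problem of the dissertation.

```python
import numpy as np
from math import exp, factorial

def hermite_he(k, x):
    """Probabilists' Hermite polynomial He_k via the three-term recurrence."""
    h0, h1 = np.ones_like(x), x
    if k == 0:
        return h0
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0
    return h1

# PC coefficients of Y = exp(xi), xi ~ N(0, 1): y_k = e^{1/2} / k!
order = 8
coeff = [exp(0.5) / factorial(k) for k in range(order + 1)]

# The truncated expansion reproduces exp(x) closely on a moderate range
x = np.linspace(-2.0, 2.0, 9)
approx = sum(c * hermite_he(k, x) for k, c in enumerate(coeff))
```

Since E[He_j He_k] = k! on the diagonal and zero otherwise, the mean of the truncated expansion is coeff[0] = e^{1/2} and its variance is the weighted sum of squared coefficients, converging to the exact lognormal variance e(e - 1).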
Lastly, an algorithm to find the basis random variables of the KL expansion for non-Gaussian processes is developed. The basis random variables are obtained via a nonlinear transformation of the marginal cumulative distribution function using the standard deviation. Results are obtained for three known skewed distributions: Log-Normal, Beta, and Exponential. In all cases, it is found that the proposed algorithm matches the known solutions very well and can be applied to solve non-Gaussian processes using SSFEM.
9 |
Numerical Complexity Analysis of Weak Approximation of Stochastic Differential Equations. Tempone Olariaga, Raul. January 2002.
The thesis consists of four papers on numerical complexity analysis of weak approximation of ordinary and partial stochastic differential equations, including illustrative numerical examples. Here by numerical complexity we mean the computational work needed by a numerical method to solve a problem with a given accuracy. This notion offers a way to understand the efficiency of different numerical methods. The first paper develops new expansions of the weak computational error for Ito stochastic differential equations using Malliavin calculus. These expansions have a computable leading order term in a posteriori form, and are based on stochastic flows and discrete dual backward problems. Besides this, these expansions lead to efficient and accurate computation of error estimates and give the basis for adaptive algorithms with either deterministic or stochastic time steps. The second paper proves convergence rates of adaptive algorithms for Ito stochastic differential equations. Two algorithms based either on stochastic or deterministic time steps are studied. The analysis of their numerical complexity combines the error expansions from the first paper and an extension of the convergence results for adaptive algorithms approximating deterministic ordinary differential equations. Both adaptive algorithms are proven to stop with an optimal number of time steps up to a problem-independent factor defined in the algorithm. The third paper extends the techniques to the framework of Ito stochastic differential equations in infinite dimensional spaces, arising in the Heath-Jarrow-Morton term structure model for financial applications in bond markets. Error expansions are derived to identify different error contributions arising from time and maturity discretization, as well as the classical statistical error due to finite sampling. The last paper studies the approximation of linear elliptic stochastic partial differential equations, describing and analyzing two numerical methods.
The first method generates iid Monte Carlo approximations of the solution by sampling the coefficients of the equation and using a standard Galerkin finite element variational formulation. The second method is based on a finite dimensional Karhunen-Loève approximation of the stochastic coefficients, turning the original stochastic problem into a high dimensional deterministic parametric elliptic problem. Then, a deterministic Galerkin finite element method, of either h or p version, approximates the stochastic partial differential equation. The paper concludes by comparing the numerical complexity of the Monte Carlo method with the parametric finite element method, suggesting intuitive conditions for an optimal selection of these methods. 2000 Mathematics Subject Classification. Primary 65C05, 60H10, 60H35, 65C30, 65C20; Secondary 91B28, 91B70.
10 |
[en] AN INTRODUCTION TO MODEL REDUCTION THROUGH THE KARHUNEN-LOÈVE EXPANSION / [pt] UMA INTRODUÇÃO À REDUÇÃO DE MODELOS ATRAVÉS DA EXPANSÃO DE KARHUNEN-LOÈVE. CLAUDIO WOLTER. 10 April 2002.
[en] This dissertation has the main objective of studying
modelo investigado. / [en] This dissertation has the main objetive of studying
applications of the Karhunen-Loève expansion or
decomposition in structural dynamics. This technique
consists basically in obtaining a linear decomposition of
the dynamic response of a general system represented by a
stochastic vector field. It has the important property of
optimality, meaning that for a given number of modes, no
other linear decomposition is able of better representing
this response. This information compression capability
characterizes this decomposition as a powerful tool for the
construction of reduced-order models of mechanical systems
in general. Particularly, this work deals with structural
dynamics problems where its application is still quite new.
Initially, the main hypotheses necessary for the application
of the Karhunen-Loève expansion are presented, as well as
two existing techniques for its implementation that
have different domains of use. Special attention is paid
to the relation between empirical eigenmodes provided by
the expansion and mode shapes intrinsic to linear vibrating
systems, both discrete and continuous, exemplified by a
two-dimensional truss and a rectangular plate. Furthermore,
the advantages and disadvantages of using this expansion as
an alternative tool for classical modal analysis are
discussed. As a nonlinear application, the study of a
vibroimpact system consisting of a cantilever beam whose
transversal displacement is constrained by two elastic
barriers is presented. The empirical eigenmodes provided by
the Karhunen-Loève expansion are then used to formulate a
reduced-order model through Galerkin projection and the
performance of this new model is investigated.
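The extraction of empirical eigenmodes from response data, as used for the reduced-order model above, is commonly implemented through an SVD of a snapshot matrix; the synthetic two-mode snapshot data below is purely illustrative, not the vibroimpact beam of the dissertation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic snapshots: responses dominated by two spatial shapes
x = np.linspace(0.0, 1.0, 100)
shapes = np.column_stack([np.sin(np.pi * x), np.sin(2 * np.pi * x)])
amps = rng.standard_normal((2, 50))
snapshots = shapes @ amps                 # 100 dof x 50 snapshots

# Empirical (Karhunen-Loève) modes from the SVD of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)              # fraction of 'energy' per mode

modes = U[:, :2]                          # dominant empirical modes
reduced = modes.T @ snapshots             # reduced (Galerkin) coordinates
```

The energy fractions quantify the optimality property stated in the abstract: for a given number of modes, no other linear basis captures more of the response energy, so a Galerkin projection onto the leading modes yields the reduced-order model.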