11.
Estimation of Uncertain Vehicle Center of Gravity using Polynomial Chaos Expansions. Price, Darryl Brian, 14 August 2008.
The main goal of this study is the use of polynomial chaos expansion (PCE) to analyze the uncertainty in calculating the lateral and longitudinal center of gravity of a vehicle from static load cell measurements. A secondary goal is to use experimental testing as a source of uncertainty and as a means of confirming the results of the PCE simulation. While PCE has often been used as an alternative to Monte Carlo simulation, PCE models have rarely been based on experimental data. The 8-post test rig at the Virginia Institute for Performance Engineering and Research facility at Virginia International Raceway is the experimental test bed used to implement the PCE model. Experimental tests are conducted to define the true distribution of the load measurement systems' uncertainty. A method is presented that does not require a new uncertainty-distribution experiment for each test with a different goal. Moved-mass tests using accurate portable scales confirm the uncertainty analysis.
The polynomial chaos model used to find the uncertainty in the center of gravity calculation is derived. Karhunen-Loève expansions, similar to Fourier series, are used to define the uncertainties to allow for the polynomial chaos expansion. PCE models are typically computed via the collocation method or the Galerkin method; the Galerkin method is chosen here in order to formulate a more accurate analytical result. The derivation systematically progresses from one uncertain load cell to all four uncertain load cells, noting the differences and the increased complexity as the number of uncertain dimensions grows. For each derivation the PCE model is shown and the solution to the simulation is given. Results are presented comparing the polynomial chaos simulation to the Monte Carlo simulation and to the accurate scales, and it is shown that the PCE simulations closely match the Monte Carlo simulations. / Master of Science
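As a rough illustration of the kind of calculation described above, the sketch below propagates a single uncertain load-cell reading through a longitudinal center-of-gravity formula with a non-intrusive Hermite PCE and compares the result against Monte Carlo. It is not the intrusive Galerkin derivation used in the thesis, and the wheelbase, nominal loads, and load-cell standard deviation are made-up values.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as H

# Hypothetical geometry and load statistics (not the thesis data)
L_wb = 2.7                           # wheelbase [m]
W_front, W_rear = 8000.0, 7000.0     # nominal front/rear loads [N]
sigma = 50.0                         # std. dev. of the uncertain rear load [N]

def cg_long(w_rear):
    """Longitudinal CG position measured from the front axle."""
    return L_wb * w_rear / (W_front + w_rear)

# Non-intrusive projection onto probabilists' Hermite polynomials He_n(xi),
# writing the uncertain load as W_rear + sigma*xi with xi ~ N(0, 1).
order = 4
nodes, weights = H.hermegauss(order + 1)      # Gauss rule for weight exp(-xi^2/2)
samples = cg_long(W_rear + sigma * nodes)
coeffs = np.array([
    np.sum(weights * samples * H.hermeval(nodes, np.eye(order + 1)[n]))
    / (math.sqrt(2.0 * math.pi) * math.factorial(n))
    for n in range(order + 1)
])
mean_pce = coeffs[0]
var_pce = sum(math.factorial(n) * coeffs[n] ** 2 for n in range(1, order + 1))

# Monte Carlo check of the same statistics
xi = np.random.default_rng(0).standard_normal(200_000)
mc = cg_long(W_rear + sigma * xi)
print(f"PCE: mean = {mean_pce:.5f} m, std = {math.sqrt(var_pce):.5f} m")
print(f"MC : mean = {mc.mean():.5f} m, std = {mc.std():.5f} m")
```

For a smooth, nearly linear response like this one, a low-order expansion should reproduce the Monte Carlo statistics closely.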
12.
Surrogate-assisted optimisation-based verification & validation. Kamath, Atul Krishna, January 2014.
This thesis deals with the application of optimisation-based Validation and Verification (V&V) analysis to aerospace vehicles in order to determine their worst-case performance metrics. To this end, three aerospace models relating to satellite and launcher vehicles, provided by the European Space Agency (ESA) on various projects, are utilised. As a means to quicken the process of optimisation-based V&V analysis, surrogate models are developed using the polynomial chaos method. Surrogate models provide a quick way to ascertain the worst-case directions, as the computation time required to evaluate them is very small; a single evaluation of a surrogate model takes less than a second. Another contribution of this thesis is the evaluation of the operational safety margin metric with the help of surrogate models. The operational safety margin is a metric defined in the uncertain parameter space and is related to the distance between the nominal parameter value and the first instance of performance criteria violation. This metric can help to gauge the robustness of the controller, but it requires evaluation of the model in the constraint function and hence can be computationally intensive. As surrogate models are computationally very cheap, they are utilised to rapidly compute the operational safety margin metric. This metric, however, focuses only on finding a safe region around the nominal parameter value, and the possibility of other disjoint safe regions is not explored. In order to find other safe or failure regions in the parameter space, the Bernstein expansion method is applied to the surrogate polynomial models to help characterise the uncertain parameter space into safe and failure regions. Furthermore, binomial failure analysis is used to assign failure probabilities to failure regions, which may help the designer determine whether a re-design of the controller is required. The methodologies of optimisation-based V&V, surrogate modelling, operational safety margin, Bernstein expansion and risk assessment have been combined to form the WCAT-II MATLAB toolbox.
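The role of the surrogate in this workflow can be sketched with a toy example: a cheap polynomial stands in for the polynomial chaos surrogate, a global optimiser searches it for the worst-case parameter combination, and a penalised distance minimisation estimates the operational safety margin. The surrogate, requirement limit, and bounds below are invented for illustration and bear no relation to the ESA models or the WCAT-II toolbox.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Hypothetical polynomial surrogate of a performance metric over two
# normalized uncertain parameters (stand-in for a polynomial chaos surrogate).
def surrogate(p):
    d1, d2 = p
    return 0.15 + 0.4 * d1**2 + 0.3 * d1 * d2 + 0.25 * d2**2

limit = 0.35                                  # requirement: surrogate(p) <= limit
bounds = [(-1.0, 1.0), (-1.0, 1.0)]
nominal = np.zeros(2)

# Worst-case search over the uncertain parameter box (cheap on the surrogate).
worst = differential_evolution(lambda p: -surrogate(p), bounds, seed=1)
print("worst-case metric:", -worst.fun, "at", worst.x)

# Operational safety margin: distance from the nominal parameters to the
# closest surrogate-predicted requirement violation (penalty formulation).
def distance_to_violation(p):
    return np.linalg.norm(p - nominal) + 1e3 * max(0.0, limit - surrogate(p))

starts = [np.array(s) for s in ((0.9, 0.9), (-0.9, 0.9), (0.9, -0.9), (-0.9, -0.9))]
margin = min(minimize(distance_to_violation, x0, method="Powell", bounds=bounds).fun
             for x0 in starts)
print("operational safety margin (surrogate estimate):", margin)
```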
13.
Inverse problem and uncertainty quantification: application to compressible gas dynamics. Birolleau, Alexandre, 30 April 2014.
This thesis deals with uncertainty propagation and the resolution of inverse problems, together with their acceleration via Polynomial Chaos. The aim is to present a state of the art and a numerical analysis of this stochastic spectral method, in order to understand its pros and cons when tackling the probabilistic study of hydrodynamical instabilities in Richtmyer-Meshkov shock tube experiments. The first chapter is introductory and explains what is at stake in accurately accounting for uncertainties in compressible gas dynamics simulations. The second chapter is both an illustrative state of the art of generalized Polynomial Chaos and a full numerical analysis of the method, keeping in mind the final application to hydrodynamical problems developing shocks and discontinuous solutions. In this chapter we introduce a new method, iterative generalized Polynomial Chaos, which improves on generalized Polynomial Chaos, especially for non-smooth solutions. Chapter three is closely related to a recently accepted publication in Communications in Computational Physics. It deals with stochastic inverse problems and introduces Bayesian inference. It also shows that Bayesian inference can be accelerated using the iterative generalized Polynomial Chaos described in the previous chapter; theoretical convergence of this acceleration is established and illustrated on several test cases. The last chapter applies the above material to a complex and ambitious compressible gas dynamics problem (a Richtmyer-Meshkov shock tube configuration), together with an in-depth study of the physico-numerical phenomena at play. Finally, the appendix presents some additional research directions briefly explored during this thesis.
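The difficulty that motivates iterative generalized Polynomial Chaos can be reproduced on a toy problem: projecting a discontinuous response of a uniform random input onto a Legendre basis converges slowly and oscillates near the jump (the stochastic analogue of the Gibbs phenomenon). The sketch below, unrelated to the thesis code, shows the slow decay of the L2 error with increasing polynomial order.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Discontinuous response of a uniform random input on [-1, 1] (a stand-in for a
# shocked hydrodynamic quantity); project it onto Legendre polynomials.
f = lambda x: np.where(x < 0.2, 0.0, 1.0)

x, w = L.leggauss(200)                          # Gauss-Legendre nodes and weights
for order in (3, 7, 15, 31):
    coeffs = np.array([
        (2 * n + 1) / 2.0 * np.sum(w * f(x) * L.legval(x, np.eye(order + 1)[n]))
        for n in range(order + 1)
    ])
    err = np.sqrt(np.sum(w * (f(x) - L.legval(x, coeffs)) ** 2) / 2.0)
    print(f"order {order:2d}: L2 error {err:.3f}")  # decays slowly near the jump
```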
14.
History matching of surfactant-polymer flooding. Naik, Pratik Kiranrao, 17 January 2019.
This thesis presents a framework for history matching and model calibration of surfactant-polymer (SP) flooding. First, a high-fidelity mechanistic SP flood model is constructed by performing extensive lab-scale experiments on Berea cores. Then, incorporating Sobol-based sensitivity analysis, polynomial chaos expansion based surrogate modelling (PCE-proxy), and genetic-algorithm-based inverse optimization, an optimized model parameter set is determined by minimizing the misfit between the PCE-proxy response and experimental observations for quantities of interest such as cumulative oil recovery and the pressure profile. The epistemic uncertainty in the PCE-proxy is quantified using Gaussian process regression (Kriging). The framework is then extended to Bayesian calibration, where the posterior of the model parameters is inferred by sampling from it directly using Markov chain Monte Carlo (MCMC). Finally, a stochastic multi-objective optimization problem is posed under uncertainties in model parameters and oil price, and solved using a variant of a Bayesian global optimization routine.
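The Bayesian calibration step can be sketched as follows: once a cheap PCE-style proxy of a quantity of interest is available, a random-walk Metropolis sampler can afford tens of thousands of posterior evaluations. The proxy polynomial, observation, noise level, and prior below are hypothetical placeholders, not the fitted SP-flood proxy.

```python
import numpy as np

# Hypothetical PCE-style proxy for cumulative oil recovery as a function of one
# normalized model parameter theta; not the thesis's fitted SP-flood proxy.
def proxy(theta):
    return 0.45 + 0.20 * theta - 0.10 * theta**2

obs, sigma_obs = 0.52, 0.02                     # synthetic observation and noise
log_prior = lambda t: 0.0 if -1.0 <= t <= 1.0 else -np.inf   # uniform prior

def log_post(t):
    return log_prior(t) - 0.5 * ((proxy(t) - obs) / sigma_obs) ** 2

# Random-walk Metropolis on the cheap proxy: many evaluations are affordable
rng = np.random.default_rng(0)
chain, t, lp = [], 0.0, log_post(0.0)
for _ in range(20_000):
    t_new = t + 0.1 * rng.standard_normal()
    lp_new = log_post(t_new)
    if np.log(rng.random()) < lp_new - lp:      # Metropolis accept/reject
        t, lp = t_new, lp_new
    chain.append(t)

posterior = np.array(chain[5_000:])             # discard burn-in
print("posterior mean and sd:", posterior.mean(), posterior.std())
```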
15.
Risk optimization under random corrosion and fatigue processes. Gomes, Wellison José de Santana, 07 March 2013.
Random corrosion and fatigue processes slowly but steadily reduce the resistance of structures and mechanical components, leading to a gradual increase in failure probabilities. Risk management for mechanical components subject to corrosion and fatigue is performed through policies of inspection, maintenance, and replacement. These activities entail costs, but they are carried out to keep reliability at acceptable levels while the component remains in operation. Apparently, economy and safety are competing objectives; however, reducing inspection and maintenance spending may lead to larger failure probabilities, increasing the expected costs of failure (the risk). Risk optimization allows one to address this problem by means of the so-called total expected cost. In this thesis, risk optimization is used to find the best inspection and maintenance policy, i.e., the proper amount of resources to allocate to such activities in order to obtain the minimum total expected cost. Corrosion and fatigue are modeled by means of polynomial chaos expansions, using a novel approach developed herein and experimental or observed data obtained from the literature. These models are employed within two risk optimization problems, solved for different failure and inspection cost configurations. Results show that the optimal policies of inspection, maintenance, and replacement can be very different for different cost configurations, and that solving the associated risk optimization problems is a very challenging task, due to the large number of local minima caused by discontinuities and fluctuations in the total expected cost.
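The total-expected-cost idea can be illustrated with a deliberately crude sketch: inspection costs grow as inspections become more frequent, while the expected failure cost shrinks, and the optimum balances the two. All cost figures and the failure-probability model below are hypothetical and stand in for the polynomial-chaos-based corrosion and fatigue models of the thesis.

```python
import numpy as np

# Illustrative total expected cost for one component (all numbers hypothetical):
# frequent inspections cost more up front but keep the failure probability,
# and hence the expected failure cost, low.
C_insp, C_fail = 1_000.0, 5_000_000.0     # cost per inspection, cost of failure
life_years = 20

def failure_probability(interval_years):
    # crude stand-in for a corrosion/fatigue reliability analysis
    return 1.0 - np.exp(-0.002 * interval_years**1.5)

def total_expected_cost(interval_years):
    n_inspections = life_years / interval_years       # treated as continuous here
    return n_inspections * C_insp + failure_probability(interval_years) * C_fail

intervals = np.linspace(0.5, 10.0, 200)
costs = [total_expected_cost(t) for t in intervals]
best = intervals[int(np.argmin(costs))]
print(f"optimal inspection interval: {best:.2f} years, "
      f"minimum total expected cost: {min(costs):,.0f}")
```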
16.
Novel Computational Methods for Solving High-Dimensional Random Eigenvalue Problems. Yadav, Vaibhav, 01 July 2013.
The primary objective of this study is to develop new computational methods for solving a general random eigenvalue problem (REP) commonly encountered in modeling and simulation of high-dimensional, complex dynamic systems. Four major research directions, all anchored in polynomial dimensional decomposition (PDD), have been defined to meet the objective. They involve: (1) a rigorous comparison of accuracy, efficiency, and convergence properties of the polynomial chaos expansion (PCE) and PDD methods; (2) development of two novel multiplicative PDD methods for addressing multiplicative structures in REPs; (3) development of a new hybrid PDD method to account for the combined effects of the multiplicative and additive structures in REPs; and (4) development of adaptive and sparse algorithms in conjunction with the PDD methods.
The major findings are as follows. First, a rigorous comparison of the PCE and PDD methods indicates that the infinite series from the two expansions are equivalent, but their truncations endow contrasting dimensional structures, creating a significant difference between the two approximations. When the cooperative effects of input variables on an eigenvalue attenuate rapidly or vanish altogether, the PDD approximation commits a smaller error than the PCE approximation for identical expansion orders. Numerical analyses reveal higher convergence rates and significantly higher efficiency of the PDD approximation than the PCE approximation. Second, two novel multiplicative PDD methods, factorized PDD and logarithmic PDD, were developed to exploit the hidden multiplicative structure of an REP, if it exists. Since a multiplicative PDD recycles the same component functions as the additive PDD, no additional cost is incurred. Numerical results show that both multiplicative PDD methods are indeed capable of effectively utilizing the multiplicative structure of a random response. Third, a new hybrid PDD method was constructed for uncertainty quantification of high-dimensional complex systems. The method is based on a linear combination of an additive and a multiplicative PDD approximation. Numerical results indicate that the univariate hybrid PDD method, which is slightly more expensive than the univariate additive or multiplicative PDD approximations, yields more accurate stochastic solutions than the latter two methods. Last, two novel adaptive-sparse PDD methods were developed that entail global sensitivity analysis for defining the relevant pruning criteria. In contrast with past developments, the adaptive-sparse PDD methods do not require their truncation parameters to be assigned a priori or arbitrarily. Numerical results reveal that an adaptive-sparse PDD method achieves a desired level of accuracy with considerably fewer coefficients than existing PDD approximations.
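For readers unfamiliar with the object of study, a random eigenvalue problem can be made concrete with a toy example: the natural frequencies of a two-degree-of-freedom spring-mass system whose stiffnesses are random. The sketch below uses plain Monte Carlo (not PDD or PCE) and hypothetical stiffness statistics.

```python
import numpy as np

# Toy random eigenvalue problem: natural frequencies of a 2-DOF spring-mass
# system whose two stiffnesses are lognormal random variables (hypothetical data).
rng = np.random.default_rng(0)
n = 10_000
k1 = np.exp(rng.normal(np.log(1000.0), 0.10, n))   # stiffness 1 [N/m]
k2 = np.exp(rng.normal(np.log(1500.0), 0.10, n))   # stiffness 2 [N/m]
m = 1.0                                            # unit masses [kg]

freqs = np.empty((n, 2))
for i in range(n):
    K = np.array([[k1[i] + k2[i], -k2[i]],
                  [-k2[i],          k2[i]]])
    lam = np.linalg.eigvalsh(K / m)                # eigenvalues = omega^2
    freqs[i] = np.sqrt(lam)

print("mean natural frequencies [rad/s]:", freqs.mean(axis=0))
print("coefficients of variation:       ", freqs.std(axis=0) / freqs.mean(axis=0))
```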
17.
On Stability and Monotonicity Requirements of Finite Difference Approximations of Stochastic Conservation Laws with Random Viscosity. Pettersson, Per; Doostan, Alireza; Nordström, Jan, January 2013.
The stochastic Galerkin and collocation methods are used to solve an advection-diffusion equation with uncertain and spatially varying viscosity. We investigate well-posedness, monotonicity and stability for the extended system resulting from the Galerkin projection of the advection-diffusion equation onto the stochastic basis functions. High-order summation-by-parts operators and weak imposition of boundary conditions are used to prove stability of the semi-discrete system. It is essential that the eigenvalues of the resulting viscosity matrix of the stochastic Galerkin system are positive, and we investigate conditions for this to hold. When the viscosity matrix is diagonalizable, stochastic Galerkin and stochastic collocation are similar in terms of computational cost, and for some cases the accuracy is higher for stochastic Galerkin provided that monotonicity requirements are met. We also investigate the total spatial operator of the semi-discretized system and its impact on the convergence to steady state.
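The positivity condition on the stochastic Galerkin viscosity matrix can be checked directly on a small example. The sketch below, with an invented affine viscosity nu(xi) = nu0 + nu1*xi and a uniform random xi, assembles the Galerkin viscosity matrix in an orthonormal Legendre basis by quadrature and inspects its eigenvalues; it is only an illustration of the condition, not the semi-discrete analysis of the paper.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Stochastic Galerkin viscosity matrix for nu(xi) = nu0 + nu1*xi, xi ~ U(-1, 1),
# projected onto orthonormal Legendre polynomials (illustrative, small order).
nu0, nu1, order = 0.1, 0.08, 4
x, w = L.leggauss(2 * order + 2)
nu = nu0 + nu1 * x

# Orthonormal Legendre basis evaluated at the quadrature nodes
phi = np.array([np.sqrt(2 * n + 1) * L.legval(x, np.eye(order + 1)[n])
                for n in range(order + 1)])

# B[j, k] = E[ nu(xi) * phi_j(xi) * phi_k(xi) ] with density 1/2 on [-1, 1]
B = 0.5 * np.einsum('q,jq,kq,q->jk', w, phi, phi, nu)

eigvals = np.linalg.eigvalsh(B)
print("viscosity-matrix eigenvalues:", eigvals)
print("all positive (positivity condition holds):", np.all(eigvals > 0))
```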
18.
Adaptive Sparse Grid Approaches to Polynomial Chaos Expansions for Uncertainty Quantification. Winokur, Justin Gregory, January 2015.
Polynomial chaos expansions provide an efficient and robust framework to analyze and quantify uncertainty in computational models. This dissertation explores the use of adaptive sparse grids to reduce the computational cost of determining a polynomial model surrogate while examining and implementing new adaptive techniques.

Determination of chaos coefficients using traditional tensor-product quadrature suffers the so-called curse of dimensionality, where the number of model evaluations scales exponentially with dimension. Previous work used a sparse Smolyak quadrature to temper this dimensional scaling, and was applied successfully to an expensive Ocean General Circulation Model, HYCOM, during the September 2004 passing of Hurricane Ivan through the Gulf of Mexico. Results from this investigation suggested that adaptivity could yield great gains in efficiency. However, efforts at adaptivity are hampered by quadrature accuracy requirements.

We explore the implementation of a novel adaptive strategy to design sparse ensembles of oceanic simulations suitable for constructing polynomial chaos surrogates. We use a recently developed adaptive pseudo-spectral projection (aPSP) algorithm that is based on a direct application of Smolyak's sparse grid formula and that allows for the use of arbitrary admissible sparse grids. Such a construction ameliorates the severe restrictions posed by insufficient quadrature accuracy. The adaptive algorithm is tested using an existing simulation database of the HYCOM model during Hurricane Ivan. The a priori tests demonstrate that sparse and adaptive pseudo-spectral constructions lead to substantial savings over isotropic sparse sampling.

In order to provide a finer degree of resolution control along two distinct subsets of model parameters, we investigate two methods to build polynomial approximations. The two approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids. Control of the error along different subsets of parameters may be needed when a model depends on uncertain parameters and deterministic design variables. We first consider a nested approach where an independent adaptive sparse grid pseudo-spectral projection is performed along the first set of directions only, and at each point a sparse grid is constructed adaptively in the second set of directions. We then consider the application of aPSP in the space of all parameters, and introduce directional refinement criteria to provide tighter control of the projection error along individual dimensions. Specifically, we use a Sobol decomposition of the projection surpluses to tune the sparse grid adaptation. The behavior and performance of the two approaches are compared for a simple two-dimensional test problem and for a shock-tube ignition model involving 22 uncertain parameters and 3 design parameters. The numerical experiments indicate that whereas both methods provide effective means for tuning the quality of the representation along distinct subsets of parameters, adaptive PSP in the global parameter space generally requires fewer model evaluations than the nested approach to achieve similar projection error.

In order to increase efficiency even further, a subsampling technique is developed to allow for local adaptivity within the aPSP algorithm. The local refinement is achieved by exploiting the hierarchical nature of nested quadrature grids to determine regions of estimated convergence. In order to achieve global representations with local refinement, synthesized model data from a lower-order projection is used for the final projection. The final subsampled grid was also tested with two more robust, sparse projection techniques: compressed sensing and hybrid least-angle regression. These methods are evaluated on two sample test functions and then in an a priori analysis of the HYCOM simulations and the shock-tube ignition model investigated earlier. Small but non-trivial efficiency gains were found in some cases; in others, a large reduction in model evaluations was realized with only a small loss of model fidelity. Further extensions and capabilities are recommended for future investigations. / Dissertation
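The curse of dimensionality that motivates sparse and adaptive grids is easy to quantify: a total-order polynomial chaos basis grows combinatorially with dimension, while a full tensor-product quadrature grows exponentially. The short sketch below counts both for a few dimensions (22 echoing the shock-tube example above); it is a back-of-the-envelope comparison, not the aPSP algorithm itself.

```python
from math import comb

# A total-order-p polynomial chaos basis in d dimensions has C(p + d, d) terms,
# while a full tensor-product Gauss rule with p + 1 points per dimension needs
# (p + 1)**d model evaluations; that gap is what sparse/adaptive grids attack.
p = 3
for d in (2, 4, 8, 16, 22):            # 22 echoes the shock-tube ignition model
    n_terms = comb(p + d, d)
    n_tensor = (p + 1) ** d
    print(f"d = {d:2d}: {n_terms:10,d} basis terms vs {n_tensor:18,d} tensor-grid runs")
```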
19.
Uncertainty in the Bifurcation Diagram of a Model of Heart Rhythm Dynamics. Ring, Caroline, January 2014.
To understand the underlying mechanisms of cardiac arrhythmias, computational models are used to study heart rhythm dynamics. The parameters of these models carry inherent uncertainty. Therefore, to interpret the results of these models, uncertainty quantification (UQ) and sensitivity analysis (SA) are important. Polynomial chaos (PC) is a computationally efficient method for UQ and SA in which a model output Y, dependent on some independent uncertain parameters represented by a random vector ξ, is approximated as a spectral expansion in multidimensional orthogonal polynomials in ξ. The expansion can then be used to characterize the uncertainty in Y.

PC methods were applied to UQ and SA of the dynamics of a two-dimensional return-map model of cardiac action potential duration (APD) restitution in a paced single cell. Uncertainty was considered in four parameters of the model: three time constants and the pacing stimulus strength. The basic cycle length (BCL) (the period between stimuli) was treated as the control parameter. Model dynamics was characterized with bifurcation analysis, which determines the APD and stability of fixed points of the model over a range of BCLs, and the BCLs at which bifurcations occur. These quantities can be plotted in a bifurcation diagram, which summarizes the dynamics of the model. PC UQ and SA were performed for these quantities. UQ results were summarized in a novel probabilistic bifurcation diagram that visualizes the APD and stability of fixed points as uncertain quantities.

Classical PC methods assume that model outputs exist and are reasonably smooth over the full domain of ξ. Because models of heart rhythm often exhibit bifurcations and discontinuities, their outputs may not obey the existence and smoothness assumptions on the full domain, but only on some subdomains, which may be irregularly shaped. On these subdomains, the random variables representing the parameters may no longer be independent. PC methods therefore must be modified for analysis of these discontinuous quantities. The Rosenblatt transformation maps the variables on the subdomain onto a rectangular domain; the transformed variables are independent and uniformly distributed. A new numerical estimation of the Rosenblatt transformation was developed that improves accuracy and computational efficiency compared to existing kernel density estimation methods. PC representations of the outputs in the transformed variables were then constructed. Coefficients of the PC expansions were estimated using Bayesian inference methods. For discontinuous model outputs, SA was performed using a sampling-based variance-reduction method, with the PC estimation used as an efficient proxy for the full model.

To evaluate the accuracy of the PC methods, PC UQ and SA results were compared to large-sample Monte Carlo UQ and SA results. PC UQ and SA of the fixed-point APDs, and of the probability that a stable fixed point existed at each BCL, were very close to the MC UQ results for those quantities. However, PC UQ and SA of the bifurcation BCLs were less accurate compared to the MC results.

The computational time required for the PC and Monte Carlo methods was also compared. PC analysis (including the Rosenblatt transformation and Bayesian inference) required less than 10 total hours of computational time, of which approximately 30 minutes was devoted to model evaluations, compared to approximately 65 hours required for Monte Carlo sampling of the model outputs at 1 × 10^6 ξ points.

PC methods provide a useful framework for efficient UQ and SA of the bifurcation diagram of a model of cardiac APD dynamics. Model outputs with bifurcations and discontinuities can be analyzed using modified PC methods. The methods applied and developed in this study may be extended to other models of heart rhythm dynamics. These methods have potential for use in uncertainty and sensitivity analysis in many applications of these models, including simulation studies of heart rate variability, cardiac pathologies, and interventions. / Dissertation
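The quantity "probability that a stable fixed point exists at a given BCL" can be illustrated on a much simpler model than the one studied here: a one-dimensional APD restitution map with a single uncertain time constant, analysed by plain Monte Carlo rather than PC. All parameter values in the sketch are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq

# Toy APD restitution map APD_{n+1} = A - B*exp(-DI_n / tau), DI_n = BCL - APD_n,
# with an uncertain time constant tau (all values hypothetical). The 1:1 fixed
# point APD* solves APD* = A - B*exp(-(BCL - APD*)/tau) and is stable when the
# restitution slope (B/tau)*exp(-(BCL - APD*)/tau) is below 1.
A, B = 300.0, 150.0                               # ms
rng = np.random.default_rng(0)
taus = rng.normal(60.0, 8.0, 5_000)               # uncertain time constant [ms]

def stable_fixed_point(bcl, tau):
    g = lambda apd: A - B * np.exp(-(bcl - apd) / tau) - apd
    apd_star = brentq(g, 1e-6, bcl - 1e-6)        # fixed point of the map
    slope = (B / tau) * np.exp(-(bcl - apd_star) / tau)
    return slope < 1.0

for bcl in (500.0, 400.0, 320.0, 290.0, 270.0):
    p = np.mean([stable_fixed_point(bcl, t) for t in taus])
    print(f"BCL = {bcl:.0f} ms: P(stable 1:1 fixed point) = {p:.2f}")
```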
20.
Sensitivity analysis and polynomial chaos expansion for parameter estimation: application to transfer in porous media. Fajraoui, Noura, 21 January 2014.
The management of contaminant transfer in porous media is a growing concern and is of particular interest for the control of pollution in underground environments and the management of groundwater resources, or more generally the protection of the environment. Flow and pollutant transport are modeled by physical and phenomenological laws that take the form of differential-algebraic equations, and these models may depend on a large number of input parameters. Most of these parameters are poorly known, often not directly measurable, and/or affected by measurement uncertainty. This work is concerned with the impact of parameter uncertainty on model predictions; to this end, uncertainty and sensitivity analysis is an important step in numerical simulation, as is inverse modeling. The first study consists of estimating the predictive uncertainty of the model given the parameter uncertainty and identifying the most influential parameters. The second study is concerned with reducing parameter uncertainty using available observations.

This work addresses global sensitivity analysis and parameter estimation for problems of flow and transport in porous media. To carry out this work, polynomial chaos expansion is used to quantify the influence of the parameters on the predictions of the numerical model. This tool not only provides Sobol sensitivity indices but also yields a surrogate model (or metamodel) that is much faster to run. This feature is then exploited for model inversion when observations are available. For the inverse problem, we focus on the Bayesian approach, which offers a rigorous framework for parameter estimation. In a second step, we developed an effective strategy for constructing a sparse polynomial chaos expansion, in which only the coefficients whose contribution to the model variance is significant are retained. This strategy produced very encouraging results for two reactive transport problems. The last part of this work is devoted to the inverse problem when the model inputs are spatially distributed Gaussian stochastic fields. The peculiarity of such a problem is that it is ill-posed, because a stochastic field is defined by an infinite number of coefficients. The Karhunen-Loève decomposition reduces the dimension of the problem and also regularizes it. However, inversion with this method yields results that are sensitive to the assumed covariance function of the field. A dimension-reduction algorithm based on a selection criterion (the Schwartz criterion) is proposed to make the problem less sensitive to this choice.
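The link between a polynomial chaos surrogate and Sobol sensitivity indices, which this work exploits, can be sketched on a two-parameter toy model: fit a total-degree Legendre expansion by least squares, then read the first-order indices directly off the squared coefficients of the orthonormal basis. The model function and sample sizes below are invented for illustration and are not the flow and transport models of the thesis.

```python
import numpy as np
from numpy.polynomial import legendre as L
from itertools import product

# Hypothetical 2-parameter model (a stand-in for a flow/transport solver output)
model = lambda x1, x2: np.sin(np.pi * x1) + 0.3 * x2**2 + 0.2 * x1 * x2

# Total-degree Legendre PCE on [-1, 1]^2 fitted by least squares
p = 5
multi_idx = [(i, j) for i, j in product(range(p + 1), repeat=2) if i + j <= p]
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = model(X[:, 0], X[:, 1])

def phi(n, x):          # orthonormal Legendre polynomial of degree n
    return np.sqrt(2 * n + 1) * L.legval(x, np.eye(n + 1)[n])

A = np.column_stack([phi(i, X[:, 0]) * phi(j, X[:, 1]) for i, j in multi_idx])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Variance decomposition: with an orthonormal basis, each squared coefficient
# is a variance contribution; Sobol indices follow by grouping multi-indices.
var_total = sum(c**2 for c, (i, j) in zip(coeffs, multi_idx) if (i, j) != (0, 0))
S1 = sum(c**2 for c, (i, j) in zip(coeffs, multi_idx) if i > 0 and j == 0) / var_total
S2 = sum(c**2 for c, (i, j) in zip(coeffs, multi_idx) if j > 0 and i == 0) / var_total
print(f"first-order Sobol indices: S1 = {S1:.2f}, S2 = {S2:.2f}")
```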