41

Adaptive Sparse Grid Approaches to Polynomial Chaos Expansions for Uncertainty Quantification

Winokur, Justin Gregory January 2015
Polynomial chaos expansions provide an efficient and robust framework to analyze and quantify uncertainty in computational models. This dissertation explores the use of adaptive sparse grids to reduce the computational cost of determining a polynomial model surrogate while examining and implementing new adaptive techniques.

Determination of chaos coefficients using traditional tensor-product quadrature suffers from the so-called curse of dimensionality, where the number of model evaluations scales exponentially with dimension. Previous work used a sparse Smolyak quadrature to temper this dimensional scaling, and was applied successfully to an expensive Ocean General Circulation Model, HYCOM, during the September 2004 passage of Hurricane Ivan through the Gulf of Mexico. Results from this investigation suggested that adaptivity could yield great gains in efficiency. However, efforts at adaptivity are hampered by quadrature accuracy requirements.

We explore the implementation of a novel adaptive strategy to design sparse ensembles of oceanic simulations suitable for constructing polynomial chaos surrogates. We use a recently developed adaptive pseudo-spectral projection (aPSP) algorithm that is based on a direct application of Smolyak's sparse grid formula and that allows for the use of arbitrary admissible sparse grids. Such a construction ameliorates the severe restrictions posed by insufficient quadrature accuracy. The adaptive algorithm is tested using an existing simulation database of the HYCOM model during Hurricane Ivan. The a priori tests demonstrate that sparse and adaptive pseudo-spectral constructions lead to substantial savings over isotropic sparse sampling.

In order to provide a finer degree of resolution control along two distinct subsets of model parameters, we investigate two methods to build polynomial approximations. Both approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids. Control of the error along different subsets of parameters may be needed, for instance, when a model depends on both uncertain parameters and deterministic design variables. We first consider a nested approach, in which an independent adaptive sparse grid pseudo-spectral projection is performed along the first set of directions only, and at each point a sparse grid is constructed adaptively in the second set of directions. We then consider the application of aPSP in the space of all parameters, and introduce directional refinement criteria to provide a tighter control of the projection error along individual dimensions. Specifically, we use a Sobol decomposition of the projection surpluses to tune the sparse grid adaptation. The behavior and performance of the two approaches are compared for a simple two-dimensional test problem and for a shock-tube ignition model involving 22 uncertain parameters and 3 design parameters. The numerical experiments indicate that, whereas both methods provide effective means for tuning the quality of the representation along distinct subsets of parameters, adaptive PSP in the global parameter space generally requires fewer model evaluations than the nested approach to achieve a similar projection error.

In order to increase efficiency even further, a subsampling technique is developed to allow for local adaptivity within the aPSP algorithm. The local refinement is achieved by exploiting the hierarchical nature of nested quadrature grids to determine regions of estimated convergence. In order to achieve global representations with local refinement, synthesized model data from a lower-order projection are used for the final projection. The final subsampled grid was also tested with two more robust sparse projection techniques: compressed sensing and hybrid least-angle regression. These methods are evaluated on two sample test functions and then in an a priori analysis of the HYCOM simulations and the shock-tube ignition model investigated earlier. Small but non-trivial efficiency gains were found in some cases; in others, a large reduction in model evaluations was realized with only a small loss of model fidelity. Further extensions and capabilities are recommended for future investigations. / Dissertation
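
The core computational step described above, pseudo-spectral projection of the chaos coefficients, can be illustrated with a short sketch. The example below is not the aPSP algorithm itself: it uses a plain tensor-product Gauss-Legendre grid in two dimensions (the very construction whose exponential cost motivates the Smolyak and adaptive sparse grids discussed in the abstract) and a cheap analytic stand-in for the model; the projection formula for the coefficients is the same one an adaptive sparse grid would evaluate on far fewer nodes.

```python
# Minimal sketch (not the dissertation's code): non-intrusive pseudo-spectral
# projection of Legendre chaos coefficients on a full tensor Gauss-Legendre grid.
import numpy as np
from numpy.polynomial.legendre import leggauss, legval
from itertools import product

def model(x1, x2):
    # stand-in for an expensive simulation (e.g., one HYCOM run)
    return np.exp(0.3 * x1) * np.cos(2.0 * x2)

order = 4                       # total polynomial order of the expansion
nq = order + 1                  # quadrature points per dimension
nodes, weights = leggauss(nq)   # Gauss-Legendre rule on [-1, 1]

coeffs = {}
for a1, a2 in product(range(order + 1), repeat=2):
    if a1 + a2 > order:         # total-order truncation of the basis
        continue
    num = 0.0
    den = (1.0 / (2 * a1 + 1)) * (1.0 / (2 * a2 + 1))   # E[Psi^2] for uniform inputs
    for (i, x1), (j, x2) in product(enumerate(nodes), repeat=2):
        psi = legval(x1, [0] * a1 + [1]) * legval(x2, [0] * a2 + [1])
        # E[f * Psi] under the uniform density 1/2 per dimension
        num += 0.25 * weights[i] * weights[j] * model(x1, x2) * psi
    coeffs[(a1, a2)] = num / den

print("PC mean ~", coeffs[(0, 0)])
print("PC variance ~", sum(c**2 * (1 / (2 * a1 + 1)) * (1 / (2 * a2 + 1))
                           for (a1, a2), c in coeffs.items() if (a1, a2) != (0, 0)))
```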
42

Uncertainty in the Bifurcation Diagram of a Model of Heart Rhythm Dynamics

Ring, Caroline January 2014
To understand the underlying mechanisms of cardiac arrhythmias, computational models are used to study heart rhythm dynamics. The parameters of these models carry inherent uncertainty. Therefore, to interpret the results of these models, uncertainty quantification (UQ) and sensitivity analysis (SA) are important. Polynomial chaos (PC) is a computationally efficient method for UQ and SA in which a model output Y, dependent on some independent uncertain parameters represented by a random vector ξ, is approximated as a spectral expansion in multidimensional orthogonal polynomials in ξ. The expansion can then be used to characterize the uncertainty in Y.

PC methods were applied to UQ and SA of the dynamics of a two-dimensional return-map model of cardiac action potential duration (APD) restitution in a paced single cell. Uncertainty was considered in four parameters of the model: three time constants and the pacing stimulus strength. The basic cycle length (BCL) (the period between stimuli) was treated as the control parameter. Model dynamics were characterized with bifurcation analysis, which determines the APD and stability of fixed points of the model over a range of BCLs, and the BCLs at which bifurcations occur. These quantities can be plotted in a bifurcation diagram, which summarizes the dynamics of the model. PC UQ and SA were performed for these quantities. UQ results were summarized in a novel probabilistic bifurcation diagram that visualizes the APD and stability of fixed points as uncertain quantities.

Classical PC methods assume that model outputs exist and are reasonably smooth over the full domain of ξ. Because models of heart rhythm often exhibit bifurcations and discontinuities, their outputs may not obey the existence and smoothness assumptions on the full domain, but only on some subdomains, which may be irregularly shaped. On these subdomains, the random variables representing the parameters may no longer be independent. PC methods therefore must be modified for analysis of these discontinuous quantities. The Rosenblatt transformation maps the variables on the subdomain onto a rectangular domain; the transformed variables are independent and uniformly distributed. A new numerical estimation of the Rosenblatt transformation was developed that improves accuracy and computational efficiency compared to existing kernel density estimation methods. PC representations of the outputs in the transformed variables were then constructed. Coefficients of the PC expansions were estimated using Bayesian inference methods. For discontinuous model outputs, SA was performed using a sampling-based variance-reduction method, with the PC estimation used as an efficient proxy for the full model.

To evaluate the accuracy of the PC methods, PC UQ and SA results were compared to large-sample Monte Carlo (MC) UQ and SA results. PC UQ and SA of the fixed-point APDs, and of the probability that a stable fixed point existed at each BCL, were very close to the MC results for those quantities. However, PC UQ and SA of the bifurcation BCLs were less accurate compared to MC results.

The computational time required for PC and Monte Carlo methods was also compared. PC analysis (including the Rosenblatt transformation and Bayesian inference) required less than 10 total hours of computational time, of which approximately 30 minutes was devoted to model evaluations, compared to approximately 65 hours required for Monte Carlo sampling of the model outputs at 1 × 10^6 ξ points.

PC methods provide a useful framework for efficient UQ and SA of the bifurcation diagram of a model of cardiac APD dynamics. Model outputs with bifurcations and discontinuities can be analyzed using modified PC methods. The methods applied and developed in this study may be extended to other models of heart rhythm dynamics. These methods have potential for use in uncertainty and sensitivity analysis in many applications of these models, including simulation studies of heart rate variability, cardiac pathologies, and interventions. / Dissertation
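
As a rough illustration of the PC representation referred to above, the sketch below builds a two-dimensional Hermite chaos surrogate for a smooth, hypothetical output and reads the mean, variance, and first-order Sobol sensitivity indices off the coefficients. It uses ordinary least squares for the coefficients rather than the Bayesian inference and Rosenblatt-transformed variables developed in the dissertation, and the model is a made-up stand-in, not the APD restitution map.

```python
# Minimal sketch (least-squares coefficient estimation, hypothetical output):
# a 2D Hermite polynomial chaos surrogate with PC-based mean, variance and
# first-order Sobol indices.
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from itertools import product
from math import factorial

rng = np.random.default_rng(0)

def model(xi):
    # stand-in for a smooth quantity of interest, e.g. a fixed-point APD (ms)
    return 200.0 + 15.0 * xi[:, 0] + 5.0 * xi[:, 1] ** 2 + 2.0 * xi[:, 0] * xi[:, 1]

order = 3
multi_idx = [(a, b) for a, b in product(range(order + 1), repeat=2) if a + b <= order]

xi = rng.standard_normal((400, 2))          # independent standard-normal germ
Y = model(xi)

# design matrix of multivariate probabilists' Hermite polynomials He_a(x1)*He_b(x2)
Psi = np.column_stack([
    hermeval(xi[:, 0], [0] * a + [1]) * hermeval(xi[:, 1], [0] * b + [1])
    for a, b in multi_idx
])
coef, *_ = np.linalg.lstsq(Psi, Y, rcond=None)

norms = np.array([factorial(a) * factorial(b) for a, b in multi_idx], dtype=float)
mean = coef[multi_idx.index((0, 0))]
var = np.sum(coef**2 * norms) - mean**2        # drop the (0, 0) term
# first-order Sobol indices: terms depending on a single variable only
S1 = sum(c**2 * n for (a, b), c, n in zip(multi_idx, coef, norms) if a > 0 and b == 0) / var
S2 = sum(c**2 * n for (a, b), c, n in zip(multi_idx, coef, norms) if b > 0 and a == 0) / var
print(f"PC mean={mean:.2f}, var={var:.2f}, S1={S1:.2f}, S2={S2:.2f}")
```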
43

Closing the building energy performance gap by improving our predictions

Sun, Yuming 27 August 2014
A growing number of studies show that the predicted energy performance of buildings deviates significantly from actual measured energy use. This so-called "performance gap" may undermine confidence in energy-efficient buildings, and thereby the role of building energy efficiency in the national carbon reduction plan. Closing the performance gap is a daunting challenge for the professions involved, stimulating them to reflect on how to investigate and better understand the size, origins, and extent of the gap.

The energy performance gap underlines the limited prediction capability of current building energy models. Specifically, existing predictions are predominantly deterministic, providing point estimates of the future quantity or event of interest, and thus largely ignoring the error and noise inherent in an uncertain future of building energy consumption. To overcome this, the thesis turns to a thriving area in engineering statistics that focuses on computation-based uncertainty quantification. The work provides theories and models that enable probabilistic prediction of future energy consumption, forming the basis of risk assessment in decision-making.

Uncertainties that affect the wide variety of interacting systems in buildings are organized into five scales (meteorology - urban - building - systems - occupants). At each level, both model-form and input-parameter uncertainty are characterized probabilistically, involving statistical modeling and distributional analysis of parameters. The quantification of uncertainty at the different system scales is accomplished using the network of collaborators established through an NSF-funded research project. This bottom-up uncertainty quantification approach, which deals with meta-uncertainty, is fundamental for generic application of uncertainty analysis across different types of buildings, under different urban climate conditions, and in different usage scenarios.

Probabilistic predictions are evaluated by two criteria: coverage and sharpness. The goal of probabilistic prediction is to maximize the sharpness of the predictive distributions subject to coverage of the realized values. The method is evaluated on a set of buildings on the Georgia Tech campus, where the energy consumption of each building is monitored, in most cases through hourly sub-metered consumption data. This research shows that a good match between probabilistic predictions and the real energy consumption of buildings in operation is achievable. Results from the six case buildings show that using the best point estimates of the probabilistic predictions reduces the mean absolute error (MAE) from 44% to 15% and the root mean squared error (RMSE) from 49% to 18% in total annual cooling energy consumption. For monthly cooling energy consumption, the MAE decreases from 44% to 21% and the RMSE from 53% to 28%. More importantly, the entire probability distributions are statistically verified at the annual level of building energy predictions. Based on uncertainty and sensitivity analysis applied to these buildings, the thesis concludes that the proposed method significantly reduces the magnitude of the building energy performance gap and effectively infers its origins.
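
The two evaluation criteria named above, coverage and sharpness, are simple to compute once predictive samples are available. The sketch below uses synthetic monthly cooling-energy numbers (not the Georgia Tech data) to show how a central 90% prediction interval yields a coverage rate and a sharpness measure, alongside normalized MAE and RMSE for the point estimate taken as the predictive median.

```python
# Minimal sketch (hypothetical data): coverage and sharpness of a central 90%
# prediction interval, plus normalized MAE/RMSE for the predictive median.
import numpy as np

rng = np.random.default_rng(1)
n_months, n_draws = 12, 1000

# predictive samples, e.g. monthly cooling energy from a calibrated model (kWh)
pred = rng.normal(loc=120.0, scale=12.0, size=(n_months, n_draws))
observed = rng.normal(loc=118.0, scale=10.0, size=n_months)   # metered values

lo, med, hi = np.percentile(pred, [5, 50, 95], axis=1)

coverage = np.mean((observed >= lo) & (observed <= hi))   # target ~0.90
sharpness = np.mean(hi - lo)                              # smaller is better
mae = np.mean(np.abs(med - observed)) / np.mean(observed)
rmse = np.sqrt(np.mean((med - observed) ** 2)) / np.mean(observed)

print(f"coverage={coverage:.2f}, sharpness={sharpness:.1f} kWh, "
      f"MAE={mae:.1%}, RMSE={rmse:.1%}")
```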
44

Towards multifidelity uncertainty quantification for multiobjective structural design

Lebon, Jérémy 12 December 2013
This thesis addresses multi-objective optimization under uncertainty in structural design. We investigate Polynomial Chaos Expansion (PCE) surrogates, which require extensive training sets. We then face two issues: the high computational cost of an individual finite element simulation and its limited precision. From a numerical point of view, and in order to limit the computational expense of the PCE construction, we focus in particular on sparse PCE schemes. We also develop a custom Latin Hypercube Sampling scheme that takes into account the finite precision of the simulation. From the modeling point of view, we propose a multifidelity approach involving a hierarchy of models ranging from full-scale simulations through reduced-order physics to response surfaces. Finally, we investigate multi-objective optimization of structures under uncertainty. We extend the PCE model of the design objectives by taking into account the design variables. We illustrate our work with examples in sheet metal forming and the optimal design of truss structures.
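
A standard Latin Hypercube design of the kind extended in the thesis can be generated with SciPy as sketched below; the custom scheme accounting for the finite precision of each finite element simulation is not reproduced here, and the parameter names and bounds are purely illustrative.

```python
# Minimal sketch (standard Latin Hypercube Sampling via SciPy; the thesis's
# precision-aware variant is not reproduced here).
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=42)      # 3 uncertain design parameters
unit_sample = sampler.random(n=50)              # 50 training points in [0, 1]^3

# rescale to illustrative physical bounds, e.g. sheet thickness (mm),
# friction coefficient, yield stress (MPa)
l_bounds = [0.8, 0.05, 250.0]
u_bounds = [1.2, 0.15, 350.0]
X_train = qmc.scale(unit_sample, l_bounds, u_bounds)

print(X_train[:3])  # training set used to fit a (sparse) PCE surrogate
```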
45

Multiscale Simulation and Uncertainty Quantification Techniques for Richards' Equation in Heterogeneous Media

Kang, Seul Ki August 2012
In this dissertation, we develop multiscale finite element methods and an uncertainty quantification technique for Richards' equation, a mathematical model that describes fluid flow in unsaturated porous media. Both coarse-level and fine-level numerical computation techniques are presented.

To develop an accurate coarse-scale numerical method, we need to construct an effective multiscale map that is able to capture the multiscale features of the large-scale solution without resolving the small-scale details. With a careful choice of coarse spaces for multiscale finite element methods, we can significantly reduce errors. We introduce several methods to construct coarse spaces for multiscale finite element methods, including a coarse space based on local spectral problems. The construction of coarse spaces begins with an initial choice of multiscale basis functions supported in coarse regions. These basis functions are complemented using weighted local spectral eigenfunctions. The newly constructed basis functions can capture the small-scale features of the solution within a coarse-grid block and give us an accurate coarse-scale solution. However, it is expensive to compute the local basis functions for each parameter value of a nonlinear equation. To overcome this difficulty, a local reduced-basis method is discussed, which provides lower-dimensional spaces in which to compute the basis functions.

Robust solution techniques for Richards' equation at the fine scale are also discussed. We construct iterative solvers for Richards' equation whose number of iterations is independent of the contrast. We employ two-level domain decomposition preconditioners to solve the linear systems arising in the approximation of problems with high contrast. We show that, by using the local spectral coarse space for the preconditioners, the number of iterations for these solvers is independent of the physical properties of the media. Several numerical experiments are given to support the theoretical results.

Last, we present numerical methods for uncertainty quantification applications of Richards' equation. Numerical methods combined with stochastic solution techniques are proposed to sample conductivities of porous media given integrated data. The proposed algorithm is based on upscaling techniques and the Markov chain Monte Carlo method. Sampling results are presented to demonstrate the efficiency and accuracy of the algorithm.
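
The final sampling step can be pictured with a stripped-down sketch: a single-parameter Metropolis-Hastings chain conditioning a log-conductivity on noisy integrated data. The forward model and noise level below are hypothetical stand-ins for a Richards' equation solve, and the coarse-model screening stage that makes the dissertation's upscaling-based MCMC efficient is omitted.

```python
# Minimal sketch (hypothetical one-parameter example): Metropolis-Hastings
# sampling of a log-conductivity given noisy integrated data. The coarse-model
# screening step of the upscaling-based MCMC is omitted.
import numpy as np

rng = np.random.default_rng(3)

def forward(log_k):
    # stand-in for a fine-scale Richards' equation solve returning integrated flux
    return 2.0 * np.exp(0.5 * log_k)

data, sigma = 3.1, 0.1          # observed integrated response and noise level

def log_post(log_k):
    # Gaussian likelihood times a standard-normal prior on the log-conductivity
    return -0.5 * ((forward(log_k) - data) / sigma) ** 2 - 0.5 * log_k ** 2

chain, current = [], 0.0
lp_current = log_post(current)
for _ in range(5000):
    proposal = current + 0.3 * rng.standard_normal()
    lp_prop = log_post(proposal)
    if np.log(rng.random()) < lp_prop - lp_current:   # accept/reject step
        current, lp_current = proposal, lp_prop
    chain.append(current)

burned = np.array(chain[1000:])                        # discard burn-in
print(f"posterior mean of log K: {burned.mean():.3f} +/- {burned.std():.3f}")
```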
46

Statistical methods for post-processing ensemble weather forecasts

Williams, Robin Mark January 2016
Until recent times, weather forecasts were deterministic in nature. For example, a forecast might state "The temperature tomorrow will be 20°C." More recently, however, increasing attention has been paid to the uncertainty associated with such predictions. By quantifying the uncertainty of a forecast, for example with a probability distribution, users can make risk-based decisions. The uncertainty in weather forecasts is typically based upon 'ensemble forecasts'. Rather than issuing a single forecast from a numerical weather prediction (NWP) model, ensemble forecasts comprise multiple model runs that differ in either the model physics or the initial conditions. Ideally, ensemble forecasts would provide a representative sample of the possible outcomes of the verifying observations. However, due to model biases and inadequate specification of initial conditions, ensemble forecasts are often biased and underdispersed. As a result, estimates of the most likely values of the verifying observations, and of the associated forecast uncertainty, are often inaccurate. It is therefore necessary to correct, or post-process, ensemble forecasts using statistical models known as 'ensemble post-processing methods'. To this end, this thesis is concerned with the application of statistical methodology in the field of probabilistic weather forecasting, and in particular ensemble post-processing. Using various datasets, we extend existing work and propose novel uses of statistical methodology to tackle several aspects of ensemble post-processing. Our novel contributions to the field are the following.

In chapter 3 we present a comparison study of several post-processing methods, with a focus on probabilistic forecasts for extreme events. We find that the benefits of ensemble post-processing are larger for forecasts of extreme events than for forecasts of common events. We show that allowing flexible corrections to the biases in ensemble location is important for the forecasting of extreme events.

In chapter 4 we tackle the complicated problem of post-processing ensemble forecasts without making distributional assumptions, to produce recalibrated ensemble forecasts without the intermediate step of specifying a probability forecast distribution. We propose a latent variable model, and make a novel application of measurement error models. We show in three case studies that our distribution-free method is competitive with a popular alternative that makes distributional assumptions. We suggest that our distribution-free method could serve as a useful baseline on which forecasters should seek to improve.

In chapter 5 we address the subject of parameter uncertainty in ensemble post-processing. As in all parametric statistical models, the parameter estimates are subject to uncertainty. We approximate the distribution of model parameters by bootstrap resampling, and demonstrate improvements in forecast skill by incorporating this additional source of uncertainty into out-of-sample probability forecasts.

In chapter 6 we use model diagnostic tools to determine how specific post-processing models may be improved. We subsequently introduce bias correction schemes that move beyond the standard linear schemes employed in the literature and in practice, particularly in the case of correcting ensemble underdispersion. Finally, we illustrate the complicated problem of assessing the skill of ensemble forecasts whose members are dependent, or correlated. We show that dependent ensemble members can result in surprising conclusions when employing standard measures of forecast skill.
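
One widely used baseline of the kind this thesis builds on and extends is nonhomogeneous Gaussian regression (EMOS), in which the post-processed forecast is a normal distribution whose mean and variance are affine in the ensemble mean and variance. The sketch below fits such a model by maximum likelihood to synthetic data (a deliberately biased, underdispersed ensemble); it illustrates the standard linear correction, not the chapter 6 schemes that move beyond it.

```python
# Minimal sketch (hypothetical data): an EMOS / nonhomogeneous Gaussian
# regression baseline, N(a + b*ens_mean, c + d*ens_var), fitted by maximum
# likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
n_days, n_members = 300, 20

truth = 15 + 5 * np.sin(np.linspace(0, 6 * np.pi, n_days))
# a biased, underdispersed ensemble: too warm and too confident
ens = truth[:, None] + 1.5 + 0.6 * rng.standard_normal((n_days, n_members))
obs = truth + 1.2 * rng.standard_normal(n_days)

m, v = ens.mean(axis=1), ens.var(axis=1)

def neg_log_lik(theta):
    a, b, log_c, log_d = theta
    mu = a + b * m
    sigma = np.sqrt(np.exp(log_c) + np.exp(log_d) * v)
    return -norm.logpdf(obs, loc=mu, scale=sigma).sum()

res = minimize(neg_log_lik, x0=[0.0, 1.0, 0.0, 0.0], method="Nelder-Mead")
a, b, log_c, log_d = res.x
print(f"bias correction: mu = {a:.2f} + {b:.2f} * ens_mean")
print(f"dispersion correction: var = {np.exp(log_c):.2f} + {np.exp(log_d):.2f} * ens_var")
```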
47

Stochastic analysis, simulation and identification of hyperelastic constitutive equations

Staber, Brian 29 June 2018
This work is concerned with the construction, generation, and identification of stochastic continuum models for heterogeneous materials exhibiting nonlinear behavior. The main application domain is biomechanics, through the development of multiscale and stochastic modeling tools aimed at quantifying the large uncertainties exhibited by soft tissues. Two aspects are particularly highlighted. The first is the treatment of uncertainties in nonlinear mechanics and their impact on predictions of the quantities of interest. The second is the construction, high-dimensional generation, and multiscale identification of continuum representations from limited experimental data.
48

Cross entropy-based analysis of spacecraft control systems

Mujumdar, Anusha Pradeep January 2016
Space missions increasingly require sophisticated guidance, navigation and control algorithms, whose development relies on verification and validation (V&V) techniques to ensure mission safety and success. A crucial element of V&V is the assessment of control system robust performance in the presence of uncertainty. In addition to estimating average performance under uncertainty, it is critical to determine the worst-case performance. Industrial V&V approaches typically employ mu-analysis in the early control design stages, and Monte Carlo simulations on high-fidelity full engineering simulators at advanced stages of the design cycle. While highly capable, such techniques leave a critical gap between the pessimistic worst-case estimates found using analytical methods and the optimistic outlook often presented by Monte Carlo runs. Conservative worst-case estimates are problematic because they can demand a controller redesign procedure that is not justified if the poor performance is unlikely to occur. Gaining insight into the probability associated with the worst-case performance is valuable in bridging this gap. Due to the complexity of industrial-scale systems, V&V techniques must be capable of efficiently analysing non-linear models in the presence of significant uncertainty while remaining computationally tractable. It is also desirable that such techniques demand little engineering effort before each analysis, so that they can be applied widely to industrial systems.

Motivated by these factors, this thesis proposes and develops an efficient algorithm based on the cross entropy simulation method. The proposed algorithm efficiently estimates the probabilities associated with various performance levels, from nominal performance up to degraded performance values, resulting in a curve of probabilities as a function of performance. Such a curve is termed the probability profile of performance (PPoP), and is introduced as a tool that offers insight into a control system's performance, principally the probability associated with the worst-case performance.

The cross entropy-based robust performance analysis is implemented here on various industrial systems in European Space Agency-funded research projects. The implementation on autonomous rendezvous and docking models for the Mars Sample Return mission constitutes the core of the thesis. The proposed technique is also implemented on high-fidelity models of the Vega launcher, as well as on a generic long-coasting launcher upper stage. In summary, this thesis (a) develops an algorithm based on the cross entropy simulation method to estimate the probability associated with the worst case, (b) proposes the cross entropy-based PPoP tool to gain insight into system performance, (c) presents results of the robust performance analysis of three space industry systems using the proposed technique in conjunction with existing methods, and (d) proposes an integrated template for conducting robust performance analysis of linearised aerospace systems.
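
The cross entropy simulation method at the heart of the thesis can be sketched in a few lines: the sampling density is iteratively tilted toward the region where a performance index exceeds a threshold, and the rare-event probability is then estimated by importance sampling with the tilted density. The performance function, threshold, and Gaussian uncertainty model below are hypothetical; a PPoP would be obtained by repeating the estimate over a range of thresholds.

```python
# Minimal sketch (hypothetical performance function): cross-entropy estimation
# of the small probability that a performance index exceeds a degraded-
# performance threshold, with a Gaussian importance-sampling family.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

def performance(x):
    # stand-in for e.g. a docking position error returned by a simulator run
    return x[:, 0] ** 2 + 0.5 * np.abs(x[:, 1])

gamma = 12.0                  # degraded-performance threshold
n, rho = 2000, 0.1            # samples per iteration, elite fraction
mu, sigma = np.zeros(2), np.ones(2)          # nominal N(0, I) uncertainty

# adaptive stage: tilt the sampling density towards the rare region
for _ in range(20):
    x = mu + sigma * rng.standard_normal((n, 2))
    s = performance(x)
    level = min(gamma, np.quantile(s, 1 - rho))
    elites = x[s >= level]
    mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    if level >= gamma:
        break

# final stage: importance sampling with the tilted density
x = mu + sigma * rng.standard_normal((n, 2))
s = performance(x)
w = np.prod(norm.pdf(x), axis=1) / np.prod(norm.pdf(x, loc=mu, scale=sigma), axis=1)
p_hat = np.mean(w * (s >= gamma))
print(f"P(performance >= {gamma}) ~ {p_hat:.2e}")
```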
49

Nonlinear Dynamics of Uncertain Multi-Joint Structures

January 2016
The present investigation is part of a long-term effort focused on the development of a methodology for the computationally efficient prediction of the dynamic response of structures with multiple joints. The first part of this thesis reports on the dynamic response of nominally identical beams with a single lap joint (the "Brake-Reuss" beam). The observed impact responses at different excitation levels clearly demonstrate the occurrence of both micro- and macro-slip, which are reflected in increased damping and a lowering of natural frequencies. Significant beam-to-beam variability of the impact responses is also observed. Based on these experimental results, a deterministic 4-parameter Iwan model of the joint was developed. These parameters were then randomized following a previous investigation. The randomness in the impact response predicted from this uncertain model was assessed in a Monte Carlo format through a series of time integrations of the response and was found to be consistent with the experimental results.

The availability of an uncertain computational model for the Brake-Reuss beam provides a starting point for analyzing and modeling the response of multi-joint structures in the presence of uncertainty/variability. To this end, a 4-beam frame was designed that is composed of three identical Brake-Reuss beams and a fourth, stretched one. The response of that structure to impact was computed and several cases were identified. The presence of uncertainty implies that an exact prediction of the response of a particular frame cannot be achieved. Rather, the response can only be predicted to lie within a band reflecting the level of uncertainty. In this perspective, the computational model adopted for the frame is only required to provide a good estimate of this uncertainty band. Equivalently, a relaxation of the model complexity, i.e., the introduction of epistemic uncertainty, can be performed as long as it does not significantly affect the uncertainty band of the predictions. Such an approach, which holds significant promise for the efficient computation of the response of structures with many uncertain joints, is assessed here by replacing some joints with linear spring elements. It is found that this simplification of the model is often acceptable at lower excitation/response levels. / Dissertation/Thesis / Masters Thesis Mechanical Engineering 2016
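
The Monte Carlo assessment and the notion of an uncertainty band mentioned above can be illustrated with a deliberately simplified stand-in: a linear single-mode oscillator with randomized modal parameters in place of the randomized 4-parameter Iwan joint model. The sketch propagates that parameter variability to an impact response and extracts percentile bands.

```python
# Minimal sketch (a linearized stand-in, not the 4-parameter Iwan joint model):
# Monte Carlo propagation of joint-parameter variability to an impact response,
# summarized as an uncertainty band.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 0.2, 400)                 # time axis [s]

def impact_response(f_n, zeta):
    # impulse response of an equivalent single-mode oscillator
    wn = 2 * np.pi * f_n
    wd = wn * np.sqrt(1 - zeta ** 2)
    return np.exp(-zeta * wn * t) * np.sin(wd * t) / wd

# randomized modal parameters reflecting beam-to-beam joint variability
f_n = rng.normal(150.0, 6.0, size=500)          # natural frequency [Hz]
zeta = rng.lognormal(np.log(0.02), 0.3, 500)    # damping ratio (slip-dependent)

responses = np.array([impact_response(f, z) for f, z in zip(f_n, zeta)])
band_lo, band_hi = np.percentile(responses, [5, 95], axis=0)
print("upper band peak:", band_hi.max(), " max band width:", (band_hi - band_lo).max())
```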
50

Stochastic methods for uncertainty treatment of functional variables in computer codes: application to safety studies

Nanty, Simon 15 October 2015
This work falls within the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to safety studies of nuclear reactors. These two applications have several common features. The first is that the computer code inputs are functional and scalar variables, the functional inputs being dependent on one another. The second is that the probability distribution of the functional variables is known only through a sample of their realizations. The third, relevant to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators for the two considered cases.

First, we have proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology makes it possible both to model the dependency between the functional variables and to account for their link to another variable, called a covariate, which could be, for instance, the output of the considered code. Alongside this methodology, we have developed an adaptation of a visualization tool for functional data, which enables the uncertainties and features of several dependent functional variables to be visualized simultaneously.

Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases has been proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model, or metamodel, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables has been developed to build the learning basis of the metamodel. Finally, a new approximation approach for expensive codes with functional inputs has been explored. In this approach, the code is seen as a stochastic code whose randomness is due to the functional variables, which are assumed to be uncontrollable. In this framework, several metamodels have been developed and compared. All the methods proposed in this work have been applied to the two nuclear safety applications.
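
The metamodel step described above can be sketched with a Gaussian-process surrogate trained on a small design and then evaluated massively in place of the expensive code, for instance to feed a sampling-based sensitivity analysis. The sketch below uses scalar inputs and a toy function; in the thesis the dependent functional inputs are first reduced to a small number of coefficients before such a surrogate is built, and that reduction is not reproduced here.

```python
# Minimal sketch (scalar inputs and a toy function only): a Gaussian-process
# metamodel standing in for an expensive simulator, then evaluated massively
# as a cheap proxy.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def expensive_code(x):
    # stand-in for the costly safety-study simulator
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

X_train = rng.uniform(-1, 1, size=(60, 2))     # small, affordable design
y_train = expensive_code(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)

# the metamodel is then evaluated in place of the code, e.g. for sensitivity analysis
X_big = rng.uniform(-1, 1, size=(20000, 2))
y_hat = gp.predict(X_big)
print("metamodel-based output variance:", y_hat.var())
```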
