41

Quantifying Uncertainty in Reactor Flux/Power Distributions

Kennedy, Ryanne Ariel 22 July 2011 (has links)
No description available.
42

Modeling and quantifying uncertainty in bus arrival time prediction

Josefsson, Olof January 2023 (has links)
Public transportation operates in an environment which, owing to its numerous potentially influencing factors, is highly stochastic. This makes arrival times difficult to predict, yet predictions need to be accurate in order to meet travelers' expectations. This study focuses on quantifying the uncertainty around travel-time predictions as a means to improve their reliability in the context of public transportation. This is done by comparing the Prediction Interval Coverage Probability (PICP) and the Normalized Mean Prediction Interval Length (NMPIL). Three models, with two transformations of the response variable, were evaluated on real travel data from Skånetrafiken, with the study focusing on a specific urban bus route: line 5 in Malmö, Sweden. The results indicated that a transformation based on the firstDifference achieved better performance overall, although results on a stopwise basis varied along the route. Among the models, the uncertainty quantification revealed that Quantile Regression could be more appropriate for capturing intervals that provide better coverage at a shorter interval length, and is thus more precise in its predictions. This is likely related to the robustness of the model and its ability to handle extreme observations. A comparison with the current prediction model, which is treated as agnostic in this study, revealed that the proposed point estimates from the Gaussian Process model based on the firstDifference transformation outperformed the agnostic model at several stops. Further research is therefore proposed, as there is room for improvement in the current implementation.
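The two interval metrics compared in the study have standard definitions that are easy to state in code. The sketch below, in Python with illustrative data (the exact normalization used for NMPIL in the thesis may differ), computes both for a fixed-width interval:

```python
# PICP: fraction of observations inside their prediction interval.
# NMPIL: mean interval width, normalized by the range of the observations.
import numpy as np

def picp(y_true, lower, upper):
    """Prediction Interval Coverage Probability."""
    return np.mean((y_true >= lower) & (y_true <= upper))

def nmpil(y_true, lower, upper):
    """Normalized Mean Prediction Interval Length."""
    return np.mean(upper - lower) / (y_true.max() - y_true.min())

# Toy usage: arrival-time deviations with a fixed symmetric interval.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 30.0, size=200)          # seconds of deviation (synthetic)
lo, hi = y.mean() - 50.0, y.mean() + 50.0    # a fixed-width interval
print(picp(y, lo, hi), nmpil(y, lo, hi))
```

A good model scores a PICP close to the nominal coverage level while keeping NMPIL small, which is the tension the thesis evaluates stop by stop.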
43

A Variational Approach to Estimating Uncertain Parameters in Elliptic Systems

van Wyk, Hans-Werner 25 May 2012 (has links)
As simulation plays an increasingly central role in modern science and engineering research, by supplementing experiments, aiding in the prototyping of engineering systems, and informing decisions on safety and reliability, the need to quantify uncertainty in model outputs due to uncertainties in the model parameters becomes critical. However, the statistical characterization of the model parameters is rarely known. In this thesis, we propose a variational approach to solve the stochastic inverse problem of obtaining a statistical description of the diffusion coefficient in an elliptic partial differential equation, based on noisy measurements of the model output. We formulate the parameter identification problem as an infinite dimensional constrained optimization problem, for which we establish the existence of minimizers as well as first order necessary conditions. A spectral approximation of the uncertain observations (via a truncated Karhunen-Loeve expansion) allows us to approximate the infinite dimensional problem by a smooth, albeit high dimensional, deterministic optimization problem, the so-called 'finite noise' problem, posed in the space of functions with bounded mixed derivatives. We prove convergence of 'finite noise' minimizers to the appropriate infinite dimensional ones, and devise both a gradient-based and a sampling-based strategy for locating these minimizers numerically. Lastly, we illustrate our methods by means of numerical examples. / Ph. D.
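The 'finite noise' reduction rests on the truncated Karhunen-Loeve expansion. A minimal numerical sketch, assuming an exponential covariance on a uniform grid (both illustrative choices, not taken from the thesis):

```python
# Discrete Karhunen-Loeve expansion: keep the leading eigenpairs of the
# covariance matrix and represent the random field by m uncorrelated
# coordinates, reducing an infinite-dimensional uncertainty to dimension m.
import numpy as np

n, ell = 200, 0.2
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # exponential covariance
eigvals, eigvecs = np.linalg.eigh(C)                 # ascending eigenvalues
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # reorder descending

m = 10                                               # 'finite noise' dimension
rng = np.random.default_rng(1)
xi = rng.standard_normal(m)                          # uncorrelated KL coordinates
field = eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi) # one field realization
print(field.shape, eigvals[:m].sum() / eigvals.sum())  # retained variance
```

The truncation level m trades approximation quality against the dimension of the resulting deterministic optimization problem.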
44

Sequential learning, large-scale calibration, and uncertainty quantification

Huang, Jiangeng 23 July 2019 (has links)
With remarkable advances in computing power, computer experiments continue to expand the boundaries and drive down the cost of various scientific discoveries. New challenges keep arising from designing, analyzing, modeling, calibrating, optimizing, and predicting in computer experiments. This dissertation consists of six chapters, exploring statistical methodologies in sequential learning, model calibration, and uncertainty quantification for heteroskedastic computer experiments and large-scale computer experiments. For heteroskedastic computer experiments, an optimal lookahead-based sequential learning strategy is presented, balancing replication and exploration to facilitate separating signal from input-dependent noise. Motivated by challenges in both large data size and model fidelity arising from ever larger modern computer experiments, highly accurate and computationally efficient divide-and-conquer calibration methods, based on on-site experimental design and surrogate modeling for large-scale computer models, are developed in this dissertation. The proposed methodology is applied to calibrate a real computer experiment from the gas and oil industry. This on-site surrogate calibration method is further extended to multiple-output calibration problems. / Doctor of Philosophy / With remarkable advances in computing power, complex physical systems today can be simulated comparatively cheaply and to high accuracy through computer experiments. Computer experiments continue to expand the boundaries and drive down the cost of various scientific investigations, including the biological, business, engineering, industrial, management, health-related, physical, and social sciences. This dissertation consists of six chapters, exploring statistical methodologies in sequential learning, model calibration, and uncertainty quantification for heteroskedastic computer experiments and large-scale computer experiments. For computer experiments with changing signal-to-noise ratios, an optimal lookahead-based sequential learning strategy is presented, balancing replication and exploration to facilitate separating signal from a complex noise structure. In order to effectively extract key information from massive amounts of simulation data and make better predictions about the real world, highly accurate and computationally efficient divide-and-conquer calibration methods for large-scale computer models are developed in this dissertation, addressing challenges in both large data size and model fidelity arising from ever larger modern computer experiments. The proposed methodology is applied to calibrate a real computer experiment from the gas and oil industry. This large-scale calibration method is further extended to solve multiple-output calibration problems.
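The replication-versus-exploration trade-off at the heart of the sequential strategy can be illustrated with a toy rule; note this crude heuristic is only a stand-in for the optimal lookahead criterion developed in the dissertation:

```python
# Toy heteroskedastic design decision: replicate where the estimated noise
# variance is largest (to pin down the noise), explore the largest gap
# between existing sites (to pin down the signal). All data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
X = np.repeat(np.linspace(0, 1, 8), 5)                    # 8 sites, 5 replicates each
y = np.sin(2 * np.pi * X) + rng.normal(0, 0.1 + 0.4 * X)  # noise grows with x

sites = np.unique(X)
noise_var = np.array([y[X == s].var(ddof=1) for s in sites])

replicate_at = sites[np.argmax(noise_var)]        # candidate replication site
gaps = np.diff(sites)
explore_at = sites[np.argmax(gaps)] + gaps.max() / 2  # candidate new site
print(f"replicate at x={replicate_at:.2f}, explore at x={explore_at:.2f}")
```

A lookahead criterion would instead weigh the expected reduction in predictive uncertainty from each candidate action over future designs, rather than applying a fixed rule.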
45

On Methodology for Verification, Validation and Uncertainty Quantification in Power Electronic Converters Modeling

Rashidi Mehrabadi, Niloofar 18 September 2014 (has links)
This thesis provides insight into the quantitative accuracy assessment of the modeling and simulation of power electronic converters. Verification, Validation, and Uncertainty Quantification (VV&UQ) provides a means to quantify the disagreement between computational simulation results and experimental results, enabling quantitative rather than merely qualitative comparisons. Due to the broad applications of modeling and simulation in power electronics, VV&UQ is used to evaluate the credibility of modeling and simulation results. VV&UQ therefore needs to be studied specifically for power electronic converters. To carry out this work, a formal procedure for VV&UQ of power electronic converters is presented, together with definitions of the fundamental terms in the proposed framework. The accuracy of the switching model of a three-phase Voltage Source Inverter (VSI) is quantitatively assessed following the proposed procedure. Accordingly, this thesis also describes the hardware design and development of the switching model of the three-phase VSI. / Master of Science
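The quantitative-comparison step that VV&UQ introduces can be as simple as computing explicit error metrics between simulated and measured waveforms instead of eyeballing overlaid plots. A sketch with illustrative metrics and synthetic data (not the thesis's specific procedure):

```python
# Summarize simulation-vs-experiment disagreement with explicit numbers:
# normalized RMSE and the worst-case pointwise error over one period.
import numpy as np

t = np.linspace(0, 0.02, 1000)                     # one 50 Hz period (toy)
v_sim = 320 * np.sin(2 * np.pi * 50 * t)           # switching-model output (toy)
v_meas = v_sim + np.random.default_rng(3).normal(0, 8, t.size)  # synthetic 'measurement'

nrmse = np.sqrt(np.mean((v_sim - v_meas) ** 2)) / (v_meas.max() - v_meas.min())
max_err = np.max(np.abs(v_sim - v_meas))
print(f"NRMSE = {nrmse:.3%}, max abs error = {max_err:.1f} V")
```

In a full VV&UQ workflow, such metrics would be reported together with the propagated uncertainty in both the simulation and the measurement chain.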
46

Data-driven Methods in Mechanical Model Calibration and Prediction for Mesostructured Materials

Kim, Jee Yun 01 October 2018 (has links)
Mesoscale design involving control of the material distribution pattern can create a statistically heterogeneous material system, which has shown increased adaptability to complex mechanical environments involving highly non-uniform stress fields. Advances in multi-material additive manufacturing can aid in this mesoscale design, providing voxel-level control of material properties. This vast freedom in the design space also unlocks possibilities within optimization of the material distribution pattern. The optimization problem can be divided into a forward problem, focusing on accurate prediction, and an inverse problem, focusing on an efficient search of the optimal design. In the forward problem, the physical behavior of the material can be modeled based on fundamental mechanics laws and simulated through finite element analysis (FEA). A major limitation in modeling is the unknown parameters in the constitutive equations that describe the constituent materials; determining these parameters via conventional single-material testing has proven insufficient, which necessitates novel and effective approaches to calibration. A calibration framework based on Bayesian inference, which integrates data from simulations and physical experiments, has been applied to a study involving a mesostructured material fabricated by fused deposition modeling. Calibration results provide insights into the values to which these parameters converge, as well as which material parameters the model output depends on most strongly, while accounting for sources of uncertainty introduced during the modeling process. Additionally, this statistical formulation is able to provide quick predictions of the physical system by implementing a surrogate and a discrepancy model. The surrogate model is meant to be a statistical representation of the simulation results, circumventing issues arising from computational load, while the discrepancy model accounts for the difference between the simulation output and physical experiments. In this thesis, this Bayesian calibration framework is applied to a material bending problem, where in-situ mechanical characterization data and FEA simulations based on constitutive modeling are combined to produce updated values of the unknown material parameters with uncertainty. / Master of Science / A material system obtained by applying a pattern of multiple materials has proven its adaptability to complex practical conditions. The layer-by-layer manufacturing process of additive manufacturing allows for this type of design because of its control over where material can be deposited. This possibility then raises the question of how a multi-material system can be optimized in its design for a given application. In this research, we focus mainly on the problem of accurately predicting the response of the material when subjected to stimuli. Conventionally, simulations aided by finite element analysis (FEA) were relied upon for prediction; however, this presents many issues, such as long run times and uncertainty in context-specific inputs of the simulation. We have instead adopted a framework using advanced statistical methodology able to combine both experimental and simulation data to significantly reduce run times as well as quantify the various uncertainties associated with running simulations.
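A minimal sketch of the Bayesian calibration idea in this spirit: infer an unknown parameter by combining a simulator with noisy experiments via a random-walk Metropolis sampler. The toy simulator, the prior, and the omission of the surrogate and discrepancy models are all simplifications for illustration:

```python
# Random-walk Metropolis calibration of one unknown parameter theta,
# assuming Gaussian measurement noise and a uniform prior. The quadratic
# 'simulator' is a cheap stand-in for an FEA bending model.
import numpy as np

rng = np.random.default_rng(4)

def simulator(x, theta):
    return theta * x ** 2            # hypothetical stand-in for FEA

x_obs = np.linspace(0.1, 1.0, 10)
theta_true, sigma = 2.5, 0.05
y_obs = simulator(x_obs, theta_true) + rng.normal(0, sigma, x_obs.size)

def log_post(theta):
    if not (0.0 < theta < 10.0):     # uniform prior on (0, 10)
        return -np.inf
    resid = y_obs - simulator(x_obs, theta)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

theta, samples = 1.0, []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.1)                       # symmetric proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                                        # accept
    samples.append(theta)
post = np.array(samples[1000:])      # discard burn-in
print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f}")
```

In the thesis, the expensive FEA model would be replaced by a surrogate and augmented with a discrepancy term, but the accept/reject mechanics are the same.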
47

SIR-models and uncertainty quantification

Jakobsson, Per Henrik, Wärnberg, Anton January 2024 (has links)
This thesis applies the theory of uncertainty quantification and sensitivity analysis to the SIR and SEIR models for the spread of diseases. We attempt to determine whether this theory can be applied to estimate the model parameters to an acceptable degree of accuracy. Using sensitivity analysis, we determine which parameters of the models are the most significant for some quantity of interest. We apply forward uncertainty quantification to determine how the uncertainty in the model parameters propagates to the quantities of interest. Lastly, we apply uncertainty quantification based on the maximum likelihood method to estimate the model parameters. To easily verify the results, we use synthetic data when estimating the parameters. After applying these methods, we see that the importance of the model parameters depends heavily on the choice of quantity of interest. We also note that the maximum likelihood method reduces the uncertainty in the quantities of interest, although many sources of error still need to be considered.
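Forward uncertainty propagation through the SIR model reduces to sampling the uncertain parameters, integrating the ODEs, and summarizing the spread of a quantity of interest. A sketch with illustrative parameter distributions:

```python
# Monte Carlo forward UQ for the SIR model: sample (beta, gamma), solve the
# ODE system, and record the epidemic peak as the quantity of interest.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

rng = np.random.default_rng(5)
peaks = []
for _ in range(500):
    beta = rng.normal(0.30, 0.03)      # uncertain contact rate (illustrative)
    gamma = rng.normal(0.10, 0.01)     # uncertain recovery rate (illustrative)
    sol = solve_ivp(sir, (0, 160), [0.99, 0.01, 0.0],
                    args=(beta, gamma), max_step=1.0)
    peaks.append(sol.y[1].max())       # peak infected fraction
peaks = np.array(peaks)
print(f"peak infected: mean {peaks.mean():.3f}, 95% interval "
      f"[{np.percentile(peaks, 2.5):.3f}, {np.percentile(peaks, 97.5):.3f}]")
```

Replacing the peak with a different quantity of interest (e.g. final epidemic size) changes which parameter dominates the output spread, which is the effect the thesis observes.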
48

Statistical adjustment, calibration, and uncertainty quantification of complex computer models

Yan, Huan 27 August 2014 (has links)
This thesis consists of three chapters on the statistical adjustment, calibration, and uncertainty quantification of complex computer models with applications in engineering. The first chapter systematically develops an engineering-driven statistical adjustment and calibration framework, the second chapter deals with the calibration of a potassium current model in a cardiac cell, and the third chapter develops an emulator-based approach for propagating input parameter uncertainty in a solid end milling process. Engineering model development involves several simplifying assumptions made for the purpose of mathematical tractability, which are often not realistic in practice. This leads to discrepancies in the model predictions. A commonly used statistical approach to overcome this problem is to build a statistical model for the discrepancies between the engineering model and the observed data. In contrast, an engineering approach would be to find the causes of discrepancy and fix the engineering model using first principles. However, the engineering approach is time consuming, whereas the statistical approach is fast. The drawback of the statistical approach is that it treats the engineering model as a black box; therefore, the statistically adjusted models lack physical interpretability. In the first chapter, we propose a new framework for model calibration and statistical adjustment. It tries to open up the black box using simple main-effects analysis and graphical plots, and introduces statistical models inside the engineering model. This approach leads to simpler adjustment models that are physically more interpretable. The approach is illustrated using a model for predicting the cutting forces in a laser-assisted mechanical micromachining process and a model for predicting the temperature of outlet air in a fluidized-bed process. The second chapter studies the calibration of a computer model of potassium currents in a cardiac cell. The computer model is expensive to evaluate and contains twenty-four unknown parameters, which makes the calibration challenging for traditional methods using kriging. Another difficulty with this problem is the presence of large cell-to-cell variation, which is modeled through random effects. We propose physics-driven strategies for the approximation of the computer model and an efficient method for the identification and estimation of parameters in this high-dimensional nonlinear mixed-effects statistical model. Traditional sampling-based approaches to uncertainty quantification can be slow if the computer model is computationally expensive. In such cases, an easy-to-evaluate emulator can be used to replace the computer model to improve computational efficiency. However, the traditional technique using kriging is found to perform poorly for the solid end milling process. In the third chapter, we develop a new emulator in which a base function is used to capture the general trend of the output, and we propose optimal experimental design strategies for fitting it. We call our proposed emulator the local base emulator. Using the solid end milling example, we show that the local base emulator is an efficient and accurate technique for uncertainty quantification and has advantages over other traditional tools.
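The base-function idea can be sketched as a generic trend-plus-GP construction: fit a simple parametric trend to the simulator output, then model the residuals with a Gaussian process. This shows the flavor only; the thesis's local base emulator and its design strategies are more involved:

```python
# Trend-plus-GP emulator sketch: a linear base function captures the gross
# behaviour; a GP on the residuals captures the remaining structure.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, (30, 1))
y = 3.0 * X[:, 0] + 0.3 * np.sin(12 * X[:, 0])       # toy 'simulator' output

coef = np.polyfit(X[:, 0], y, deg=1)                 # base function: linear trend
resid = y - np.polyval(coef, X[:, 0])                # what the trend misses

gp = GaussianProcessRegressor(RBF(0.1) + WhiteKernel(1e-4)).fit(X, resid)

X_new = np.linspace(0, 1, 5).reshape(-1, 1)
mu, sd = gp.predict(X_new, return_std=True)
pred = np.polyval(coef, X_new[:, 0]) + mu            # emulator = trend + GP residual
print(np.c_[X_new[:, 0], pred, sd])                  # input, prediction, std
```

Because the trend absorbs the large-scale variation, the GP only needs to model a small, stationary residual, which is where a plain kriging emulator can otherwise struggle.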
49

Uncertainty and Robustness Analysis for Models with Functional Inputs and Outputs

El Amri, Mohamed 29 April 2019 (has links)
This thesis deals with the inversion problem under uncertainty of expensive-to-evaluate functions, in the context of tuning the control unit of a vehicle depollution system. The effect of these uncertainties is taken into account through the expectation of the quantity of interest. The difficulty lies in the fact that the uncertainty is partly due to a functional variable known only through a given sample. We propose two approaches to solve the inversion problem, both based on Gaussian process modelling of the expensive-to-evaluate functions and on a dimension reduction of the functional variable by the Karhunen-Loève expansion. The first approach consists in applying a Stepwise Uncertainty Reduction (SUR) method to the expectation of the quantity of interest. At each evaluation point in the control space, the expectation is estimated by a greedy functional quantification method that provides a discrete representation of the functional variable and an efficient sequential estimate from the given sample. The second approach consists in applying the SUR method directly to the quantity of interest in the joint space of control and uncertain variables. A strategy for enriching the experimental design, dedicated to inversion under functional uncertainties and exploiting the properties of Gaussian processes, is proposed. These two approaches are compared on analytical toy functions and applied to an industrial case of exhaust gas post-treatment in a vehicle. The objective is to identify the control settings that meet pollutant emission norms under uncertainties on the driving cycle.
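A simplified sketch of GP-based inversion: estimate the excursion set {x : f(x) <= T} and evaluate next where the classification is most ambiguous. Full SUR minimizes an expected future uncertainty measure; the pointwise criterion below is a cheaper stand-in for illustration:

```python
# GP inversion sketch: iteratively refine the estimate of the sublevel set
# {x : f(x) <= T} by sampling the point with the highest probability of
# being misclassified relative to the threshold.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * x            # toy stand-in for the costly model
T = 0.0                                          # inversion threshold
grid = np.linspace(0.0, 3.0, 300).reshape(-1, 1) # candidate points

X = np.array([[0.2], [1.0], [2.5]])              # initial design
for _ in range(5):
    gp = GaussianProcessRegressor(RBF(0.5)).fit(X, f(X[:, 0]))
    mu, sd = gp.predict(grid, return_std=True)
    # probability of misclassifying each grid point w.r.t. the threshold
    p_mis = norm.cdf(-np.abs(mu - T) / np.maximum(sd, 1e-9))
    X = np.vstack([X, grid[[np.argmax(p_mis)]]])  # evaluate where most ambiguous

gp = GaussianProcessRegressor(RBF(0.5)).fit(X, f(X[:, 0]))
inside = grid[gp.predict(grid) <= T, 0]
print("estimated excursion interval:", inside.min(), "to", inside.max())
```

The thesis's setting adds a functional uncertain input, so the criterion is evaluated either on the expectation of the quantity of interest or directly in the joint control-uncertainty space.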
50

Statistical Analysis and Bayesian Methods for Fatigue Life Prediction and Inverse Problems in Linear Time Dependent PDEs with Uncertainties

Sawlan, Zaid A 10 November 2018 (has links)
This work employs statistical and Bayesian techniques to analyze mathematical forward models with several sources of uncertainty. The forward models usually arise from phenomenological and physical phenomena and are expressed through regression-based models or partial differential equations (PDEs) associated with uncertain parameters and input data. One of the critical challenges in real-world applications is to quantify the uncertainties of the unknown parameters using observations. To this end, methods based on the likelihood function and Bayesian techniques constitute the two main statistical inference approaches considered here. Two problems are studied in this thesis. The first is the prediction of the fatigue life of metallic specimens; the second concerns inverse problems in linear PDEs. Both problems require the inference of unknown parameters given certain measurements. We first estimate the parameters by means of the maximum likelihood approach. Next, we seek a more comprehensive Bayesian inference using analytical asymptotic approximations or computational techniques. In the fatigue life prediction, there are several plausible probabilistic stress-lifetime (S-N) models. These models are calibrated given uniaxial fatigue experiments. To generate accurate fatigue life predictions, competing S-N models are ranked according to several classical information-based measures. A different set of predictive information criteria is then used to compare the candidate Bayesian models. Moreover, we propose a spatial stochastic model to generalize S-N models to fatigue crack initiation in general geometries. The model is based on a spatial Poisson process with an intensity function that combines the S-N curves with an averaged effective stress computed from the solution of the linear elasticity equations.
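A Basquin-type S-N model, log N = a - b log S + eps with eps ~ N(0, sigma^2), is among the simpler probabilistic stress-lifetime forms; a maximum likelihood fit of such a model on synthetic data (all values illustrative) might look like:

```python
# Maximum likelihood calibration of a Basquin-type S-N model with Gaussian
# scatter on log-lifetime: logN ~ N(a - b*log(S), sigma^2).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
S = rng.uniform(200, 600, 40)                        # stress amplitudes (MPa, synthetic)
logN = 20.0 - 3.0 * np.log(S) + rng.normal(0, 0.3, S.size)  # synthetic lifetimes

def neg_log_lik(p):
    a, b, log_sigma = p                              # log-sigma keeps sigma > 0
    return -np.sum(norm.logpdf(logN, a - b * np.log(S), np.exp(log_sigma)))

fit = minimize(neg_log_lik, x0=[15.0, 2.0, 0.0], method="Nelder-Mead")
a_hat, b_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(f"a={a_hat:.2f}, b={b_hat:.2f}, sigma={sigma_hat:.3f}")
```

Competing S-N forms fitted this way can then be ranked by information criteria such as AIC, computed directly from the maximized likelihood, before moving to the fuller Bayesian comparison.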
