131

Inverse Uncertainty Quantification using deterministic sampling : An intercomparison between different IUQ methods

Andersson, Hjalmar January 2021 (has links)
In this thesis, two novel methods for Inverse Uncertainty Quantification (IUQ) are benchmarked against the more established methods of Monte Carlo sampling of output parameters (MC) and Maximum Likelihood Estimation (MLE). IUQ is the process of estimating the values of the input parameters of a simulation, and the uncertainty of that estimate, given a measurement of the output parameters. The two new methods are Deterministic Sampling (DS) and Weight Fixing (WF). Deterministic sampling uses a set of sampled points chosen so that the set has the same statistics as the output. For each output point, the corresponding input point is found, which makes it possible to calculate the statistics of the input. Weight fixing draws random samples from a rough region around the input and sets up a linear problem of finding the weights that give the output the right statistics. The benchmark shows that both DS and WF are comparable in accuracy to MC and MLE in most of the cases tested in this thesis. It was also found that DS and WF use approximately the same number of function calls as MLE, and that all three methods require far fewer calls to the simulation than MC. It was discovered that WF is not always able to find a solution, probably because the numerical methods used within WF are not optimal for the task. Finding better methods for WF is something that could be investigated further.
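The deterministic-sampling idea above can be illustrated with a minimal sketch: a small ensemble is chosen to reproduce the measured output statistics exactly, each ensemble point is mapped back through the simulation, and the input statistics are read off the mapped points. The toy forward model, the measurement statistics, and the two-point ensemble below are illustrative assumptions, not the benchmark cases of the thesis.

```python
# Hedged sketch of deterministic-sampling IUQ for a scalar toy model y = f(x).
import numpy as np
from scipy.optimize import brentq

def forward(x):
    """Toy simulation code: maps the input parameter to the output observable."""
    return x**3 + 2.0 * x

# Assumed measurement statistics of the output observable.
y_mean, y_std = 5.0, 0.4

# Deterministic two-point sample that reproduces the output mean and variance
# exactly (an unscented-transform-style ensemble with equal weights).
y_samples = np.array([y_mean - y_std, y_mean + y_std])

# Map each output point back to the corresponding input point by inverting f.
x_samples = np.array([brentq(lambda x, y=y: forward(x) - y, -10.0, 10.0)
                      for y in y_samples])

# Input statistics are read off the mapped ensemble; the population standard
# deviation is used because both points carry equal weight.
print(f"inferred input mean = {x_samples.mean():.4f}, std = {x_samples.std():.4f}")
```

With a symmetric two-point ensemble the mapped points capture the input mean and standard deviation; larger deterministic ensembles would be needed to capture higher moments.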
132

Quantifying Uncertainty in the Residence Time of the Drug and Carrier Particles in a Dry Powder Inhaler

Badhan, Antara, Krushnarao Kotteda, V. M., Afrin, Samia, Kumar, Vinod 01 September 2021 (has links)
Dry powder inhalers (DPIs), used as a means for pulmonary drug delivery, typically contain a combination of active pharmaceutical ingredient (API) particles and significantly larger carrier particles. The micro-sized drug particles, which have a strong propensity to aggregate and poor aerosolization performance, are mixed with significantly larger carrier particles that cannot penetrate the mouth-throat region, so as to deagglomerate and entrain the smaller API particles in the inhaled airflow. A DPI's performance therefore depends on the entrainment of the carrier-API combination particles and on the timing and thoroughness with which the individual API particles deagglomerate from the carriers. Since DPI particle transport is strongly affected by particle-particle interactions as well as particle sizes and shapes, modeling regional lung deposition from a DPI poses significant challenges for computational fluid dynamics (CFD). In the present work, we employed the Particle-In-Cell (PIC) method to study the transport, deposition, agglomeration, and deagglomeration of DPI carrier and API particles. The proposed development leverages CFD-PIC and sensitivity-analysis capabilities from Department of Energy laboratories: the Multiphase Flow Interface Flow Exchange and Dakota UQ software. A data-driven framework is used to obtain reliable low-order statistics of the particles' residence time in the inhaler. The framework is further used to study the effect of drug particle density, carrier particle density and size, fluidizing agent density and velocity, and several numerical parameters on the particles' residence time in the inhaler.
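As a rough illustration of the sampling-based framework described above (not the actual CFD-PIC/Dakota coupling), the sketch below propagates assumed ranges for the drug density, carrier density and size, and fluidizing-gas velocity through a hypothetical stand-in residence-time function and reports low-order statistics plus a crude rank-correlation sensitivity screen.

```python
# Hedged sketch: all parameter ranges and the closed-form residence_time
# function are assumptions; the actual study runs a CFD-PIC code instead.
import numpy as np
from scipy.stats import qmc, spearmanr

# Uncertain inputs: drug density, carrier density, carrier size, gas velocity.
names = ["rho_drug", "rho_carrier", "d_carrier", "u_gas"]
l_bounds = [1100.0, 1400.0, 50e-6, 30.0]   # assumed lower bounds (SI-like units)
u_bounds = [1400.0, 1600.0, 90e-6, 60.0]   # assumed upper bounds

def residence_time(x):
    """Hypothetical stand-in for the simulation: residence time in seconds."""
    rho_d, rho_c, d_c, u = x.T
    return 50.0 * (rho_c * d_c) / u + 0.002 * rho_d / rho_c

# A Latin hypercube design keeps the number of expensive model runs small.
sampler = qmc.LatinHypercube(d=4, seed=0)
x = qmc.scale(sampler.random(n=128), l_bounds, u_bounds)
t = residence_time(x)

# Low-order statistics of the residence time.
print(f"mean = {t.mean():.4f} s, std = {t.std(ddof=1):.4f} s")

# Crude sensitivity screen: rank correlation of each input with the output.
for i, name in enumerate(names):
    rho, _ = spearmanr(x[:, i], t)
    print(f"{name:12s} rank correlation: {rho:+.2f}")
```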
133

Uncertainty quantification and calibration of a photovoltaic plant model: warranty of performance and robust estimation of the long-term production

Carmassi, Mathieu 21 December 2018 (has links)
Field experiments are often difficult and expensive to carry out. To bypass these issues, industrial companies have developed computational codes. These codes are intended to be representative of the physical system, but they come with a number of problems. Despite continuous code development, the difference between the code outputs and experiments can remain significant. Two kinds of uncertainty are observed. The first comes from the difference between the physical phenomenon and the values recorded experimentally. The second concerns the gap between the code and the physical system. To reduce this difference, often called model bias, discrepancy, or model error, computer codes are generally made more complex in order to make them more realistic. These improvements lead to time-consuming codes. Moreover, a code often depends on parameters that must be set by the user to bring the code as close as possible to field data. This estimation task is called calibration. This thesis first proposes a review of the statistical methods necessary to understand Bayesian calibration. Then, a review of the main calibration methods is presented with a comparative example based on a numerical code used to predict the power of a photovoltaic plant. The package called CaliCo, which allows a Bayesian calibration to be performed quickly on a wide range of numerical codes, is then presented. Finally, a real case study of a large photovoltaic power plant is introduced and the calibration is carried out as part of a performance-monitoring framework. This particular industrial code introduces numerical calibration specificities that are discussed, along with two statistical models.
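A minimal sketch of the Bayesian calibration step discussed in this abstract is given below, written as a plain random-walk Metropolis-Hastings loop rather than with the CaliCo package itself (CaliCo is an R package). The toy photovoltaic model, the uniform prior, and the synthetic field data are all assumptions for illustration.

```python
# Hedged sketch: calibrate a single efficiency parameter of a toy PV model.
import numpy as np

rng = np.random.default_rng(1)

def pv_power(irradiance, eta):
    """Toy plant model: power (W) proportional to irradiance for a 100 m^2 array."""
    return eta * irradiance * 100.0

# Synthetic field data generated with a "true" efficiency of 0.18 (assumed).
irr = rng.uniform(200.0, 1000.0, size=50)            # irradiance in W/m^2
obs = pv_power(irr, 0.18) + rng.normal(0.0, 200.0, size=50)
sigma_obs = 200.0                                     # assumed measurement std (W)

def log_post(eta):
    """Gaussian likelihood with a uniform prior on [0.05, 0.30]."""
    if not (0.05 < eta < 0.30):
        return -np.inf
    resid = obs - pv_power(irr, eta)
    return -0.5 * np.sum((resid / sigma_obs) ** 2)

# Random-walk Metropolis-Hastings.
eta, lp = 0.10, log_post(0.10)
chain = []
for _ in range(20_000):
    prop = eta + rng.normal(0.0, 0.005)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        eta, lp = prop, lp_prop
    chain.append(eta)

burned = np.array(chain[5_000:])                      # discard burn-in
print(f"posterior efficiency: {burned.mean():.4f} +/- {burned.std():.4f}")
```

A full calibration as described in the thesis would also include a discrepancy term for the model error; the sketch above omits it for brevity.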
134

Modeling Nonstationarity Using Locally Stationary Basis Processes

Ganguly, Shreyan 03 October 2019 (has links)
No description available.
135

Uncertainty Quantification and Propagation in Materials Modeling Using a Bayesian Inferential Framework

Ricciardi, Denielle E. 13 November 2020 (has links)
No description available.
136

Quantifying Uncertainty in Flood Modeling Using Bayesian Approaches

Tao Huang (15353755) 27 April 2023 (has links)
Floods all over the world are among the most common and devastating natural disasters for human society, and flood risk is increasing due to more frequent extreme climatic events. In the United States, one of the key resources that provide flood risk information to the public is the Flood Insurance Rate Map (FIRM) administered by the Federal Emergency Management Agency (FEMA), and digitalized FIRMs have so far covered over 90% of the United States population. However, the uncertainty in the modeling process behind FIRMs is rarely investigated. In this study, we use two widely used multi-model methods, Bayesian Model Averaging (BMA) and the generalized likelihood uncertainty estimation (GLUE), to evaluate and reduce the impacts of various uncertainties, with respect to modeling settings, evaluation metrics, and algorithm parameters, on the flood modeling of FIRMs. Accordingly, the three objectives of this study are to: (1) quantify the uncertainty in FEMA FIRMs by using BMA and hierarchical BMA approaches; (2) investigate the inherent limitations and uncertainty in existing evaluation metrics of flood models; and (3) estimate the BMA parameters (weights and variances) using the Metropolis-Hastings (M-H) algorithm with multiple Markov chain Monte Carlo (MCMC) chains.

In the first objective, both the BMA and hierarchical BMA (HBMA) approaches are employed to quantify the uncertainty within the detailed FEMA models of the Deep River and the Saint Marys River in the State of Indiana, based on water stage predictions from 150 HEC-RAS 1D unsteady flow model configurations that incorporate four uncertainty sources: bridges, channel roughness, floodplain roughness, and upstream flow input. Given the ensemble predictions and the observed water stage data in the training period, the BMA weight and variance for each model member are obtained, and the BMA prediction ability is then validated against observed data from a later period. The results indicate that the BMA prediction is more robust than both the original FEMA model and the ensemble mean. Furthermore, the HBMA framework explicitly shows the propagation of the various uncertainty sources, and both the channel roughness and the upstream flow input have a larger impact on prediction variance than bridges. It thus gives modelers insight into the relative impact of individual uncertainty sources in the flood modeling process. The results show that probabilistic flood maps developed from the BMA analysis could provide more reliable predictions than the deterministic FIRMs.

In the second objective, the inherent limitations and sampling uncertainty of several commonly used model evaluation metrics, namely the Nash-Sutcliffe efficiency (NSE), the Kling-Gupta efficiency (KGE), and the coefficient of determination (R²), are investigated systematically, so that the overall performance of flood models can be evaluated in a comprehensive way. These evaluation metrics are then applied to the 1D HEC-RAS models of six reaches located in the states of Indiana and Texas to quantify the uncertainty associated with the channel roughness and upstream flow input. The results show that the model performances based on the uniform and normal priors are comparable. The distributions of these evaluation metrics differ significantly for the flood model under different high-flow scenarios, which further indicates that the metrics should be treated as random statistical variables given both aleatory and epistemic uncertainties in the modeling process. Additionally, white-noise error in the observations has the least impact on the evaluation metrics.

In the third objective, the Metropolis-Hastings (M-H) algorithm, one of the most widely used MCMC algorithms, is proposed to estimate the BMA parameters (weights and variances), since the reliability of the BMA parameters determines the accuracy of the BMA predictions. However, the uncertainty in BMA parameters with fixed values, which are usually obtained from the Expectation-Maximization (EM) algorithm, has not been adequately investigated in BMA-related applications over the past few decades. Both numerical experiments and two practical 1D HEC-RAS models in the states of Indiana and Texas are employed to examine the applicability of the M-H algorithm with multiple independent Markov chains. The results show that the BMA weights estimated from both algorithms are comparable, while the BMA variances obtained from the M-H MCMC algorithm are closer to the given variances in the numerical experiment. Overall, the MCMC approach with multiple chains provides more information on the uncertainty of the BMA parameters, and its water stage predictions outperform the default EM algorithm in terms of multiple evaluation metrics as well as algorithm flexibility.
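The third objective can be illustrated with a small sketch: a random-walk Metropolis-Hastings sampler that estimates BMA weights and a single common predictive variance from an ensemble of synthetic "model member" predictions. The thesis itself works with HEC-RAS stage predictions and member-specific variances; everything below is an assumed stand-in.

```python
# Hedged sketch of M-H estimation of BMA weights and a common variance.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stage observations and three ensemble members (assumed stand-ins).
n_obs = 200
truth = np.linspace(10.0, 12.0, n_obs) + 0.3 * np.sin(np.linspace(0.0, 20.0, n_obs))
obs = truth + rng.normal(0.0, 0.10, n_obs)
members = np.stack([truth + rng.normal(bias, spread, n_obs)
                    for bias, spread in [(0.05, 0.15), (-0.10, 0.10), (0.20, 0.30)]])

def bma_weights(a1, a2):
    """Map two unconstrained parameters to three simplex weights (softmax)."""
    a = np.array([a1, a2, 0.0])
    w = np.exp(a - a.max())
    return w / w.sum()

def log_post(theta):
    """Log posterior of theta = [a1, a2, log_sigma] under flat priors."""
    w = bma_weights(theta[0], theta[1])
    sigma = np.exp(theta[2])
    # BMA predictive density: a Gaussian mixture centred on the member predictions.
    dens = (w[:, None] * np.exp(-0.5 * ((obs - members) / sigma) ** 2)
            / (sigma * np.sqrt(2.0 * np.pi))).sum(axis=0)
    return np.sum(np.log(dens + 1e-300))

theta, lp = np.zeros(3), log_post(np.zeros(3))
chain = []
for _ in range(30_000):                      # random-walk Metropolis-Hastings
    prop = theta + rng.normal(0.0, 0.05, size=3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

post = np.array(chain[10_000:])              # discard burn-in
w_post = np.array([bma_weights(a1, a2) for a1, a2 in post[:, :2]])
print("posterior mean BMA weights:", w_post.mean(axis=0).round(3))
print("posterior mean sigma      :", np.exp(post[:, 2]).mean().round(3))
```

The posterior samples give a full distribution over the weights, which is the extra information the multiple-chain MCMC approach offers compared with the point estimates of the EM algorithm.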
137

DATA ANALYSIS AND UNCERTAINTY QUANTIFICATION OF ROOF PRESSURE MEASUREMENTS USING THE NIST AERODYNAMIC DATABASE

Shelley, Erick R. 08 July 2022 (has links)
No description available.
138

Bringing Newton and Bernoulli Into the Quantum World: Applying Classical Physics to the Modeling of Quantum Behavior in Transition Metal Alloys

Weiss, Elan J. January 2022 (has links)
No description available.
139

Computer Model Emulation and Calibration using Deep Learning

Bhatnagar, Saumya January 2022 (has links)
No description available.
140

BRANCHING PROCESSES FOR EPIDEMICS STUDY

JOAO PEDRO XAVIER FREITAS 26 October 2023 (has links)
This work models the spreading of an epidemic over time with a stochastic approach. The number of infections per infector is modeled as a discrete random variable, referred to here as the contagion. The evolution of the disease over time is therefore a stochastic process. More specifically, the propagation is modeled as a Bienaymé-Galton-Watson process, a kind of branching process with a discrete parameter. In this process, the number of infected members at a given time, i.e. a generation of infected members, is a random variable. In the first part of this dissertation, given that the mass function of the contagion random variable is known, four methodologies for finding the mass functions of the generations of the stochastic process are compared: probability generating functions with and without polynomial identities, Markov chains, and Monte Carlo simulation. The first and third methodologies provide analytical expressions relating the contagion random variable to the generation-size random variable. These analytical expressions are used in the second part of the dissertation, where a classical inverse problem of Bayesian parametric inference is studied. With the help of Bayes' rule, parameters of the contagion random variable are inferred from realizations of the stochastic process. The analytical expressions obtained in the first part of the work are used to build appropriate likelihood functions. To solve the inverse problem, two different ways of using data from the Bienaymé-Galton-Watson process are developed and compared: data consisting of realizations of a single generation of the branching process, and data consisting of one realization of the branching process observed over a certain number of generations. The criterion used in this work to stop the update process in the Bayesian parametric inference is based on the L2-Wasserstein distance, a metric derived from optimal mass transport. All numerical and symbolic routines developed for this work are written in MATLAB.
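A minimal Monte Carlo sketch of the Bienaymé-Galton-Watson simulation route described above is given below, assuming a Poisson contagion law; the dissertation's own routines are written in MATLAB, so this Python version is purely illustrative.

```python
# Hedged sketch: estimate the mass function of a BGW generation by Monte Carlo,
# assuming a Poisson offspring (contagion) distribution with mean 1.3.
import numpy as np

rng = np.random.default_rng(3)

def simulate_generations(offspring_mean, n_generations, n_runs):
    """Return an array of shape (n_runs, n_generations + 1) of generation sizes."""
    sizes = np.zeros((n_runs, n_generations + 1), dtype=int)
    sizes[:, 0] = 1                      # each epidemic starts with one infected member
    for g in range(n_generations):
        # The sum of Z_g independent Poisson(m) draws is Poisson(m * Z_g).
        sizes[:, g + 1] = rng.poisson(offspring_mean * sizes[:, g])
    return sizes

sizes = simulate_generations(offspring_mean=1.3, n_generations=5, n_runs=100_000)

# Empirical mass function of the third generation.
gen3 = sizes[:, 3]
values, counts = np.unique(gen3, return_counts=True)
pmf = counts / counts.sum()
print("P(Z_3 = 0) ≈", pmf[values == 0][0])
print("E[Z_3] ≈", gen3.mean(), "(theory: 1.3**3 =", round(1.3**3, 3), ")")
```

The same empirical mass functions could be checked against the analytical expressions obtained from the probability generating functions mentioned in the abstract.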
