31

Multi Data Reservoir History Matching using the Ensemble Kalman Filter

Katterbauer, Klemens 05 1900 (has links)
Reservoir history matching is becoming increasingly important with the growing demand for higher quality formation characterization and forecasting and the increasing complexity and expense of modern hydrocarbon exploration projects. History matching has long been dominated by adjusting reservoir parameters based solely on well data, whose sparse spatial sampling has made it challenging to characterize flow properties in areas away from the wells. Geophysical data are now widely collected for reservoir monitoring purposes but have not yet been fully integrated into history matching and fluid flow forecasting. In this thesis, I present a pioneering approach to incorporating different time-lapse geophysical data together to enhance reservoir history matching and uncertainty quantification. The thesis provides several approaches to efficiently integrate multiple geophysical data, analyze the sensitivity of the history matches to observation noise, and examine the framework’s performance in several settings, such as the Norne field in Norway. The results demonstrate significant improvements in reservoir forecasting and characterization, as well as synergy effects between the different geophysical data. In particular, the joint use of electromagnetic and seismic data improves the accuracy of forecasting fluid properties, and the use of electromagnetic data leads to considerably better estimates of hydrocarbon fluid components. For volatile oil and gas reservoirs, the joint integration of gravimetric and InSAR data has been shown to be beneficial in detecting water influx and thereby improving the recovery rate. In summary, this thesis makes an important contribution toward integrated reservoir management and multiphysics integration for reservoir history matching.
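For readers unfamiliar with the method named in the title, the sketch below shows the core of an ensemble Kalman filter analysis step in Python. It is a generic illustration, not code from the thesis; the function name, array shapes, and the diagonal observation-noise model are all assumptions made for the example.

```python
import numpy as np

def enkf_update(X, D, d_obs, obs_std, rng):
    """One ensemble Kalman filter analysis step (generic sketch).

    X      : (n_param, N) ensemble of reservoir parameters
    D      : (n_obs, N)   simulated observations for each ensemble member
    d_obs  : (n_obs,)     measured data (e.g. well + geophysical data)
    obs_std: (n_obs,)     observation noise standard deviations
    """
    N = X.shape[1]
    # Ensemble anomalies (deviations from the ensemble mean).
    A = X - X.mean(axis=1, keepdims=True)
    Y = D - D.mean(axis=1, keepdims=True)
    # Cross- and auto-covariances estimated from the ensemble.
    C_xd = A @ Y.T / (N - 1)
    C_dd = Y @ Y.T / (N - 1)
    R = np.diag(obs_std ** 2)
    K = C_xd @ np.linalg.solve(C_dd + R, np.eye(len(d_obs)))  # Kalman gain
    # Perturb observations so the analysis ensemble keeps the right spread.
    d_pert = d_obs[:, None] + obs_std[:, None] * rng.standard_normal(D.shape)
    return X + K @ (d_pert - D)
```

In a history-matching loop, each analysis step like this one would alternate with running the reservoir simulator forward to produce the next set of predicted observations.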
32

Carbon Capture and Synergistic Energy Storage: Performance and Uncertainty Quantification

Russell, Christopher Stephen 27 February 2019 (has links)
Energy use around the world will rise in the coming decades. Renewable energy sources will help meet this demand, but renewable sources suffer from intermittency, uncontrollable power supply, geographic limitations, and other issues. Many of these issues can be mitigated by introducing energy storage technologies, which facilitate load following and can effectively time-shift power. This analysis compares dedicated and synergistic energy storage technologies using energy efficiency as the primary metric. Energy storage will help bring renewable sources onto the grid, but in nearly all projections fossil fuels will still dominate the energy supply for decades to come. Carbon capture technologies can significantly reduce the negative environmental impact of fossil-fuel power plants. Many carbon capture technologies are under development; this analysis considers both the innovative and relatively new cryogenic carbon capture™ (CCC) process and more traditional solvent-based systems. The CCC process requires less energy than other leading technologies while simultaneously providing a means of energy storage for the power plant. This analysis shows that CCC is effective at capturing CO2 from coal-fired power plants, natural-gas-fired power plants, and syngas production plants. The statistical analysis covers two carbon capture technologies and illustrates how uncertainty quantification (UQ) provides error bars for simulations. UQ provides information on data gaps, uncertainties for property models, and distributions for model predictions. In addition, UQ results provide a discrepancy function that can be introduced into the model to give a better fit to data and better accuracy overall.
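The discrepancy-function idea mentioned above can be illustrated with a minimal sketch. Everything here is hypothetical: a deliberately biased one-line stand-in for a process simulation, synthetic "measured" data, and a linear additive discrepancy fitted to the residuals.

```python
import numpy as np

# Hypothetical stand-ins: a simplified process model and measured data.
def model(x):
    return 0.9 * x  # an under-predicting simulation output

x_data = np.linspace(0.0, 10.0, 20)
y_data = x_data + 0.5 + 0.05 * np.random.default_rng(0).standard_normal(20)

# Fit an additive discrepancy function delta(x) = data - model(x).
residual = y_data - model(x_data)
delta = np.polynomial.Polynomial.fit(x_data, residual, deg=1)

# Corrected prediction: model output plus the learned discrepancy.
def corrected(x):
    return model(x) + delta(x)
```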
33

Impact of Uncertainties in Reaction Rates and Thermodynamic Properties on Ignition Delay Time

Hantouche, Mireille 04 1900 (has links)
Ignition delay time, τ_ign, is a key quantity of interest used to assess the predictability of a chemical kinetic model. This dissertation explores the sensitivity of τ_ign to uncertainties in (i) rate-rule kinetic rate parameters and (ii) enthalpies and entropies of the fuel and fuel radicals, using global and local sensitivity approaches. We begin by considering the variability of τ_ign due to uncertainty in rate parameters. We consider a 30-dimensional stochastic germ in which each random variable is associated with one reaction class, and build a surrogate model for τ_ign using polynomial chaos expansions, constructed with an adaptive pseudo-spectral projection technique. First-order and total-order sensitivity indices characterizing the dependence of τ_ign on the uncertain inputs are estimated. Results indicate that τ_ign is mostly sensitive to variations in four dominant reaction classes. Next, we develop a thermodynamic class approach to study the variability of τ_ign of n-butanol due to uncertainty in the thermodynamic properties of species of interest, and to define associated uncertainty ranges. A global sensitivity analysis is performed, again using surrogates constructed with an adaptive pseudo-spectral method. Results indicate that the variability of τ_ign is dominated by uncertainties in the classes associated with peroxy and hydroperoxide radicals. We also perform a combined sensitivity analysis of uncertainty in kinetic rates and thermodynamic properties, which reveals that uncertainties in thermodynamic properties can induce variabilities in ignition delay time as large as those associated with kinetic rate uncertainties. In the last part, we develop a tangent linear approximation (TLA) to estimate the sensitivity of τ_ign with respect to individual rate parameters and thermodynamic properties in detailed chemical mechanisms. Attention is focused on a gas mixture reacting under adiabatic, constant-volume conditions. The proposed approach integrates the linearized system of equations governing the evolution of the partial derivatives of the state vector with respect to individual random variables, and a linearized approximation relates ignition delay sensitivity to scaled partial derivatives of temperature. The computations indicate that the TLA yields robust local sensitivity predictions at a computational cost an order of magnitude smaller than that incurred by finite-difference approaches.
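As a concrete illustration of the first-order and total-order sensitivity indices discussed above, the sketch below estimates Sobol indices with the standard Saltelli/Jansen Monte Carlo estimators rather than the pseudo-spectral polynomial chaos construction used in the dissertation. The toy four-input surrogate `g` is purely hypothetical.

```python
import numpy as np

def sobol_indices(f, d, N=4096, rng=None):
    """First- and total-order Sobol indices via Saltelli/Jansen estimators."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((N, d))          # two independent sample matrices
    B = rng.random((N, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]         # replace column i of A with that of B
        fABi = f(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var        # first-order effect
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # total effect
    return S1, ST

# Toy usage with a hypothetical 4-input surrogate for tau_ign.
g = lambda X: np.log1p(X[:, 0]) + 0.1 * X[:, 1] + X[:, 0] * X[:, 2]
S1, ST = sobol_indices(g, d=4)
```

A gap between ST and S1 for an input flags interaction effects, the situation that makes total-order indices worth computing alongside first-order ones.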
34

Probabilistic Design and Reliability Analysis with Kriging and Envelope Methods

Hao Wu (12456738) 26 April 2022 (has links)
In the mechanical design stage, engineers always face uncertainty, such as random variables and stochastic processes. Due to this uncertainty, products may behave randomly with respect to time and space, which may result in a high probability of failure, low lifetime, and low robustness. Although extensive research has been conducted on component reliability methods, time- and space-dependent system reliability methods are still limited. This dissertation is motivated by the need for efficient and accurate methods for addressing time- and space-dependent system reliability and probabilistic design problems. The objective of this dissertation is to develop efficient and accurate methods for reliability analysis and design, pursued through five research tasks. The first task develops a surrogate model with an active learning method to predict time- and space-independent system reliability. In the second task, the time- and space-independent system reliability is estimated by a second-order saddlepoint approximation method. In the third task, time-dependent system reliability is addressed by an envelope method with efficient global optimization. In the fourth task, a general time- and space-dependent problem is investigated: the envelope method converts the time- and space-dependent problem into a time- and space-independent one, and a second-order approximation is used to predict results. The last task proposes a new sequential reliability-based design method using the envelope approach for time- and space-dependent reliability. The accuracy and efficiency of the proposed methods are demonstrated through a wide range of mathematical and engineering problems.
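The combination of a Kriging surrogate and active learning for reliability can be illustrated with a generic AK-MCS-style sketch (not the dissertation's specific algorithms): a Gaussian process is refit as the most ambiguous Monte Carlo samples, selected by the U learning function, are added to the design. The limit-state function `g`, the stopping threshold, and all sizes are assumptions for the example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical limit-state function: failure when g(x) < 0.
def g(X):
    return 3.0 - X[:, 0] ** 2 - X[:, 1]

rng = np.random.default_rng(1)
pool = rng.standard_normal((10_000, 2))           # Monte Carlo population
idx = rng.choice(len(pool), 12, replace=False)    # small initial design
X_train, y_train = pool[idx], g(pool[idx])

for _ in range(30):                               # active learning loop
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    gp.fit(X_train, y_train)
    mu, sd = gp.predict(pool, return_std=True)
    U = np.abs(mu) / np.maximum(sd, 1e-12)        # U learning function
    if U.min() >= 2.0:                            # sign of g is now reliable
        break
    j = np.argmin(U)                              # most ambiguous sample
    X_train = np.vstack([X_train, pool[j]])
    y_train = np.append(y_train, g(pool[j][None, :]))

pf = np.mean(mu < 0)                              # failure probability estimate
```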
35

On Methodology for Verification, Validation and Uncertainty Quantification in Power Electronic Converters Modeling

Rashidi Mehrabadi, Niloofar 18 September 2014 (has links)
This thesis provides insight into quantitative accuracy assessment of the modeling and simulation of power electronic converters. Verification, Validation, and Uncertainty Quantification (VV&UQ) provides a means to quantify the disagreement between computational simulation results and experimental results, so that comparisons are quantitative rather than qualitative. Because of the broad application of modeling and simulation in power electronics, VV&UQ is used to evaluate the credibility of modeling and simulation results, and the topic needs to be studied specifically for power electronic converters. To carry out this work, a formal procedure for VV&UQ of power electronic converters is presented, together with definitions of the fundamental terms in the proposed framework. The accuracy of the switching model of a three-phase Voltage Source Inverter (VSI) is quantitatively assessed following the proposed procedure. Accordingly, this thesis also describes the hardware design and development of the switching model of the three-phase VSI. / Master of Science
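One common way to turn a simulation-versus-experiment comparison into a number, in the spirit of the quantitative comparisons described above, is an area validation metric: the area between the empirical CDFs of simulated and measured values of a quantity. The sketch below is a generic illustration with synthetic data, not the thesis's specific procedure.

```python
import numpy as np

def area_validation_metric(sim, exp):
    """Area between the empirical CDFs of simulation and experiment.

    A scalar measure of model-experiment disagreement, in the units of
    the compared quantity (e.g. volts for a VSI output-voltage metric).
    """
    grid = np.sort(np.concatenate([sim, exp]))
    F_sim = np.searchsorted(np.sort(sim), grid, side="right") / len(sim)
    F_exp = np.searchsorted(np.sort(exp), grid, side="right") / len(exp)
    return np.trapz(np.abs(F_sim - F_exp), grid)

# Hypothetical usage: repeated measurements vs. Monte Carlo simulations.
rng = np.random.default_rng(0)
d = area_validation_metric(rng.normal(10.0, 0.5, 200),
                           rng.normal(10.2, 0.4, 50))
```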
36

Data-driven Methods in Mechanical Model Calibration and Prediction for Mesostructured Materials

Kim, Jee Yun 01 October 2018 (has links)
Mesoscale design involving control of the material distribution pattern can create a statistically heterogeneous material system, which has shown increased adaptability to complex mechanical environments involving highly non-uniform stress fields. Advances in multi-material additive manufacturing can aid in this mesoscale design, providing voxel-level control of material properties. This vast freedom in the design space also unlocks possibilities for optimizing the material distribution pattern. The optimization problem can be divided into a forward problem focusing on accurate prediction and an inverse problem focusing on efficient search of the optimal design. In the forward problem, the physical behavior of the material can be modeled based on fundamental mechanics laws and simulated through finite element analysis (FEA). A major limitation in modeling is the unknown parameters in the constitutive equations that describe the constituent materials; determining these parameters via conventional single-material testing has proven insufficient, which necessitates novel and effective approaches to calibration. A calibration framework based on Bayesian inference, which integrates data from simulations and physical experiments, has been applied to a study involving a mesostructured material fabricated by fused deposition modeling. The calibration results provide insight into the values these parameters converge to, as well as which material parameters the model output depends on most strongly, while accounting for sources of uncertainty introduced during the modeling process. Additionally, this statistical formulation can provide quick predictions of the physical system by implementing a surrogate model and a discrepancy model. The surrogate model is a statistical representation of the simulation results, circumventing issues arising from computational load, while the discrepancy model accounts for the difference between the simulation output and the physical experiments. In this thesis, this Bayesian calibration framework is applied to a material bending problem, where in-situ mechanical characterization data and FEA simulations based on constitutive modeling are combined to produce updated values of the unknown material parameters with uncertainty. / Master of Science / A material system obtained by applying a pattern of multiple materials has proven its adaptability to complex practical conditions. The layer-by-layer process of additive manufacturing allows for this type of design because of its control over where material is deposited. This possibility raises the question of how a multi-material system can be optimized in its design for a given application. In this research, we focus mainly on the problem of accurately predicting the response of the material when subjected to stimuli. Conventionally, simulations aided by finite element analysis (FEA) have been relied upon for prediction; however, they present issues such as long run times and uncertainty in context-specific inputs of the simulation. We have instead adopted a framework using advanced statistical methodology that combines both experimental and simulation data to significantly reduce run times and quantify the various uncertainties associated with running simulations.
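A minimal sketch of Bayesian calibration in this spirit is shown below: a random-walk Metropolis sampler updates one unknown material parameter of a cheap stand-in surrogate against synthetic bending data. The surrogate form, prior bounds, noise level, and data are all hypothetical, and the full framework's discrepancy model is omitted for brevity.

```python
import numpy as np

# Hypothetical cheap surrogate for the FEA simulation: bending response
# as a function of load x and an unknown material parameter theta.
def surrogate(x, theta):
    return theta * x / (1.0 + 0.1 * x)

# Synthetic "experimental" data, for illustration only.
rng = np.random.default_rng(2)
x_obs = np.linspace(1.0, 5.0, 8)
y_obs = surrogate(x_obs, 2.5) + rng.normal(0.0, 0.05, x_obs.size)

def log_post(theta, sigma=0.05):
    if not 0.0 < theta < 10.0:          # uniform prior bounds
        return -np.inf
    r = y_obs - surrogate(x_obs, theta)  # Gaussian likelihood residuals
    return -0.5 * np.sum(r ** 2) / sigma ** 2

# Random-walk Metropolis over the unknown parameter.
theta, lp = 1.0, log_post(1.0)
samples = []
for _ in range(20_000):
    prop = theta + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[5_000:])        # discard burn-in
print(post.mean(), post.std())          # calibrated value with uncertainty
```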
37

Sequential learning, large-scale calibration, and uncertainty quantification

Huang, Jiangeng 23 July 2019 (has links)
With remarkable advances in computing power, computer experiments continue to expand the boundaries and drive down the cost of various scientific discoveries. New challenges keep arising in designing, analyzing, modeling, calibrating, optimizing, and predicting with computer experiments. This dissertation consists of six chapters, exploring statistical methodologies in sequential learning, model calibration, and uncertainty quantification for heteroskedastic computer experiments and large-scale computer experiments. For heteroskedastic computer experiments, an optimal lookahead-based sequential learning strategy is presented, balancing replication and exploration to facilitate separating signal from input-dependent noise. Motivated by challenges in both data size and model fidelity arising from ever-larger modern computer experiments, highly accurate and computationally efficient divide-and-conquer calibration methods, based on on-site experimental design and surrogate modeling for large-scale computer models, are developed in this dissertation. The proposed methodology is applied to calibrate a real computer experiment from the gas and oil industry, and the on-site surrogate calibration method is further extended to multiple-output calibration problems. / Doctor of Philosophy / With remarkable advances in computing power, complex physical systems today can be simulated comparatively cheaply and to high accuracy through computer experiments. Computer experiments continue to expand the boundaries and drive down the cost of scientific investigations in fields including the biological, business, engineering, industrial, management, health-related, physical, and social sciences. This dissertation consists of six chapters, exploring statistical methodologies in sequential learning, model calibration, and uncertainty quantification for heteroskedastic computer experiments and large-scale computer experiments. For computer experiments with a changing signal-to-noise ratio, an optimal lookahead-based sequential learning strategy is presented, balancing replication and exploration to facilitate separating signal from a complex noise structure. In order to effectively extract key information from massive amounts of simulation output and make better predictions about the real world, highly accurate and computationally efficient divide-and-conquer calibration methods for large-scale computer models are developed in this dissertation, addressing challenges in both data size and model fidelity arising from ever-larger modern computer experiments. The proposed methodology is applied to calibrate a real computer experiment from the gas and oil industry, and the large-scale calibration method is further extended to solve multiple-output calibration problems.
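One ingredient of heteroskedastic modeling, separating signal from input-dependent noise via replication, can be sketched generically (this is not the dissertation's lookahead strategy): replicate runs at each input give an empirical noise level that is passed to a Gaussian process as per-point noise.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical stochastic simulator with input-dependent noise.
def simulate(x, rng):
    return np.sin(3 * x) + rng.normal(0.0, 0.05 + 0.2 * x)

rng = np.random.default_rng(3)
x_unique = np.linspace(0.0, 1.0, 15)
reps = 10                                  # replicates separate signal from noise
Y = np.array([[simulate(x, rng) for _ in range(reps)] for x in x_unique])

y_mean = Y.mean(axis=1)
y_var = Y.var(axis=1, ddof=1) / reps       # variance of each sample mean

# Per-point noise (alpha) makes the GP aware of the heteroskedasticity.
gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=y_var, normalize_y=True)
gp.fit(x_unique[:, None], y_mean)
mu, sd = gp.predict(np.linspace(0, 1, 200)[:, None], return_std=True)
```

A sequential design would then weigh adding a replicate at a noisy input against exploring a new input, which is the replication-versus-exploration trade-off the abstract describes.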
38

Uncertainty Quantification and Sensitivity Analysis of Multiphysics Environments for Application in Pressurized Water Reactor Design

Blakely, Cole David 01 August 2018 (has links)
The most common design among U.S. nuclear power plants is the pressurized water reactor (PWR). The three primary design disciplines for these plants are system analysis (which includes thermal hydraulics), neutronics, and fuel performance. The nuclear industry has developed a variety of codes over the course of forty years, each with an emphasis within a specific discipline. Perhaps the greatest difficulty in mathematically modeling a nuclear reactor is choosing which specific phenomena to model, and in what detail. A multiphysics computational environment provides a means of advancing simulations of nuclear plants: put simply, users are able to combine various physical models which have commonly been treated separately in the past. The focus of this work is a specific multiphysics environment currently under development at Idaho National Laboratory (INL) known as the LOCA Toolkit for US light water reactors (LOTUS). The ability of LOTUS to use uncertainty quantification (UQ) and sensitivity analysis (SA) tools within a multiphysics environment allows for a number of unique analyses which, to the best of our knowledge, have yet to be performed. These include the first known integration of the neutronics and thermal hydraulics code VERA-CS, currently under development by CASL, with the well-established fuel performance code FRAPCON by PNNL. The integration was used to model a fuel depletion case. The outputs of interest for this integration were the minimum departure from nucleate boiling ratio (MDNBR) (a thermal hydraulic parameter indicating how close a heat flux is to causing a dangerous form of boiling in which an insulating layer of coolant vapour is formed), the maximum fuel centerline temperature (MFCT) of the uranium rod, and the gap conductance at peak power (GCPP). GCPP refers to the thermal conductance of the gas-filled gap between fuel and cladding at the axial location with the highest local power generation. UQ and SA were performed on MDNBR, MFCT, and GCPP at a variety of times throughout the fuel depletion. Results showed the MDNBR to behave linearly and consistently throughout the depletion, with the most impactful input uncertainties being coolant outlet pressure, inlet temperature, and core power. MFCT also behaves linearly, but with a shift in SA measures: initially MFCT is sensitive to fuel thermal conductivity and gap dimensions, but later in the fuel cycle nearly all uncertainty stems from fuel thermal conductivity, with minor contributions from core power and initial fuel density. GCPP uncertainty exhibits nonlinear, time-dependent behaviour which requires higher-order SA measures to analyze properly. GCPP begins with a dependence on gap dimensions but in later states shifts to a dependence on the biases of a variety of specific calculations, such as fuel swelling and cladding creep and oxidation. LOTUS was also used to perform the first higher-order SA of an integration of VERA-CS and the BISON fuel performance code currently under development at INL. The same problem and outputs were studied as in the VERA-CS and FRAPCON integration. Results for MDNBR and MFCT were relatively consistent. GCPP results contained notable differences, specifically a large dependence on fuel and cladding surface roughness in later states; however, this difference is due to the surface roughness not being perturbed in the first integration. SA of later states also showed an increased sensitivity to fission gas release coefficients.
Lastly, a loss of coolant accident (LOCA) was investigated with an integration of FRAPCON, the INL neutronics code PHISICS, and the system analysis code RELAP5-3D. The outputs of interest were the ratios of the peak cladding temperature (PCT, the highest temperature encountered by the cladding during the LOCA) and the equivalent cladding reacted (ECR, the percentage of cladding oxidized) to their cladding hydrogen content-based limits. This work contains the first known UQ of these ratios within the aforementioned integration. Results showed the PCT ratio to be relatively well behaved. The ECR ratio behaves as a threshold variable, which is to say it abruptly shifts to radically higher values under specific conditions. This threshold behaviour establishes the importance of performing UQ so as to see the full spectrum of possible values for an output of interest. The SA capabilities of LOTUS provide a path forward for developers to increase code fidelity for specific outputs. Performing UQ within a multiphysics environment may provide improved estimates of safety metrics in nuclear reactors; these improved estimates may allow plants to operate at higher power, thereby increasing profits. Lastly, LOTUS will be of particular use in the development of newly proposed nuclear fuel designs.
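The value of UQ for threshold-like outputs such as the ECR ratio can be illustrated with a generic forward-propagation sketch. The toy model, input distributions, and threshold are invented for the example; the point is that sampling the full input space exposes the abrupt shift and its exceedance probability, which a single best-estimate run would miss.

```python
import numpy as np

# Hypothetical toy stand-in for a coupled multiphysics calculation that
# maps uncertain inputs to an output ratio with threshold behavior.
def ecr_ratio(power, gap, oxidation_bias):
    base = 0.3 * power + 0.2 * gap
    return base + np.where(oxidation_bias > 1.15, 2.0 * oxidation_bias, 0.0)

rng = np.random.default_rng(4)
n = 100_000
power = rng.normal(1.0, 0.03, n)            # relative core power
gap = rng.normal(1.0, 0.10, n)              # relative gap dimension
oxidation_bias = rng.lognormal(0.0, 0.1, n) # bias of oxidation calculation

out = ecr_ratio(power, gap, oxidation_bias)
print("mean =", out.mean())
print("95th percentile =", np.quantile(out, 0.95))
print("P(ratio > 1) =", np.mean(out > 1.0))  # exceedance probability
```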
39

Uncertainty Quantification for Underdetermined Inverse Problems via Krylov Subspace Iterative Solvers

Devathi, Duttaabhinivesh 23 May 2019 (has links)
No description available.
40

Non-Deterministic Metamodeling for Multidisciplinary Design Optimization of Aircraft Systems Under Uncertainty

Clark, Daniel L., Jr. 18 December 2019 (has links)
No description available.
