31

Multi Data Reservoir History Matching using the Ensemble Kalman Filter

Katterbauer, Klemens 05 1900 (has links)
Reservoir history matching is becoming increasingly important with the growing demand for higher-quality formation characterization and forecasting and the increased complexity and expense of modern hydrocarbon exploration projects. History matching has long been dominated by adjusting reservoir parameters based solely on well data, whose sparse spatial sampling has made it challenging to characterize flow properties in areas away from the wells. Geophysical data are now widely collected for reservoir monitoring purposes, but they have not yet been fully integrated into history matching and fluid flow forecasting. In this thesis, I present a pioneering approach towards incorporating different time-lapse geophysical data together for enhancing reservoir history matching and uncertainty quantification. The thesis provides several approaches to efficiently integrate multiple geophysical data, analyze the sensitivity of the history matches to observation noise, and examine the framework's performance in several settings, such as the Norne field in Norway. The results demonstrate significant improvements in reservoir forecasting and characterization and synergy effects between the different geophysical data. In particular, the joint use of electromagnetic and seismic data improves the accuracy of forecasting fluid properties, and the use of electromagnetic data leads to considerably better estimates of hydrocarbon fluid components. For volatile oil and gas reservoirs, the joint integration of gravimetric and InSAR data has been shown to be beneficial in detecting the influx of water and thereby improving the recovery rate. In summary, this thesis makes an important contribution towards integrated reservoir management and multiphysics integration for reservoir history matching.
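The workhorse of ensemble-based history matching is the ensemble Kalman filter analysis step, which nudges an ensemble of reservoir-parameter realizations toward observed data using covariances estimated from the ensemble itself. The following is a minimal sketch of that update, not code from the thesis; the toy linear "simulator", variable names, and ensemble size are illustrative assumptions.

```python
# Hedged sketch of one EnKF analysis step (perturbed-observation variant).
# The linear forward model H stands in for a reservoir simulator.
import numpy as np

def enkf_update(ensemble, forward, obs, obs_err_std, rng):
    """ensemble: (n_members, n_params) parameter realizations;
    forward: maps a parameter vector to predicted observations;
    obs: (n_obs,) observed data (e.g. time-lapse geophysical data)."""
    n_members = ensemble.shape[0]
    # Predicted observations for each ensemble member
    predictions = np.array([forward(m) for m in ensemble])
    # Anomalies (deviations from ensemble means)
    A = ensemble - ensemble.mean(axis=0)
    D = predictions - predictions.mean(axis=0)
    # Cross- and innovation covariances estimated from the ensemble
    C_md = A.T @ D / (n_members - 1)
    C_dd = D.T @ D / (n_members - 1) + obs_err_std**2 * np.eye(len(obs))
    K = C_md @ np.linalg.inv(C_dd)  # Kalman gain
    # Perturb observations so the posterior spread stays consistent
    perturbed = obs + rng.normal(0.0, obs_err_std, size=(n_members, len(obs)))
    return ensemble + (perturbed - predictions) @ K.T

rng = np.random.default_rng(0)
true_params = np.array([1.0, -0.5])
H = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 0.0]])  # toy linear "simulator"
obs = H @ true_params
prior = rng.normal(0.0, 1.0, size=(200, 2))
posterior = enkf_update(prior, lambda m: H @ m, obs, 0.05, rng)
```

After the update, the ensemble mean moves toward the parameters consistent with the data and the ensemble spread contracts, which is the mechanism by which additional geophysical data types tighten the history match.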
32

Carbon Capture and Synergistic Energy Storage: Performance and Uncertainty Quantification

Russell, Christopher Stephen 27 February 2019 (has links)
Energy use around the world will rise in the coming decades. Renewable energy sources will help meet this demand, but renewable sources suffer from intermittency, uncontrollable power supply, geographic limitations, and other issues. Many of these issues can be mitigated by introducing energy storage technologies, which facilitate load following and can effectively time-shift power. This analysis compares dedicated and synergistic energy storage technologies using energy efficiency as the primary metric. Energy storage will help renewable sources integrate into the grid, but nearly all projections show fossil fuels continuing to dominate the energy supply for decades to come. Carbon capture technologies can significantly reduce the negative environmental impact of fossil-fueled power plants. Many carbon capture technologies are under development. This analysis considers both the innovative and relatively new cryogenic carbon capture™ (CCC) process and more traditional solvent-based systems. The CCC process requires less energy than other leading technologies while simultaneously providing a means of energy storage for the power plant. This analysis shows CCC is an effective means of capturing CO2 from coal-fired power plants, natural-gas-fired power plants, and syngas production plants. The statistical analysis includes two carbon capture technologies and illustrates how uncertainty quantification (UQ) provides error bars for simulations. UQ provides information on data gaps, uncertainties for property models, and distributions for model predictions. In addition, UQ results provide a discrepancy function that can be introduced into the model to provide a better fit to data and better accuracy overall.
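The discrepancy-function idea mentioned at the end of the abstract can be illustrated in a few lines: fit a simple correction delta(x) to the residuals between simulator output and data, then add it back to the model. The sketch below is a generic illustration under made-up data, not the dissertation's model; the polynomial basis is an arbitrary choice for the demo.

```python
# Loose sketch of a UQ model-discrepancy correction: the "simulator"
# misses a sine term in the "truth", and a least-squares polynomial
# delta(x) fitted to the residuals recovers most of the missing physics.
import numpy as np

x = np.linspace(0.0, 1.0, 50)
data = 2.0 * x + 0.3 * np.sin(4 * x)   # observed "truth" (synthetic)
model = 2.0 * x                        # simulator output missing a term
residual = data - model
# Represent delta(x) with a small polynomial basis fit by least squares
basis = np.vander(x, 4)                # columns: x^3, x^2, x, 1
coef, *_ = np.linalg.lstsq(basis, residual, rcond=None)
corrected = model + basis @ coef       # model plus discrepancy function
```

In practice the discrepancy is usually fit with its own uncertainty (e.g. as a Gaussian process), so the corrected prediction carries error bars rather than a single curve.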
33

Impact of Uncertainties in Reaction Rates and Thermodynamic Properties on Ignition Delay Time

Hantouche, Mireille 04 1900 (has links)
Ignition delay time, τ_ign, is a key quantity of interest used to assess the predictability of a chemical kinetic model. This dissertation explores the sensitivity of τ_ign to uncertainties in (1) rate-rule kinetic rate parameters and (2) enthalpies and entropies of fuel and fuel radicals, using global and local sensitivity approaches. We begin by considering variability in τ_ign due to uncertainty in rate parameters. We consider a 30-dimensional stochastic germ in which each random variable is associated with one reaction class, and build a surrogate model for τ_ign using polynomial chaos expansions, constructed with the adaptive pseudo-spectral projection technique. First-order and total-order sensitivity indices characterizing the dependence of τ_ign on the uncertain inputs are estimated. Results indicate that τ_ign is mostly sensitive to variations in four dominant reaction classes. Next, we develop a thermodynamic class approach to study variability in τ_ign of n-butanol due to uncertainty in the thermodynamic properties of species of interest, and to define associated uncertainty ranges. A global sensitivity analysis is performed, again using surrogates constructed with an adaptive pseudo-spectral method. Results indicate that the variability of τ_ign is dominated by uncertainties in the classes associated with peroxy and hydroperoxide radicals. We also perform a combined sensitivity analysis of uncertainty in kinetic rates and thermodynamic properties, which reveals that uncertainties in thermodynamic properties can induce variabilities in ignition delay time as large as those associated with kinetic rate uncertainties. In the last part, we develop a tangent linear approximation (TLA) to estimate the sensitivity of τ_ign with respect to individual rate parameters and thermodynamic properties in detailed chemical mechanisms. Attention is focused on a gas mixture reacting under adiabatic, constant-volume conditions.
The proposed approach is based on integrating the linearized system of equations governing the evolution of the partial derivatives of the state vector with respect to individual random variables; a linearized approximation then relates ignition delay sensitivity to scaled partial derivatives of temperature. The computations indicate that the TLA yields robust local sensitivity predictions at a computational cost that is an order of magnitude smaller than that incurred by finite-difference approaches.
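The core of the tangent linear approach is to integrate, alongside the state, the linearized equation for the state's derivative with respect to a parameter. A scalar ODE makes the idea concrete; the actual thesis couples this to a detailed chemical mechanism and relates τ_ign sensitivity to scaled temperature derivatives, so the model below is only a stand-in sketch.

```python
# Hedged sketch of tangent-linear sensitivity: for y' = theta*y, the
# sensitivity s = dy/dtheta obeys the linearized equation s' = theta*s + y.
# Both are integrated together with forward Euler and the result is
# checked against a finite-difference estimate (the costlier alternative).

def tla_sensitivity(theta, y0, t_end, n_steps):
    dt = t_end / n_steps
    y, s = y0, 0.0
    for _ in range(n_steps):
        # update state and tangent-linear sensitivity in lockstep
        y, s = y + dt * theta * y, s + dt * (theta * s + y)
    return y, s

y, s = tla_sensitivity(theta=0.5, y0=1.0, t_end=1.0, n_steps=20000)
# exact solution: y(1) = e**0.5 ~ 1.6487, and dy/dtheta = t*e**(theta*t)

# finite-difference check of dy/dtheta (requires a second full integration)
eps = 1e-6
y_eps, _ = tla_sensitivity(0.5 + eps, 1.0, 1.0, 20000)
fd = (y_eps - y) / eps
```

The cost advantage in the thesis setting comes from the same structure: one augmented integration yields sensitivities to all parameters at once, instead of one perturbed re-integration per parameter.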
34

PROBABILISTIC DESIGN AND RELIABILITY ANALYSIS WITH KRIGING AND ENVELOPE METHODS

Hao Wu (12456738) 26 April 2022 (has links)
In the mechanical design stage, engineers always meet with uncertainty, such as random variables, stochastic processes, and random processes. Due to this uncertainty, products may behave randomly with respect to time and space, which may result in a high probability of failure, low lifetime, and low robustness. Although extensive research has been conducted on component reliability methods, time- and space-dependent system reliability methods are still limited. This dissertation is motivated by the need for efficient and accurate methods for addressing time- and space-dependent system reliability and probabilistic design problems.
The objective of this dissertation is to develop efficient and accurate methods for reliability analysis and design, organized into five research tasks. The first task develops a surrogate model with an active learning method to predict time- and space-independent system reliability. In the second task, time- and space-independent system reliability is estimated by the second-order saddlepoint approximation method. In the third task, time-dependent system reliability is addressed by an envelope method with efficient global optimization. In the fourth task, a general time- and space-dependent problem is investigated: the envelope method converts the time- and space-dependent problem into a time- and space-independent one, and a second-order approximation is used to predict results. The last task proposes a new sequential reliability-based design method with the envelope method for time- and space-dependent reliability. The accuracy and efficiency of the proposed methods are demonstrated through a wide range of mathematical and engineering problems.
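The first research task pairs a Kriging (Gaussian-process) surrogate with active learning: fit the surrogate to a few expensive evaluations, then add the next sample where the surrogate is most uncertain. Here is a minimal one-dimensional sketch of that loop's core step; the squared-exponential kernel, lengthscale, and stand-in limit-state function are assumptions for the demo, not the dissertation's actual models.

```python
# Hedged sketch of Kriging prediction plus an active-learning pick:
# the next evaluation point is where predictive variance is largest.
import numpy as np

def kriging(X_train, y_train, X_test, length=0.5, noise=1e-8):
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = k(X_test, X_train)
    K_inv = np.linalg.inv(K)
    mean = K_s @ K_inv @ y_train
    # predictive variance: prior variance minus what the data explain
    var = 1.0 - np.einsum("ij,jk,ik->i", K_s, K_inv, K_s)
    return mean, var

f = lambda x: np.sin(3 * x)            # stand-in expensive limit-state function
X = np.array([0.0, 0.5, 1.0])          # initial design points
grid = np.linspace(0.0, 2.0, 201)      # candidate pool
mean, var = kriging(X, f(X), grid)
x_next = grid[np.argmax(var)]          # active-learning pick: most uncertain
```

In a reliability setting the acquisition criterion is usually sharper than raw variance (e.g. targeting the limit state g = 0), but the fit-predict-pick cycle is the same.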
35

Uncertainty Quantification and Sensitivity Analysis of Multiphysics Environments for Application in Pressurized Water Reactor Design

Blakely, Cole David 01 August 2018 (has links)
The most common design among U.S. nuclear power plants is the pressurized water reactor (PWR). The three primary design disciplines for these plants are system analysis (which includes thermal hydraulics), neutronics, and fuel performance. The nuclear industry has developed a variety of codes over the course of forty years, each with an emphasis within a specific discipline. Perhaps the greatest difficulty in mathematically modeling a nuclear reactor is choosing which specific phenomena need to be modeled, and in what detail. A multiphysics computational environment provides a means of advancing simulations of nuclear plants: put simply, users are able to combine physical models which have commonly been treated separately in the past. The focus of this work is a specific multiphysics environment currently under development at Idaho National Laboratory (INL) known as the LOCA Toolkit for US light water reactors (LOTUS). The ability of LOTUS to use uncertainty quantification (UQ) and sensitivity analysis (SA) tools within a multiphysics environment allows for a number of unique analyses which, to the best of our knowledge, have yet to be performed. These include the first known integration of the neutronics and thermal hydraulics code VERA-CS, currently under development by CASL, with the well-established fuel performance code FRAPCON by PNNL. The integration was used to model a fuel depletion case. The outputs of interest for this integration were the minimum departure from nucleate boiling ratio (MDNBR) (a thermal hydraulic parameter indicating how close a heat flux is to causing a dangerous form of boiling in which an insulating layer of coolant vapour is formed), the maximum fuel centerline temperature (MFCT) of the uranium rod, and the gap conductance at peak power (GCPP). GCPP refers to the thermal conductance of the gas-filled gap between fuel and cladding at the axial location with the highest local power generation.
UQ and SA were performed on MDNBR, MFCT, and GCPP at a variety of times throughout the fuel depletion. Results showed MDNBR to behave linearly and consistently throughout the depletion, with the most impactful input uncertainties being coolant outlet pressure and inlet temperature as well as core power. MFCT also behaves linearly, but with a shift in SA measures: initially MFCT is sensitive to fuel thermal conductivity and gap dimensions, but later in the fuel cycle nearly all uncertainty stems from fuel thermal conductivity, with minor contributions from core power and initial fuel density. GCPP uncertainty exhibits nonlinear, time-dependent behaviour which requires higher-order SA measures to analyze properly. GCPP begins with a dependence on gap dimensions but in later states shifts to a dependence on the biases of a variety of specific calculations, such as fuel swelling and cladding creep and oxidation. LOTUS was also used to perform the first higher-order SA of an integration of VERA-CS with the BISON fuel performance code currently under development at INL. The same problem and outputs were studied as in the VERA-CS and FRAPCON integration. Results for MDNBR and MFCT were relatively consistent. GCPP results contained notable differences, specifically a large dependence on fuel and clad surface roughness in later states; however, this difference is due to the surface roughness not being perturbed in the first integration. SA of later states also showed an increased sensitivity to fission gas release coefficients. Lastly, a loss of coolant accident (LOCA) was investigated with an integration of FRAPCON with the INL neutronics code PHISICS and the system analysis code RELAP5-3D. The outputs of interest were the ratios of the peak cladding temperature (PCT, the highest temperature encountered by the cladding during the LOCA) and the equivalent cladding reacted (ECR, the percentage of cladding oxidized) to their cladding hydrogen content-based limits.
This work contains the first known UQ of these ratios within the aforementioned integration. Results showed the PCT ratio to be relatively well behaved. The ECR ratio behaves as a threshold variable, which is to say it abruptly shifts to radically higher values under specific conditions. This threshold behaviour establishes the importance of performing UQ so as to see the full spectrum of possible values for an output of interest. The SA capabilities of LOTUS provide a path forward for developers to increase code fidelity for specific outputs. Performing UQ within a multiphysics environment may provide improved estimates of safety metrics in nuclear reactors. These improved estimates may allow plants to operate at higher power, thereby increasing profits. Lastly, LOTUS will be of particular use in the development of newly proposed nuclear fuel designs.
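The SA measures ranking input uncertainties above are typically variance-based (Sobol) indices. As a loose illustration of how a first-order index is estimated by sampling, here is a pick-freeze sketch; the toy additive model, inputs, and sample size are invented for the demo and unrelated to LOTUS.

```python
# Hedged sketch of the pick-freeze (Saltelli-style) estimator for
# first-order Sobol indices: S_i = Var(E[Y|X_i]) / Var(Y).
import numpy as np

def first_order_sobol(model, n_inputs, n_samples, rng):
    A = rng.uniform(size=(n_samples, n_inputs))
    B = rng.uniform(size=(n_samples, n_inputs))
    yA, yB = model(A), model(B)
    var = yA.var()
    indices = []
    for i in range(n_inputs):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]       # "pick" column i from B, "freeze" the rest
        yAB_i = model(AB_i)
        V_i = np.mean(yB * (yAB_i - yA))   # estimator of Var(E[Y|X_i])
        indices.append(V_i / var)
    return np.array(indices)

# Toy model where input 0 dominates: analytically S_0 = 25/26, S_1 = 1/26.
model = lambda x: 5 * x[:, 0] + x[:, 1]
S = first_order_sobol(model, n_inputs=2, n_samples=100_000,
                      rng=np.random.default_rng(1))
```

Higher-order and total-order indices come from the same sampling matrices with different combinations of the model evaluations, which is why the extra resolution for outputs like GCPP costs only more samples, not a new method.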
36

Uncertainty Quantification for Underdetermined Inverse Problems via Krylov Subspace Iterative Solvers

Devathi, Duttaabhinivesh 23 May 2019 (has links)
No description available.
37

Non-Deterministic Metamodeling for Multidisciplinary Design Optimization of Aircraft Systems Under Uncertainty

Clark, Daniel L., Jr. 18 December 2019 (has links)
No description available.
38

Nonlinear Uncertainty Quantification, Sensitivity Analysis, and Uncertainty Propagation of a Dynamic Electrical Circuit

Doty, Austin January 2012 (has links)
No description available.
39

Deep Gaussian Process Surrogates for Computer Experiments

Sauer, Annie Elizabeth 27 April 2023 (has links)
Deep Gaussian processes (DGPs) upgrade ordinary GPs through functional composition, in which intermediate GP layers warp the original inputs, providing flexibility to model non-stationary dynamics. Recent applications in machine learning favor approximate, optimization-based inference for fast predictions, but applications to computer surrogate modeling, with an eye towards downstream tasks like Bayesian optimization and reliability analysis, demand broader uncertainty quantification (UQ). I prioritize UQ through full posterior integration in a Bayesian scheme, hinging on elliptical slice sampling of latent layers. I demonstrate how my DGP's non-stationary flexibility, combined with appropriate UQ, allows for active learning: a virtuous cycle of data acquisition and model updating that departs from traditional space-filling designs and yields more accurate surrogates for fixed simulation effort. I propose new sequential design schemes that rely on optimization of acquisition criteria through evaluation of strategically allocated candidates instead of numerical optimization, with a motivating application to contour location in an aeronautics simulation. Alternatively, when simulation runs are cheap and readily available, large datasets present a challenge for full DGP posterior integration due to cubic scaling bottlenecks. For this case I introduce the Vecchia approximation, popular for ordinary GPs in spatial data settings. I show that Vecchia-induced sparsity of Cholesky factors allows for linear computational scaling without compromising DGP accuracy or UQ. I vet both active learning and Vecchia-approximated DGPs on numerous illustrative examples and real computer experiments. I provide open-source implementations in the "deepgp" package for R on CRAN. / Doctor of Philosophy / Scientific research hinges on experimentation, yet direct experimentation is often impossible or infeasible (practically, financially, or ethically).
For example, engineers designing satellites are interested in how the shape of the satellite affects its movement in space. They cannot create whole suites of differently shaped satellites, send them into orbit, and observe how they move. Instead they rely on carefully developed computer simulations. The complexity of such computer simulations necessitates a statistical model, termed a "surrogate", that is able to generate predictions in place of actual evaluations of the simulator (which may take days or weeks to run). Gaussian processes (GPs) are a common statistical modeling choice because they provide nonlinear predictions with thorough estimates of uncertainty, but they are limited in their flexibility. Deep Gaussian processes (DGPs) offer a more flexible alternative while still reaping the benefits of traditional GPs. I provide an implementation of DGP surrogates that prioritizes prediction accuracy and estimates of uncertainty. For computer simulations that are very costly to run, I provide a method of sequentially selecting input configurations to maximize learning from a fixed budget of simulator evaluations. I propose novel methods for selecting input configurations when the goal is to optimize the response or identify regions that correspond to system "failures". When abundant simulation evaluations are available, I provide an approximation which allows for faster DGP model fitting without compromising predictive power. I thoroughly vet my methods on both synthetic "toy" datasets and real aeronautic computer experiments.
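The Vecchia approximation underlying the fast fitting mentioned above factors the joint Gaussian density so each ordered observation conditions on only its m nearest earlier neighbors, which is what makes the implied inverse Cholesky factor sparse. Below is a small Python sketch of the idea for an ordinary GP (the dissertation's implementation is the R "deepgp" package); the exponential kernel, lengthscale, and data are assumptions for the demo. In one dimension the exponential kernel is Markov, so even m = 1 essentially recovers the exact log-likelihood.

```python
# Hedged sketch of a Vecchia-factored GP log-likelihood: each point is
# conditioned only on its m nearest previously-ordered neighbors.
import numpy as np

def vecchia_loglik(X, y, m, length=0.3, nugget=1e-12):
    k = lambda a, b: np.exp(-np.abs(a[:, None] - b[None, :]) / length)
    ll = 0.0
    for i in range(len(X)):
        # condition on (at most) the m nearest earlier points
        prev = np.argsort(np.abs(X[:i] - X[i]))[:m]
        if len(prev) == 0:
            mu, var = 0.0, 1.0 + nugget
        else:
            Kpp = k(X[prev], X[prev]) + nugget * np.eye(len(prev))
            kp = k(np.array([X[i]]), X[prev])[0]
            w = np.linalg.solve(Kpp, kp)
            mu = w @ y[prev]
            var = 1.0 + nugget - w @ kp
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return ll

rng = np.random.default_rng(2)
n = 40
X = np.sort(rng.uniform(size=n))
kern = lambda a, b: np.exp(-np.abs(a[:, None] - b[None, :]) / 0.3)
K = kern(X, X) + 1e-12 * np.eye(n)
y = np.linalg.cholesky(K) @ rng.normal(size=n)

exact = vecchia_loglik(X, y, m=n)   # conditioning on all earlier points = exact
approx = vecchia_loglik(X, y, m=1)  # sparse conditioning, near-identical here
```

Each conditional involves only an m x m solve, so the cost grows linearly in n rather than cubically; in higher dimensions larger m trades accuracy for speed.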
40

Bayesian Errors and Rogue Effective Field Theories

Klco, Natalie 27 April 2015 (has links)
No description available.
