21

Multi Data Reservoir History Matching using the Ensemble Kalman Filter

Katterbauer, Klemens 05 1900 (has links)
Reservoir history matching is becoming increasingly important with the growing demand for higher-quality formation characterization and forecasting, and with the increasing complexity and expense of modern hydrocarbon exploration projects. History matching has long been dominated by adjusting reservoir parameters based solely on well data, whose spatially sparse sampling makes it difficult to characterize flow properties in areas away from the wells. Geophysical data are now widely collected for reservoir monitoring purposes, but they have not yet been fully integrated into history matching and fluid-flow forecasting. In this thesis, I present a pioneering approach to incorporating different time-lapse geophysical data together to enhance reservoir history matching and uncertainty quantification. The thesis provides several approaches for efficiently integrating multiple geophysical data, analyzes the sensitivity of the history matches to observation noise, and examines the framework's performance in several settings, such as the Norne field in Norway. The results demonstrate significant improvements in reservoir forecasting and characterization, and synergy effects between the different geophysical data. In particular, the joint use of electromagnetic and seismic data improves the accuracy of forecasting fluid properties, and the use of electromagnetic data leads to considerably better estimates of hydrocarbon fluid components. For volatile oil and gas reservoirs, the joint integration of gravimetric and InSAR data has been shown to be beneficial in detecting the influx of water and thereby improving the recovery rate. In summary, this thesis makes an important contribution toward integrated reservoir management and multiphysics integration for reservoir history matching.
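The ensemble Kalman filter named in the title updates an ensemble of reservoir models each time new well or time-lapse geophysical data arrive. Below is a minimal sketch of one stochastic EnKF analysis step; the variable names, dimensions, and linear observation operator are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def enkf_update(X, d_obs, H, R, rng):
    """One stochastic EnKF analysis step.

    X     : (n_state, n_ens) ensemble of reservoir states/parameters
    d_obs : (n_obs,)  observed data (well or time-lapse geophysical measurements)
    H     : (n_obs, n_state) linearized observation operator
    R     : (n_obs, n_obs)  observation-error covariance
    """
    n_obs, n_ens = len(d_obs), X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)   # ensemble anomalies
    HA = H @ A
    P_dd = HA @ HA.T / (n_ens - 1) + R      # innovation covariance
    P_xd = A @ HA.T / (n_ens - 1)           # state-data cross-covariance
    K = P_xd @ np.linalg.inv(P_dd)          # Kalman gain
    # Perturbed observations keep the posterior ensemble spread consistent
    D = d_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
    return X + K @ (D - H @ X)
```

In a history-matching loop, this update would be applied at each assimilation time, with the observation operator mapping reservoir states to the relevant response (well rates, seismic, electromagnetic, gravimetric, or InSAR data).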
22

Carbon Capture and Synergistic Energy Storage: Performance and Uncertainty Quantification

Russell, Christopher Stephen 27 February 2019 (has links)
Energy use around the world will rise in the coming decades. Renewable energy sources will help meet this demand, but renewable sources suffer from intermittency, uncontrollable power supply, geographic limitations, and other issues. Many of these issues can be mitigated by introducing energy storage technologies, which facilitate load following and can effectively time-shift power. This analysis compares dedicated and synergistic energy storage technologies using energy efficiency as the primary metric. Energy storage will help renewable sources come onto the grid, but in nearly all projections fossil fuels will continue to dominate the energy supply for decades to come. Carbon capture technologies can significantly reduce the negative environmental impact of fossil-fueled power plants. There are many carbon capture technologies under development. This analysis considers both the innovative and relatively new cryogenic carbon capture™ (CCC) process and more traditional solvent-based systems. The CCC process requires less energy than other leading technologies while simultaneously providing a means of energy storage for the power plant. This analysis shows that CCC is an effective means of capturing CO2 from coal-fired power plants, natural-gas-fired power plants, and syngas production plants. The statistical analysis covers two carbon capture technologies and illustrates how uncertainty quantification (UQ) provides error bars for simulations. UQ provides information on data gaps, uncertainties for property models, and distributions for model predictions. In addition, UQ results provide a discrepancy function that can be introduced into the model to improve its fit to data and its overall accuracy.
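The UQ workflow the abstract describes (propagating property-model uncertainty and adding a learned discrepancy term to obtain error bars on predictions) can be illustrated with a minimal Monte Carlo sketch. Everything below — the stand-in energy model, the parameter distributions, the discrepancy values — is a hypothetical placeholder, not the dissertation's analysis.

```python
import numpy as np

# Hypothetical stand-in for a capture-energy model: specific energy demand
# (MJ/kg CO2) as a function of an uncertain property parameter theta and
# plant load. The real CCC and solvent models are far more complex; this
# only illustrates the UQ workflow.
def capture_energy(theta, load):
    return 0.9 + 0.3 * theta + 0.05 * load**2

rng = np.random.default_rng(0)
n = 10_000
theta = rng.normal(1.0, 0.1, size=n)    # property-model uncertainty (assumed)
delta = rng.normal(0.02, 0.01, size=n)  # learned discrepancy term (assumed)

pred = capture_energy(theta, load=0.8) + delta  # discrepancy-corrected predictions
lo, hi = np.percentile(pred, [2.5, 97.5])
print(f"energy: {pred.mean():.3f} MJ/kg CO2, 95% band [{lo:.3f}, {hi:.3f}]")
```

The error bars come directly from the spread of the prediction distribution; in the dissertation, the discrepancy function is informed by data, as the abstract describes.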
23

PROBABILISTIC DESIGN AND RELIABILITY ANALYSIS WITH KRIGING AND ENVELOPE METHODS

Hao Wu (12456738) 26 April 2022 (has links)
In the mechanical design stage, engineers always meet with uncertainty, such as random variables, stochastic processes, and random processes. Because of this uncertainty, products may behave randomly with respect to time and space, which may result in a high probability of failure, low lifetime, and low robustness. Although extensive research has been conducted on component reliability methods, time- and space-dependent system reliability methods are still limited. This dissertation is motivated by the need for efficient and accurate methods for addressing time- and space-dependent system reliability and probabilistic design problems.

The objective of this dissertation is to develop efficient and accurate methods for reliability analysis and design. Five research tasks serve this objective. The first task develops a surrogate model with an active learning method to predict time- and space-independent system reliability. In the second task, time- and space-independent system reliability is estimated by a second-order saddlepoint approximation method. In the third task, time-dependent system reliability is addressed by an envelope method with efficient global optimization. The fourth task investigates a general time- and space-dependent problem: the envelope method converts the time- and space-dependent problem into a time- and space-independent one, and a second-order approximation is used to predict results. The last task proposes a new sequential reliability-based design method with the envelope method for time- and space-dependent reliability. The accuracy and efficiency of the proposed methods are demonstrated on a wide range of mathematical and engineering problems.
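To make the Kriging half of the title concrete, the sketch below fits a Gaussian-process surrogate to a toy limit-state function, estimates the failure probability by Monte Carlo on the surrogate, and picks the next training point with the standard U learning criterion from the AK-MCS literature. The limit state, design sizes, and criterion are illustrative assumptions, not the dissertation's specific methods.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical limit-state function g(x): failure when g(x) < 0.
# Stands in for an expensive mechanical simulation.
def limit_state(x):
    return 3.0 - x[:, 0]**2 - 0.5 * x[:, 1]

rng = np.random.default_rng(1)
X_train = rng.normal(size=(30, 2))   # small initial design of experiments
y_train = limit_state(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

# Monte Carlo on the cheap surrogate instead of the expensive model
X_mc = rng.normal(size=(100_000, 2))
mean, std = gp.predict(X_mc, return_std=True)
pf = np.mean(mean < 0.0)             # plug-in failure-probability estimate
print(f"estimated failure probability: {pf:.4f}")

# Active learning: the U criterion favors points where the surrogate is
# least certain about the sign of g (small U = most informative).
U = np.abs(mean) / np.maximum(std, 1e-12)
x_next = X_mc[np.argmin(U)]          # next point to evaluate and refit
```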
24

Uncertainty Quantification for Underdetermined Inverse Problems via Krylov Subspace Iterative Solvers

Devathi, Duttaabhinivesh 23 May 2019 (has links)
No description available.
25

Non-Deterministic Metamodeling for Multidisciplinary Design Optimization of Aircraft Systems Under Uncertainty

Clark, Daniel L., Jr. 18 December 2019 (has links)
No description available.
26

Nonlinear Uncertainty Quantification, Sensitivity Analysis, and Uncertainty Propagation of a Dynamic Electrical Circuit

Doty, Austin January 2012 (has links)
No description available.
27

Deep Gaussian Process Surrogates for Computer Experiments

Sauer, Annie Elizabeth 27 April 2023 (has links)
Deep Gaussian processes (DGPs) upgrade ordinary GPs through functional composition, in which intermediate GP layers warp the original inputs, providing flexibility to model non-stationary dynamics. Recent applications in machine learning favor approximate, optimization-based inference for fast predictions, but applications to computer surrogate modeling, with an eye towards downstream tasks like Bayesian optimization and reliability analysis, demand broader uncertainty quantification (UQ). I prioritize UQ through full posterior integration in a Bayesian scheme, hinging on elliptical slice sampling of latent layers. I demonstrate how my DGP's non-stationary flexibility, combined with appropriate UQ, allows for active learning: a virtuous cycle of data acquisition and model updating that departs from traditional space-filling designs and yields more accurate surrogates for fixed simulation effort. I propose new sequential design schemes that rely on optimization of acquisition criteria through evaluation of strategically allocated candidates instead of numerical optimization, with a motivating application to contour location in an aeronautics simulation. Alternatively, when simulation runs are cheap and readily available, large datasets present a challenge for full DGP posterior integration due to cubic scaling bottlenecks. For this case I introduce the Vecchia approximation, popular for ordinary GPs in spatial data settings. I show that Vecchia-induced sparsity of Cholesky factors allows for linear computational scaling without compromising DGP accuracy or UQ. I vet both active learning and Vecchia-approximated DGPs on numerous illustrative examples and real computer experiments. I provide open-source implementations in the "deepgp" package for R on CRAN. / Doctor of Philosophy / Scientific research hinges on experimentation, yet direct experimentation is often impossible or infeasible (practically, financially, or ethically). For example, engineers designing satellites are interested in how the shape of the satellite affects its movement in space. They cannot create whole suites of differently shaped satellites, send them into orbit, and observe how they move. Instead they rely on carefully developed computer simulations. The complexity of such computer simulations necessitates a statistical model, termed a "surrogate", that is able to generate predictions in place of actual evaluations of the simulator (which may take days or weeks to run). Gaussian processes (GPs) are a common statistical modeling choice because they provide nonlinear predictions with thorough estimates of uncertainty, but they are limited in their flexibility. Deep Gaussian processes (DGPs) offer a more flexible alternative while still reaping the benefits of traditional GPs. I provide an implementation of DGP surrogates that prioritizes prediction accuracy and estimates of uncertainty. For computer simulations that are very costly to run, I provide a method of sequentially selecting input configurations to maximize learning from a fixed budget of simulator evaluations. I propose novel methods for selecting input configurations when the goal is to optimize the response or identify regions that correspond to system "failures". When abundant simulation evaluations are available, I provide an approximation which allows for faster DGP model fitting without compromising predictive power. I thoroughly vet my methods on both synthetic "toy" datasets and real aeronautic computer experiments.
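The posterior integration described above hinges on elliptical slice sampling (Murray, Adams and MacKay, 2010) of the latent GP layers. Below is a minimal, self-contained sketch of one ESS update under a zero-mean Gaussian prior; it is independent of, and far simpler than, the author's "deepgp" R implementation.

```python
import numpy as np

def elliptical_slice(f, prior_chol, log_lik, rng):
    """One elliptical slice sampling update for a latent vector f
    with a zero-mean Gaussian prior.

    f          : current latent values, shape (n,)
    prior_chol : lower Cholesky factor of the prior covariance
    log_lik    : function mapping f to its log likelihood
    """
    nu = prior_chol @ rng.standard_normal(f.shape[0])  # auxiliary prior draw
    log_y = log_lik(f) + np.log(rng.uniform())         # slice level
    theta = rng.uniform(0.0, 2.0 * np.pi)              # initial angle
    lo, hi = theta - 2.0 * np.pi, theta                # shrinking bracket
    while True:
        # Propose on the ellipse through f and nu; always accepts eventually
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new) > log_y:
            return f_new
        # Shrink the bracket toward the current state and retry
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)
```

ESS has no tuning parameters and never rejects outright, which is what makes full posterior integration over several warped GP layers practical.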
28

Bayesian Errors and Rogue Effective Field Theories

Klco, Natalie 27 April 2015 (has links)
No description available.
29

Quantifying Uncertainty in Reactor Flux/Power Distributions

Kennedy, Ryanne Ariel 22 July 2011 (has links)
No description available.
30

Sequential learning, large-scale calibration, and uncertainty quantification

Huang, Jiangeng 23 July 2019 (has links)
With remarkable advances in computing power, computer experiments continue to expand the boundaries and drive down the cost of various scientific discoveries. New challenges keep arising in designing, analyzing, modeling, calibrating, optimizing, and predicting in computer experiments. This dissertation consists of six chapters, exploring statistical methodologies in sequential learning, model calibration, and uncertainty quantification for heteroskedastic and large-scale computer experiments. For heteroskedastic computer experiments, an optimal lookahead-based sequential learning strategy is presented, balancing replication and exploration to facilitate separating the signal from input-dependent noise. Motivated by the challenges of both large data size and model fidelity in ever-larger modern computer experiments, this dissertation develops highly accurate and computationally efficient divide-and-conquer calibration methods for large-scale computer models, based on on-site experimental design and surrogate modeling. The proposed methodology is applied to calibrate a real computer experiment from the gas and oil industry. This on-site surrogate calibration method is further extended to multiple-output calibration problems. / Doctor of Philosophy / With remarkable advances in computing power, complex physical systems today can be simulated comparatively cheaply and to high accuracy through computer experiments. Computer experiments continue to expand the boundaries and drive down the cost of scientific investigations in the biological, business, engineering, industrial, management, health-related, physical, and social sciences. This dissertation consists of six chapters, exploring statistical methodologies in sequential learning, model calibration, and uncertainty quantification for heteroskedastic and large-scale computer experiments. For computer experiments with a changing signal-to-noise ratio, an optimal lookahead-based sequential learning strategy is presented, balancing replication and exploration to facilitate separating the signal from a complex noise structure. To effectively extract key information from massive amounts of simulation output and make better predictions for the real world, this dissertation develops highly accurate and computationally efficient divide-and-conquer calibration methods for large-scale computer models, addressing challenges in both large data size and model fidelity arising from ever-larger modern computer experiments. The proposed methodology is applied to calibrate a real computer experiment from the gas and oil industry. This large-scale calibration method is further extended to solve multiple-output calibration problems.
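The replication-versus-exploration trade-off in the heteroskedastic setting can be made concrete with a toy sketch. The one-step heuristic below is not the optimal lookahead criterion developed in the dissertation, and the simulator, design, and ordinary (homoskedastic) GP surrogate are all assumptions for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy heteroskedastic simulator: smooth signal plus input-dependent noise.
def simulator(x, rng):
    return np.sin(4 * x) + rng.normal(scale=0.05 + 0.3 * x)

rng = np.random.default_rng(2)
X = np.repeat(np.linspace(0, 1, 8), 3)   # 8 design sites, 3 replicates each
y = np.array([simulator(x, rng) for x in X])

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=0.2) + WhiteKernel(noise_level=0.01),
    normalize_y=True)
gp.fit(X[:, None], y)

# Candidate actions: explore where predictive uncertainty is largest,
# or replicate the site whose runs have been noisiest so far.
cands = np.linspace(0, 1, 101)
_, sd = gp.predict(cands[:, None], return_std=True)
x_explore = cands[np.argmax(sd)]

sites = np.unique(X)
site_var = [y[X == s].var(ddof=1) for s in sites]
x_replicate = sites[np.argmax(site_var)]

# A lookahead criterion would compare the expected variance reduction of
# each action several steps ahead; here we only expose the two choices.
print(f"explore at {x_explore:.2f} or replicate at {x_replicate:.2f}")
```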
