161

Uncertainty Quantification for low-frequency Maxwell equations with stochastic conductivity models

Kamilis, Dimitrios January 2018
Uncertainty Quantification (UQ) has been an active area of research in recent years, with a wide range of applications in data and imaging sciences. In many problems, the source of uncertainty stems from an unknown parameter in the model. In physical and engineering systems, for example, the parameters of the partial differential equation (PDE) that models the observed data may be unknown or incompletely specified. In such cases, one may use a probabilistic description based on prior information and formulate a forward UQ problem of characterising the uncertainty in the PDE solution and observations in response to that in the parameters. Conversely, inverse UQ encompasses the statistical estimation of the unknown parameters from the available observations, which can be cast as a Bayesian inverse problem. The contributions of the thesis focus on examining the aforementioned forward and inverse UQ problems for the low-frequency, time-harmonic Maxwell equations, where the model uncertainty emanates from the lack of knowledge of the material conductivity parameter. The motivation comes from the Controlled-Source Electromagnetic Method (CSEM), which aims to detect and image hydrocarbon reservoirs by using electromagnetic (EM) field measurements to obtain information about the conductivity profile of the sub-seabed. Traditionally, algorithms for deterministic models have been employed to solve the inverse problem in CSEM by optimisation and regularisation methods, which aside from the image reconstruction provide no quantitative information on the credibility of its features. This work instead employs stochastic models in which the conductivity is represented as a lognormal random field, with the objective of providing a more informative characterisation of the model observables and the unknown parameters. The variational formulation of these stochastic models is analysed and proved to be well-posed under suitable assumptions. For computational purposes the stochastic formulation is recast as a deterministic, parametric problem with distributed uncertainty, which leads to an infinite-dimensional integration problem with respect to the prior and posterior measures. One of the main challenges is thus the approximation of these integrals, the standard choice being some variant of the Monte Carlo (MC) method. However, such methods typically fail to take advantage of the intrinsic properties of the model and suffer from unsatisfactory convergence rates. Based on recently developed theory on high-dimensional approximation, this thesis advocates the use of Sparse Quadrature (SQ) to tackle the integration problem. For the models considered here and under certain assumptions, we prove that for forward UQ, Sparse Quadrature can attain dimension-independent convergence rates that outperform MC. Typical CSEM models are large-scale, and thus additional effort is made in this work to reduce the cost of obtaining forward solutions at each sampled parameter by utilising the weighted Reduced Basis (RB) method and the Empirical Interpolation Method (EIM). The proposed variant of a combined SQ-EIM-RB algorithm is based on an adaptive selection of training sets and a primal-dual, goal-oriented formulation for the EIM-RB approximation. Numerical examples show that the suggested computational framework can alleviate the computational costs associated with forward UQ for the pertinent large-scale models, thus providing a viable methodology for practical applications.
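As a toy illustration of why quadrature can beat Monte Carlo for smooth parametric integrands, the sketch below compares a plain MC estimate of a mean quantity of interest with its one-dimensional Gauss-Hermite counterpart, the building block of sparse quadrature. The scalar surrogate `g` and the single lognormal parameter are hypothetical stand-ins for the thesis's Maxwell forward solve and random conductivity field.

```python
# Minimal 1-D sketch (hypothetical surrogate, not the thesis's Maxwell solver):
# estimate E[g(kappa)] for a lognormal conductivity kappa = exp(sigma * xi),
# xi ~ N(0,1), comparing plain Monte Carlo with Gauss-Hermite quadrature.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

sigma = 0.5
g = lambda kappa: 1.0 / (1.0 + kappa)    # stand-in for a smooth QoI of the PDE solution

# Plain Monte Carlo: O(N^-1/2) convergence.
rng = np.random.default_rng(0)
xi = rng.standard_normal(10_000)
mc_mean = g(np.exp(sigma * xi)).mean()

# Probabilists' Gauss-Hermite quadrature: spectral convergence for smooth g.
nodes, weights = hermegauss(10)          # weight function exp(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)   # normalize against the standard normal density
sq_mean = np.sum(weights * g(np.exp(sigma * nodes)))

print(f"MC (10000 samples):    {mc_mean:.6f}")
print(f"Quadrature (10 nodes): {sq_mean:.6f}")
```

Ten quadrature nodes already match what ten thousand random samples approximate; sparse quadrature extends this idea to many parametric dimensions.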
162

Quantified PIRT and uncertainty quantification for computer code validation

Luo, Hu 05 December 2013
This study investigates and proposes a systematic method of uncertainty quantification for computer code validation. Uncertainty quantification has gained increasing attention in recent years. The U.S. Nuclear Regulatory Commission (NRC) requires the use of realistic best-estimate (BE) computer codes following the rigorous Code Scaling, Applicability and Uncertainty (CSAU) methodology. In CSAU, the Phenomena Identification and Ranking Table (PIRT) was developed to identify important contributors to code uncertainty. To support and examine the traditional PIRT with quantified judgments, this study proposes a novel approach, the Quantified PIRT (QPIRT), to identify important code models and parameters for uncertainty quantification. The foundation of QPIRT is dimensional analysis of the code field equations, which generates dimensionless groups (π groups) evaluated from code simulation results. Uncertainty quantification using the DAKOTA code is then proposed, based on a sampling approach. Nonparametric statistical theory identifies the fixed number of code runs needed to assure 95 percent probability at 95 percent confidence in the code uncertainty intervals. / Graduation date: 2013 / Access restricted to the OSU Community, at author's request, from Dec. 5, 2012 - Dec. 5, 2013
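The "fixed number of code runs" follows from Wilks' nonparametric tolerance-limit formula: for a one-sided, first-order 95/95 statement, the smallest n satisfies 1 − 0.95ⁿ ≥ 0.95, giving n = 59. A minimal check:

```python
# Wilks' formula for the one-sided, first-order nonparametric tolerance limit:
# smallest n with 1 - coverage**n >= confidence, the classic 95/95 run count.
import math

def wilks_runs(coverage=0.95, confidence=0.95):
    # P(max of n runs bounds the 'coverage' quantile) = 1 - coverage**n
    return math.ceil(math.log(1 - confidence) / math.log(coverage))

print(wilks_runs())  # 59 code runs for 95% probability / 95% confidence
```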
163

A Hierarchical History Matching Method and its Applications

Yin, Jichao, December 2011
Modern reservoir management typically involves simulations of geological models to predict future recovery estimates, providing the economic assessment of different field development strategies. Integrating reservoir data is a vital step in developing reliable reservoir performance models. Currently, the most effective strategies for traditional manual history matching follow a structured approach with a sequence of adjustments from global to regional parameters, followed by local changes in model properties. In contrast, many recent automatic history matching methods utilize parameter sensitivities or gradients to directly update the fine-scale reservoir properties, often ignoring geological consistency. There is therefore a need to combine elements of all of these scales in a seamless manner. We present a hierarchical streamline-assisted history matching framework with global-to-local updates. A probabilistic approach, consisting of design of experiments, response surface methodology and a genetic algorithm, is used to understand the uncertainty in the large-scale static and dynamic parameters. This global update step is followed by streamline-based model calibration of high-resolution reservoir heterogeneity; this local update step assimilates dynamic production data. We apply the genetic global calibration to an unconventional shale-gas reservoir; specifically, we include the stimulated reservoir volume (SRV) as a constraint term in the data integration to improve history matching and reduce prediction uncertainty. We introduce a novel approach for efficiently computing well drainage volumes for shale-gas wells with multistage fractures and fracture clusters, and we filter stochastic shale-gas reservoir models by comparing the computed drainage volume with the measured SRV within specified confidence limits. Finally, we demonstrate the value of integrating downhole temperature measurements as a coarse-scale constraint during streamline-based history matching of dynamic production data. We first derive coarse-scale permeability trends in the reservoir from temperature data. This coarse information is then downscaled into fine-scale permeability by sequential Gaussian simulation with block kriging, and updated by local-scale streamline-based history matching. The power and utility of our approaches have been demonstrated using both synthetic and field examples.
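A minimal sketch of the global step, assuming it resembles a standard design-of-experiments/response-surface workflow; the `misfit` function and the two parameters below are made up for illustration, not taken from the thesis.

```python
# Hedged sketch of the global update: fit a quadratic response surface to a
# history-matching misfit evaluated on a small experimental design. A genetic
# algorithm would then search this cheap surface instead of the full simulator.
import numpy as np

def misfit(x):                     # hypothetical stand-in for a simulator misfit
    return (x[0] - 0.3)**2 + 2.0 * (x[1] + 0.1)**2 + 0.05 * x[0] * x[1]

# Two global parameters (e.g. regional permeability multipliers), scaled to [-1, 1].
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 2))      # space-filling design
y = np.array([misfit(x) for x in X])

# Quadratic response surface basis: 1, x1, x2, x1^2, x2^2, x1*x2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted surface coefficients:", np.round(coef, 3))
```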
164

Fiabilité et évaluation des incertitudes pour la simulation numérique de la turbulence : application aux machines hydrauliques / Reliability and uncertainty assessment for the numerical simulation of turbulence : application to hydraulic machines

Brugière, Olivier 14 January 2015
The reliable numerical simulation of hydraulic turbine performance requires: i) including in the conventional RANS (Reynolds-Averaged Navier-Stokes) computations the effect of the uncertainties that exist in practice on the inflow conditions; ii) relying on an LES (Large Eddy Simulation) strategy to improve the description of turbulence effects when discrepancies between RANS computations and reference experiments persist even after uncertainties are taken into account. The present work applies a non-intrusive uncertainty quantification strategy (NISP, Non-Intrusive Spectral Projection) to two configurations of practical interest: a Francis turbine distributor with uncertain inlet flow rate and angle, and the draft tube of a bulb turbine with uncertain inflow conditions (velocity profiles, in particular close to the wall, and turbulent quantities). The NISP method is used not only to estimate the mean value and variance of quantities of interest, but also to perform an analysis of variance that identifies the most influential uncertainties. The RANS simulations, verified through a grid-convergence study, cannot for most of the configurations under study explain the discrepancies between computation and experiment by accounting for the inflow uncertainties. Therefore, LES simulations are also performed; these are verified using an original methodology for assessing the quality of the computational grids, since the grid-convergence concept is not relevant for LES. For most of the flows under study, combining an LES strategy with a UQ approach yields reliable numerical results. Taking inflow uncertainties into account also allows a robust optimization strategy to be proposed for the Francis turbine distributor under study.
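A minimal one-dimensional sketch of NISP, assuming a single standard-normal input and a hypothetical scalar quantity of interest `q` (the thesis applies this to full turbine simulations with several uncertain inputs): the QoI is projected onto probabilists' Hermite polynomials by quadrature, and the chaos coefficients then give mean and variance directly.

```python
# 1-D NISP sketch: c_k = E[q He_k] / k!, since E[He_k^2] = k! for the
# probabilists' Hermite polynomials; mean = c_0, variance = sum_k>0 c_k^2 k!.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

q = lambda xi: np.exp(0.3 * xi)          # stand-in for a turbine quantity of interest
P = 6                                    # polynomial chaos order
nodes, weights = hermegauss(P + 1)
weights = weights / np.sqrt(2 * np.pi)   # normalize against N(0,1)

coeffs = np.array([
    np.sum(weights * q(nodes) * hermeval(nodes, np.eye(P + 1)[k])) / math.factorial(k)
    for k in range(P + 1)
])
mean = coeffs[0]
var = sum(coeffs[k]**2 * math.factorial(k) for k in range(1, P + 1))
print(f"NISP mean {mean:.6f}, variance {var:.6f}")
# analytic values for q = exp(0.3 xi): mean = exp(0.045), var = exp(0.09)(exp(0.09)-1)
```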
165

Some new ideas on fractional factorial design and computer experiment

Su, Heng 08 June 2015
This thesis consists of two parts. The first part is on fractional factorial design, and the second part is on computer experiments. The first part has two chapters. In the first chapter, we use the concept of conditional main effects and propose the CME analysis to resolve the problem of effect aliasing in two-level fractional factorial designs. In the second chapter, we study the conversion rates of a system of webpages with the proposed funnel-testing method, using a directed graph to represent the system, a fractional factorial design to conduct the experiment, and a method to optimize the total conversion rate with respect to all the webpages in the system. The second part also has two chapters. In the third chapter, we use regression models to quantify the model-form uncertainties of the Perez model in building energy simulations. In the last chapter, we propose a new Gaussian process that can jointly model both point and integral responses.
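For readers unfamiliar with effect aliasing, a small sketch: in the 2^(3−1) half-fraction generated by C = AB, the main effect of C cannot be distinguished from the A×B interaction, which is the ambiguity CME analysis is designed to resolve.

```python
# The 2^(3-1) design with defining relation I = ABC: the C column is
# identical to the A*B column, so their effects are fully aliased.
import numpy as np

A = np.array([-1, -1, +1, +1])
B = np.array([-1, +1, -1, +1])
C = A * B                       # generator C = AB
design = np.column_stack([A, B, C])
print(design)
print("C aliased with AB:", np.array_equal(C, A * B))
```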
166

Coupled flow systems, adjoint techniques and uncertainty quantification

Garg, Vikram Vinod, 1985- 25 October 2012
Coupled systems are ubiquitous in modern engineering and science. Such systems can encompass fluid dynamics, structural mechanics, chemical species transport and electrostatic effects among other components, all of which can be coupled in many different ways. In addition, such models are usually multiscale, making their numerical simulation challenging and necessitating the use of adaptive modeling techniques. The multiscale, multiphysics models of electroosmotic flow (EOF) constitute a particularly challenging coupled flow system. A special feature of such models is that the coupling between the electric physics and hydrodynamics is via the boundary. Numerical simulations of coupled systems are typically targeted towards specific Quantities of Interest (QoIs). Adjoint-based approaches offer the possibility of QoI-targeted adaptive mesh refinement and efficient parameter sensitivity analysis. The formulation of appropriate adjoint problems for EOF models is particularly challenging, due to the coupling of physics via the boundary as opposed to the interior of the domain. The well-posedness of the adjoint problem for such models is also non-trivial. One contribution of this dissertation is the derivation of an appropriate adjoint problem for slip EOF models, and the development of penalty-based, adjoint-consistent variational formulations of these models. We demonstrate the use of these formulations in the simulation of EOF in straight and T-shaped microchannels, in conjunction with goal-oriented mesh refinement and adjoint sensitivity analysis. Complex computational models may exhibit uncertain behavior for various reasons, ranging from uncertainty in experimentally measured model parameters to imperfections in device geometry. The last decade has seen a growing interest in the field of Uncertainty Quantification (UQ), which seeks to determine the effect of input uncertainties on the system QoIs. Monte Carlo methods remain a popular computational approach for UQ due to their ease of use and "embarrassingly parallel" nature. However, a major drawback of such methods is their slow convergence rate. The second contribution of this work is the introduction of a new Monte Carlo method which utilizes local sensitivity information to build accurate surrogate models. This new method, called the Local Sensitivity Derivative Enhanced Monte Carlo (LSDEMC) method, can converge at a faster rate than plain Monte Carlo, especially for problems with a low to moderate number of uncertain parameters. Adjoint-based sensitivity analysis methods enable the computation of sensitivity derivatives at virtually no extra cost after the forward solve. Thus, the LSDEMC method, in conjunction with adjoint sensitivity derivative techniques, can offer a robust and efficient alternative for UQ of complex systems. The efficiency of Monte Carlo methods can be further enhanced by using stratified sampling schemes such as Latin Hypercube Sampling (LHS). However, the non-incremental nature of LHS has been identified as one of the main obstacles to its application to certain classes of complex physical systems. Current incremental LHS strategies require the user to at least double the size of an existing LHS set to retain the convergence properties of LHS. The third contribution of this research is the development of a new Hierarchical LHS algorithm that creates designs which can be used to perform LHS studies in a flexibly incremental setting, taking a step towards adaptive LHS methods.
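A short sketch of the LHS stratification property that any incremental or hierarchical extension must preserve while growing a design (using SciPy's `qmc` module; the dissertation's own algorithm is not reproduced here):

```python
# Latin Hypercube Sampling: each of the n equal-width bins in every dimension
# contains exactly one sample, which is what gives LHS its variance reduction.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=0)
X = sampler.random(n=8)                   # 8 points in [0, 1)^2

bins = np.floor(X * 8).astype(int)        # bin index of each point, per dimension
print(sorted(bins[:, 0]), sorted(bins[:, 1]))   # [0..7] in each dimension
```

Adding points one at a time breaks this one-point-per-bin property, which is why naive incremental LHS fails and doubling (or a hierarchical construction) is needed.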
167

Reservoir description with well-log-based and core-calibrated petrophysical rock classification

Xu, Chicheng 25 September 2013
Rock type is a key concept in modern reservoir characterization that straddles multiple scales and bridges multiple disciplines. Reservoir rock classification (or simply rock typing) has been recognized as one of the most effective description tools to facilitate large-scale reservoir modeling and simulation. This dissertation aims to integrate core data and well logs to enhance reservoir description by classifying reservoir rocks in a geologically and petrophysically consistent manner. The main objective is to develop scientific approaches for utilizing multi-physics rock data at different time and length scales to describe reservoir rock-fluid systems. Emphasis is placed on transferring physical understanding of rock types from limited ground-truthing core data to abundant well logs using fast log simulations in a multi-layered earth model. Bimodal log-normal pore-size distribution functions derived from mercury injection capillary pressure (MICP) data are first introduced to characterize complex pore systems in carbonate and tight-gas sandstone reservoirs. Six pore-system attributes are interpreted and integrated to define petrophysical orthogonality, or dissimilarity, between two pore systems with bimodal log-normal distributions. A simple three-dimensional (3D) cubic pore network model constrained by nuclear magnetic resonance (NMR) and MICP data is developed to quantify fluid distributions and phase connectivity for predicting saturation-dependent relative permeability during two-phase drainage. There is rich petrophysical information in the spatial fluid distributions resulting from vertical fluid flow on a geologic time scale and radial mud-filtrate invasion on a drilling time scale. Log attributes elicited by such fluid distributions are captured to quantify dynamic reservoir petrophysical properties and define reservoir flow capacity. A new rock classification workflow that reconciles reservoir saturation-height behavior and mud-filtrate invasion for more accurate dynamic reservoir modeling is developed and verified in both clastic and carbonate fields. Rock types vary and mix at the sub-foot scale in heterogeneous reservoirs due to depositional control or diagenetic overprints. Conventional well logs are limited in their ability to probe the details of each individual bed or rock type as seen from outcrops or cores. A bottom-up Bayesian rock typing method is developed to efficiently test multiple working hypotheses against well logs and quantify the uncertainty of rock types and their associated petrophysical properties in thinly bedded reservoirs. Concomitantly, a top-down reservoir description workflow is implemented to characterize intermixed or hybrid rock classes from the flow-unit (or seismic) scale down to the pore scale, based on a multi-scale orthogonal rock class decomposition approach. Correlations between petrophysical rock types and geological facies in reservoirs originating from deltaic and turbidite depositional systems are investigated in detail. Emphasis is placed on the cause-and-effect relationship between pore geometry and rock geological attributes such as grain size and bed thickness. Well-log responses to those geological attributes and associated pore geometries are subjected to numerical log simulations. The sensitivity of various physical logs to petrophysical orthogonality between rock classes is investigated to identify the most diagnostic log attributes for log-based rock typing. Field cases of different reservoir types from various geological settings are used to verify the application of petrophysical rock classification to assist reservoir characterization, including facies interpretation, permeability prediction, saturation-height analysis, dynamic petrophysical modeling, uncertainty quantification, petrophysical upscaling, and production forecasting.
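A minimal sketch of the bimodal log-normal pore-size model: a two-component log-normal mixture for the pore-throat radius distribution. All parameter values below are illustrative, not taken from the dissertation.

```python
# Bimodal log-normal pore-size density: weighted mixture of a micro-pore and
# a macro-pore mode, as derived from MICP data in the dissertation.
import numpy as np
from scipy.stats import lognorm

r = np.logspace(-3, 2, 500)                        # pore-throat radius, microns
micro = lognorm.pdf(r, s=0.6, scale=np.exp(-1.2))  # micro-pore mode (assumed params)
macro = lognorm.pdf(r, s=0.5, scale=np.exp(1.8))   # macro-pore mode (assumed params)
w = 0.35                                           # micro-pore volume fraction
pdf = w * micro + (1 - w) * macro                  # bimodal mixture density

print("mixture integrates to ~1:", np.trapz(pdf, r).round(3))
```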
168

A computational framework for the solution of infinite-dimensional Bayesian statistical inverse problems with application to global seismic inversion

Martin, James Robert, Ph. D. 18 September 2015
Quantifying uncertainties in large-scale forward and inverse PDE simulations has emerged as a central challenge facing the field of computational science and engineering. The promise of modeling and simulation for prediction, design, and control cannot be fully realized unless uncertainties in models are rigorously quantified, since this uncertainty can potentially overwhelm the computed result. While statistical inverse problems can be solved today for smaller models with a handful of uncertain parameters, this task is computationally intractable using contemporary algorithms for complex systems characterized by large-scale simulations and high-dimensional parameter spaces. In this dissertation, I address issues regarding the theoretical formulation, numerical approximation, and algorithms for solution of infinite-dimensional Bayesian statistical inverse problems, and apply the entire framework to a problem in global seismic wave propagation. Classical (deterministic) approaches to solving inverse problems attempt to recover the “best-fit” parameters that match given observation data, as measured in a particular metric. In the statistical inverse problem, we go one step further to return not only a point estimate of the best medium properties, but also a complete statistical description of the uncertain parameters. The result is a posterior probability distribution that describes our state of knowledge after learning from the available data, and provides a complete description of parameter uncertainty. In this dissertation, a computational framework for such problems is described that wraps around the existing forward solvers, as long as they are appropriately equipped, for a given physical problem. Then a collection of tools, insights and numerical methods may be applied to solve the problem, and interrogate the resulting posterior distribution, which describes our final state of knowledge. We demonstrate the framework with numerical examples, including inference of a heterogeneous compressional wavespeed field for a problem in global seismic wave propagation with 10⁶ parameters.
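The structure of the posterior is easiest to see in a finite-dimensional, linear-Gaussian analogue, where it is available in closed form; the dissertation's contribution is making this tractable when the forward map is a large-scale PDE solve and the parameter is a discretized field. A minimal sketch with a made-up forward operator:

```python
# Linear-Gaussian Bayesian inverse problem: with forward map G, Gaussian prior
# and Gaussian noise, the posterior is Gaussian with closed-form mean/covariance.
import numpy as np

rng = np.random.default_rng(0)
n_param, n_obs = 4, 10
G = rng.standard_normal((n_obs, n_param))   # stand-in forward operator
C_prior = np.eye(n_param)                   # prior covariance (zero prior mean)
noise_var = 0.05
m_true = rng.standard_normal(n_param)
d = G @ m_true + np.sqrt(noise_var) * rng.standard_normal(n_obs)

# Posterior precision = G^T C_noise^{-1} G + C_prior^{-1}
H = G.T @ G / noise_var + np.linalg.inv(C_prior)
C_post = np.linalg.inv(H)
m_post = C_post @ (G.T @ d / noise_var)     # posterior mean (also the MAP point)

print("posterior mean:", np.round(m_post, 3))
print("posterior std :", np.round(np.sqrt(np.diag(C_post)), 3))
```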
169

Comparative Deterministic and Probabilistic Modeling in Geotechnics: Applications to Stabilization of Organic Soils, Determination of Unknown Foundations for Bridge Scour, and One-Dimensional Diffusion Processes

Yousefpour, Negin 16 December 2013
This study presents different aspects of the use of deterministic methods, including Artificial Neural Networks (ANNs) and linear and nonlinear regression, as well as probabilistic methods, including Bayesian inference and Monte Carlo methods, to develop reliable solutions for challenging problems in geotechnics. It addresses the theoretical and computational advantages and limitations of these methods in application to: 1) prediction of the stiffness and strength of stabilized organic soils, 2) determination of unknown foundations for bridges vulnerable to scour, and 3) uncertainty quantification for one-dimensional diffusion processes. ANNs were successfully implemented in this study to develop nonlinear models for the mechanical properties of stabilized organic soils. The ANN models were able to learn from the training examples and then generalize the trend to make predictions for the stiffness and strength of stabilized organic soils. A stepwise parameter selection and a sensitivity analysis method were implemented to identify the most relevant factors for the prediction of stiffness and strength, and the variations of stiffness and strength with respect to each factor were investigated. A deterministic and a probabilistic approach were proposed to evaluate the characteristics of unknown foundations of bridges subjected to scour. The proposed methods were successfully implemented and validated by collecting data for bridges in the Bryan District. ANN models were developed and trained using the database of bridges to predict the foundation type and embedment depth. The probabilistic Bayesian approach generated probability distributions for the foundation and soil characteristics and was able to capture the uncertainty in the predictions. The parametric and numerical uncertainties in the one-dimensional diffusion process were evaluated under varying observation conditions. The inverse problem was solved using Bayesian inference formulated with both the analytical and numerical solutions of the ordinary differential equation of diffusion. The numerical uncertainty was evaluated by comparing the mean and standard deviation of the posterior realizations of the process corresponding to the analytical and numerical solutions of the forward problem. It was shown that higher correlation in the structure of the observations increased both parametric and numerical uncertainties, whereas increasing the number of observations dramatically decreased the uncertainties in the diffusion process.
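A minimal sketch of the third application, assuming a steady one-dimensional diffusion problem with a constant coefficient so the forward model is analytic (the study's actual setup may differ); a random-walk Metropolis sampler recovers the posterior of the diffusivity:

```python
# Bayesian inference for -D u'' = 1 on (0, 1), u(0) = u(1) = 0, whose exact
# solution is u(x) = x(1 - x) / (2 D). Infer D from noisy point observations.
import numpy as np

rng = np.random.default_rng(0)
x_obs = np.linspace(0.1, 0.9, 9)
forward = lambda D: x_obs * (1 - x_obs) / (2 * D)

D_true, noise = 2.0, 0.005
d = forward(D_true) + noise * rng.standard_normal(x_obs.size)

def log_post(logD):                     # flat prior on log D, Gaussian likelihood
    r = d - forward(np.exp(logD))
    return -0.5 * np.sum(r**2) / noise**2

chain, cur = [], np.log(1.0)
for _ in range(5000):                   # random-walk Metropolis on log D
    prop = cur + 0.2 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(cur):
        cur = prop
    chain.append(cur)

D_samples = np.exp(chain[1000:])        # discard burn-in
print(f"posterior mean D = {D_samples.mean():.3f} +/- {D_samples.std():.3f}")
```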
170

Bayesian networks for uncertainty estimation in the response of dynamic structures

Calanni Fraccone, Giorgio M. 07 July 2008
The dissertation focuses on estimating the uncertainty associated with stress/strain prediction procedures from dynamic test data used in turbine blade analysis. An accurate prediction of the maximum response levels for physical components during in-field operating conditions is essential for evaluating their performance and life characteristics, as well as for investigating how their behavior critically impacts system design and reliability assessment. Currently, stress/strain inference for a dynamic system is based on the combination of experimental data and results from the analytical/numerical model of the component under consideration. Both modeling challenges and testing limitations, however, contribute to the introduction of various sources of uncertainty within the given estimation procedure, and ultimately lead to diminished accuracy and reduced confidence in the predicted response. The objective of this work is to characterize the uncertainties present in the current response estimation process and provide a means to assess them quantitatively. More specifically, this research proposes a statistical methodology based on a Bayesian-network representation of the modeling process which allows for a statistically rigorous synthesis of modeling assumptions and information from experimental data. Such a framework addresses the problem of multi-directional uncertainty propagation, where standard techniques for unidirectional propagation from input uncertainty to output variability are not suited. Furthermore, it allows for the inclusion within the analysis of newly available test data that can provide indirect evidence on the parameters of the structure's analytical model, as well as lead to a reduction of the residual uncertainty in the estimated quantities. As part of this work, key uncertainty sources (i.e., material and geometric properties, sensor measurement and placement, as well as noise due to data-processing limitations) are investigated, and their impact upon the system response estimates is assessed through sensitivity studies. The results are utilized to identify the most significant contributors to uncertainty to be modeled within the developed Bayesian inference scheme. Simulated experimentation, statistically equivalent to specified real tests, is also constructed to generate the data necessary to build the appropriate Bayesian network, which is then infused with actual experimental information for the purpose of explaining the uncertainty embedded in the response predictions and quantifying their inherent accuracy.
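The core mechanism, test data entering the network and shrinking the residual uncertainty in a model parameter, can be illustrated with a single-node conjugate Gaussian update. All numbers below are illustrative, not from the dissertation.

```python
# Toy illustration: each new test observation updates a parameter's
# distribution via the conjugate normal-normal rule, reducing its variance.
import numpy as np

rng = np.random.default_rng(0)
mu, var = 1.0, 0.25          # prior on a modal stress scale factor (assumed)
obs_var = 0.1                # measurement-noise variance (assumed)
truth = 1.2

for k in range(1, 6):
    y = truth + np.sqrt(obs_var) * rng.standard_normal()
    var_new = 1.0 / (1.0 / var + 1.0 / obs_var)   # posterior variance
    mu = var_new * (mu / var + y / obs_var)       # posterior mean
    var = var_new
    print(f"after test {k}: mean={mu:.3f}, std={np.sqrt(var):.3f}")
```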
