151

HYBRID METHOD BASED ON KALMAN FILTER AND DEEP GENERATIVE MODELS FOR HISTORY MATCHING AND UNCERTAINTY QUANTIFICATION OF GEOLOGICAL FACIES MODELS

SMITH WASHINGTON ARAUCO CANCHUMUNI 25 March 2019
Kalman filter-based methods have had remarkable success in the oil industry in recent years, especially in solving real-life history matching problems. However, as the formulation of these methods is based on assumptions of Gaussianity and linearity, their performance is severely degraded when the a priori geology is described in terms of complex distributions (e.g., facies models). The current trend in solutions to the history matching problem is to take into account more realistic reservoir models with complex geology. Geological facies modeling thus plays an important role in reservoir characterization, as a way of reproducing important patterns of heterogeneity and facilitating the modeling of the petrophysical properties of the reservoir rocks. This thesis introduces a new methodology for the history matching of complex geological models. The methodology integrates Kalman filter-based methods, in particular the method known in the literature as the Ensemble Smoother with Multiple Data Assimilation (ES-MDA), with a parameterization of the geological facies through deep learning techniques based on autoencoder architectures. An autoencoder consists of two parts: the encoder (recognition model) and the decoder (generative model). The procedure begins with the training of a set of facies realizations via deep generative models, through which the main characteristics of the geological facies images are identified, allowing new realizations with the same characteristics as the training set to be created from a low-dimensional parameterization of the facies models at the output of the encoder. This parameterization is regularized at the encoder to provide a Gaussian distribution at the output, which is then used to update the facies models according to the observed reservoir data through the ES-MDA method. Finally, the updated models are reconstructed through the decoder, with the objective of obtaining final models with characteristics similar to those of the training set. The results, in three case studies with two and three facies, show that the deep-learning-based parameterization can reconstruct the facies models with an error below 0.3 percent. The proposed methodology generates history-matched geological models that preserve the a priori geological description of the reservoir (facies with curvilinear channels) while remaining consistent with the observed reservoir data.
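The ES-MDA update at the heart of this methodology is compact enough to sketch. Below is a minimal numpy version, assuming the ensemble members are the autoencoder's latent Gaussian codes and that `forward` composes a (hypothetical) decoder with a reservoir simulator; both names are illustrative stand-ins, not the author's code.

```python
import numpy as np

def es_mda(m_ens, forward, d_obs, C_D, n_assim=4, rng=None):
    """ES-MDA sketch: m_ens is (n_params, n_ens); forward maps one
    parameter vector to predicted data (n_data,); C_D is the diagonal
    of the measurement-error covariance, shape (n_data,)."""
    rng = np.random.default_rng() if rng is None else rng
    alphas = [float(n_assim)] * n_assim      # sum of 1/alpha_i must equal 1
    for alpha in alphas:
        d_ens = np.column_stack([forward(m) for m in m_ens.T])
        n_ens = m_ens.shape[1]
        dm = m_ens - m_ens.mean(axis=1, keepdims=True)
        dd = d_ens - d_ens.mean(axis=1, keepdims=True)
        C_md = dm @ dd.T / (n_ens - 1)       # cross-covariance params/data
        C_dd = dd @ dd.T / (n_ens - 1)       # data auto-covariance
        # Observations perturbed with noise inflated by alpha, per ES-MDA
        d_pert = d_obs[:, None] + np.sqrt(alpha * C_D)[:, None] * \
                 rng.standard_normal((d_obs.size, n_ens))
        K = C_md @ np.linalg.inv(C_dd + alpha * np.diag(C_D))
        m_ens = m_ens + K @ (d_pert - d_ens)
    return m_ens
```

Because the encoder regularizes the latent codes toward a Gaussian distribution, this linear-Gaussian update is applied in a space where its assumptions approximately hold, which is the point of the hybrid method.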
152

Advanced polyhedral discretization methods for poromechanical modelling

Botti, Michele 27 November 2018
In this manuscript we focus on novel discretization schemes for solving the coupled equations of poroelasticity, and we present analytical and numerical results for poromechanics problems relevant to geoscience applications. We propose to solve these problems using Hybrid High-Order (HHO) methods, a new class of nonconforming, arbitrary-order discretization methods supporting general polyhedral meshes. This Ph.D. thesis was jointly funded by the Bureau de recherches géologiques et minières (BRGM) and LabEx NUMEV. The coupling between subsurface flow and geomechanical deformation is a crucial research topic for both cofunding institutions.
153

Forward physics-based analysis of "source-to-site" seismic scenarios for strong ground motion prediction and seismic vulnerability assessment of critical structures

Gatti, Filippo 25 September 2017
The ambition of this work is the prediction of a synthetic yet realistic broadband incident wave field, induced by strong ground motion earthquakes, at sites of strategic importance such as nuclear power plants. To this end, a multi-tool platform is developed and exploited to simulate the different aspects of the complex, multi-scale phenomenon an earthquake embodies. This computational framework copes with the manifold nature of an earthquake through a holistic local-to-regional approach. A complex case study is chosen to this end: the MW 6.6 Niigata-ken Chūetsu-oki earthquake, which damaged the Kashiwazaki-Kariwa nuclear power plant. The observed non-linear site effects are first investigated and characterized. Then the 3D source-to-site model is constructed and employed to provide reliable input ground motion over a frequency band of 0-7 Hz. The effect of the folded geological structure underneath the site is quantified by simulating two aftershocks of moderate intensity and by estimating the spatial variability of the response spectra at different locations within the nuclear site. The numerical outcome stresses the need for a more detailed description of the incident wave field used as an input parameter in the antiseismic structural design of nuclear reactors and facilities. Finally, the frequency band of the time histories obtained from the numerical simulations is enlarged by exploiting the stochastic prediction of short-period response ordinates provided by Artificial Neural Networks.
154

Uncertainty quantification in the simulation of road traffic and associated atmospheric emissions in a metropolitan area

Chen, Ruiwei 25 May 2018
This work focuses on uncertainty quantification in the modeling of road traffic emissions in a metropolitan area. The first step is to estimate the time-dependent traffic flow at street resolution for a full agglomeration area, using a dynamic traffic assignment (DTA) model. A metamodel is then built for the DTA model set up for the agglomeration, in order to reduce the computational cost of the DTA simulation. The road traffic emissions of atmospheric pollutants are then estimated at street resolution, based on a modeling chain that couples the DTA metamodel with an emission factor model; the chain is applied to the Clermont-Ferrand (France) agglomeration. This modeling chain is used to conduct a global sensitivity analysis to identify the most influential inputs for the computed traffic flows, speeds and emissions. Finally, uncertainty quantification is carried out through ensemble simulations using a Monte Carlo approach. The ensemble is evaluated against observed traffic flows in order to check and improve its reliability, so that the uncertainties are better sampled.
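A minimal sketch of the final Monte Carlo step follows, with toy stand-ins for the DTA metamodel and the emission factor model; the network, link lengths and factor values are invented for illustration, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 1000

def traffic_metamodel(demand_scale):
    # toy stand-in for the DTA metamodel: link flows (veh/h) on 3 links
    return np.array([800.0, 1200.0, 400.0]) * demand_scale

def total_emissions(flows, ef):
    # toy emission factor model: ef in g/veh/km, link lengths in km
    lengths = np.array([1.5, 2.0, 0.8])
    return float(np.sum(flows * lengths * ef))

# Uncertain inputs sampled Monte Carlo style: demand scaling, NOx factor
demand = rng.lognormal(mean=0.0, sigma=0.1, size=n_samples)
ef_nox = rng.normal(loc=0.4, scale=0.05, size=n_samples)

totals = np.array([total_emissions(traffic_metamodel(d), e)
                   for d, e in zip(demand, ef_nox)])
print(f"mean = {totals.mean():.0f} g/h, "
      f"95% interval = ({np.percentile(totals, 2.5):.0f}, "
      f"{np.percentile(totals, 97.5):.0f})")
```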
155

Uncertainty Quantification for low-frequency Maxwell equations with stochastic conductivity models

Kamilis, Dimitrios January 2018
Uncertainty Quantification (UQ) has been an active area of research in recent years, with a wide range of applications in data and imaging sciences. In many problems, the source of uncertainty stems from an unknown parameter in the model. In physical and engineering systems, for example, the parameters of the partial differential equation (PDE) that models the observed data may be unknown or incompletely specified. In such cases, one may use a probabilistic description based on prior information and formulate a forward UQ problem of characterising the uncertainty in the PDE solution and observations in response to that in the parameters. Conversely, inverse UQ encompasses the statistical estimation of the unknown parameters from the available observations, which can be cast as a Bayesian inverse problem. The contributions of the thesis focus on examining the aforementioned forward and inverse UQ problems for the low-frequency, time-harmonic Maxwell equations, where the model uncertainty emanates from the lack of knowledge of the material conductivity parameter. The motivation comes from the Controlled-Source Electromagnetic Method (CSEM), which aims to detect and image hydrocarbon reservoirs by using electromagnetic (EM) field measurements to obtain information about the conductivity profile of the sub-seabed. Traditionally, algorithms for deterministic models have been employed to solve the inverse problem in CSEM by optimisation and regularisation methods, which, aside from the image reconstruction, provide no quantitative information on the credibility of its features. This work instead employs stochastic models where the conductivity is represented as a lognormal random field, with the objective of providing a more informative characterisation of the model observables and the unknown parameters. The variational formulation of these stochastic models is analysed and proved to be well-posed under suitable assumptions. For computational purposes the stochastic formulation is recast as a deterministic, parametric problem with distributed uncertainty, which leads to an infinite-dimensional integration problem with respect to the prior and posterior measure. One of the main challenges is thus the approximation of these integrals, the standard choice being some variant of the Monte Carlo (MC) method. However, such methods typically fail to take advantage of the intrinsic properties of the model and suffer from unsatisfactory convergence rates. Based on recently developed theory on high-dimensional approximation, this thesis advocates the use of Sparse Quadrature (SQ) to tackle the integration problem. For the models considered here and under certain assumptions, we prove that for forward UQ, Sparse Quadrature can attain dimension-independent convergence rates that outperform MC. Typical CSEM models are large-scale, and thus additional effort is made in this work to reduce the cost of obtaining forward solutions for each sampling parameter by utilising the weighted Reduced Basis (RB) method and the Empirical Interpolation Method (EIM). The proposed variant of a combined SQ-EIM-RB algorithm is based on an adaptive selection of training sets and a primal-dual, goal-oriented formulation for the EIM-RB approximation. Numerical examples show that the suggested computational framework can alleviate the computational costs associated with forward UQ for the pertinent large-scale models, thus providing a viable methodology for practical applications.
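To make the lognormal-random-field parameterization concrete, here is an illustrative sketch: the conductivity is the exponential of a truncated spectral expansion of a Gaussian field, so each realization is indexed by a finite Gaussian vector `xi` — exactly the kind of parametric, distributed uncertainty over which sparse quadrature integrates. The covariance model and mode amplitudes below are invented for illustration, not the thesis's.

```python
import numpy as np

def lognormal_conductivity(x, n_modes=20, corr_len=0.3, sigma=1.0, rng=None):
    """One realization of a 1-D lognormal field exp(g(x)), where g is a
    zero-mean Gaussian field built from a truncated sine expansion whose
    mode amplitudes decay with a correlation length (illustrative choice)."""
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.standard_normal(n_modes)        # finite Gaussian parameter vector
    g = np.zeros_like(x)
    for k in range(1, n_modes + 1):
        amp = sigma * np.sqrt(2.0 / (1.0 + (np.pi * k * corr_len) ** 2))
        g += amp * xi[k - 1] * np.sin(np.pi * k * x)
    return np.exp(g)

x = np.linspace(0.0, 1.0, 200)
sample = lognormal_conductivity(x)   # strictly positive, as a conductivity must be
```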
156

Quantified PIRT and uncertainty quantification for computer code validation

Luo, Hu 05 December 2013
This study investigates and proposes a systematic method of uncertainty quantification for computer code validation. Uncertainty quantification has gained increasing attention in recent years. The U.S. Nuclear Regulatory Commission (NRC) requires the use of realistic best-estimate (BE) computer codes following the rigorous Code Scaling, Applicability and Uncertainty (CSAU) methodology. In CSAU, the Phenomena Identification and Ranking Table (PIRT) was developed to identify important code uncertainty contributors. To support and examine the traditional PIRT with quantified judgments, this study proposes a novel approach, the Quantified PIRT (QPIRT), to identify important code models and parameters for uncertainty quantification. Dimensional analysis of the code field equations, using code simulation results to generate dimensionless (Π) groups, serves as the foundation for the QPIRT. Uncertainty quantification using the DAKOTA code, based on a sampling approach, is proposed in this study. Nonparametric statistical theory determines the fixed number of code runs needed to assure 95 percent probability at 95 percent confidence in the code uncertainty intervals. / Graduation date: 2013 / Access restricted to the OSU Community, at author's request, from Dec. 5, 2012 - Dec. 5, 2013
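The fixed number of code runs for such 95/95 statements comes from Wilks' nonparametric order-statistics result. A sketch of the first-order, one-sided version follows (the thesis may use a different order or a two-sided variant):

```python
import math

def wilks_runs(prob=0.95, conf=0.95):
    """Smallest n such that the maximum of n i.i.d. code runs bounds the
    `prob` quantile with confidence `conf`, i.e. 1 - prob**n >= conf."""
    return math.ceil(math.log(1.0 - conf) / math.log(prob))

print(wilks_runs())   # 59 -- the classic one-sided, first-order 95/95 count
```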
157

A Hierarchical History Matching Method and its Applications

Yin, Jichao December 2011
Modern reservoir management typically involves simulations of geological models to predict future recovery estimates, providing the economic assessment of different field development strategies. Integrating reservoir data is a vital step in developing reliable reservoir performance models. Currently, the most effective strategies for traditional manual history matching follow a structured approach, with a sequence of adjustments from global to regional parameters, followed by local changes in model properties. In contrast, many recent automatic history matching methods utilize parameter sensitivities or gradients to directly update the fine-scale reservoir properties, often ignoring geological consistency. There is therefore a need to combine elements of all of these scales in a seamless manner. We present a hierarchical streamline-assisted history matching framework with global-to-local updates. A probabilistic approach, consisting of design of experiments, response surface methodology and a genetic algorithm, is used to understand the uncertainty in the large-scale static and dynamic parameters. This global update step is followed by a streamline-based model calibration of high-resolution reservoir heterogeneity; this local update step assimilates dynamic production data. We apply the genetic global calibration to unconventional shale gas reservoirs; specifically, we include the stimulated reservoir volume (SRV) as a constraint term in the data integration to improve history matching and reduce prediction uncertainty. We introduce a novel approach for efficiently computing well drainage volumes for shale gas wells with multistage fractures and fracture clusters, and we filter stochastic shale gas reservoir models by comparing the computed drainage volume with the measured SRV within specified confidence limits. Finally, we demonstrate the value of integrating downhole temperature measurements as a coarse-scale constraint during streamline-based history matching of dynamic production data. We first derive coarse-scale permeability trends in the reservoir from temperature data. The coarse-scale information is then downscaled into fine-scale permeability by sequential Gaussian simulation with block kriging, and updated by local-scale streamline-based history matching. The power and utility of our approaches are demonstrated using both synthetic and field examples.
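A toy sketch of the global step — design of experiments plus a quadratic response surface over two large-scale parameters — is shown below. The misfit function and parameter names are invented, and a simple grid search over the surrogate stands in for the genetic algorithm.

```python
import numpy as np
from itertools import product

# Toy misfit of two global parameters (e.g., a regional permeability
# multiplier and an aquifer strength); in practice each evaluation
# would be a full reservoir simulation.
def misfit(kx, aq):
    return (kx - 1.3) ** 2 + 0.5 * (aq - 0.7) ** 2 + 0.1 * kx * aq

# Full-factorial design of experiments on a 5x5 grid
kx_lvls = np.linspace(0.5, 2.0, 5)
aq_lvls = np.linspace(0.0, 1.5, 5)
X = np.array([[k, a] for k, a in product(kx_lvls, aq_lvls)])
y = np.array([misfit(k, a) for k, a in X])

# Quadratic response surface with basis 1, k, a, k^2, a^2, k*a
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Evaluate the surrogate on a fine grid in lieu of a genetic algorithm
kk, aa = np.meshgrid(np.linspace(0.5, 2.0, 200), np.linspace(0.0, 1.5, 200))
zz = (coef[0] + coef[1] * kk + coef[2] * aa + coef[3] * kk ** 2
      + coef[4] * aa ** 2 + coef[5] * kk * aa)
i = np.unravel_index(zz.argmin(), zz.shape)
print(f"surrogate minimum near kx={kk[i]:.2f}, aq={aa[i]:.2f}")
```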
158

Reliability and uncertainty assessment for the numerical simulation of turbulence: application to hydraulic machines

Brugière, Olivier 14 January 2015
The reliable numerical simulation of hydraulic turbine performance requires: i) including in the conventional RANS (Reynolds-Averaged Navier-Stokes) computations the effect of the uncertainties that exist in practice on the inflow conditions; ii) relying on an LES (Large Eddy Simulation) strategy to improve the description of turbulence effects when discrepancies between RANS computations and reference experiments persist even after uncertainties are taken into account. The present work applies a non-intrusive uncertainty quantification strategy (NISP, for Non-Intrusive Spectral Projection) to two configurations of practical interest: a Francis turbine distributor with uncertain inlet flow rate and angle, and a draft tube of a bulb-type turbine with uncertain inflow conditions (velocity distributions, in particular close to the wall boundaries, and turbulent quantities). The NISP method is not only used to compute the mean value and variance of quantities of interest; it is also applied to perform an analysis of the variance and thereby identify the most influential uncertainties. The RANS simulations, verified through a grid-convergence approach, are such that the discrepancies between computation and experiment cannot be explained by taking the inflow uncertainties into account for most of the configurations under study. Therefore, LES simulations are also performed, and these simulations are verified using an original methodology for assessing the quality of the computational grids (since the grid-convergence concept is not relevant for LES). For most of the flows under study, combining an LES strategy with a UQ approach yields reliable numerical results. Taking inflow uncertainties into account also allows a robust optimization strategy to be proposed for the Francis turbine distributor under study.
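A minimal illustration of NISP for a single standard-normal input: project a quantity of interest onto probabilists' Hermite polynomials by Gauss-Hermite quadrature, then read the mean and variance off the polynomial-chaos coefficients. The exponential QoI is a stand-in for a CFD response, not anything from the thesis.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def nisp_pce(qoi, order=4, n_quad=12):
    """Project qoi(xi), xi ~ N(0,1), onto probabilists' Hermite
    polynomials He_k by Gauss-Hermite quadrature:
    c_k = E[qoi * He_k] / E[He_k^2], with E[He_k^2] = k!."""
    x, w = hermegauss(n_quad)
    w = w / w.sum()                    # normalize weights to a probability measure
    vals = qoi(x)
    coeffs = []
    for k in range(order + 1):
        basis = hermeval(x, [0.0] * k + [1.0])   # He_k at the quadrature nodes
        coeffs.append(np.sum(w * vals * basis) / math.factorial(k))
    return np.array(coeffs)

qoi = lambda xi: np.exp(0.3 * xi)      # toy stand-in for a turbine response
c = nisp_pce(qoi)
mean = c[0]                                       # PCE mean
var = sum(c[k] ** 2 * math.factorial(k) for k in range(1, len(c)))
print(mean, var)   # exact: exp(0.045) ~ 1.0460 and exp(0.09)*(exp(0.09)-1) ~ 0.1031
```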
159

Some new ideas on fractional factorial design and computer experiment

Su, Heng 08 June 2015 (has links)
This thesis consists of two parts. The first part is on fractional factorial design, and the second part is on computer experiments. The first part has two chapters. In the first chapter, we use the concept of conditional main effects (CMEs) and propose the CME analysis to resolve the problem of effect aliasing in two-level fractional factorial designs. In the second chapter, we study the conversion rates of a system of webpages with the proposed funnel testing method, using a directed graph to represent the system, a fractional factorial design to conduct the experiment, and a method to optimize the total conversion rate with respect to all the webpages in the system. The second part also has two chapters. In the third chapter, we use regression models to quantify the model-form uncertainties of the Perez model in building energy simulations. In the last chapter, we propose a new Gaussian process model that can jointly model both point and integral responses.
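For readers unfamiliar with the aliasing that CME analysis addresses, a 2^(4-1) design makes it concrete; the generator choice and toy responses below are illustrative.

```python
import numpy as np
from itertools import product

# 2^(4-1) design: full factorial in A, B, C plus the generator D = ABC,
# giving defining relation I = ABCD. The main effect of D is therefore
# aliased with the three-factor interaction ABC -- the kind of aliasing
# CME analysis is designed to untangle.
abc = np.array(list(product([-1, 1], repeat=3)))
design = np.column_stack([abc, abc[:, 0] * abc[:, 1] * abc[:, 2]])

rng = np.random.default_rng(0)
y = rng.normal(size=8)                 # toy responses for illustration
b_hi = design[:, 1] == 1
# A conditional main effect: effect of A restricted to B at its high level
cme_A_Bplus = (y[b_hi & (design[:, 0] == 1)].mean()
               - y[b_hi & (design[:, 0] == -1)].mean())
print(design, cme_A_Bplus)
```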
160

Coupled flow systems, adjoint techniques and uncertainty quantification

Garg, Vikram Vinod, 1985- 25 October 2012
Coupled systems are ubiquitous in modern engineering and science. Such systems can encompass fluid dynamics, structural mechanics, chemical species transport and electrostatic effects among other components, all of which can be coupled in many different ways. In addition, such models are usually multiscale, making their numerical simulation challenging and necessitating the use of adaptive modeling techniques. The multiscale, multiphysics models of electroosmotic flow (EOF) constitute a particularly challenging coupled flow system. A special feature of such models is that the coupling between the electric physics and the hydrodynamics is via the boundary. Numerical simulations of coupled systems are typically targeted towards specific Quantities of Interest (QoIs). Adjoint-based approaches offer the possibility of QoI-targeted adaptive mesh refinement and efficient parameter sensitivity analysis. The formulation of appropriate adjoint problems for EOF models is particularly challenging, due to the coupling of physics via the boundary as opposed to the interior of the domain. The well-posedness of the adjoint problem for such models is also non-trivial. One contribution of this dissertation is the derivation of an appropriate adjoint problem for slip EOF models, and the development of penalty-based, adjoint-consistent variational formulations of these models. We demonstrate the use of these formulations in the simulation of EOF flows in straight and T-shaped microchannels, in conjunction with goal-oriented mesh refinement and adjoint sensitivity analysis. Complex computational models may exhibit uncertain behavior for various reasons, ranging from uncertainty in experimentally measured model parameters to imperfections in device geometry. The last decade has seen a growing interest in the field of Uncertainty Quantification (UQ), which seeks to determine the effect of input uncertainties on the system QoIs. Monte Carlo methods remain a popular computational approach for UQ due to their ease of use and "embarrassingly parallel" nature. However, a major drawback of such methods is their slow convergence rate. The second contribution of this work is the introduction of a new Monte Carlo method which utilizes local sensitivity information to build accurate surrogate models. This new method, called the Local Sensitivity Derivative Enhanced Monte Carlo (LSDEMC) method, can converge at a faster rate than plain Monte Carlo, especially for problems with a low to moderate number of uncertain parameters. Adjoint-based sensitivity analysis methods enable the computation of sensitivity derivatives at virtually no extra cost after the forward solve. Thus the LSDEMC method, in conjunction with adjoint sensitivity derivative techniques, can offer a robust and efficient alternative for UQ of complex systems. The efficiency of Monte Carlo methods can be further enhanced by using stratified sampling schemes such as Latin Hypercube Sampling (LHS). However, the non-incremental nature of LHS has been identified as one of the main obstacles in its application to certain classes of complex physical systems. Current incremental LHS strategies restrict the user to at least doubling the size of an existing LHS set to retain the convergence properties of LHS. The third contribution of this research is the development of a new Hierarchical LHS algorithm that creates designs which can be used to perform LHS studies in a more flexible incremental setting, taking a step towards adaptive LHS methods.
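A plain LHS sketch clarifies why the design is non-incremental, which is what motivates the hierarchical variant; the doubling argument in the comments reflects the standard reasoning, not the thesis's algorithm.

```python
import numpy as np

def lhs(n, d, rng=None):
    """Plain Latin Hypercube Sample: n points in [0,1)^d with exactly
    one point in each of the n equal-width strata along every axis."""
    rng = np.random.default_rng() if rng is None else rng
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (strata + rng.random((n, d))) / n

X = lhs(8, 2)
# Appending a 9th point cannot preserve the 8-stratum structure, which is
# why LHS is non-incremental. Doubling to 16 strata does work: each old
# stratum splits in two, so the 8 old points still occupy distinct strata
# and 8 new points fill the empty ones -- the "at least doubling"
# restriction that a hierarchical LHS aims to relax.
```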
