151

BAYESIAN OPTIMAL DESIGN OF EXPERIMENTS FOR EXPENSIVE BLACK-BOX FUNCTIONS UNDER UNCERTAINTY

Piyush Pandita (6561242) 10 June 2019 (has links)
Researchers and scientists across various areas face the perennial challenge of selecting experimental conditions or inputs for computer simulations in order to achieve promising results. The aim of conducting these experiments could be to study the production of a material that has great applicability. One might also be interested in accurately modeling and analyzing a simulation of a physical process through a high-fidelity computer code. The presence of noise in the experimental observations or simulator outputs, called aleatory uncertainty, is usually accompanied by a limited amount of data due to budget constraints, which gives rise to what is known as epistemic uncertainty. This problem of designing experiments with a limited number of allowable experiments or simulations under aleatory and epistemic uncertainty needs to be treated in a Bayesian way. The aim of this thesis is to extend the state of the art in Bayesian optimal design of experiments, where one can optimize and infer statistics of the expensive experimental observation(s) or simulation output(s) under uncertainty.
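As a rough, hypothetical illustration of one iteration of such a design loop — a Gaussian-process surrogate of the expensive black-box plus the standard expected-improvement criterion for choosing the next experiment — consider the sketch below. The thesis develops richer, information-based criteria, so none of this is the thesis implementation; the objective `f` and all settings are stand-ins.

```python
# Minimal sketch: GP surrogate + expected improvement for sequential design.
import numpy as np
from scipy.stats import norm

def gp_posterior(X, y, Xs, ls=0.2, sf=1.0, noise=1e-4):
    """GP regression with an RBF kernel; returns posterior mean/std at Xs."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf**2 * np.exp(-0.5 * d2 / ls**2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(k(Xs, Xs).diagonal() - (v**2).sum(0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, y_best):
    z = (y_best - mu) / sd              # minimization convention
    return sd * (z * norm.cdf(z) + norm.pdf(z))

# Select the next (noisy, expensive) experiment from a candidate grid.
rng = np.random.default_rng(0)
f = lambda x: np.sin(5 * x[:, 0]) + 0.1 * rng.standard_normal(len(x))  # stand-in
X = rng.uniform(0, 1, (5, 1)); y = f(X)
Xs = np.linspace(0, 1, 200)[:, None]
mu, sd = gp_posterior(X, y, Xs)
x_next = Xs[np.argmax(expected_improvement(mu, sd, y.min()))]
print("next experiment at x =", x_next)
```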
152

Amortissement virtuel pour la conception vibroacoustique des lanceurs futurs / Virtual damping for the vibroacoustic design of future launchers

Krifa, Mohamed 19 May 2017 (has links)
Dans le dimensionnement des lanceurs spatiaux, la maîtrise de l'amortissement est une problématique majeure. Faute d'essais sur structure réelle, très coûteux avant la phase finale de qualification, la modélisation de l'amortissement peut conduire à un sur-dimensionnement de la structure, alors que le but recherché est de diminuer le coût du lancement d'une fusée tout en garantissant le confort vibratoire de la charge utile. Nos contributions sont les suivantes. Premièrement, une méthode de prédiction par le calcul des niveaux vibratoires dans les structures de lanceurs, utilisant une stratégie d'essais virtuels qui permet de prédire les amortissements en basses fréquences, est proposée. Cette méthode est basée sur l'utilisation de méta-modèles construits à partir de plans d'expériences numériques à l'aide de modèles détaillés des liaisons. Ces méta-modèles peuvent être obtenus grâce à des calculs spécifiques utilisant une résolution 3D par éléments finis avec prise en compte du contact. En utilisant ces méta-modèles, l'amortissement modal dans un cycle de vibration peut être calculé comme étant le ratio entre l'énergie dissipée et l'énergie de déformation. L'approche utilisée donne une approximation précise et peu coûteuse de la solution. Le calcul non-linéaire global, qui est inaccessible pour les structures complexes, est rendu accessible en utilisant l'approche virtuelle basée sur les abaques. Deuxièmement, une validation des essais virtuels sur la structure du lanceur Ariane 5 a été élaborée en tenant compte des liaisons boulonnées entre les étages afin d'illustrer l'approche proposée. Lorsque la matrice d'amortissement généralisé n'est pas diagonale (en raison de dissipations localisées), ces méthodes modales ne permettent pas de calculer ou d'estimer les termes d'amortissement généralisé extra-diagonaux. La problématique posée est alors la quantification de l'erreur commise lorsque l'on néglige les termes extra-diagonaux dans le calcul des niveaux vibratoires, avec un bon ratio précision / coût de calcul. Troisièmement, la validité de l'hypothèse de diagonalité de la matrice d'amortissement généralisée a été examinée et une méthode très peu coûteuse de quantification a posteriori de l'erreur d'estimation de l'amortissement modal par la méthode des perturbations a été proposée. Finalement, la dernière contribution de cette thèse est la proposition d'un outil d'aide à la décision qui permet de quantifier l'impact des méconnaissances sur l'amortissement dans les liaisons sur le comportement global des lanceurs, via l'utilisation de la méthode info-gap. / In the design of space launchers, controlling damping is a major issue. In the absence of very expensive tests on the real structure before the final qualification phase, damping modeling can lead to over-sizing of the structure, while the aim is to reduce the cost of launching a rocket while guaranteeing the vibratory comfort of the payload. Our contributions are the following. First, we propose a method for predicting, by computation, the vibration levels in launcher structures, using a virtual-testing strategy that predicts low-frequency damping. This method relies on meta-models built from numerical designs of experiments using detailed models of the joints; these meta-models can be obtained through dedicated 3D finite-element computations accounting for contact. Using these meta-models, the modal damping over a vibration cycle can be computed as the ratio between the dissipated energy and the strain energy. The approach yields an accurate and inexpensive approximation of the solution: the global nonlinear computation, out of reach for complex structures, becomes tractable through the meta-model-based virtual approach. Second, a validation of the virtual tests on the Ariane 5 launcher structure, accounting for the bolted joints between stages, illustrates the proposed approach. When the generalized damping matrix is not diagonal (because of localized dissipation), modal methods cannot compute or estimate the off-diagonal generalized damping terms; the question is then to quantify, at a good accuracy-to-cost ratio, the error made when these off-diagonal terms are neglected in the computation of vibration levels. Third, the validity of the diagonality assumption for the generalized damping matrix is examined, and a very inexpensive a posteriori method for quantifying the modal damping estimation error by a perturbation method is proposed. Finally, the last contribution of this thesis is a decision-support tool that quantifies the impact of the lack of knowledge of joint damping on the global behavior of launchers, via the info-gap method.
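For reference, the energy-ratio damping identification mentioned above is conventionally written as the equivalent viscous damping ratio per vibration cycle (a standard form; the exact normalization used in the thesis may differ):

```latex
\zeta_{\mathrm{eq}} \;=\; \frac{E_{\mathrm{d}}}{4\pi\,E_{\mathrm{s}}}
```

where E_d is the energy dissipated over one cycle (here through frictional contact in the bolted joints, evaluated on the detailed 3D finite-element models) and E_s is the peak strain energy of the mode.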
153

[en] HYBRID METHOD BASED ON KALMAN FILTER AND DEEP GENERATIVE MODELS FOR HISTORY MATCHING AND UNCERTAINTY QUANTIFICATION OF GEOLOGICAL FACIES MODELS / [pt] MÉTODO HÍBRIDO BASEADO EM FILTRO DE KALMAN E MODELOS GENERATIVOS DE APRENDIZAGEM PROFUNDA NO AJUSTE DE HISTÓRICO SOB INCERTEZAS PARA MODELOS DE FÁCIES GEOLÓGICAS

SMITH WASHINGTON ARAUCO CANCHUMUNI 25 March 2019 (has links)
[pt] Os métodos baseados no filtro de Kalman têm tido sucesso notável na indústria do petróleo nos últimos anos, especialmente, para resolver problemas reais de ajuste de histórico. No entanto, como a formulação desses métodos é baseada em hipóteses de gaussianidade e linearidade, seu desempenho é severamente degradado quando a geologia a priori é descrita em termos de distribuições complexas (e.g. modelos de fácies). A tendência atual em soluções para o problema de ajuste de histórico é levar em consideração modelos de reservatórios mais realistas com geologia complexa. Assim, a modelagem de fácies geológicas desempenha um papel importante na caracterização de reservatórios, como forma de reproduzir padrões importantes de heterogeneidade e facilitar a modelagem das propriedades petrofísicas das rochas do reservatório. Esta tese introduz uma nova metodologia para realizar o ajuste de histórico de modelos geológicos complexos. A metodologia consiste na integração de métodos baseados no filtro de Kalman em particular o método conhecido na literatura como Ensemble Smoother with Multiple Data Assimilation (ES-MDA), com uma parametrização das fácies geológicas por meio de técnicas baseadas em aprendizado profundo (Deep Learning) em arquiteturas do tipo autoencoder. Um autoencoder sempre consiste em duas partes, o codificador (modelo de reconhecimento) e o decodificador (modelo gerador). O procedimento começa com o treinamento de um conjunto de realizações de fácies por meio de algoritmos de aprendizado profundo, através do qual são identificadas as principais características das imagens de fácies geológicas, permitindo criar novas realizações com as mesmas características da base de treinamento com uma reduzida parametrização dos modelos de fácies na saída do codificador. Essa parametrização é regularizada no codificador para fornecer uma distribuição gaussiana na saída, a qual é utilizada para atualizar os modelos de fácies de acordo com os dados observados do reservatório, através do método ES-MDA. Ao final, os modelos atualizados são reconstruídos através do aprendizado profundo (decodificador), com o objetivo de obter modelos finais que apresentem características similares às da base de treinamento. Os resultados, em três casos de estudo com 2 e 3 fácies, mostram que a parametrização de modelos de fácies baseada no aprendizado profundo consegue reconstruir os modelos de fácies com um erro inferior a 0,3 por cento. A metodologia proposta gera modelos geológicos ajustados que conservam a descrição geológica a priori do reservatório (fácies com canais curvilíneos), além de ser consistente com o ajuste dos dados observados do reservatório. / [en] Kalman filter-based methods have had remarkable success in the oil industry in recent years, especially to solve several real-life history matching problems. However, as the formulation of these methods is based on the assumptions of gaussianity and linearity, their performance is severely degraded when the a priori geology is described in terms of complex distributions (e.g., facies models). The current trend in solutions for the history matching problem is to take into account more realistic reservoir models with complex geology. Thus, geological facies modeling plays an important role in the characterization of reservoirs, as a way of reproducing important patterns of heterogeneity and of facilitating the modeling of the petrophysical properties of the reservoir rocks.
This thesis introduces a new methodology to perform the history matching of complex geological models. The methodology consists of the integration of Kalman filter-based methods, particularly the method known in the literature as Ensemble Smoother with Multiple Data Assimilation (ES-MDA), with a parameterization of the geological facies through deep learning techniques in autoencoder-type architectures. An autoencoder always consists of two parts, the encoder (recognition model) and the decoder (generator model). The procedure begins with the training of a set of facies realizations via deep generative models, through which the main characteristics of geological facies images are identified, allowing for the creation of new realizations with the same characteristics as the training base, with a low-dimensional parameterization of the facies models at the output of the encoder. This parameterization is regularized at the encoder to provide a Gaussian distribution at the output, which is then used to update the facies models according to the observed data of the reservoir through the ES-MDA method. In the end, the updated models are reconstructed through deep learning (the decoder), with the objective of obtaining final models that present characteristics similar to those of the training base. The results, in three case studies with 2 and 3 facies, show that the parameterization of facies models based on deep learning can reconstruct facies models with an error lower than 0.3 percent. The proposed methodology generates history-matched geological models that preserve the a priori geological description of the reservoir (facies with curvilinear channels), besides being consistent with the fit to the observed reservoir data.
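As a rough illustration of the assimilation step described in both abstracts, the sketch below applies the ES-MDA update in the Gaussian latent space of the autoencoder; `decode` and `simulate` are hypothetical placeholders for the trained generator network and the reservoir simulator, not the thesis code.

```python
# Minimal sketch: ES-MDA update applied to a latent (encoded) ensemble.
import numpy as np

def es_mda(Z, d_obs, C_e, simulate, decode, alphas=(4.0, 4.0, 4.0, 4.0)):
    """Z: (n_latent, n_ens) Gaussian latent ensemble; d_obs: (n_d,) data.
    The inflation factors must satisfy sum(1/alpha) = 1."""
    rng = np.random.default_rng(0)
    for a in alphas:
        # Forward-simulate each decoded facies model.
        D = np.column_stack([simulate(decode(z)) for z in Z.T])   # (n_d, n_e)
        n_e = Z.shape[1]
        dZ = Z - Z.mean(1, keepdims=True)
        dD = D - D.mean(1, keepdims=True)
        C_zd = dZ @ dD.T / (n_e - 1)          # latent/data cross-covariance
        C_dd = dD @ dD.T / (n_e - 1)          # data auto-covariance
        K = C_zd @ np.linalg.inv(C_dd + a * C_e)            # Kalman-type gain
        noise = rng.multivariate_normal(np.zeros(len(d_obs)), a * C_e, n_e).T
        Z = Z + K @ (d_obs[:, None] + noise - D)            # perturbed-obs update
    return Z   # decode(Z[:, j]) yields the updated facies models
```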
154

Advanced polyhedral discretization methods for poromechanical modelling / Méthodes de discrétisation avancées pour la modélisation hydro-poromécanique

Botti, Michele 27 November 2018 (has links)
Dans cette thèse, on s’intéresse à de nouveaux schémas de discrétisation afin de résoudre les équations couplées de la poroélasticité et nous présentons des résultats analytiques et numériques concernant des problèmes issus de la poromécanique. Nous proposons de résoudre ces problèmes en utilisant les méthodes Hybrid High-Order (HHO), une nouvelle classe de méthodes de discrétisation polyédriques d’ordre arbitraire. Cette thèse a été conjointement financée par le Bureau de Recherches Géologiques et Minières (BRGM) et le LabEx NUMEV. Le couplage entre l’écoulement souterrain et la déformation géomécanique est un sujet de recherche crucial pour les deux institutions de cofinancement. / In this manuscript we focus on novel discretization schemes for solving the coupled equations of poroelasticity, and we present analytical and numerical results for poromechanics problems relevant to geoscience applications. We propose to solve these problems using Hybrid High-Order (HHO) methods, a new class of nonconforming high-order methods supporting general polyhedral meshes. This Ph.D. thesis was jointly funded by the Bureau de recherches géologiques et minières (BRGM) and the LabEx NUMEV. The coupling between subsurface flow and geomechanical deformation is a crucial research topic for both cofunding institutions.
155

Analyse physics-based de scénarios sismiques «de la faille au site» : prédiction de mouvement sismique fort pour l’étude de vulnérabilité sismique de structures critiques. / Forward physics-based analysis of "source-to-site" seismic scenarios for strong ground motion prediction and seismic vulnerability assessment of critical structures

Gatti, Filippo 25 September 2017 (has links)
L’ambition de ce travail est la prédiction d’un champ d’onde incident réaliste, induit par des mouvements forts du sol, aux sites d’importance stratégique, comme des centrales nucléaires. À cette fin, une plateforme multi-outil est développée et exploitée pour simuler les différents aspects d’un phénomène complexe et multi-échelle comme un tremblement de terre. Ce cadre computationnel fait face à la nature diversifiée d’un tremblement de terre par une approche holistique local-régional. Un cas d’étude complexe est choisi : le tremblement de terre Mw 6.6 de Niigata-Ken Chūetsu-Oki, qui a endommagé la centrale nucléaire de Kashiwazaki-Kariwa. Les effets de site non-linéaires observés sont d’abord examinés et caractérisés. Dans la suite, le modèle 3D « de la faille au site » est construit et employé pour prédire le mouvement sismique dans une bande de fréquence de 0-7 Hz. L’effet de la structure géologique plissée au-dessous du site est quantifié en simulant deux chocs d’intensité modérée et en évaluant la variabilité spatiale des spectres de réponse aux différents endroits dans le site nucléaire. Le résultat numérique souligne le besoin d’une description plus détaillée du champ d’onde incident utilisé comme paramètre d’entrée dans la conception structurelle antisismique des réacteurs nucléaires et des installations. Finalement, la bande de fréquences des signaux synthétiques obtenus comme résultat des simulations numériques est élargie en exploitant la prédiction stochastique des ordonnées spectrales à courte période fournies par des réseaux de neurones artificiels. / The ambition of this work is the prediction of a synthetic yet realistic broad-band incident wave-field, induced by strong ground motion earthquakes at sites of strategic importance, such as nuclear power plants. To this end, a multi-tool platform is developed and exploited to simulate the different aspects of the complex and multi-scale phenomenon an earthquake embodies. This multi-scale computational framework copes with the manifold nature of an earthquake by a holistic local-to-regional approach. A complex case study is chosen to this end: the Mw 6.6 Niigata-Ken Chūetsu-Oki earthquake, which damaged the Kashiwazaki-Kariwa nuclear power plant. The observed non-linear site effects are first investigated and characterized. In the following, the 3D source-to-site model is constructed and employed to provide reliable input ground motion for a frequency band of 0-7 Hz. The effect of the folded geological structure underneath the site is quantified by simulating two aftershocks of moderate intensity and by estimating the spatial variability of the response spectra at different locations within the nuclear site. The numerical outcome stresses the need for a more detailed description of the incident wave-field used as input parameter in the antiseismic structural design of nuclear reactors and facilities. Finally, the frequency band of the time-histories obtained as outcome of the numerical simulations is enlarged by exploiting the stochastic prediction of short-period response ordinates provided by Artificial Neural Networks.
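For readers unfamiliar with the response-spectrum quantity used in the site-variability comparison, the following sketch computes a 5%-damped pseudo-acceleration spectrum from a ground-acceleration record with the average-acceleration Newmark scheme; the record and all names are stand-ins, not the thesis's data.

```python
# Minimal sketch: pseudo-acceleration response spectrum of a unit-mass SDOF.
import numpy as np

def response_spectrum(ag, dt, periods, zeta=0.05):
    """Sa(T) = w^2 * max|u(t)| for u'' + c u' + k u = -ag (unit mass)."""
    sa = np.empty(len(periods))
    for i, T in enumerate(periods):
        w = 2.0 * np.pi / T
        k, c = w * w, 2.0 * zeta * w
        khat = k + 2.0 * c / dt + 4.0 / dt**2     # Newmark, gamma=1/2, beta=1/4
        u = v = 0.0
        a = -ag[0]                                # initial acceleration
        umax = 0.0
        for p in -ag[1:]:
            phat = (p + (4.0 / dt**2) * u + (4.0 / dt) * v + a
                    + c * ((2.0 / dt) * u + v))
            un = phat / khat
            vn = 2.0 * (un - u) / dt - v
            a = 4.0 * (un - u) / dt**2 - 4.0 * v / dt - a
            u, v = un, vn
            umax = max(umax, abs(u))
        sa[i] = w * w * umax
    return sa

# Toy record; in the thesis ag would be a synthetic source-to-site accelerogram.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
ag = 0.2 * 9.81 * np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.3 * t)
periods = np.linspace(0.05, 2.0, 40)
print("peak Sa:", response_spectrum(ag, dt, periods).max(), "m/s^2")
```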
156

Uncertainty quantification in the simulation of road traffic and associated atmospheric emissions in a metropolitan area / Quantification d'incertitude en simulation du trafic routier et de ses émissions atmosphériques à l'échelle métropolitaine

Chen, Ruiwei 25 May 2018 (has links)
Ce travail porte sur la quantification d'incertitude dans la modélisation des émissions de polluants atmosphériques dues au trafic routier d'une aire urbaine. Une chaîne de modélisation des émissions de polluants atmosphériques est construite, en couplant un modèle d’affectation dynamique du trafic (ADT) avec un modèle de facteurs d’émission. Cette chaîne est appliquée à l’agglomération de Clermont-Ferrand (France) à la résolution de la rue. Un métamodèle de l’ADT est construit pour réduire le temps d’évaluation du modèle. Une analyse de sensibilité globale est ensuite effectuée sur cette chaîne, afin d’identifier les entrées les plus influentes sur les sorties. Enfin, pour la quantification d’incertitude, deux ensembles sont construits avec l’approche de Monte Carlo, l’un pour l’ADT et l’autre pour les émissions. L’ensemble d’ADT est évalué et amélioré grâce à la comparaison avec les débits de trafic observés, afin de mieux échantillonner les incertitudes. / This work focuses on uncertainty quantification in the modeling of road traffic emissions in a metropolitan area. The first step is to estimate the time-dependent traffic flow at street resolution for a full agglomeration area, using a dynamic traffic assignment (DTA) model. A metamodel is then built for the DTA model set up for the agglomeration, in order to reduce the computational cost of the DTA simulation. The road traffic emissions of atmospheric pollutants are then estimated at street resolution, based on a modeling chain that couples the DTA metamodel with an emission factor model. This modeling chain is then used to conduct a global sensitivity analysis to identify the most influential inputs in the computed traffic flows, speeds and emissions. At last, the uncertainty quantification is carried out through ensemble simulations using a Monte Carlo approach. The ensemble is evaluated against observations in order to check and optimize its reliability.
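A minimal sketch of such a Monte Carlo ensemble follows, assuming a cheap DTA metamodel and a speed-dependent emission-factor function; both stand-ins (`traffic_metamodel`, `emission_factor`) are hypothetical and only illustrate how uncertain inputs propagate through the chain.

```python
# Minimal sketch: Monte Carlo propagation through a traffic-emission chain.
import numpy as np

rng = np.random.default_rng(42)
n_members, n_links = 1000, 500

# Sample uncertain inputs: a demand scaling factor and per-link speed errors.
demand = rng.lognormal(mean=0.0, sigma=0.2, size=n_members)
speed_err = rng.normal(0.0, 5.0, size=(n_members, n_links))

def traffic_metamodel(scale, dv):
    """Stand-in metamodel: link flows (veh/h) and speeds (km/h)."""
    flow = 800.0 * scale * np.ones(n_links)
    speed = np.clip(50.0 + dv, 10.0, 90.0)
    return flow, speed

def emission_factor(speed):
    """Stand-in speed-dependent NOx factor (g/veh/km), COPERT-like shape."""
    return 0.5 + 30.0 / speed

ens = np.empty((n_members, n_links))
for i in range(n_members):
    q, v = traffic_metamodel(demand[i], speed_err[i])
    ens[i] = q * emission_factor(v) * 1.0       # 1 km links -> g/h per link

total = ens.sum(1)
print("network NOx: mean %.0f g/h, 95%% interval half-width %.0f g/h"
      % (total.mean(), 1.96 * total.std()))
```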
157

Uncertainty Quantification for low-frequency Maxwell equations with stochastic conductivity models

Kamilis, Dimitrios January 2018 (has links)
Uncertainty Quantification (UQ) has been an active area of research in recent years with a wide range of applications in data and imaging sciences. In many problems, the source of uncertainty stems from an unknown parameter in the model. In physical and engineering systems for example, the parameters of the partial differential equation (PDE) that model the observed data may be unknown or incompletely specified. In such cases, one may use a probabilistic description based on prior information and formulate a forward UQ problem of characterising the uncertainty in the PDE solution and observations in response to that in the parameters. Conversely, inverse UQ encompasses the statistical estimation of the unknown parameters from the available observations, which can be cast as a Bayesian inverse problem. The contributions of the thesis focus on examining the aforementioned forward and inverse UQ problems for the low-frequency, time-harmonic Maxwell equations, where the model uncertainty emanates from the lack of knowledge of the material conductivity parameter. The motivation comes from the Controlled-Source Electromagnetic Method (CSEM) that aims to detect and image hydrocarbon reservoirs by using electromagnetic field (EM) measurements to obtain information about the conductivity profile of the sub-seabed. Traditionally, algorithms for deterministic models have been employed to solve the inverse problem in CSEM by optimisation and regularisation methods, which aside from the image reconstruction provide no quantitative information on the credibility of its features. This work employs instead stochastic models where the conductivity is represented as a lognormal random field, with the objective of providing a more informative characterisation of the model observables and the unknown parameters. The variational formulation of these stochastic models is analysed and proved to be well-posed under suitable assumptions. For computational purposes the stochastic formulation is recast as a deterministic, parametric problem with distributed uncertainty, which leads to an infinite-dimensional integration problem with respect to the prior and posterior measure. One of the main challenges is thus the approximation of these integrals, with the standard choice being some variant of the Monte-Carlo (MC) method. However, such methods typically fail to take advantage of the intrinsic properties of the model and suffer from unsatisfactory convergence rates. Based on recently developed theory on high-dimensional approximation, this thesis advocates the use of Sparse Quadrature (SQ) to tackle the integration problem. For the models considered here and under certain assumptions, we prove that for forward UQ, Sparse Quadrature can attain dimension-independent convergence rates that out-perform MC. Typical CSEM models are large-scale and thus additional effort is made in this work to reduce the cost of obtaining forward solutions for each sampling parameter by utilising the weighted Reduced Basis method (RB) and the Empirical Interpolation Method (EIM). The proposed variant of a combined SQ-EIM-RB algorithm is based on an adaptive selection of training sets and a primal-dual, goal-oriented formulation for the EIM-RB approximation. Numerical examples show that the suggested computational framework can alleviate the computational costs associated with forward UQ for the pertinent large-scale models, thus providing a viable methodology for practical applications.
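A one-dimensional toy comparison conveys why quadrature can outperform Monte Carlo for the smooth, lognormal-parametrised integrands considered here; the thesis uses dimension-adaptive sparse quadrature over many parameters, so this sketch only illustrates the principle, with `g` a hypothetical stand-in for a PDE-based quantity of interest.

```python
# Minimal sketch: Gauss-Hermite quadrature vs Monte Carlo for E[g(exp(s*Y))].
import numpy as np

g = lambda sigma: 1.0 / (1.0 + sigma)       # smooth stand-in for a PDE output
s = 0.5                                     # lognormal parameter: sigma = exp(s*Y)

def gh_expect(n):
    """E[g(exp(s*Y))], Y ~ N(0,1), via n-node Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(n)
    return np.sum(w * g(np.exp(s * np.sqrt(2.0) * x))) / np.sqrt(np.pi)

ref = gh_expect(80)                         # well-converged reference value
rng = np.random.default_rng(1)
mc = g(np.exp(s * rng.standard_normal(10_000))).mean()
print(f"10-node quadrature error {abs(gh_expect(10) - ref):.1e}")
print(f"10^4-sample MC error     {abs(mc - ref):.1e}")
```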
158

Quantified PIRT and uncertainty quantification for computer code validation

Luo, Hu 05 December 2013 (has links)
This study is intended to investigate and propose a systematic method for uncertainty quantification for computer code validation applications. Uncertainty quantification has gained increasing attention in recent years. The U.S. Nuclear Regulatory Commission (NRC) requires the use of realistic best-estimate (BE) computer codes to follow the rigorous Code Scaling, Applicability and Uncertainty (CSAU) methodology. In CSAU, the Phenomena Identification and Ranking Table (PIRT) was developed to identify important code uncertainty contributors. To support and examine the traditional PIRT with quantified judgments, this study proposes a novel approach, the Quantified PIRT (QPIRT), to identify important code models and parameters for uncertainty quantification. Dimensionless analysis of the code field equations to generate dimensionless groups (π groups) using code simulation results serves as the foundation for the QPIRT. Uncertainty quantification using the DAKOTA code is proposed in this study based on a sampling approach. Nonparametric statistical theory identifies the fixed number of code runs needed to assure 95 percent probability and 95 percent confidence in the code uncertainty intervals. / Graduation date: 2013 / Access restricted to the OSU Community, at author's request, from Dec. 5, 2012 - Dec. 5, 2013
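The nonparametric rule alluded to is Wilks' formula: for a one-sided, first-order statistic, the smallest number of runs n such that the largest observed output bounds the 95th percentile with 95 percent confidence satisfies 1 - 0.95**n >= 0.95, giving the familiar 59 runs. A minimal sketch of the computation:

```python
# Minimal sketch: Wilks' run-count rule for one-sided tolerance limits.
import math

def wilks_runs(prob=0.95, conf=0.95):
    """Smallest n with 1 - prob**n >= conf (one-sided, first order)."""
    return math.ceil(math.log(1.0 - conf) / math.log(prob))

print(wilks_runs())            # 59 runs for the 95/95 criterion
print(wilks_runs(0.95, 0.99))  # 90 runs for 95/99
```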
159

A Hierarchical History Matching Method and its Applications

Yin, Jichao December 2011 (has links)
Modern reservoir management typically involves simulations of geological models to predict future recovery estimates, providing the economic assessment of different field development strategies. Integrating reservoir data is a vital step in developing reliable reservoir performance models. Currently, the most effective strategies for traditional manual history matching commonly follow a structured approach with a sequence of adjustments from global to regional parameters, followed by local changes in model properties. In contrast, many of the recent automatic history matching methods utilize parameter sensitivities or gradients to directly update the fine-scale reservoir properties, often ignoring geological consistency. There is therefore a need for combining elements of all of these scales in a seamless manner. We present a hierarchical streamline-assisted history matching method within a framework of global-to-local updates. A probabilistic approach, consisting of design of experiments, response surface methodology and the genetic algorithm, is used to understand the uncertainty in the large-scale static and dynamic parameters. This global update step is followed by a streamline-based model calibration for high-resolution reservoir heterogeneity. This local update step assimilates dynamic production data. We apply the genetic global calibration to an unconventional shale gas reservoir; specifically, we include the stimulated reservoir volume (SRV) as a constraint term in the data integration to improve history matching and reduce prediction uncertainty. We introduce a novel approach for efficiently computing well drainage volumes for shale gas wells with multistage fractures and fracture clusters, and we filter stochastic shale gas reservoir models by comparing the computed drainage volume with the measured SRV within specified confidence limits. Finally, we demonstrate the value of integrating downhole temperature measurements as a coarse-scale constraint during streamline-based history matching of dynamic production data. We first derive coarse-scale permeability trends in the reservoir from temperature data. The coarse information is then downscaled into fine-scale permeability by sequential Gaussian simulation with block kriging, and updated by local-scale streamline-based history matching. The power and utility of our approaches have been demonstrated using both synthetic and field examples.
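As a rough illustration of the global step, here is a minimal genetic-algorithm calibration over a handful of regional multipliers; `history_misfit` is a hypothetical stand-in for the simulator-based data misfit, and the thesis couples this with experimental design, response surfaces and the streamline-based local step not shown here.

```python
# Minimal sketch: genetic-algorithm calibration of regional multipliers.
import numpy as np

rng = np.random.default_rng(7)
n_pop, n_gen, n_par = 40, 30, 4          # e.g. 4 regional permeability multipliers
lo, hi = 0.1, 10.0

def history_misfit(m):
    """Stand-in for the simulator-based production-data misfit (lower is better)."""
    return np.sum((np.log(m) - np.log([0.5, 2.0, 1.0, 4.0])) ** 2)

pop = rng.uniform(lo, hi, (n_pop, n_par))
for _ in range(n_gen):
    fit = np.array([history_misfit(m) for m in pop])
    parents = pop[np.argsort(fit)[: n_pop // 2]]           # truncation selection
    kids = parents[rng.integers(0, len(parents), n_pop - len(parents))]
    mates = parents[rng.integers(0, len(parents), len(kids))]
    alpha = rng.random((len(kids), 1))
    kids = alpha * kids + (1 - alpha) * mates              # blend crossover
    kids *= np.exp(0.1 * rng.standard_normal(kids.shape))  # multiplicative mutation
    pop = np.vstack([parents, np.clip(kids, lo, hi)])

best = pop[np.argmin([history_misfit(m) for m in pop])]
print("calibrated multipliers:", np.round(best, 2))
```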
160

Fiabilité et évaluation des incertitudes pour la simulation numérique de la turbulence : application aux machines hydrauliques / Reliability and uncertainty assessment for the numerical simulation of turbulence : application to hydraulic machines

Brugière, Olivier 14 January 2015 (has links)
La simulation numérique fiable des performances de turbines hydrauliques suppose : i) de pouvoir inclure dans les calculs RANS (Reynolds-Averaged Navier-Stokes) traditionnellement mis en œuvre l'effet des incertitudes qui existent en pratique sur les conditions d'entrée de l'écoulement ; ii) de pouvoir faire appel à une stratégie de type SGE (Simulation des Grandes Échelles) pour améliorer la description des effets de la turbulence lorsque des écarts subsistent entre calculs RANS et résultats d'essai de référence même après prise en compte des incertitudes. Les présents travaux mettent en œuvre une démarche non intrusive de quantification d'incertitude (NISP pour Non-Intrusive Spectral Projection) pour deux configurations d'intérêt pratique : un distributeur de turbine Francis avec débit et angle d'entrée incertains et un aspirateur de turbine bulbe avec conditions d'entrée (profils de vitesse, en particulier en proche paroi, et grandeurs turbulentes) incertaines. L'approche NISP est utilisée non seulement pour estimer la valeur moyenne et la variance de quantités d'intérêt mais également pour disposer d'une analyse de la variance qui permet d'identifier les incertitudes les plus influentes. Les simulations RANS, vérifiées par une démarche de convergence en maillage, ne permettent pas, pour la plupart des configurations analysées, d'expliquer les écarts calcul / expérience grâce à la prise en compte des incertitudes d'entrée. Nous mettons donc également en œuvre des simulations SGE en faisant appel à une stratégie originale d'évaluation de la qualité des maillages utilisés, dans le cadre d'une démarche de vérification des calculs SGE. Pour une majorité des configurations analysées, la combinaison d'une stratégie SGE et d'une démarche de quantification des incertitudes permet de produire des résultats numériques fiables. La prise en compte des incertitudes d'entrée permet également de proposer une démarche d'optimisation robuste du distributeur de turbine Francis étudié. / The reliable numerical simulation of hydraulic turbine performance requires: i) to include into the conventional RANS computations the effect of the uncertainties existing in practice on the inflow conditions; ii) to rely on a LES (Large Eddy Simulation) strategy to improve the description of turbulence effects when discrepancies between RANS computations and experiments keep arising even though uncertainties are taken into account. The present work applies a non-intrusive Uncertainty Quantification strategy (NISP, for Non-Intrusive Spectral Projection) to two configurations of practical interest: a Francis turbine distributor, with uncertain inlet flow rate and angle, and a draft tube of a bulb-type turbine with uncertain inflow conditions (velocity distributions, in particular close to the wall boundaries, and turbulent quantities). The NISP method is not only used to compute the mean value and variance of quantities of interest; it is also applied to perform an analysis of the variance and thereby identify the most influential uncertainties. The RANS simulations, verified through a grid-convergence approach, are such that the discrepancies between computation and experiment cannot be explained by taking into account the inflow uncertainties for most of the configurations under study. Therefore, LES simulations are also performed, and these simulations are verified using an original methodology for assessing the quality of the computational grids (since the grid-convergence concept is not relevant for LES). For most of the flows under study, combining an LES strategy with a UQ approach yields reliable numerical results. Taking into account inflow uncertainties also allows us to propose a robust optimization strategy for the Francis turbine distributor under study.
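A one-dimensional sketch of the NISP machinery named above: project a model output onto probabilists' Hermite polynomials by Gauss quadrature and read the mean and variance off the polynomial-chaos coefficients. The thesis does this for several uncertain inflow quantities jointly; the `model` here is a hypothetical stand-in.

```python
# Minimal sketch: non-intrusive spectral projection (NISP) in one dimension.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def nisp_coeffs(model, order=4, n_quad=8):
    x, w = hermegauss(n_quad)              # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)           # normalise to the N(0,1) density
    u = model(x)                           # model evaluations at the nodes
    c = np.zeros(order + 1)
    for k in range(order + 1):
        e = np.zeros(k + 1); e[k] = 1.0
        psi = hermeval(x, e)               # probabilists' Hermite He_k
        c[k] = np.sum(w * u * psi) / math.factorial(k)   # since E[He_k^2] = k!
    return c

model = lambda xi: np.exp(0.3 * xi)        # stand-in for an uncertain QoI
c = nisp_coeffs(model)
mean = c[0]                                # PC mean is the zeroth coefficient
var = sum(c[k]**2 * math.factorial(k) for k in range(1, len(c)))
print(f"PC mean {mean:.5f} (exact {np.exp(0.045):.5f}), variance {var:.5f}")
```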
