151 |
Multidisciplinary Design Under Uncertainty Framework of a Spacecraft and Trajectory for an Interplanetary Mission. Siddhesh Ajay Naidu (18437880), 28 April 2024
Design under uncertainty (DUU) for spacecraft is crucial in ensuring mission success, especially given the critical consequences of failure. To obtain a more realistic understanding of space systems, it is beneficial to holistically couple the modeling of the spacecraft and its trajectory as a multidisciplinary analysis (MDA). In this work, an MDA model is developed for an Earth-Mars mission by employing the General Mission Analysis Tool (GMAT) to model the mission trajectory and Rocket Propulsion Analysis (RPA) to design the engines. Using this direct MDA model, a deterministic optimization (DO) of the system is performed first and yields a design that completes the mission in 307 days while requiring 475 kg of fuel. The direct MDA model is also integrated into a Monte Carlo simulation (MCS) to investigate the uncertainty quantification (UQ) of the spacecraft and trajectory system. When considering the combined uncertainty in the launch date over a 20-day window and in the specific impulses, the time of flight ranges from 275 to 330 days and the total fuel consumption ranges from 475 to 950 kg. The spacecraft velocity exhibits deviations ranging from 2 to 4 km/s at any given instant in the Earth inertial frame. The fuel consumed during the trajectory correction maneuver (TCM) ranges from 1 to 250 kg, while the fuel consumed during the Mars orbit insertion (MOI) ranges from 350 to 810 kg. Using the direct MDA model for optimization and uncertainty quantification of the system can be computationally prohibitive for DUU. To address this challenge, the effectiveness of surrogate-based approaches for performing UQ is demonstrated, resulting in significantly lower computational costs. Gaussian process (GP) models trained on data from the MDA model were implemented in the UQ framework, and their results were compared to those of the direct MDA method. When considering the combined uncertainty from both sources, the surrogate-based method had a mean error of 1.67% and required only 29% of the computational time. Compared to the direct MDA, the time-of-flight range matched well, while the TCM and MOI fuel consumption ranges were smaller by 5 kg. These GP models were integrated into the DUU framework to make reliability-based design optimization (RBDO) feasible for the spacecraft and trajectory system. For the combined uncertainty, the DO design yielded a poor reliability of 54%, underscoring the necessity of performing RBDO. The DUU framework obtained a design with a significantly improved reliability of 99%, which required an additional 39.19 kg of fuel and reduced the time of flight by 0.55 days.
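As an illustration of the surrogate-based UQ step described above, the sketch below trains a Gaussian process on a small design of experiments and then runs the Monte Carlo propagation on the cheap surrogate instead of the expensive model. The mda_time_of_flight function, the input ranges, and all numbers are hypothetical stand-ins rather than the thesis's GMAT/RPA toolchain; scikit-learn is assumed for the GP.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical stand-in for the expensive coupled GMAT/RPA analysis:
# maps (launch-day offset within a 20-day window, specific impulse [s])
# to time of flight [days].
def mda_time_of_flight(x):
    day, isp = x
    return 300.0 + 1.2 * (day - 10.0) - 0.05 * (isp - 320.0)

rng = np.random.default_rng(0)

# Small design of experiments on the direct MDA model (the expensive step).
X_train = rng.uniform([0.0, 300.0], [20.0, 340.0], size=(30, 2))
y_train = np.array([mda_time_of_flight(x) for x in X_train])

gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=[5.0, 10.0]),
                              normalize_y=True).fit(X_train, y_train)

# Monte Carlo on the cheap surrogate: uniform launch window, Gaussian Isp scatter.
X_mc = np.column_stack([rng.uniform(0.0, 20.0, 100_000),
                        rng.normal(320.0, 5.0, 100_000)])
tof = gp.predict(X_mc)
print(f"time of flight: {tof.min():.0f}-{tof.max():.0f} days, mean {tof.mean():.0f}")
```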
152 |
Modeling and Experimental Validation of Mission-Specific Prognosis of Li-Ion Batteries with Hybrid Physics-Informed Neural Networks. Fricke, Kajetan, 01 January 2023
While the second half of the 20th century was dominated by combustion-engine-powered vehicles, climate change and limited oil resources have been forcing car manufacturers and other companies in the mobility sector to switch to renewable energy sources. Electric motors supplied by Li-ion battery cells are at the forefront of this revolution in the mobility sector. A challenging but very important task is the precise forecasting of the degradation of battery state-of-health (SOH) and state-of-charge (SOC). Hence, there is high demand for models that can predict the SOH and SOC while accounting for the specifics of a certain kind of battery cell and the usage profile of the battery. While traditional physics-based and data-driven approaches are used to monitor the SOH and SOC, both have limitations, such as high computational cost or the need for engineers to continually update their prediction models as new battery cells are developed and put into use in battery-powered vehicle fleets. In this dissertation, we enhance a hybrid physics-informed machine learning version of a battery SOC model to predict voltage drop during discharge. The enhanced model captures the effect of wide variations in load level, in the form of input current, which cause large thermal stress cycles. The cell temperature build-up during a discharge cycle is used to identify temperature-sensitive model parameters. Additionally, we enhance an aging model built upon cumulative energy drawn by introducing the effect of the load level. We then map cumulative energy and load level to battery capacity with a Gaussian process model. To validate our approach, we use a battery aging dataset collected on a self-developed testbed, where we used a wide current-level range to age battery packs in an accelerated fashion. Prediction results show that our model can be successfully calibrated and generalizes across all applied load levels.
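The capacity mapping mentioned above can be sketched with a generic Gaussian process regression. The data below are synthetic placeholders, not the dissertation's testbed measurements, and scikit-learn is assumed.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Synthetic aging data: cumulative energy drawn [kWh] and mean load level [C-rate]
# versus measured capacity [Ah]; real values would come from the aging testbed.
cum_energy = rng.uniform(0.0, 50.0, 40)
load_level = rng.uniform(0.5, 4.0, 40)
capacity = (2.5 - 0.02 * cum_energy - 0.005 * cum_energy * load_level
            + rng.normal(0.0, 0.02, 40))

X = np.column_stack([cum_energy, load_level])
gp = GaussianProcessRegressor(RBF(length_scale=[10.0, 1.0]) + WhiteKernel(),
                              normalize_y=True).fit(X, capacity)

# Predict remaining capacity (with uncertainty) for a new usage profile.
mean, std = gp.predict(np.array([[30.0, 2.0]]), return_std=True)
print(f"predicted capacity: {mean[0]:.2f} +/- {2 * std[0]:.2f} Ah")
```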
153 |
Modeling Continental-Scale Outdoor Environmental Sound Levels with Limited Data. Pedersen, Katrina Lynn, 01 January 2021
Modeling outdoor acoustic environments is a challenging problem because they combine diverse sources and propagation effects, including barriers to propagation such as buildings or vegetation. Outdoor acoustic environments are most commonly modeled on small geographic scales (e.g., within a single city). Extending modeling efforts to continental scales is particularly challenging due to the increased variety of geographic environments. Furthermore, acoustic data on which to train and validate models are expensive to collect and therefore relatively limited. It is unclear how models trained on these limited acoustic data will perform at continental scales, which likely contain unique geographic regions that are not represented in the training data.
In this dissertation, we consider the problem of continental-scale outdoor environmental sound level modeling using the contiguous United States for our area of study. We use supervised machine learning methods to produce models of various acoustic metrics and unsupervised learning methods to study the natural structures in geospatial data. We present a validation study of two continental-scale models which demonstrates that there is a need for better uncertainty quantification and tools to guide data collection. Using ensemble models, we investigate methods for quantifying uncertainty in continental-scale models. We also study methods of improving model accuracy, including dimensionality reduction, and explore the feasibility of predicting hourly spectral levels.
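One common way to obtain the ensemble-based uncertainty estimates discussed above is to read the spread across ensemble members as a per-location uncertainty. The sketch below uses a random forest and synthetic geospatial features as stand-ins; the actual models, features, and acoustic metrics in the dissertation differ, and scikit-learn is assumed.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Synthetic geospatial features (e.g., population density, road density, elevation)
# and measured sound levels [dB] at training sites.
X_train = rng.normal(size=(500, 3))
y_train = 45.0 + 6.0 * X_train[:, 0] + 2.0 * X_train[:, 1] + rng.normal(0.0, 1.5, 500)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Per-location uncertainty from the spread of ensemble members: locations unlike
# the training data tend to produce larger disagreement between the trees.
X_new = rng.normal(size=(5, 3))
member_preds = np.stack([tree.predict(X_new) for tree in forest.estimators_])
mean, spread = member_preds.mean(axis=0), member_preds.std(axis=0)
for m, s in zip(mean, spread):
    print(f"predicted level {m:.1f} dB, ensemble std {s:.1f} dB")
```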
154 |
Improving hydrological post-processing for assessing the conditional predictive uncertainty of monthly streamflows. Romero Cuellar, Jonathan, 07 January 2020
Quantifying the predictive uncertainty of monthly streamflows is crucial for making reliable hydrological predictions that support decision-making in water resources management. Hydrological post-processing methods are suitable tools for estimating the predictive uncertainty of deterministic streamflow predictions (hydrological model outputs). In general, this thesis focuses on improving hydrological post-processing methods for assessing the conditional predictive uncertainty of monthly streamflows. It deals with two issues of the hydrological post-processing scheme: i) the heteroscedasticity problem and ii) the intractable likelihood problem. The thesis has three specific aims. First, relating to the heteroscedasticity problem, we develop and evaluate a new post-processing approach, called the GMM post-processor, which is based on the Bayesian joint probability modelling approach and Gaussian mixture models. We also compare the performance of the proposed post-processor with well-known existing post-processors for monthly streamflows across the twelve MOPEX catchments. From this aim (chapter 2), we find that the GMM post-processor is the best suited for estimating the conditional predictive uncertainty of monthly streamflows, especially for dry catchments.
Secondly, we introduce a method to quantify the conditional predictive uncertainty in hydrological post-processing contexts when it is cumbersome to calculate the likelihood (intractable likelihood). Sometimes it can be challenging to estimate the likelihood itself in hydrological modelling, especially when working with complex models or with ungauged catchments. Therefore, we propose the ABC post-processor, which exchanges the requirement of calculating the likelihood function for the use of sufficient summary statistics and synthetic datasets. With this aim in mind (chapter 3), we show that the conditional predictive distributions produced by the exact method (MCMC post-processor) and by the approximate method (ABC post-processor) are qualitatively similar. This finding is significant because dealing with scarce information is a common condition in hydrological studies.
Finally, we apply the ABC post-processing method to estimate the uncertainty of streamflow statistics obtained from climate change projections, as a particular case of the intractable likelihood problem. From this specific objective (chapter 4), we find that the ABC post-processor 1) offers more reliable projections than the 14 climate models (without post-processing) and 2) produces, with respect to the best climate models during the baseline period, more realistic uncertainty bands for the streamflow statistics than the classical multi-model ensemble approach. / I would like to thank the Gobernación del Huila Scholarship Program No. 677 (Colombia) for providing the financial support for my PhD research. / Romero Cuellar, J. (2019). Improving hydrological post-processing for assessing the conditional predictive uncertainty of monthly streamflows [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/133999
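A minimal rejection-ABC sketch of the idea behind the ABC post-processor: summary statistics and simulated data replace an explicit likelihood. The error model, priors, summaries, and tolerance below are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_residuals(theta, n):
    """Hypothetical error model for (observed - predicted) monthly streamflow."""
    mu, sigma = theta
    return rng.normal(mu, sigma, n)

def summary(x):
    # Summary statistics stand in for the intractable likelihood.
    return np.array([x.mean(), x.std()])

obs_residuals = rng.normal(0.3, 1.2, 200)   # stand-in for real post-processing data
s_obs = summary(obs_residuals)

# Rejection ABC: draw parameters from the prior, simulate data, and keep the draws
# whose simulated summaries fall close enough to the observed summaries.
accepted = []
for _ in range(20_000):
    theta = (rng.normal(0.0, 1.0), rng.uniform(0.1, 3.0))   # prior draw
    if np.linalg.norm(summary(simulate_residuals(theta, 200)) - s_obs) < 0.25:
        accepted.append(theta)

accepted = np.array(accepted)
print(f"accepted {len(accepted)} of 20000 draws; "
      f"posterior mean bias {accepted[:, 0].mean():.2f}, sigma {accepted[:, 1].mean():.2f}")
```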
155 |
Ensemble for Deterministic Sampling with positive weights: Uncertainty quantification with deterministically chosen samples. Sahlberg, Arne, January 2016
Knowing the uncertainty of a calculated result is always important, but especially so when performing calculations for safety analysis. A traditional way of propagating the uncertainty of input parameters is Monte Carlo (MC) methods. A quicker alternative to MC, especially useful when computations are heavy, is Deterministic Sampling (DS). DS works by hand-picking a small set of samples, rather than randomizing a large set as in MC methods. The samples and their corresponding weights are chosen to represent the uncertainty one wants to propagate by encoding the first few statistical moments of the parameters' distributions. Finding a suitable ensemble for DS is not easy, however. Given a large enough set of samples, one can always calculate weights to encode the first couple of moments, but there is good reason to want an ensemble with only positive weights. How to choose the ensemble for DS so that all weights are positive is the problem investigated in this project. Several methods for generating such ensembles have been derived, and an algorithm for calculating weights while forcing them to be positive has been found. The methods and generated ensembles have been tested for use in uncertainty propagation in many different cases, and the ensemble sizes have been compared. In general, encoding two or four moments in an ensemble seems to be enough to get a good result for the propagated mean value and standard deviation. Regarding size, the most favorable case is when the parameters are independent and have symmetrical distributions. In short, DS can work as a quicker alternative to MC methods in uncertainty propagation as well as in other applications.
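A small sketch of the positive-weight moment-matching step described above: for hand-picked candidate samples of a standard-normal parameter, non-negative least squares is one way to obtain weights that encode the first four moments while staying positive. SciPy is assumed, and this is not necessarily the algorithm derived in the thesis.

```python
import numpy as np
from scipy.optimize import nnls

# Hand-picked candidate ensemble for one standard-normal parameter.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

# Moment-matching conditions: sum(w) = 1, E[x] = 0, E[x^2] = 1, E[x^3] = 0, E[x^4] = 3.
A = np.vstack([x**k for k in range(5)])
b = np.array([1.0, 0.0, 1.0, 0.0, 3.0])

# Non-negative least squares keeps every weight >= 0; a residual near zero means
# the first four moments are encoded exactly by this ensemble.
w, residual = nnls(A, b)
print("weights:", np.round(w, 4), "residual:", round(residual, 6))

# Deterministic propagation through a nonlinear model g(x): weighted mean and std.
g = np.exp(0.3 * x)
mean = w @ g
std = np.sqrt(max(w @ (g - mean) ** 2, 0.0))
print(f"propagated mean {mean:.4f}, std {std:.4f}")
```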
156 |
Predicting multibody assembly of proteins. Rasheed, Md. Muhibur, 25 September 2014
This thesis addresses the multi-body assembly (MBA) problem in the context of protein assemblies. [...] In this thesis, we chose the protein assembly domain because accurate and reliable computational modeling, simulation, and prediction of such assemblies would clearly accelerate discoveries in understanding the complexities of metabolic pathways, identifying the molecular basis of normal health and disease, and designing new drugs and other therapeutics. [...] [We developed] F²Dock (Fast Fourier Docking), which uses a multi-term scoring function that includes both a statistical thermodynamic approximation of molecular free energy and several knowledge-based terms. Parameters of the scoring model were learned from a large set of positive/negative examples, and when tested on 176 protein complexes of various types, the model showed excellent accuracy in ranking correct configurations higher (F²Dock ranks the correct solution as the top-ranked one in 22/176 cases, which is better than other unsupervised prediction software on the same benchmark). Most protein-protein interaction scoring terms can be expressed as integrals, over the occupied volume, the boundary, or a set of discrete points (atom locations), of distance-dependent decaying kernels. We developed a dynamic adaptive grid (DAG) data structure which computes smooth surface and volumetric representations of a protein complex in O(m log m) time, where m is the number of atoms, assuming that the smallest feature size h is Θ(r_max), where r_max is the radius of the largest atom; supports updates in O(log m) time; and uses O(m) memory. We also developed the dynamic packing grids (DPG) data structure, which supports quasi-constant-time updates (O(log w)) and spherical neighborhood queries (O(log log w)), where w is the word size of the RAM. Together, DPG and DAG yield an O(k)-time approximation of the scoring terms, where k << m is the size of the contact region between proteins. [...] [W]e consider the symmetric spherical shell assembly case, where multiple copies of identical proteins tile the surface of a sphere. Though this is a restricted subclass of MBA, it is an important one, since it would accelerate the development of drugs and antibodies that prevent viruses from forming capsids, which have such spherical symmetry in nature. We proved that it is possible to characterize the space of possible symmetric spherical layouts using a small number of representative local arrangements (called tiles) and their global configurations (tilings). We further show that the tilings, and the mapping of proteins to tilings on arbitrarily sized shells, are parameterized by 3 discrete parameters and 6 continuous degrees of freedom; and the 3 discrete DOF can be restricted to a constant number of cases if the size of the shell is known (in terms of the number of proteins n). We also consider the case where a coarse model of the whole protein complex is available. We show that even when such coarse models do not resolve atomic positions, they can be sufficient to identify a general location for each protein and its neighbors, and thereby restrict the configurational space. We developed an iterative refinement search protocol that leverages such multi-resolution structural data to predict accurate high-resolution models of protein complexes, and successfully applied the protocol to model gp120, a protein on the spike of HIV and currently the most feasible target for anti-HIV drug design.
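A toy spatial-hash sketch of the neighborhood-query idea behind the packing-grid data structure. It ignores the dynamic and word-RAM machinery that gives the stated O(log w) and O(log log w) bounds, and simply illustrates how bucketing points by cells at least as wide as the query radius confines a spherical query to 27 cells.

```python
from collections import defaultdict
from math import floor

class PackingGrid:
    """Bucket atom centers by grid cell so a spherical query touches only 27 cells."""

    def __init__(self, cell):
        self.cell = cell                       # cell width; must be >= query radius
        self.buckets = defaultdict(list)

    def _key(self, p):
        return tuple(floor(c / self.cell) for c in p)

    def insert(self, idx, p):                  # near-constant-time update
        self.buckets[self._key(p)].append((idx, p))

    def neighbors(self, p, radius):            # spherical neighborhood query
        cx, cy, cz = self._key(p)
        hits = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for idx, q in self.buckets.get((cx + dx, cy + dy, cz + dz), []):
                        if sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2:
                            hits.append(idx)
        return hits

grid = PackingGrid(cell=3.0)
grid.insert(0, (0.0, 0.0, 0.0))
grid.insert(1, (1.0, 1.0, 1.0))
grid.insert(2, (8.0, 0.0, 0.0))
print(grid.neighbors((0.5, 0.5, 0.5), radius=2.0))   # -> [0, 1]
```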
157 |
Quantification of parametric uncertainties effects in structural failure criteria. Yanik, Yasar, January 2019
Advisor: Samuel Silva / Abstract: Failure theories predict the circumstances under which solid materials fail under the action of external loads. Different failure criteria, such as von Mises and Tresca, are the best known for certain classes of materials. This master's dissertation compares the Tresca and von Mises failure criteria, taking into account the underlying uncertainties in the constitutive equations and in the stress analysis. To illustrate the comparison, numerical simulations are performed for a simple plate, a simple deflection problem, and the frame of a Formula SAE car. Due to the complexity of the frame, different probabilistic techniques are used, such as the response surface method and parameter correlation. Results show that the several random input variables affect the random output variables in different ways, and that there is no large difference between the von Mises and Tresca failure criteria when uncertainties are assumed in the formulation of the stress analysis. / Master's
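The comparison described above can be illustrated with a plane-stress Monte Carlo sketch that evaluates both criteria on the same sampled stress states. The stress statistics and yield strength below are made-up numbers, not the dissertation's Formula SAE load cases.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Made-up uncertain plane-stress state [MPa] and uncertain yield strength [MPa].
sx = rng.normal(180.0, 15.0, n)
sy = rng.normal(60.0, 10.0, n)
txy = rng.normal(40.0, 5.0, n)
yield_strength = rng.normal(250.0, 10.0, n)

# Principal stresses for plane stress (the third principal stress is zero).
center = (sx + sy) / 2.0
radius = np.sqrt(((sx - sy) / 2.0) ** 2 + txy ** 2)
s1, s2, s3 = center + radius, center - radius, np.zeros(n)

von_mises = np.sqrt(0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2))
tresca = np.maximum.reduce([np.abs(s1 - s2), np.abs(s2 - s3), np.abs(s3 - s1)])

print(f"P(failure), von Mises: {np.mean(von_mises > yield_strength):.3%}")
print(f"P(failure), Tresca:    {np.mean(tresca > yield_strength):.3%}")
```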
158 |
BAYESIAN OPTIMAL DESIGN OF EXPERIMENTS FOR EXPENSIVE BLACK-BOX FUNCTIONS UNDER UNCERTAINTY. Piyush Pandita (6561242), 10 June 2019
Researchers and scientists across various areas face the perennial challenge of selecting experimental conditions or inputs for computer simulations in order to achieve promising results. The aim of conducting these experiments could be to study the production of a material that has great applicability. One might also be interested in accurately modeling and analyzing a simulation of a physical process through a high-fidelity computer code. The presence of noise in the experimental observations or simulator outputs, called aleatory uncertainty, is usually accompanied by a limited amount of data due to budget constraints. This gives rise to what is known as epistemic uncertainty. This problem of designing experiments with a limited number of allowable experiments or simulations, under aleatory and epistemic uncertainty, needs to be treated in a Bayesian way. The aim of this thesis is to extend the state of the art in Bayesian optimal design of experiments, where one can optimize and infer statistics of the expensive experimental observation(s) or simulation output(s) under uncertainty.
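A hedged sketch of one common sequential Bayesian design loop: a Gaussian process surrogate plus an expected-improvement criterion. The noisy_experiment objective, noise level, and acquisition choice are illustrative assumptions rather than the specific formulation developed in the thesis; scikit-learn and SciPy are assumed.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(5)

def noisy_experiment(x):
    """Hypothetical expensive black-box objective with aleatory noise."""
    return float(np.sin(3.0 * x) + 0.5 * x + rng.normal(0.0, 0.1))

X = list(rng.uniform(0.0, 3.0, 4))         # small initial design (epistemic scarcity)
y = [noisy_experiment(x) for x in X]

grid = np.linspace(0.0, 3.0, 400)
for _ in range(10):                        # sequential design loop with a fixed budget
    gp = GaussianProcessRegressor(alpha=0.1**2, normalize_y=True)
    gp.fit(np.array(X).reshape(-1, 1), y)
    mu, sd = gp.predict(grid.reshape(-1, 1), return_std=True)
    best = max(y)
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = float(grid[int(np.argmax(ei))])            # next experiment to run
    X.append(x_next)
    y.append(noisy_experiment(x_next))

print(f"best observed value {max(y):.3f} at x = {X[int(np.argmax(y))]:.3f}")
```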
159 |
Virtual damping for the vibroacoustic design of future launchers (Amortissement virtuel pour la conception vibroacoustique des lanceurs futurs). Krifa, Mohamed, 19 May 2017
In the design of space launchers, mastering damping is a major issue. In the absence of tests on the real structure, which are very costly before the final qualification phase, the modelling of damping can lead to an over-sizing of the structure, whereas the goal is to reduce the cost of launching a rocket while guaranteeing the vibratory comfort of the payload.
Our contributions are the following. First, we propose a method for predicting, by computation, the vibration levels in launcher structures using a virtual-testing strategy that predicts damping at low frequencies. This method is based on meta-models built from numerical designs of experiments with detailed models of the joints. These meta-models can be obtained through dedicated computations using a 3D finite element resolution that accounts for contact. Using these meta-models, the modal damping over a vibration cycle can be computed as the ratio between the dissipated energy and the strain energy. The approach provides an accurate and inexpensive approximation of the solution; the global nonlinear computation, which is out of reach for complex structures, becomes tractable through this chart-based (abaque) virtual approach.
Second, a validation of the virtual tests on the Ariane 5 launcher structure was carried out, taking into account the bolted joints between stages, in order to illustrate the proposed approach. When the generalized damping matrix is not diagonal (because of localized dissipation), modal methods do not allow the off-diagonal generalized damping terms to be computed or estimated. The question is then how to quantify the error made when the off-diagonal terms are neglected in the computation of the vibration levels, with a good accuracy-to-cost ratio.
Third, the validity of the assumption that the generalized damping matrix is diagonal was examined, and a very inexpensive method for the a posteriori quantification of the modal damping estimation error, based on perturbation methods, was proposed.
Finally, the last contribution of this thesis is a decision-support tool that quantifies the impact of the lack of knowledge about damping in the joints on the global behaviour of launchers, using the info-gap method.
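One way to read the energy-based damping estimate above is the standard equivalent-viscous relation per vibration cycle; this is a sketch of the usual convention, and the thesis's exact normalization may differ.

```latex
% Equivalent viscous modal damping of mode i, estimated from one vibration cycle:
% joint-dissipated energy over peak strain energy, with the usual 4*pi factor.
\xi_i \;\approx\; \frac{E_{\mathrm{diss},i}}{4\pi\, E_{\mathrm{strain},i}}
```

Here E_diss,i would be evaluated on the detailed 3D contact meta-models of the joints, and E_strain,i is the peak strain energy of mode i.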
160 |
HYBRID METHOD BASED INTO KALMAN FILTER AND DEEP GENERATIVE MODEL TO HISTORY MATCHING AND UNCERTAINTY QUANTIFICATION OF FACIES GEOLOGICAL MODELS. Smith Washington Arauco Canchumuni, 25 March 2019
Kalman filter-based methods have had remarkable success in the oil industry in recent years, especially in solving real-life history matching problems. However, as the formulation of these methods is based on assumptions of Gaussianity and linearity, their performance is severely degraded when the a priori geology is described in terms of complex distributions (e.g., facies models). The current trend in solutions for the history matching problem is to take into account more realistic reservoir models with complex geology. Thus, geological facies modeling plays an important role in reservoir characterization, as a way of reproducing important heterogeneity patterns and facilitating the modeling of the petrophysical properties of the reservoir rocks. This thesis introduces a new methodology to perform the history matching of complex geological models. The methodology consists of integrating Kalman filter-based methods, particularly the method known in the literature as Ensemble Smoother with Multiple Data Assimilation (ES-MDA), with a parameterization of the geological facies through deep learning techniques based on autoencoder architectures. An autoencoder always consists of two parts, the encoder (recognition model) and the decoder (generator model). The procedure begins with the training of a set of facies realizations via deep generative models, through which the main characteristics of geological facies images are identified, allowing the creation of new realizations with the same characteristics as the training base and a low-dimensional parameterization of the facies models at the output of the encoder. This parameterization is regularized at the encoder to provide a Gaussian distribution at the output, which is then used to update the models according to the observed data of the reservoir through the ES-MDA method. In the end, the updated models are reconstructed through deep learning (the decoder), with the objective of obtaining final models that present characteristics similar to those of the training base.
The results, in three case studies with 2 and 3 facies, show that the deep-learning-based parameterization can reconstruct facies models with an error lower than 0.3 percent. The proposed methodology generates final geological models that preserve the a priori geological description of the reservoir (facies with curvilinear channels) while remaining consistent with the match to the observed reservoir data.
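A compact numerical sketch of the ES-MDA update applied in the Gaussian latent space produced by the encoder. The decoder and forward functions below are toy stand-ins for the trained deep generative model and the reservoir simulator; only NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(6)
n_ens, n_lat, n_obs, n_mda = 100, 8, 10, 4

W_dec = rng.standard_normal((n_lat, 100))     # frozen weights of the toy decoder

def decoder(z):
    """Stand-in for the trained generative decoder (latent vector -> facies model)."""
    return np.tanh(z @ W_dec)

def forward(m):
    """Stand-in for the reservoir simulator mapping a facies model to observed data."""
    return m[:, :n_obs] * 2.0 + 1.0

d_obs = rng.standard_normal(n_obs)            # observed reservoir data (synthetic)
C_e = 0.01 * np.eye(n_obs)                    # measurement-error covariance
alphas = [n_mda] * n_mda                      # inflation factors, sum(1/alpha) = 1

Z = rng.standard_normal((n_ens, n_lat))       # Gaussian latent ensemble (encoder output)
for alpha in alphas:
    D = forward(decoder(Z))                   # predicted data for each ensemble member
    dZ, dD = Z - Z.mean(0), D - D.mean(0)
    C_zd = dZ.T @ dD / (n_ens - 1)            # latent/data cross-covariance
    C_dd = dD.T @ dD / (n_ens - 1)            # data auto-covariance
    K = C_zd @ np.linalg.inv(C_dd + alpha * C_e)
    noise = rng.multivariate_normal(np.zeros(n_obs), alpha * C_e, size=n_ens)
    Z = Z + (d_obs + noise - D) @ K.T         # update the latent variables

facies_models = decoder(Z)                    # reconstruct the updated realizations
print(facies_models.shape)                    # (100, 100): 100 members, 100 cells
```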