11. History matching of surfactant-polymer flooding. Pratik Kiranrao Naik, 17 January 2019.
This thesis presents a framework for history matching and model calibration of surfactant-polymer (SP) flooding. First, a high-fidelity mechanistic SP flood model is constructed from extensive lab-scale experiments on Berea cores. Then, combining Sobol'-based sensitivity analysis, polynomial chaos expansion based surrogate modelling (PCE-proxy), and genetic-algorithm-based inverse optimization, an optimized model parameter set is determined by minimizing the misfit between the PCE-proxy response and experimental observations for quantities of interest such as cumulative oil recovery and the pressure profile. The epistemic uncertainty in the PCE-proxy is quantified using Gaussian process regression (Kriging). The framework is then extended to Bayesian calibration, where the posterior of the model parameters is inferred by sampling directly from it using Markov chain Monte Carlo (MCMC). Finally, a stochastic multi-objective optimization problem is posed under uncertainty in the model parameters and the oil price, and solved using a variant of a Bayesian global optimization routine.
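The workflow described above (fit a cheap polynomial surrogate to a modest number of simulator runs, then let a genetic-style search minimise the proxy-to-data misfit) can be sketched as follows. The `simulator`, its two parameters, and the single observation are hypothetical stand-ins for the mechanistic SP-flood model, not the thesis's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "simulator": a smooth response (e.g. cumulative oil recovery)
# as a function of two uncertain SP-flood parameters (hypothetical).
def simulator(theta):
    a, b = theta
    return a ** 2 + 0.5 * a * b + np.sin(b)

# 1) PCE-style proxy: least-squares fit of a degree-2 polynomial
#    basis on a space-filling training sample of simulator runs.
def basis(theta):
    a, b = theta
    return np.array([1.0, a, b, a * b, a ** 2, b ** 2])

train = rng.uniform(-1.0, 1.0, size=(200, 2))
Phi = np.array([basis(t) for t in train])
y = np.array([simulator(t) for t in train])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
proxy = lambda theta: basis(theta) @ coef

# 2) Genetic-algorithm-style search minimising the misfit between
#    the proxy response and a synthetic "observation".
theta_true = np.array([0.6, -0.3])
d_obs = simulator(theta_true)
pop = rng.uniform(-1.0, 1.0, size=(50, 2))
for _ in range(40):
    fitness = np.array([(proxy(t) - d_obs) ** 2 for t in pop])
    elite = pop[np.argsort(fitness)[:10]]                      # selection
    mutants = elite[rng.integers(0, 10, 40)] + rng.normal(0.0, 0.05, (40, 2))
    pop = np.vstack([elite, mutants])                          # next generation
best = pop[np.argmin([(proxy(t) - d_obs) ** 2 for t in pop])]
misfit = float((proxy(best) - d_obs) ** 2)
```

Because a single observation cannot identify two parameters uniquely, the search converges to a misfit minimum rather than necessarily to `theta_true`; the thesis addresses that non-uniqueness through sensitivity analysis and, later, Bayesian calibration.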
12. Reservoir History Matching Using Ensemble Kalman Filters with Anamorphosis Transforms. Aman, Beshir M. (12 1900).
This work aims to enhance Ensemble Kalman Filter performance by transforming the non-Gaussian state variables into Gaussian variables, a step closer to the filter's optimality conditions. This is done using univariate and multivariate Box-Cox transformations. Several history matching methods, including the Kalman filter, the particle filter and the ensemble Kalman filter, are reviewed and applied to a reservoir test case. The key idea is to apply the transformation before the update step and then transform back after applying the Kalman correction. Overall, the results of the multivariate method were promising, although it overestimated some variables.
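The anamorphosis idea can be sketched on a scalar state observed directly: transform the skewed ensemble with a Box-Cox anamorphosis, apply a standard EnKF correction in the transformed (more Gaussian) space, and transform back. The log-normal ensemble and the noise level below are illustrative assumptions, not the thesis's reservoir variables:

```python
import numpy as np

rng = np.random.default_rng(1)

# Box-Cox anamorphosis and its inverse (lambda != 0 branch).
def boxcox(x, lam):
    return (x ** lam - 1.0) / lam

def inv_boxcox(z, lam):
    return (lam * z + 1.0) ** (1.0 / lam)

# Skewed (log-normal) prior ensemble of a positive state variable.
ens = rng.lognormal(mean=0.0, sigma=0.7, size=500)
lam = 0.1
z = boxcox(ens, lam)                        # transform toward Gaussianity

# Scalar EnKF analysis in the transformed space: the state itself is
# observed (value 2.0) with noise variance r, using perturbed observations.
d_obs, r = boxcox(2.0, lam), 0.05
pert = d_obs + rng.normal(0.0, np.sqrt(r), z.size)
K = np.var(z, ddof=1) / (np.var(z, ddof=1) + r)   # scalar Kalman gain
z_a = z + K * (pert - z)

ens_a = inv_boxcox(z_a, lam)                # back-transform the update
```

A side benefit visible here: the back-transform keeps the analysed variable in its physical (positive) range, which a raw EnKF update on `ens` would not guarantee.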
13. The integration of seismic anisotropy and reservoir performance data for characterization of naturally fractured reservoirs using discrete feature network models. Will, Robert A., 30 September 2004.
This dissertation presents the development of a method for quantitative integration of seismic (elastic) anisotropy attributes with reservoir performance data as an aid in characterizing systems of natural fractures in hydrocarbon reservoirs. This new method incorporates stochastic Discrete Feature Network (DFN) fracture modeling techniques, DFN-based modeling of fracture-system hydraulic properties and elastic anisotropy, and non-linear inversion techniques to achieve numerical integration of production data and seismic attributes for iterative refinement of initial trend and fracture intensity estimates. Although DFN modeling, flow simulation, and elastic anisotropy modeling are in themselves not new technologies, this dissertation represents the first known attempt to integrate advanced models for production performance and elastic anisotropy in fractured reservoirs using a rigorous mathematical inversion. The following new developments are presented:
• Forward modeling and sensitivity analysis of the upscaled hydraulic properties of realistic DFN fracture models using effective permeability modeling techniques.
• Forward modeling and sensitivity analysis of azimuthally variant seismic attributes based on the same DFN models.
• Development of a combined production and seismic data objective function and computation of sensitivity coefficients.
• Iterative model-based non-linear inversion of DFN fracture model trend and intensity through minimization of the combined objective function.
This new technique is demonstrated on synthetic models with single and multiple fracture sets as well as differing background (host) reservoir hydraulic and elastic properties. Results on these synthetic control models show that, given a well-conditioned initial DFN model and good-quality field production and seismic observations, the integration procedure converges in both fracture trend and intensity for models with single and multiple fracture sets. Tests show that for a single fracture set, convergence is accelerated when the combined objective function is used, compared to a similar technique using only production data in the objective function. Tests on multiple fracture sets show that, without the addition of seismic anisotropy, the model fails to converge. These tests confirm the importance of the new procedure for more realistic reservoir models.
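The inversion loop built from the developments listed above can be illustrated at toy scale. Here `g_prod` and `g_seis` are hypothetical two-parameter stand-ins for the DFN-based flow and anisotropy forward models, sensitivity coefficients are taken by finite differences, and the combined objective is minimised by damped Gauss-Newton iteration:

```python
import numpy as np

# Hypothetical stand-ins for the DFN-based forward models: a
# "production response" and an azimuthal "anisotropy attribute",
# both functions of fracture trend (rad) and fracture intensity.
def g_prod(m):
    trend, intensity = m
    return np.array([intensity * (1.0 + 0.3 * np.cos(trend))])

def g_seis(m):
    trend, intensity = m
    return np.array([intensity * np.sin(2.0 * trend)])

def g(m, w):                      # stacked, weighted data vector
    return np.concatenate([g_prod(m), w * g_seis(m)])

def sensitivities(m, w, h=1e-6):  # finite-difference sensitivity coefficients
    J = np.zeros((2, 2))
    for j in range(2):
        dm = np.zeros(2)
        dm[j] = h
        J[:, j] = (g(m + dm, w) - g(m - dm, w)) / (2.0 * h)
    return J

m_true = np.array([0.8, 2.0])     # true trend and intensity
w = 1.0                           # seismic weight in the combined objective
d = g(m_true, w)

m = np.array([0.6, 1.5])          # initial trend/intensity estimate
for _ in range(60):               # damped Gauss-Newton iterations
    r = d - g(m, w)
    J = sensitivities(m, w)
    m = m + 0.5 * np.linalg.solve(J.T @ J + 1e-8 * np.eye(2), J.T @ r)
residual = float(np.linalg.norm(d - g(m, w)))
```

Setting `w = 0` removes the seismic row and leaves the two-parameter problem rank-deficient, loosely mirroring the finding above that the model fails to converge without seismic anisotropy.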
14. Predicting the migration of CO₂ plume in saline aquifers using probabilistic history matching approaches. Bhowmik, Sayantan, 20 August 2012.
During the operation of a geological carbon storage project, verifying that the CO₂ plume remains within the permitted zone is of particular interest both to regulators and to operators. However, the cost of many monitoring technologies, such as time-lapse seismic, limits their application. For adequate predictions of plume migration, proper representation of heterogeneous permeability fields is imperative. Previous work has shown that injection data (pressures, rates) from wells can provide a means of characterizing complex permeability fields in saline aquifers. Given that injection data are readily available, they might thus provide an inexpensive alternative for monitoring; combined with a flow model like the one developed in this work, these data could even be used for predicting plume migration. The predicted plume migration pathways can then be compared with field observations, such as time-lapse seismic or satellite measurements of surface deformation, to ensure containment of the injected CO₂ within the storage area. In this work, two novel methods for creating heterogeneous permeability fields constrained by injection data are demonstrated. The first is an implementation of a probabilistic history matching algorithm that creates models of the aquifer for predicting the movement of the CO₂ plume. The geologic property of interest, for example hydraulic conductivity, is updated conditioned to geological information and injection pressures. The resulting aquifer model, which is geologically consistent, can be used to reliably predict the movement of the CO₂ plume in the subsurface. The second is a model selection algorithm that refines an initial suite of subsurface models representing the prior uncertainty into a posterior set of subsurface models whose injection performance is consistent with that observed. Such posterior models can be used to represent uncertainty in the future migration of the CO₂ plume.
The applicability of both methods is demonstrated using a field data set from central Algeria.
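The second method (refining a prior suite of models into a posterior set consistent with observed injection performance) can be sketched as a simple rejection-style selection. The 1D log-permeability models and the series-resistance pressure proxy below are illustrative assumptions, not the aquifer models used in this work:

```python
import numpy as np

rng = np.random.default_rng(2)

# Prior ensemble of 1D log-permeability fields (illustrative stand-in
# for the geologically consistent aquifer models).
n_models, n_cells = 400, 20
logk = rng.normal(0.0, 1.0, size=(n_models, n_cells))

# Proxy for injection pressure at a fixed rate: cells in series, so
# the pressure drop scales with the summed flow resistance.
def injection_pressure(row):
    return float(np.sum(1.0 / np.exp(row)))

truth = rng.normal(0.0, 1.0, n_cells)      # synthetic "true" aquifer
d_obs = injection_pressure(truth)

# Model selection: keep the models whose simulated injection
# performance is most consistent with the observation.
mis = np.array([abs(injection_pressure(m) - d_obs) for m in logk])
posterior = logk[mis <= np.quantile(mis, 0.1)]     # best 10 % of the prior
```

The spread of `posterior` then represents the remaining uncertainty in plume migration; a real workflow would score models against full injection time series rather than a single scalar.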
15. Particle tracking proxies for prediction of CO₂ plume migration within a model selection framework. Bhowmik, Sayantan, 24 June 2014.
Geologic sequestration of CO₂ in deep saline aquifers has been studied extensively over the past two decades as a viable method of reducing anthropogenic carbon emissions. Monitoring and predicting the movement of injected CO₂ is important for assessing containment of the gas within the storage volume, and for taking corrective measures if required. Given the uncertainty in the geologic architecture of storage aquifers, it is reasonable to depict our prior knowledge of the project area using a vast suite of aquifer models. Simulating such a large number of models with traditional numerical flow simulators to evaluate uncertainty is computationally expensive. A novel stochastic workflow for characterizing plume migration, based on a model selection algorithm developed by Mantilla in 2011, has been implemented. The approach includes four main steps: (1) assessing the connectivity/dynamic characteristics of a large prior ensemble of models using proxies; (2) clustering the models using principal component analysis or multidimensional scaling coupled with k-means clustering; (3) selecting models using Bayes' rule on the reduced model space; and (4) expanding the model set using an ensemble pattern-based matching scheme. In this dissertation, two proxies based on particle tracking have been developed to assess the flow connectivity of models in the initial set. The proxies serve as fast approximations of finite-difference flow simulation models and are meant to provide rapid estimates of the connectivity of the aquifer models. Modifications have also been made within the model selection workflow to accommodate the particular problem of application to a carbon sequestration project. The applicability of the proxies is tested on both synthetic models and real field case studies. It is demonstrated that the first proxy captures areal migration to a reasonable extent, while failing to adequately capture the vertical buoyancy-driven flow of CO₂.
This limitation is addressed in the second proxy, whose applicability is demonstrated in capturing not only horizontal migration but also buoyancy-driven flow. Both proxies are tested as standalone approximations of numerical simulation and within the larger model selection framework.
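The core of a particle-tracking proxy is cheap: advect marker particles through a model's velocity field instead of running a full finite-difference simulation. A minimal sketch, assuming a precomputed cell-centred velocity field and forward-Euler stepping (the thesis's proxies are more elaborate):

```python
import numpy as np

# Advect marker particles through a cell-centred velocity field with
# forward-Euler steps; a fast stand-in for flow simulation when only
# plume migration paths are needed.
def track(particles, vx, vy, dt, n_steps):
    p = particles.astype(float).copy()
    ny, nx = vx.shape
    for _ in range(n_steps):
        i = np.clip(p[:, 1].astype(int), 0, ny - 1)   # row (y) index
        j = np.clip(p[:, 0].astype(int), 0, nx - 1)   # column (x) index
        p[:, 0] += dt * vx[i, j]                      # advect in x
        p[:, 1] += dt * vy[i, j]                      # advect in y
    return p

# Sanity case: uniform rightward flow, so the "plume" advects in +x only.
vx = np.ones((50, 50))
vy = np.zeros((50, 50))
start = np.full((100, 2), 25.0)                       # injection point
end = track(start, vx, vy, dt=0.1, n_steps=100)
```

A buoyancy term added to `vy` would mimic the second proxy's vertical, density-driven component.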
16. Multi Data Reservoir History Matching using the Ensemble Kalman Filter. Katterbauer, Klemens (05 1900).
Reservoir history matching is becoming increasingly important with the growing demand for higher-quality formation characterization and forecasting, and with the increased complexity and expense of modern hydrocarbon exploration projects. History matching has long been dominated by adjusting reservoir parameters based solely on well data, whose sparse spatial sampling makes it challenging to characterize flow properties in areas away from the wells. Geophysical data are now widely collected for reservoir monitoring purposes, but have not yet been fully integrated into history matching and fluid flow forecasting. In this thesis, I present a pioneering approach to incorporating different time-lapse geophysical data together to enhance reservoir history matching and uncertainty quantification. The thesis provides several approaches to efficiently integrate multiple geophysical data, analyzes the sensitivity of the history matches to observation noise, and examines the framework's performance in several settings, such as the Norne field in Norway. The results demonstrate significant improvements in reservoir forecasting and characterization, and synergy effects between the different geophysical data. In particular, the joint use of electromagnetic and seismic data improves the accuracy of forecasting fluid properties, and the use of electromagnetic data leads to considerably better estimates of hydrocarbon fluid components. For volatile oil and gas reservoirs, the joint integration of gravimetric and InSAR data proves beneficial in detecting the influx of water and thereby improving the recovery rate. In summary, this thesis makes an important contribution towards integrated reservoir management and multiphysics integration for reservoir history matching.
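Jointly using several geophysical data types amounts to stacking them into one observation vector with a block noise covariance before the ensemble update. A minimal sketch with linear "seismic" and "EM" forward operators, which are hypothetical stand-ins for the field's actual physics:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ensemble of reservoir parameters (columns = ensemble members).
ne, nm = 200, 10
M = rng.normal(0.0, 1.0, size=(nm, ne))

# Two linear forward operators standing in for "seismic" and "EM"
# observations, stacked into one data vector with a block-diagonal
# noise covariance (one noise level per data type).
H_seis = rng.normal(size=(3, nm))
H_em = rng.normal(size=(2, nm))
H = np.vstack([H_seis, H_em])
R = np.diag([0.1, 0.1, 0.1, 0.05, 0.05])

m_true = rng.normal(size=nm)
d = H @ m_true + rng.multivariate_normal(np.zeros(5), R)

# EnKF analysis with perturbed observations on the stacked data.
D = d[:, None] + rng.multivariate_normal(np.zeros(5), R, ne).T
Y = H @ M                                        # predicted data ensemble
C = np.cov(M, Y, ddof=1)                         # joint sample covariance
Cmy, Cyy = C[:nm, nm:], C[nm:, nm:]
K = Cmy @ np.linalg.inv(Cyy + R)                 # Kalman gain
Ma = M + K @ (D - Y)                             # jointly updated ensemble
```

The per-block entries of `R` are where each data type's noise level enters, which is also the lever used when analyzing the sensitivity of the match to observation noise.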
17. Caractérisation 3D de l'hétérogénéité de la perméabilité à l'échelle de l'échantillon / 3D Characterization of Permeability Heterogeneity at the Core Scale. Soltani, Amir, 21 October 2008.
The objective of this study is to develop new methodologies to identify the spatial distribution of permeability values inside heterogeneous core samples.
We developed laboratory viscous miscible displacements by injecting high-viscosity glycerin into core samples initially saturated with low-viscosity brine. The pressure drop across the samples was measured as a function of time until breakthrough. Meanwhile, CT-scan measurements provided a 3D porosity map plus several 3D maps of the concentration distribution inside the core samples at different times. A simple permeability mapping technique was developed, deducing a one-dimensional permeability profile along the flow direction from the measured pressure drop data. The method was validated with both numerical and laboratory experiments. To go beyond one-dimensional characterization of permeability, we developed an iterative process for matching pressure and concentration data. This method consists of two steps: a simple optimization to capture the permeability heterogeneity along the flow direction and a more complex optimization to capture transverse permeability heterogeneity. The methodology was validated on numerical data and then applied to the data collected from two laboratory viscous miscible displacements. We showed that the final 3D permeability models reproduce the measured pressure drop and concentration data well.
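The 1D interpretation step can be illustrated under a piston-like displacement assumption: for a core of unit cross-section at fixed rate q, the pressure drop satisfies d(dP)/dx_f = q * (mu_inj - mu_brine) / k(x_f) as the front advances, so differentiating the measured curve with respect to front position isolates the local permeability. The synthetic profile and values below are illustrative, not the study's experiments:

```python
import numpy as np

# Synthetic piston-like viscous displacement through a 1D core of unit
# cross-section: viscous glycerin displaces brine at fixed rate q.
L, n = 1.0, 100
dx = L / n
x = np.linspace(0.0, L, n, endpoint=False) + dx / 2.0
k_true = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)      # heterogeneous profile
q, mu_inj, mu_brine = 1.0, 10.0, 1.0

# Forward model: pressure drop when the front sits at cell face i
# (invaded cells carry the injected viscosity, the rest carry brine).
def pressure_drop(front_idx):
    mu = np.where(np.arange(n) < front_idx, mu_inj, mu_brine)
    return q * np.sum(mu * dx / k_true)

dP = np.array([pressure_drop(i) for i in range(n + 1)])

# Inversion: each increment of the pressure-drop curve yields one
# local permeability via k = q * (mu_inj - mu_brine) * dx / d(dP).
k_est = q * (mu_inj - mu_brine) * dx / np.diff(dP)
```

With noiseless synthetic data the profile is recovered exactly; on laboratory data the same derivative is taken on the measured, noisy pressure-drop curve.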
18. Analysis of main parameters in adaptive ES-MDA history matching. Ranazzi, Paulo Henrique, 6 June 2019.
In reservoir engineering, history matching is the technique that revises the uncertain parameters of a reservoir simulation model in order to obtain a response consistent with the observed production data. Reservoir properties carry uncertainty because they are acquired by indirect methods, which results in discrepancies between observed data and the reservoir simulator response. One history matching method is the Ensemble Smoother with Multiple Data Assimilation (ES-MDA), in which an ensemble of models is used to quantify parameter uncertainties. In ES-MDA, the number of iterations must be defined by the user before the application, and it is a determining parameter for a good-quality match. One way to handle this is to implement adaptive methodologies in which the algorithm keeps iterating until it reaches a good match. Also, in large-scale reservoir models it is necessary to apply a localization technique in order to mitigate spurious correlations and excessive uncertainty reduction in the posterior models. The main objective of this dissertation is to evaluate two main parameters of history matching with an adaptive ES-MDA: localization and ensemble size, verifying the impact of these parameters on the adaptive scheme. The adaptive ES-MDA used in this work defines the number of iterations and the inflation factors automatically, and distance-based Kalman gain localization was used to evaluate the influence of localization. The influence of the parameters was analyzed by applying the methodology to the UNISIM-I-H benchmark: a synthetic large-scale reservoir model based on an offshore Brazilian field. The experiments presented a considerable reduction of the objective function for all cases, showing the ability of the adaptive methodology to keep iterating until a desirable outcome is obtained.
Regarding the parameters evaluated, a relationship between the localization and the number of iterations required to complete the adaptive algorithm was verified; this influence was not observed as a function of the ensemble size.
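The two ingredients evaluated in the dissertation can be sketched together: an MDA loop whose inflation factors satisfy sum(1/alpha) = 1, and a distance-based taper on the Kalman gain. The compact-support taper below is a simplified stand-in for the usual fifth-order Gaspari-Cohn function, and the direct-observation toy problem is illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simple compactly supported distance taper (a stand-in for the
# fifth-order Gaspari-Cohn localization function).
def taper(dist, radius):
    r = np.clip(dist / radius, 0.0, 1.0)
    return (1.0 - r) ** 2 * (1.0 + 2.0 * r)

nm, ne = 50, 100
obs_loc = np.array([10, 25, 40])                 # observed cells
m_true = np.sin(np.arange(nm) / 5.0)
sigma_d = 0.05
d = m_true[obs_loc] + rng.normal(0.0, sigma_d, obs_loc.size)

M = rng.normal(0.0, 1.0, size=(nm, ne))          # prior ensemble
alphas = [4.0, 4.0, 4.0, 4.0]                    # inflation factors, sum(1/a) = 1
rho = taper(np.abs(np.arange(nm)[:, None] - obs_loc[None, :]), radius=15.0)

for a in alphas:                                 # multiple data assimilations
    Y = M[obs_loc, :]
    D = d[:, None] + rng.normal(0.0, np.sqrt(a) * sigma_d, (obs_loc.size, ne))
    dM = M - M.mean(axis=1, keepdims=True)
    dY = Y - Y.mean(axis=1, keepdims=True)
    Cmy = dM @ dY.T / (ne - 1)                   # parameter-data cross-covariance
    Cyy = np.cov(Y, ddof=1)
    K = (rho * Cmy) @ np.linalg.inv(Cyy + a * sigma_d ** 2 * np.eye(obs_loc.size))
    M = M + K @ (D - Y)                          # localized Kalman update
```

An adaptive variant would choose each `alpha` on the fly (keeping the reciprocals summing to one) and stop when the data misfit is acceptable, instead of fixing four equal factors in advance.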
19. Structural and shape reconstruction using inverse problems and machine learning techniques with application to hydrocarbon reservoirs. Etienam, Clement, January 2019.
This thesis introduces novel ideas in subsurface reservoir model calibration, known as history matching in the reservoir engineering community. The target of history matching is to reproduce historical pressure and production data from the producing wells with the output of the reservoir simulator, for the sole purpose of reducing model uncertainty and improving confidence in production forecasts. Ensemble-based methods such as the Ensemble Kalman Filter (EnKF) and the Ensemble Smoother with Multiple Data Assimilation (ES-MDA) have been proposed for history matching in the literature. EnKF/ES-MDA are Monte Carlo ensemble filters in which the covariance is represented around the ensemble mean rather than the uncertain true model. In EnKF/ES-MDA no gradient calculation is required: the mean of the ensemble of realisations provides the best estimate, with the ensemble itself estimating the probability density. However, because of the inherent assumptions of linearity and Gaussianity of the petrophysical property distribution, EnKF/ES-MDA does not provide an acceptable history match or characterisation of uncertainty when tasked with calibrating reservoir models with channel-like structures. One of the novel methods introduced in this thesis combines successive parameter and shape reconstruction using level-set functions (EnKF/ES-MDA-level set), where the indicator functions of the spatial permeability fields are transformed into signed distances. These signed distance functions (better suited to the Gaussian requirement of EnKF/ES-MDA) are then updated during the EnKF/ES-MDA inversion. The method outperforms standard EnKF/ES-MDA in retaining the geological realism of channels during and after history matching, and also yields a lower root-mean-square (RMS) misfit than standard EnKF/ES-MDA.
To improve on the petrophysical reconstruction attained with the EnKF/ES-MDA-level set technique, a novel parametrisation incorporating an unsupervised machine learning method for recovery of the permeability and porosity fields is developed. The permeability and porosity fields are posed as a sparse recovery problem, and a novel SELE (Sparsity-Ensemble optimization-Level-set Ensemble optimisation) approach is proposed for the history matching. In SELE, some realisations are learned using K-SVD (a K-means-like clustering method based on the singular value decomposition) to generate an overcomplete codebook, or dictionary. This dictionary is combined with Orthogonal Matching Pursuit (OMP) to ease the ill-posed nature of the production data inversion, converting the permeability/porosity field into a sparse domain. SELE enforces prior structural information on the model during the history matching and reduces the computational complexity of the Kalman gain matrix, leading to faster attainment of the minimum of the cost function. From the results shown in the thesis, SELE outperforms conventional EnKF/ES-MDA in matching the historical production data, evident in the lower RMS value and the high geological realism/similarity to the true reservoir model.
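The sparsity step can be illustrated with a plain Orthogonal Matching Pursuit over a random dictionary (standing in for a K-SVD-learned one): greedily pick the atom most correlated with the residual, re-fit by least squares on the selected support, and repeat. The sizes and the synthetic sparse "field" below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Orthogonal Matching Pursuit: greedily select the dictionary atom most
# correlated with the residual, then re-fit by least squares on the support.
def omp(D, y, n_nonzero):
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))  # best atom
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)           # re-fit
        residual = y - Ds @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Random overcomplete dictionary standing in for a K-SVD-learned one.
n, n_atoms, k = 100, 150, 3
D = rng.normal(size=(n, n_atoms))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
x_true = np.zeros(n_atoms)
idx = rng.choice(n_atoms, k, replace=False)
x_true[idx] = rng.uniform(2.0, 3.0, k) * rng.choice([-1.0, 1.0], k)
y = D @ x_true                                   # "field" with sparse code

x_hat = omp(D, y, k)                             # sparse recovery
```

In the SELE setting the recovered coefficients, not the raw grid values, are what the ensemble update manipulates, which is where the reduction in Kalman gain complexity comes from.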
20. Multiscale-Streamline Inversion for High-Resolution Reservoir Models. Stenerud, Vegard, January 2007.
The topic of this thesis is streamline-based integration of dynamic data for porous media systems, particularly petroleum reservoirs. In the petroleum industry the integration of dynamic data is usually referred to as history matching. The thesis starts by giving an introduction to streamline-based history-matching methods, and then presents implementations and extensions of two existing methods.

The first method is based on obtaining modifications of streamline-effective properties, which are subsequently propagated to the underlying simulation grid for further iterations. Two improvements to the original method are proposed. First, the improved approach involves fewer approximations, enables matching of porosity, and can account for gravity. Second, a multiscale approach is applied in which the data integration is performed on a hierarchy of coarsened grids. The approach proved robust and gave a faster and better match to the data.

The second method is the so-called generalized travel-time inversion (GTTI) method, which has proven very robust and efficient for history matching. The key to the efficiency of this method is its quasilinear convergence properties and the use of analytic streamline-based sensitivity coefficients. GTTI is applied together with an efficient multiscale-streamline simulator, in which the pressure solver is based on a multiscale mixed finite-element method (MsMFEM). To make the history matching more efficient, a selective work-reduction strategy, based on the sensitivities provided by the inversion method, is proposed for the pressure solver. In addition, a method for improved mass conservation in streamline simulation is applied, which requires far fewer streamlines to obtain accurate production-response curves. For a reservoir model with more than one million grid blocks, 69 producers, and 32 injectors, the data integration took less than twenty minutes on a standard desktop computer. Finally, we propose an extension of GTTI to fully unstructured grids, where we in particular address issues regarding regularization and the computation of sensitivities on unstructured grids with large differences in cell sizes.

Paper I reprinted with kind permission of Elsevier, sciencedirect.com.
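The central quantity in generalized travel-time inversion is a single optimal time shift between observed and simulated production responses at each well, rather than a pointwise amplitude misfit. A minimal sketch, assuming uniformly sampled curves and a brute-force search over integer sample shifts (the thesis pairs this shift with analytic streamline sensitivities in a quasilinear inversion):

```python
import numpy as np

# Reduce the observed-vs-simulated production misfit at a well to one
# optimal time shift, found by brute force over integer sample shifts.
def travel_time_shift(obs, sim, max_shift):
    # positive shift: the simulated response arrives later than observed
    errs = {s: float(np.linalg.norm(obs - np.roll(sim, -s)))
            for s in range(-max_shift, max_shift + 1)}
    return min(errs, key=errs.get)

t = np.linspace(0.0, 10.0, 200)                  # uniform time sampling
obs = np.exp(-0.5 * (t - 5.0) ** 2)              # observed response pulse
sim = np.exp(-0.5 * (t - 6.0) ** 2)              # simulated pulse, arriving late
shift = travel_time_shift(obs, sim, max_shift=50)
```

Note that `np.roll` wraps around, which is harmless here because the pulses vanish at both ends of the record; with the sample spacing of 10/199 the one-time-unit lag corresponds to a shift of about 20 samples.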