11 |
Particle tracking proxies for prediction of CO₂ plume migration within a model selection framework
Bhowmik, Sayantan, 24 June 2014
Geologic sequestration of CO₂ in deep saline aquifers has been studied extensively over the past two decades as a viable method of reducing anthropogenic carbon emissions. Monitoring and predicting the movement of injected CO₂ is important for assessing containment of the gas within the storage volume and for taking corrective measures if required. Given the uncertainty in the geologic architecture of storage aquifers, it is reasonable to represent our prior knowledge of the project area with a large suite of aquifer models. Simulating such a large number of models with traditional numerical flow simulators to evaluate uncertainty is computationally expensive. A novel stochastic workflow for characterizing plume migration, based on a model selection algorithm developed by Mantilla in 2011, has been implemented. The approach comprises four main steps: (1) assessing the connectivity/dynamic characteristics of a large prior ensemble of models using proxies; (2) clustering the models using principal component analysis or multidimensional scaling coupled with k-means clustering; (3) selecting models by applying Bayes' rule on the reduced model space; and (4) expanding the selected models using an ensemble pattern-based matching scheme. In this dissertation, two proxies based on particle tracking are developed to assess the flow connectivity of models in the initial set. The proxies serve as fast approximations of finite-difference flow simulation and are meant to provide rapid estimates of the connectivity of the aquifer models. Modifications have also been made within the model selection workflow to accommodate the particular problem of application to a carbon sequestration project. The applicability of the proxies is tested on both synthetic models and real field case studies. The first proxy is shown to capture areal migration reasonably well, while failing to adequately capture the vertical buoyancy-driven flow of CO₂.
This limitation is addressed by the second proxy, which is demonstrated to capture not only horizontal migration but also buoyancy-driven flow. Both proxies are tested as standalone approximations of numerical simulation and within the larger model selection framework.
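Step 2 of the workflow above (dimension reduction followed by clustering) can be sketched in a few lines. This is a generic numpy illustration with made-up proxy responses, not the dissertation's implementation; the array shapes, cluster count, and component count are all hypothetical choices.

```python
import numpy as np

def pca_kmeans(responses, n_components=5, n_clusters=8, n_iter=50, seed=0):
    """Cluster proxy responses: PCA via SVD, then plain k-means.
    A minimal numpy sketch of the reduce-then-cluster step; in practice a
    library implementation (e.g. scikit-learn) would be used."""
    rng = np.random.default_rng(seed)
    X = responses - responses.mean(axis=0)
    # PCA: project onto the leading right-singular vectors
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    coords = X @ Vt[:n_components].T
    # k-means: alternate nearest-centroid assignment and centroid update
    centers = coords[rng.choice(len(coords), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(coords[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(n_clusters):
            if np.any(labels == j):
                centers[j] = coords[labels == j].mean(axis=0)
    return coords, labels
```

Steps 3 and 4 would then flow-simulate one representative model per cluster and weight the clusters with Bayes' rule against the observed data.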
|
12 |
Caractérisation 3D de l'hétérogénéité de la perméabilité à l'échelle de l'échantillon / 3D Characterization of Permeability Heterogeneity at the Core Scale
Soltani, Amir, 21 October 2008
The objective of this study is to develop new methodologies to identify the spatial distribution of permeability values inside heterogeneous core samples. We carried out laboratory viscous miscible displacements, injecting high-viscosity glycerin into core samples initially saturated with low-viscosity brine. The pressure drop across the samples was measured as a function of time until breakthrough. Meanwhile, CT-scan measurements provided a 3D porosity map plus several 3D maps of the concentration distribution inside the core samples at different times. A simple permeability mapping technique was developed that deduces a one-dimensional permeability profile along the flow direction from the measured pressure-drop data. The method was validated with both numerical and laboratory experiments. To go beyond one-dimensional characterization of permeability in cores, we developed an iterative process for matching the pressure and concentration data. This method consists of two steps: a simple optimization to capture the permeability heterogeneity along the flow-direction axis, and a more complex optimization to capture transverse permeability heterogeneity. The methodology was validated on numerical data and then applied to the data collected from two of the laboratory viscous miscible displacements. We showed that the final 3D permeability models reproduce the measured pressure-drop and concentration data well.
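The 1D interpretation step can be illustrated under strong simplifying assumptions (sharp piston-like front, incompressible 1D Darcy flow, known mean porosity). The sketch below is one reading of the idea, not the thesis's exact formulation: as the viscous front sweeps a slice, the pressure drop grows in proportion to that slice's flow resistance, so the local permeability follows from the slope of the measured pressure drop versus front position.

```python
import numpy as np

def permeability_profile(t, dp, q, A, mu_inj, mu_res, phi):
    """Deduce a 1D permeability profile along the flow direction from the
    pressure-drop history of a piston-like miscible displacement (sketch).
    t, dp : times and measured pressure drops
    q, A  : injection rate and core cross-sectional area
    mu_inj, mu_res : viscosities of the injected and resident fluids
    phi   : mean porosity (front position x_f = q*t/(A*phi))
    """
    x_f = q * t / (A * phi)          # front position at each time
    # Darcy: dp grows by (q/A)*(mu_inj - mu_res)/k(x_f) per metre swept
    ddp_dx = np.gradient(dp, x_f)
    k = (q / A) * (mu_inj - mu_res) / ddp_dx
    return x_f, k
```

For a homogeneous core the recovered profile is flat; heterogeneity shows up as variation of k along x_f.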
|
13 |
Analysis of main parameters in adaptive ES-MDA history matching / Análise dos principais parâmetros no ajuste de histórico utilizando ES-MDA adaptativo
Ranazzi, Paulo Henrique, 6 June 2019
In reservoir engineering, history matching is the technique that revises the uncertain parameters of a reservoir simulation model in order to obtain a response consistent with the observed production data. Reservoir properties carry uncertainty because they are acquired by indirect methods, which results in discrepancies between the observed data and the reservoir simulator response. One history matching method is the Ensemble Smoother with Multiple Data Assimilation (ES-MDA), in which an ensemble of models is used to quantify parameter uncertainties. In ES-MDA, the number of iterations must be defined by the user prior to the application, and it is a determinant parameter for a good-quality match. One way to handle this is to implement adaptive methodologies in which the algorithm keeps iterating until it reaches a good match. Also, in large-scale reservoir models it is necessary to apply a localization technique in order to mitigate spurious correlations and excessive uncertainty reduction in the posterior models. The main objective of this dissertation is to evaluate two main parameters of history matching when using an adaptive ES-MDA: localization and ensemble size, verifying the impact of these parameters on the adaptive scheme. The adaptive ES-MDA used in this work defines the number of iterations and the inflation factors automatically, and distance-based Kalman-gain localization was used to evaluate the influence of localization. The influence of the parameters was analyzed by applying the methodology to the UNISIM-I-H benchmark: a synthetic large-scale reservoir model based on an offshore Brazilian field. The experiments showed a considerable reduction of the objective function for all cases, demonstrating the ability of the adaptive methodology to keep iterating until a desirable outcome is obtained. Regarding the parameters evaluated, a relationship between localization and the number of iterations required to complete the adaptive algorithm was verified, whereas no such influence was observed as a function of the ensemble size.
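A single ES-MDA assimilation step can be sketched as below. This is a minimal, generic update with perturbed observations and an inflation factor alpha (the reciprocals of the alphas must sum to one over all iterations); it omits the localization and the adaptive choice of iteration count studied in the dissertation, and all names are illustrative.

```python
import numpy as np

def es_mda_update(M, D_pred, d_obs, C_d, alpha, rng):
    """One ES-MDA iteration (sketch, no localization).
    M      : (Nm, Ne) ensemble of model parameters
    D_pred : (Nd, Ne) simulated data for each ensemble member
    d_obs  : (Nd,)    observed data
    C_d    : (Nd, Nd) measurement-error covariance
    alpha  : inflation factor for this iteration
    """
    Ne = M.shape[1]
    # Perturb observations with inflated measurement noise
    D_obs = d_obs[:, None] + np.sqrt(alpha) * rng.multivariate_normal(
        np.zeros(len(d_obs)), C_d, size=Ne).T
    # Ensemble anomalies and cross-covariances
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D_pred - D_pred.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (Ne - 1)
    C_dd = dD @ dD.T / (Ne - 1)
    K = C_md @ np.linalg.inv(C_dd + alpha * C_d)  # Kalman gain
    return M + K @ (D_obs - D_pred)
```

In an adaptive scheme this update is repeated, re-simulating D_pred each iteration, until the data misfit stops improving.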
|
14 |
Structural and shape reconstruction using inverse problems and machine learning techniques with application to hydrocarbon reservoirs
Etienam, Clement, January 2019
This thesis introduces novel ideas in subsurface reservoir model calibration, known as history matching in the reservoir engineering community. The goal of history matching is to reproduce historical pressure and production data from the producing wells with the output of the reservoir simulator, for the sole purpose of reducing uncertainty in such models and improving confidence in production forecasts. Ensemble-based methods such as the Ensemble Kalman Filter (EnKF) and the Ensemble Smoother with Multiple Data Assimilation (ES-MDA) have been proposed for history matching in the literature. EnKF/ES-MDA are Monte Carlo ensemble filters in which the covariance is represented around the ensemble mean rather than around the uncertain true model. In EnKF/ES-MDA no gradient calculations are required: the mean of the ensemble of realisations provides the best estimate, with the ensemble itself estimating the probability density. However, because of the inherent assumptions of linearity and Gaussianity of the petrophysical property distribution, EnKF/ES-MDA does not provide an acceptable history match or characterisation of uncertainty when tasked with calibrating reservoir models with channel-like structures. One of the novel methods introduced in this thesis combines successive parameter and shape reconstruction using level-set functions (EnKF/ES-MDA-level set), in which the indicator functions of the spatial permeability fields are transformed into signed distances. These signed-distance functions (better suited to the Gaussian requirement of EnKF/ES-MDA) are then updated during the EnKF/ES-MDA inversion. The method outperforms standard EnKF/ES-MDA in retaining the geological realism of channels during and after history matching, and also yields a lower root-mean-square (RMS) misfit than standard EnKF/ES-MDA.
To improve on the petrophysical reconstruction attained with the EnKF/ES-MDA-level set technique, a novel parametrisation incorporating an unsupervised machine learning method for recovery of the permeability and porosity fields is developed. The permeability and porosity fields are posed as a sparse-field recovery problem, and a novel SELE (Sparsity-Ensemble optimisation-Level-set Ensemble optimisation) approach is proposed for the history matching. In SELE, some realisations are learned using K-SVD (a dictionary-learning generalisation of K-means clustering based on the singular value decomposition) to generate an overcomplete codebook, or dictionary. This dictionary is combined with Orthogonal Matching Pursuit (OMP) to ease the ill-posed nature of the production-data inversion, converting the permeability/porosity field into a sparse domain. SELE enforces prior structural information on the model during the history matching and reduces the computational complexity of the Kalman gain matrix, leading to faster attainment of the minimum of the cost function. From the results shown in the thesis, SELE outperforms conventional EnKF/ES-MDA in matching the historical production data, evident in the lower RMS value and a high geological realism/similarity to the true reservoir model.
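The level-set reparametrisation at the heart of the first method can be sketched as follows: map a binary channel-indicator field to a signed-distance function before the ensemble update, then threshold it back afterwards. This is a generic illustration of the idea (using scipy's Euclidean distance transform), not the thesis implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def facies_to_signed_distance(facies):
    """Map a binary channel-indicator field (1 = channel) to a signed-
    distance function: positive inside the channel, negative outside.
    Signed distances vary smoothly and are better suited to the Gaussian
    assumptions of EnKF/ES-MDA than the raw indicator."""
    inside = distance_transform_edt(facies)        # distance to non-channel
    outside = distance_transform_edt(1 - facies)   # distance to channel
    return inside - outside

def signed_distance_to_facies(sd):
    """After the ensemble update, threshold the level set at zero."""
    return (sd > 0).astype(int)
```

The EnKF/ES-MDA update is then applied to the signed-distance fields, and thresholding recovers geologically crisp channel boundaries.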
|
15 |
Multiscale-Streamline Inversion for High-Resolution Reservoir Models
Stenerud, Vegard, January 2007
The topic of this thesis is streamline-based integration of dynamic data for porous media systems, particularly petroleum reservoirs. In the petroleum industry the integration of dynamic data is usually referred to as history matching. The thesis starts out by giving an introduction to streamline-based history-matching methods. Implementations and extensions of two existing methods for streamline-based history matching are then presented.

The first method pursued is based on obtaining modifications to streamline-effective properties, which are subsequently propagated to the underlying simulation grid for further iterations. Two improvements to the original method are proposed. First, the improved approach involves fewer approximations, enables matching of porosity, and can account for gravity. Second, a multiscale approach is applied in which the data integration is performed on a hierarchy of coarsened grids. The approach proved robust and gave a faster and better match to the data.

The second method pursued is the so-called generalized travel-time inversion (GTTI) method, which has previously proven very robust and efficient for history matching. The key to the efficiency of this method is its quasi-linear convergence properties and its use of analytic streamline-based sensitivity coefficients. GTTI is applied together with an efficient multiscale-streamline simulator in which the pressure solver is based on a multiscale mixed finite-element method (MsMFEM). To make the history matching more efficient, a selective work-reduction strategy, based on the sensitivities provided by the inversion method, is proposed for the pressure solver. In addition, a method for improved mass conservation in streamline simulation is applied, which requires far fewer streamlines to obtain accurate production-response curves.

For a reservoir model with more than one million grid blocks, 69 producers and 32 injectors, the data integration took less than twenty minutes on a standard desktop computer. Finally, we propose an extension of GTTI to fully unstructured grids, where we in particular address issues regarding regularization and the computation of sensitivities on unstructured grids with large differences in cell sizes. / Paper I reprinted with kind permission of Elsevier, sciencedirect.com
|
17 |
Optimal Reservoir Management and Well Placement Under Geologic Uncertainty
Taware, Satyajit Vijay, August 2012
Reservoir management, sometimes referred to as asset management in the context of petroleum reservoirs, has become recognized as an important facet of petroleum reservoir development and production operations.
In the first stage of planning field development, the simulation model is calibrated to dynamic data (history matching). One aim of this research is to extend the streamline-based generalized travel-time inversion method to full-field models with multimillion cells through the use of grid coarsening. This makes streamline-based inversion suitable for high-resolution simulation models with decades-long production histories and numerous wells, by significantly reducing the computational effort. In addition, a novel workflow is proposed to integrate well bottom-hole pressure data during model calibration, and the approach is illustrated via application to CO₂ sequestration.

In the second stage, field development strategies are optimized. The strategies focus primarily on rate optimization followed by infill drilling. A method is proposed to modify the streamline-based rate-optimization approach, which previously focused on maximizing sweep efficiency by equalizing the arrival times of the waterfront at the producers, to account for accelerated production and thereby improve the net present value (NPV). An optimal compromise between maximizing sweep efficiency and maximizing NPV can be selected from a 'trade-off curve.' The proposed method is demonstrated in a field-scale application that accounts for geological uncertainty.

Finally, a novel method for well placement optimization is proposed that relies on streamlines and time of flight to first locate potential regions of poorly swept and drained oil. Specifically, the proposed approach utilizes a dynamic measure based on the total streamline time of flight, combined with static and dynamic parameters, to identify 'sweet spots' for infill drilling. The sweet spots can either be used directly as potential well-placement locations or serve as starting points for a formal optimization technique. The main advantage of the proposed method is its computational efficiency in calculating the dynamic-measure map. The complete workflow was demonstrated on a multimillion-cell reservoir model of a mature carbonate field with notable success; the infill locations based on the dynamic-measure map have been confirmed by subsequent drilling.
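The streamline time of flight underpinning the dynamic measure is the travel time of a neutral tracer along a traced streamline, tau = integral of ds/|v|. A minimal sketch with hypothetical inputs (a streamline simulator accumulates this cell by cell during tracing):

```python
import numpy as np

def time_of_flight(points, velocities):
    """Cumulative streamline time of flight, tau = integral ds/|v|,
    along a traced path (illustrative sketch).
    points     : (N, 3) coordinates along the streamline
    velocities : (N, 3) interstitial velocities at those points
    """
    ds = np.linalg.norm(np.diff(points, axis=0), axis=1)   # segment lengths
    speed = np.linalg.norm(velocities, axis=1)
    mid_speed = 0.5 * (speed[:-1] + speed[1:])             # trapezoid-style
    return np.concatenate([[0.0], np.cumsum(ds / mid_speed)])
```

Regions traversed only by streamlines with very large injector-to-producer time of flight are poorly swept, which is what flags them as candidate sweet spots.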
|
18 |
A Hybrid Ensemble Kalman Filter for Nonlinear Dynamics
Watanabe, Shingo, December 2009
In this thesis, we propose two novel hybrid Ensemble Kalman Filter (EnKF) approaches to overcome limitations of the traditional EnKF. The first approach swaps the ensemble mean for the ensemble mode estimate to improve the covariance calculation in EnKF. The second approach applies a coarse-scale permeability constraint during the EnKF update. Both hybrid EnKF approaches are coupled with the streamline-based generalized travel-time inversion (GTTI) algorithm to periodically update the mean of the ensemble and to sequentially update the ensemble in a hybrid fashion.

Through the development of the hybrid EnKF algorithm, the characteristics of the EnKF are also investigated. We found that the limits imposed on the updated values constrain the assimilation results significantly, and that it is important to assess the measurement-error variance in order to strike a proper balance between preserving the prior information and fitting the observation data. Overshooting problems can be mitigated with streamline-based covariance localization and a normal-score transformation of the parameters to support the Gaussian error statistics.

The mean-mode swapping approach gives a better match to the data as long as the mode solution of the inversion process is satisfactory in terms of matching the observation trajectory. The coarse-scale permeability-constrained hybrid approach gives better parameter estimation in terms of capturing the main trend of the permeability field, and each ensemble member is driven toward the posterior mode solution from the inversion process. However, the WWCT (well water-cut) and pressure responses need to be captured by the inversion process in order to generate physically plausible coarse-scale permeability data to constrain the hybrid EnKF update.

Uncertainty quantification methods for EnKF were developed to verify the performance of the proposed hybrid EnKF against the traditional EnKF. The results show better assimilation quality through the sequence of updates, and a stable solution is demonstrated. The potential of the proposed hybrid approaches is demonstrated through synthetic examples and a field-scale application.
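The normal-score transformation mentioned above maps parameter samples to standard-normal scores via their empirical ranks, so that the Gaussian error statistics assumed by the EnKF hold better. A minimal rank-based sketch (one common construction; the thesis's exact variant may differ):

```python
import numpy as np
from scipy.stats import norm

def normal_score_transform(x):
    """Map samples to standard-normal scores by empirical rank.
    Rank r (0-based) among n samples maps to the Gaussian quantile at
    probability (r + 0.5)/n; the transform is monotone, so ordering
    (and hence spatial structure of extremes) is preserved."""
    ranks = np.argsort(np.argsort(x))
    p = (ranks + 0.5) / len(x)
    return norm.ppf(p)
```

The EnKF update is performed in the transformed (Gaussian) space; the inverse mapping, interpolating scores back to the original empirical distribution, restores the physical parameter values.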
|
19 |
Integration of dynamic data into reservoir description using streamline approaches
He, Zhong, 15 November 2004
Integration of dynamic data is critical for reliable reservoir description and has been an outstanding challenge for the petroleum industry. This work develops practical dynamic data integration techniques using streamline approaches to condition static geological models to various kinds of dynamic data, including two-phase production history, interference pressure observations and primary production data. The proposed techniques are computationally efficient and robust, and thus well-suited for large-scale field applications. We can account for realistic field conditions, such as gravity, and changing field conditions, arising from infill drilling, pattern conversion, and recompletion, etc., during the integration of two-phase production data. Our approach is fast and exhibits rapid convergence even when the initial model is far from the solution. The power and practical applicability of the proposed techniques are demonstrated with a variety of field examples.
To integrate two-phase production data, a travel-time inversion analogous to seismic inversion is adopted. We extend the method via a 'generalized travel-time' inversion to ensure matching of the entire production response rather than just a single time point while retaining most of the quasi-linear property of travel-time inversion. To integrate the interference pressure data, we propose an alternating procedure of travel-time inversion and peak amplitude inversion or pressure inversion to improve the overall matching of the pressure response.
A key component of the proposed techniques is the efficient computation of the sensitivities of dynamic responses with respect to reservoir parameters. These sensitivities are calculated analytically using a single forward simulation. Thus, our methods can be orders of magnitude faster than finite-difference based numerical approaches that require multiple forward simulations.
The streamline approach has also been extended to identify reservoir compartmentalization and flow barriers using primary production data in conjunction with decline type-curve analysis. The streamline 'diffusive' time of flight provides an effective way to calculate the drainage volume in 3D heterogeneous reservoirs. Flow barriers and reservoir compartmentalization are inferred by matching the drainage volumes from the streamline-based calculation and from decline type-curve analysis. The proposed approach is well-suited for application in the early stages of field development, when well data are limited, and has been illustrated using a field example from the Gulf of Mexico.
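The generalized travel time described above can be illustrated with a small sketch: find the single time shift of the whole simulated response that best matches the observed response, so the entire curve is matched rather than a single breakthrough point. The grid search and discretisation below are illustrative choices, not the thesis's algorithm.

```python
import numpy as np

def generalized_travel_time(t, obs, sim, max_shift, n=201):
    """Return the time shift dt that best aligns the simulated response
    with the observed one (a sketch of the 'generalized travel time').
    Shifting sim by dt means evaluating sim at time t - dt."""
    shifts = np.linspace(-max_shift, max_shift, n)

    def misfit(dt):
        shifted = np.interp(t, t + dt, sim)   # sim delayed by dt
        return np.sum((shifted - obs) ** 2)

    return shifts[int(np.argmin([misfit(dt) for dt in shifts]))]
```

The inversion then adjusts reservoir parameters to drive this optimal shift toward zero at every well, which preserves the quasi-linear behaviour of travel-time matching while honouring the full response.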
|
20 |
Fast history matching of finite-difference model, compressible and three-phase flow using streamline-derived sensitivities
Cheng, Hao, 30 October 2006
Reconciling high-resolution geologic models to field production history is still a very time-consuming procedure. Recently, streamline-based assisted and automatic history matching techniques, especially production data integration by 'travel-time matching,' have shown great potential in this regard. But no systematic study has been done to examine the merits of travel-time matching compared to the more traditional amplitude matching for field-scale application. Besides, most applications were limited to two-phase water-oil flow because current streamline models are limited in their ability to incorporate highly compressible flow in a rigorous and computationally efficient manner.

The purpose of this work is fourfold. First, we quantitatively investigated the nonlinearities in the inverse problems related to travel-time, generalized travel-time, and amplitude matching during production data integration, and their impact on the solution and its convergence. Results show that the commonly used amplitude inversion can be orders of magnitude more nonlinear than the travel-time inversion. Both the travel-time and the generalized travel-time inversion (GTTI) are shown to be more robust and to exhibit superior convergence characteristics.

Second, streamline-based assisted history matching was enhanced in two important aspects that significantly improve its efficiency and effectiveness. We utilize streamline-derived analytic sensitivities to determine the location and magnitude of the changes needed to improve the history match, and we use the iterative GTTI for model updating. Our approach leads to significant savings in time and manpower.

Third, a novel approach to history matching finite-difference models was developed that combines the efficiency of the analytic sensitivity computation of streamline models with the versatility of finite-difference simulation, which can account for complex physics.

Finally, we developed an approach to history matching three-phase flow using a novel compressible streamline formulation and streamline-derived analytic sensitivities. Streamline models were generalized to account for compressible flow by introducing a relative density of total fluids along streamlines and a density-dependent source term in the saturation equation. The analytic sensitivities are calculated based on this rigorous streamline formulation.

The power and utility of our approaches have been demonstrated using both synthetic and field examples.
|