
An efficient Bayesian formulation for production data integration into reservoir models

Current techniques for production data integration into reservoir models can be broadly grouped into two categories: deterministic and Bayesian. The deterministic approach relies on imposing parameter smoothness constraints using spatial derivatives to ensure large-scale changes consistent with the low resolution of the production data. The Bayesian approach is based on prior estimates of model statistics, such as the parameter covariance and data errors, and attempts to generate posterior models consistent with both the static and dynamic data. Both approaches have been successful for field-scale applications, although the computational costs associated with the two methods can vary widely. This is particularly true of the Bayesian approach, which relies on a prior covariance matrix that can be large and full. To date, no systematic study has been carried out to examine the scaling properties and relative merits of the two methods. The purpose of this work is twofold. First, we systematically investigate how the computational costs of the deterministic and Bayesian approaches scale for realistic field-scale applications. Our results indicate that the deterministic approach exhibits a linear increase in CPU time with model size, compared to a quadratic increase for the Bayesian approach. Second, we propose a fast and robust adaptation of the Bayesian formulation that preserves the statistical foundation of the Bayesian method while exhibiting a scaling property similar to that of the deterministic approach. This can lead to orders-of-magnitude savings in computation time for models larger than 100,000 grid blocks. We demonstrate the power and utility of the proposed method using synthetic examples and a field example from the Goldsmith field, a carbonate reservoir in west Texas. Combining the new efficient Bayesian formulation with the Randomized Maximum Likelihood method allows straightforward assessment of uncertainty: the former provides computational efficiency, and the latter avoids rejection of expensive conditioned realizations.
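For context, the deterministic (regularized) and Bayesian formulations compared in the abstract are commonly posed as least-squares objective functions. The expressions below are the standard textbook forms, written with illustrative symbols ($g$ for the flow-simulation response, $d_{\mathrm{obs}}$ for the observed production data, $C_D$ and $C_M$ for the data-error and prior model covariances, $L$ for a discrete spatial-derivative roughness operator, and $\beta_1$, $\beta_2$ for regularization weights); they sketch the general framework rather than the exact formulation developed in the dissertation.

Deterministic formulation (data misfit plus norm and roughness penalties):

\[ J_{\mathrm{det}}(m) = \lVert g(m) - d_{\mathrm{obs}} \rVert^{2} + \beta_{1}\,\lVert m - m_{\mathrm{prior}} \rVert^{2} + \beta_{2}\,\lVert L m \rVert^{2} \]

Bayesian (maximum a posteriori) formulation, weighted by the data-error and prior model covariances:

\[ J_{\mathrm{Bayes}}(m) = \bigl(g(m) - d_{\mathrm{obs}}\bigr)^{T} C_{D}^{-1} \bigl(g(m) - d_{\mathrm{obs}}\bigr) + \bigl(m - m_{\mathrm{prior}}\bigr)^{T} C_{M}^{-1} \bigl(m - m_{\mathrm{prior}}\bigr) \]

In the Randomized Maximum Likelihood method cited for uncertainty assessment, each conditional realization is obtained by minimizing the Bayesian objective with $d_{\mathrm{obs}}$ and $m_{\mathrm{prior}}$ replaced by unconditional samples $d_{uc}^{(i)} \sim N(d_{\mathrm{obs}}, C_D)$ and $m_{uc}^{(i)} \sim N(m_{\mathrm{prior}}, C_M)$, so each minimization yields an approximate sample from the posterior distribution without a separate accept/reject step.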

Identifier: oai:union.ndltd.org:TEXASAandM/oai:repository.tamu.edu:1969.1/1633
Date: 17 February 2005
Creators: Leonardo, Vega Velasquez
Contributors: Datta-Gupta, Akhil, Mallick, Bani K., Holditch, Steve A., Lee, W. John
Publisher: Texas A&M University
Source Sets: Texas A and M University
Language: en_US
Detected Language: English
Type: Electronic Dissertation, text
Format: 3684343 bytes, electronic, application/pdf, born digital
