1

An efficient Bayesian approach to history matching and uncertainty assessment

Yuan, Chengwu 25 April 2007
Conditioning reservoir models to production data and assessing the associated uncertainty can be done using Bayes' theorem. This inverse problem can be computationally intensive, generally requiring orders of magnitude more computation time than the forward flow simulation, which makes it impractical to assess uncertainty through multiple history-matched realizations in field applications. We propose a robust adaptation of the Bayesian formulation that overcomes these limitations and is suitable for large-scale applications. It is based on a generalized travel time inversion and uses a streamline-based analytic approach to compute the sensitivity of the travel time with respect to reservoir parameters. Streamlines are computed from the velocity field available from finite-difference simulators. We use an iterative minimization algorithm based on an efficient SVD (singular value decomposition) and a numerical 'stencil' for calculating the square root of the inverse of the prior covariance matrix. This approach is computationally efficient, and the linear scaling of CPU time with model size makes it suitable for large-scale applications. It then becomes feasible to assess uncertainty by sampling from the posterior probability distribution using the Randomized Maximum Likelihood method, an approximate Markov chain Monte Carlo algorithm. We apply this approach to a field case from the Goldsmith San Andres Unit (GSAU) in West Texas. In the application, we show the effect of prior modeling on posterior uncertainty by comparing results from prior models built with a Cloud Transform and with Collocated Sequential Gaussian Simulation. Fully exploiting prior information reduces both the prior uncertainty and the posterior uncertainty after dynamic data integration, and thus improves the accuracy of predictions of future performance.
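A minimal numerical sketch of the sampling idea described above, not the thesis implementation: it assumes a linearized forward operator G, uncorrelated data errors, and a sparse first-difference operator standing in for the square root of the inverse prior covariance, and it draws Randomized Maximum Likelihood-style samples by minimizing a perturbed, regularized least-squares objective with a truncated SVD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear(ized) inverse problem: d = G m + noise
n_param, n_data = 50, 20
G = rng.normal(size=(n_data, n_param))            # linearized sensitivities
m_true = rng.normal(size=n_param)
sigma_d = 0.1
d_obs = G @ m_true + sigma_d * rng.normal(size=n_data)

m_prior = np.zeros(n_param)
# Sparse first-difference operator as a stand-in for C_m^{-1/2}
L = (np.eye(n_param) - np.eye(n_param, k=1))[:-1]

def rml_sample(G, d_obs, m_prior, L, sigma_d, rng):
    """Draw one RML-style sample: minimize a perturbed, regularized
    least-squares objective using a truncated SVD (illustrative sketch)."""
    d_uc = d_obs + sigma_d * rng.normal(size=d_obs.size)              # perturbed data
    m_uc = m_prior + np.linalg.pinv(L) @ rng.normal(size=L.shape[0])  # prior draw
    A = np.vstack([G / sigma_d, L])                # stacked misfit + regularization
    b = np.concatenate([d_uc / sigma_d, L @ m_uc])
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > 1e-8 * s[0]                         # truncate tiny singular values
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

samples = np.array([rml_sample(G, d_obs, m_prior, L, sigma_d, rng)
                    for _ in range(100)])
print("posterior std of first 5 parameters:", np.round(samples.std(axis=0)[:5], 3))
```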
2

Streamline-based three-phase history matching

Oyerinde, Adedayo Stephen 10 October 2008
Geologic models derived from static data alone typically fail to reproduce the production history of a reservoir, hence the importance of reconciling simulation models with the dynamic response of the reservoir. This necessity has been the motivation behind the active research work in history matching. Traditionally, history matching is performed manually by applying local and regional changes to reservoir properties. While this is still general practice, the subjective nature of this approach, the time and manpower requirements, and the potential loss of geologic consistency have led to the development of a variety of alternative workflows for assisted and automatic history matching. Automatic history matching requires the solution of an inverse problem by minimizing an appropriately defined misfit function. Recent advances in geostatistics have led to the building of high-resolution geologic models consisting of millions of cells. Most of these are scaled up to below a million cells for reservoir simulation purposes, yet history matching even the scaled-up models is computationally prohibitive. The associated cost in terms of time and manpower has led to increased interest in efficient history matching techniques and, in particular, in sensitivity-based algorithms because of their rapid convergence. Furthermore, of the sensitivity-based methods, streamline-based production data integration has proven to be extremely efficient computationally. In this work, we extend the history matching capability of the streamline-based technique to three-phase production while addressing pertinent issues associated with history matching in general. We deviate from the typical approach of formulating the inverse problem in terms of derived quantities such as GOR and water cut, or measured phase rates, and instead concentrate on the fundamental variables that characterize such quantities. The presented formulation is in terms of well-node saturations and pressures. Production data are transformed into composite saturation quantities, whose time variation is matched in the calibration exercise. The dependence of the transformation on pressure highlights its importance and thus the need for a pressure match. To address this need, we follow a low-frequency asymptotic formulation for the pressure equation. We propose a simultaneous inversion of the saturation and pressure components to account for the interdependence, and thus the high non-linearity, of three-phase inversion. We also account for global parameters through experimental design methodology and response surface modeling. The validity of the proposed history matching technique is demonstrated through application to both synthetic and field cases.
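As a rough illustration of the travel-time style of data matching that underlies this streamline approach, the sketch below estimates the single time shift that best aligns a simulated water-cut (or composite saturation) history with the observed one; the synthetic signals and the shift search are illustrative and not the thesis formulation.

```python
import numpy as np

def generalized_travel_time_shift(t, obs, sim, max_shift, n_trial=201):
    """Find the single time shift that best aligns the simulated response
    with the observed one, in the spirit of generalized travel-time
    inversion (illustrative sketch only)."""
    shifts = np.linspace(-max_shift, max_shift, n_trial)
    best_shift, best_r = 0.0, -np.inf
    for dt in shifts:
        sim_shifted = np.interp(t, t + dt, sim)    # shift sim in time
        r = np.corrcoef(obs, sim_shifted)[0, 1]    # alignment quality
        if r > best_r:
            best_shift, best_r = dt, r
    return best_shift, best_r

# Synthetic example: simulated water cut breaks through 60 days late
t = np.linspace(0.0, 1000.0, 501)
obs = 1.0 / (1.0 + np.exp(-(t - 400.0) / 40.0))    # observed water cut
sim = 1.0 / (1.0 + np.exp(-(t - 460.0) / 40.0))    # simulated water cut

dt, r = generalized_travel_time_shift(t, obs, sim, max_shift=200.0)
print(f"optimal shift = {dt:.1f} days, correlation = {r:.3f}")
```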
3

Multiscale Spectral-Domain Parameterization for History Matching in Structured and Unstructured Grid Geometries

Bhark, Eric Whittet August 2011
Reservoir model calibration to production data, also known as history matching, is an essential tool for the prediction of fluid displacement patterns and for related decisions concerning reservoir management and field development. The history matching of high-resolution geologic models is, however, known to define an ill-posed inverse problem, such that the solution for geologic heterogeneity is always non-unique and potentially unstable. A common approach to mitigating this ill-posedness is to parameterize the estimable geologic model components, imposing a type of regularization that exploits geologic continuity by explicitly or implicitly grouping similar properties while retaining at least the minimum heterogeneity resolution required to reproduce the data. This dissertation develops novel methods of model parameterization within the class of techniques based on a linear transformation. Three principal research contributions are made. First is the development of an adaptive multiscale history matching formulation in the frequency domain using the discrete cosine parameterization; geologic model calibration is performed by sequential refinement to a spatial scale sufficient to match the data. The approach reduces solution non-uniqueness, improves stability, and further balances model and data resolution as determined by a parameter identifiability metric. Second, a model-independent parameterization based on grid connectivity information is developed as a generalization of the cosine parameterization applicable to generic grid geometries. The parameterization relates the spatial reservoir parameters to the modal shapes or harmonics of the grid on which they are defined, merging with a Fourier analysis in special cases (i.e., for rectangular grid cells of constant dimensions) and enabling a multiscale calibration of the reservoir model in the spectral domain. Third, a model-dependent parameterization is developed to combine grid connectivity with prior geologic information within a spectral-domain representation. The resulting parameterization is capable of reducing geologic models while imposing prior heterogeneity on the calibrated model using the adaptive multiscale workflow. In addition to the methodological developments, an important consideration in this dissertation is their applicability to field-scale reservoir models with levels of prior geologic complexity on par with current industry standards.
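A minimal sketch of the grid-connectivity idea, under illustrative assumptions (a small structured grid treated as a generic connectivity graph, a synthetic smooth log-permeability field, and an arbitrary truncation level): the graph Laplacian of the grid supplies the modal shapes, and a few spectral coefficients reconstruct the field.

```python
import numpy as np

nx, ny = 20, 20
n = nx * ny
idx = lambda i, j: i * ny + j

# Adjacency of the grid connectivity graph (a 4-neighbour stencil here,
# but any unstructured cell-to-cell connectivity could be used instead)
A = np.zeros((n, n))
for i in range(nx):
    for j in range(ny):
        if i + 1 < nx:
            A[idx(i, j), idx(i + 1, j)] = A[idx(i + 1, j), idx(i, j)] = 1.0
        if j + 1 < ny:
            A[idx(i, j), idx(i, j + 1)] = A[idx(i, j + 1), idx(i, j)] = 1.0

Lap = np.diag(A.sum(axis=1)) - A          # graph Laplacian of the grid
vals, vecs = np.linalg.eigh(Lap)          # modal shapes (grid harmonics)
n_modes = 25
basis = vecs[:, :n_modes]                 # smoothest (low-frequency) modes

# Parameterize a smooth log-permeability field with a few coefficients
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny), indexing="ij")
log_perm = 2.0 * np.exp(-((x - 0.3) ** 2 + (y - 0.6) ** 2) / 0.05)
coeff = basis.T @ log_perm.ravel()        # spectral-domain parameters
recon = basis @ coeff                     # reconstruction from 25 of 400 values
err = np.linalg.norm(recon - log_perm.ravel()) / np.linalg.norm(log_perm.ravel())
print(f"{n_modes} of {n} parameters, relative reconstruction error = {err:.2%}")
```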
4

Uncertainty quantification for spatial field data using expensive computer models : refocussed Bayesian calibration with optimal projection

Salter, James Martin January 2017
In this thesis, we present novel methodology for emulating and calibrating computer models with high-dimensional output. Computer models for complex physical systems, such as climate, are typically expensive and time-consuming to run. Due to this inability to run computer models efficiently, statistical models ('emulators') are used as fast approximations of the computer model, fitted based on a small number of runs of the expensive model, allowing more of the input parameter space to be explored. Common choices for emulators are regressions and Gaussian processes. The input parameters of the computer model that lead to output most consistent with the observations of the real-world system are generally unknown, hence computer models require careful tuning. Bayesian calibration and history matching are two methods that can be combined with emulators to search for the best input parameter setting of the computer model (calibration), or remove regions of parameter space unlikely to give output consistent with the observations, if the computer model were to be run at these settings (history matching). When calibrating computer models, it has been argued that fitting regression emulators is sufficient, due to the large, sparsely-sampled input space. We examine this for a range of examples with different features and input dimensions, and find that fitting a correlated residual term in the emulator is beneficial, in terms of more accurately removing regions of the input space, and identifying parameter settings that give output consistent with the observations. We demonstrate and advocate for multi-wave history matching followed by calibration for tuning. In order to emulate computer models with large spatial output, projection onto a low-dimensional basis is commonly used. The standard accepted method for selecting a basis is to use n runs of the computer model to compute principal components via the singular value decomposition (the SVD basis), with the coefficients given by this projection emulated. We show that when the n runs used to define the basis do not contain important patterns found in the real-world observations of the spatial field, linear combinations of the SVD basis vectors will not generally be able to represent these observations. Therefore, the results of a calibration exercise are meaningless, as we converge to incorrect parameter settings, likely assigning zero posterior probability to the correct region of input space. We show that the inadequacy of the SVD basis is very common and present in every climate model field we looked at. We develop a method for combining important patterns from the observations with signal from the model runs, developing a calibration-optimal rotation of the SVD basis that allows a search of the output space for fields consistent with the observations. We illustrate this method by performing two iterations of history matching on a climate model, CanAM4. We develop a method for beginning to assess model discrepancy for climate models, where modellers would first like to see whether the model can achieve certain accuracy, before allowing specific model structural errors to be accounted for. We show that calibrating using the basis coefficients often leads to poor results, with fields consistent with the observations ruled out in history matching. 
We develop a method for adjusting for basis projection when history matching, so that an efficient and more accurate implausibility bound can be derived that is consistent with history matching using the computationally prohibitive spatial field.
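A small sketch of the basis-adequacy issue raised above, with a synthetic ensemble and "observation" standing in for climate-model output: the SVD basis is computed from n runs, the observed field is projected onto its leading vectors, and the residual reconstruction error indicates whether linear combinations of the basis can represent the observations at all.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "model" output: n runs of a spatial field on an 80-cell grid
n_runs, n_grid = 15, 80
x = np.linspace(0, 1, n_grid)
ensemble = np.array([np.sin(2 * np.pi * (x + rng.uniform(0, 0.1)))
                     * rng.uniform(0.5, 1.5) for _ in range(n_runs)])

# Observation containing a pattern the runs barely exhibit (a linear trend)
obs = np.sin(2 * np.pi * x) + 1.5 * x

# SVD (principal-component) basis from the centred ensemble
mean = ensemble.mean(axis=0)
U, s, Vt = np.linalg.svd(ensemble - mean, full_matrices=False)
basis = Vt                                # rows are basis vectors

def reconstruction_error(field, basis, mean, k):
    """Project a field onto the leading k basis vectors and return the
    relative error of the reconstruction (a basis-adequacy check)."""
    anomaly = field - mean
    coeff = basis[:k] @ anomaly
    recon = mean + basis[:k].T @ coeff
    return np.linalg.norm(recon - field) / np.linalg.norm(field)

for k in (2, 5, 10):
    print(f"k={k:2d}: relative error = {reconstruction_error(obs, basis, mean, k):.2%}")
# A persistently large error signals that the SVD basis cannot represent the
# observed pattern, which is the motivation for rotating the basis.
```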
5

An efficient Bayesian formulation for production data integration into reservoir models

Leonardo, Vega Velasquez 17 February 2005
Current techniques for production data integration into reservoir models can be broadly grouped into two categories: deterministic and Bayesian. The deterministic approach relies on imposing parameter smoothness constraints using spatial derivatives to ensure large-scale changes consistent with the low resolution of the production data. The Bayesian approach is based on prior estimates of model statistics such as parameter covariance and data errors and attempts to generate posterior models consistent with the static and dynamic data. Both approaches have been successful for field-scale applications although the computational costs associated with the two methods can vary widely. This is particularly the case for the Bayesian approach that utilizes a prior covariance matrix that can be large and full. To date, no systematic study has been carried out to examine the scaling properties and relative merits of the methods. The main purpose of this work is twofold. First, we systematically investigate the scaling of the computational costs for the deterministic and the Bayesian approaches for realistic field-scale applications. Our results indicate that the deterministic approach exhibits a linear increase in the CPU time with model size compared to a quadratic increase for the Bayesian approach. Second, we propose a fast and robust adaptation of the Bayesian formulation that preserves the statistical foundation of the Bayesian method and at the same time has a scaling property similar to that of the deterministic approach. This can lead to orders of magnitude savings in computation time for model sizes greater than 100,000 grid blocks. We demonstrate the power and utility of our proposed method using synthetic examples and a field example from the Goldsmith field, a carbonate reservoir in west Texas. The use of the new efficient Bayesian formulation along with the Randomized Maximum Likelihood method allows straightforward assessment of uncertainty. The former provides computational efficiency and the latter avoids rejection of expensive conditioned realizations.
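To illustrate the contrast described above, the sketch below evaluates the two styles of regularization term on a one-dimensional "log-permeability" profile: a Bayesian-style term with a dense exponential prior covariance (quadratic memory, cubic solve cost) and a deterministic-style smoothness penalty with a sparse difference operator (linear cost). The covariance model, correlation length, and profile are illustrative, not the thesis implementation.

```python
import numpy as np
from scipy import sparse

def dense_prior_term(m, m_prior, corr_len, dx=1.0):
    """Bayesian-style term (m - m_prior)^T C_m^{-1} (m - m_prior) with a dense
    exponential covariance: O(n^2) memory and O(n^3) solve cost."""
    n = m.size
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) * dx
    C = np.exp(-dist / corr_len)
    dm = m - m_prior
    return float(dm @ np.linalg.solve(C, dm))

def sparse_smoothness_term(m, m_prior):
    """Deterministic-style penalty ||L (m - m_prior)||^2 with a sparse
    first-difference operator: O(n) memory and evaluation cost."""
    n = m.size
    L = sparse.diags([np.ones(n - 1), -np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    return float(np.sum((L @ (m - m_prior)) ** 2))

rng = np.random.default_rng(2)
for n in (500, 2000):
    m = rng.normal(size=n).cumsum() * 0.05       # a correlated "log-perm" profile
    m_prior = np.zeros(n)
    print(f"n={n}: dense prior term = {dense_prior_term(m, m_prior, corr_len=20.0):.1f}, "
          f"sparse smoothness term = {sparse_smoothness_term(m, m_prior):.1f}")
```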
6

Performance of Assisted History Matching Techniques When Utilizing Multiple Initial Geologic Models

Aggarwal, Akshay 14 March 2013
History matching is a process wherein changes are made to an initial geologic model of a reservoir so that the predicted reservoir performance matches the known production history. Changes are made to model parameters, which include rock and fluid parameters (viscosity, compressibility, relative permeability, etc.) or properties within the geologic model. Assisted History Matching (AHM) provides an algorithmic framework for minimizing the mismatch between simulated and observed data and helps accelerate this process. The changes made by AHM techniques, however, cannot ensure a geologically consistent reservoir model; in fact, the performance of these techniques depends on the initial starting model. To understand the impact of the initial model, this project explored the performance of the AHM approach on a specific field case while working with multiple distinct geologic scenarios. The project involved an integrated seismic-to-simulation study, wherein I interpreted the seismic data, assembled the geological information, and performed petrophysical log evaluation along with well test data calibration. The ensemble of static models obtained was carried through the AHM methodology. I used sensitivity analysis to determine the most important dynamic parameters affecting the history match. These parameters govern the large-scale changes in the reservoir description and are optimized using an Evolutionary Strategy algorithm (a minimal sketch follows this entry). Finally, streamline-based techniques were used for local modifications to match the water cut well by well. The following general conclusions were drawn from this study:
a) The use of multiple simple geologic models is extremely useful for screening possible geologic scenarios and especially for discarding unreasonable alternatives. This was especially true for the large-scale architecture of the reservoir.
b) The AHM methodology was very effective in exploring a large number of parameters, running the simulation cases, and generating the calibrated reservoir models. The calibration step consistently worked better when the models had more spatial detail than the simple models used for screening.
c) The AHM methodology implemented a sequence of pressure and water cut history matching. An examination of specific models indicated that a better geologic description minimized the conflict between these two match criteria.
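A minimal sketch of the evolution-strategy step mentioned above: a (mu, lambda) loop that calibrates a few global parameters against a misfit function. Here the misfit is a synthetic stand-in for a simulation run, and the parameter names (permeability multiplier, aquifer strength, kv/kh ratio) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def misfit(params):
    """Stand-in for a simulation-based history-match objective: in practice
    this would run the reservoir simulator and compare with observed data."""
    perm_mult, aquifer_strength, kv_kh = params
    return ((perm_mult - 2.5) ** 2 + (aquifer_strength - 0.8) ** 2
            + 4.0 * (kv_kh - 0.1) ** 2)

def evolution_strategy(misfit, x0, sigma0, n_gen=40, lam=20, mu=5):
    """Simple (mu, lambda) evolution strategy (illustrative sketch)."""
    mean, sigma = np.asarray(x0, float), float(sigma0)
    for _ in range(n_gen):
        pop = mean + sigma * rng.normal(size=(lam, mean.size))  # offspring
        scores = np.array([misfit(p) for p in pop])
        parents = pop[np.argsort(scores)[:mu]]                  # select best mu
        mean = parents.mean(axis=0)                             # recombine
        sigma *= 0.95                                           # shrink step size
    return mean, misfit(mean)

best, best_score = evolution_strategy(misfit, x0=[1.0, 0.5, 0.5], sigma0=1.0)
print("calibrated global parameters:", np.round(best, 3), "misfit:", round(best_score, 4))
```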
7

Rapid assessment of redevelopment potential in marginal oil fields, application to the Cut Bank field

Chavez Ballesteros, Luis Eladio 17 February 2005
Quantifying infill potential in marginal oil fields often involves several challenges, including highly heterogeneous reservoir quality both horizontally and vertically, incomplete reservoir databases, large amounts of data from numerous wells, and differing production and completion practices. The most accurate way to estimate infill potential is to conduct a detailed integrated reservoir study, which is often time-consuming and expensive for operators of marginal oil fields. Hence, there is a need for less demanding methods that characterize and predict heterogeneity and production variability. As an alternative approach, various authors have used empirical or statistical analyses to model variable well performance; many of these methods are based solely on the analysis of well location, production, and time data. My objective is to develop an enhanced method for rapid assessment of infill-drilling potential that combines the accuracy of simulation-based methods with the time and cost requirements of statistical methods. My proposed solution is to use reservoir simulation combined with automatic history matching, regressing on production data to determine the permeability distribution. Instead of matching on individual cell values of reservoir properties, I match on constant values of permeability within regions around each well. I then use the permeability distribution and an array of automated simulation predictions to determine infill drilling potential throughout the reservoir. Infill predictions on a single-phase synthetic case showed greater accuracy than results from statistical techniques. The methodology successfully identified infill well locations on a synthetic case derived from Cut Bank field, a water-flooded oil reservoir. Analysis of the actual production and injection data from Cut Bank field was unsuccessful, mainly because of an incomplete production database and limitations in the commercial regression software I used. In addition to providing more accurate results than previous empirical and statistical methods, the proposed method can incorporate other types of data, such as geological data and fluid properties. The method can be applied in multiphase fluid situations and, since it is simulation based, it provides a platform for an easy transition to more detailed analysis. Thus, the method can serve as a valuable reservoir management tool for operators of stripper oil fields.
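A small sketch of the region-based parameterization described above, under illustrative assumptions (a toy grid, made-up well locations and permeability values): each cell is assigned to its nearest well, and one constant permeability per region becomes a regression variable, reducing the parameter count from one per cell to one per well region.

```python
import numpy as np

# Toy 50 x 50 grid and a handful of well locations (illustrative values)
nx, ny = 50, 50
xg, yg = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
cells = np.column_stack([xg.ravel(), yg.ravel()]).astype(float)

wells = np.array([[10.0, 10.0], [40.0, 12.0], [25.0, 30.0], [12.0, 42.0], [42.0, 42.0]])

# Assign each cell to its nearest well: one parameter region per well
d2 = ((cells[:, None, :] - wells[None, :, :]) ** 2).sum(axis=2)
region = d2.argmin(axis=1)                      # region index per cell

# One constant permeability per region: these would be the regression
# variables in the automatic history-matching step
region_perm = np.array([50.0, 120.0, 20.0, 80.0, 200.0])   # mD, illustrative
perm_field = region_perm[region].reshape(nx, ny)

print("parameters to regress:", region_perm.size, "instead of", nx * ny, "cells")
print("cells per region:", np.bincount(region))
```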
8

Application of the Ensemble Kalman Filter to Estimate Fracture Parameters in Unconventional Horizontal Wells by Downhole Temperature Measurements

Gonzales, Sergio Eduardo 16 December 2013
The increase in energy demand throughout the world has forced the oil industry to develop and expand current technologies to optimize well productivity. Distributed temperature sensing has become a fairly inexpensive way to monitor the performance of hydraulically fractured wells in real time with the aid of fiber optics. However, no applications have yet attempted to describe or estimate fracture parameters using distributed temperature sensing as the observed quantity. The Ensemble Kalman Filter, a recursive filter, has proved to be an effective tool for inverse problems that determine the parameters of non-linear models. Even though large amounts of data are acquired, the Ensemble Kalman Filter keeps the computational effort manageable by using only 'snapshots' of the ensembles collected from multiple simulations, with the estimate updated continuously and calibrated against a reference model. A reservoir model that computes temperature along the wellbore is constructed in ECLIPSE; this model is a proxy for what distributed temperature sensing measures in real time throughout the wellbore. Reservoir and fracture parameters in this model are assigned properties and values similar to those of an unconventional well, although certain parameters such as fracture width are manipulated to significantly reduce the computation time. A sensitivity study is performed for all the reservoir and fracture parameters to understand which parameters require more or less data for the Ensemble Kalman Filter to arrive at an acceptable estimate. Two fracture parameters are selected, based on their low sensitivity and their importance in fracture design, for estimation with the Ensemble Kalman Filter across various simulations. Fracture permeability has very low sensitivity; nevertheless, the filter arrives at an acceptable estimate. Similarly, fracture half-length, with medium sensitivity, is estimated acceptably in roughly the same number of assimilation steps. The true effectiveness of the Ensemble Kalman Filter is demonstrated when both parameters are estimated jointly and an acceptable estimate is reached without excessive computational expense. The effectiveness of the Ensemble Kalman Filter is directly connected to the quantity of data acquired: the more data available to run simulations, the better and faster the filter performs.
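A minimal sketch of a single Ensemble Kalman Filter analysis step for two fracture parameters, with a synthetic forward model standing in for the simulated wellbore-temperature response (the ECLIPSE model and DTS data are not reproduced here); the parameter ranges, observation error, and temperature function are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def forward_model(params):
    """Stand-in for the simulated wellbore temperature profile as a function of
    fracture permeability (mD) and half-length (ft). Purely illustrative."""
    frac_perm, half_length = params
    depth = np.linspace(0.0, 1.0, 25)              # normalized measured depth
    width = 1e-6 + (half_length / 1000.0) ** 2
    return 180.0 - 0.05 * frac_perm * np.exp(-((depth - 0.5) ** 2) / width)

def enkf_update(ens_params, ens_obs, d_obs, obs_err_std, rng):
    """One EnKF analysis step with Kalman gain K = C_md (C_dd + R)^-1."""
    dP = ens_params - ens_params.mean(axis=0)      # parameter anomalies
    dD = ens_obs - ens_obs.mean(axis=0)            # predicted-data anomalies
    n_e = ens_params.shape[0]
    C_md = dP.T @ dD / (n_e - 1)
    C_dd = dD.T @ dD / (n_e - 1)
    R = (obs_err_std ** 2) * np.eye(d_obs.size)
    K = C_md @ np.linalg.inv(C_dd + R)
    d_pert = d_obs + obs_err_std * rng.normal(size=ens_obs.shape)  # perturbed obs
    return ens_params + (d_pert - ens_obs) @ K.T

# Reference parameters and noisy DTS-like observations
true_params = np.array([800.0, 250.0])             # frac perm (mD), half-length (ft)
obs_err_std = 0.2
d_obs = forward_model(true_params) + obs_err_std * rng.normal(size=25)

# Prior ensemble and one analysis step (in practice updates would be applied
# sequentially as temperature data arrive)
n_e = 100
ens = np.column_stack([rng.uniform(100.0, 2000.0, n_e),   # fracture permeability
                       rng.uniform(50.0, 500.0, n_e)])    # fracture half-length
ens_obs = np.array([forward_model(p) for p in ens])
post = enkf_update(ens, ens_obs, d_obs, obs_err_std, rng)

print("prior mean:    ", np.round(ens.mean(axis=0), 1))
print("posterior mean:", np.round(post.mean(axis=0), 1), "  true:", true_params)
```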
9

Production Optimization of a Gas Condensate Reservoir Using a Black Oil Simulator and Nodal System Analysis: A Case Study

Mindek, Cem 01 June 2005
In a natural gas field, the most important considerations are determining the life of the field and selecting the production technique that best meets economic criteria. In this study, a field in the Thrace Basin was chosen. Available reservoir data were compiled to characterize the field and then formatted for use in the commercial simulator IMEX, a subprogram of CMG (Computer Modeling Group). These data were used to perform a history match between the field production data and the simulator results for the three-year period between May 2002 and January 2005. After a satisfactory history match was obtained, the model was used as a base for future scenarios. Four new scenarios were designed and run to predict future production of the field, with two new wells defined after identifying the best region from the history matching. Scenario 1 continues production with the existing wells; Scenario 2 adds a new well, W6; Scenario 3 adds another new well, W7; and Scenario 4 includes both new wells, W6 and W7. All scenarios were allowed to continue until 2010 unless the wellhead pressure dropped to 500 psi. None of the existing wells remained on production until 2010, but the newly defined wells did. After comparing all scenarios, Scenario 4, production with the two new wells W6 and W7, was found to give the best performance until 2010. In Scenario 4, between January 2005 and January 2010, 7,632 MMscf of gas was produced, 372 MMscf more than Scenario 2, the second-best scenario, which produced 7,311 MMscf; Scenario 3 produced 7,260 MMscf and Scenario 1 produced 6,821 MMscf. A nodal system analysis was performed to check whether the initial flow rates of the wells are close to their optimum flow rates: W1 is found to have an optimum production rate of 6.9 MMscf/d, W2 3.2 MMscf/d, W3 8.3 MMscf/d, W4 4.8 MMscf/d, and W5 0.95 MMscf/d.
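A small sketch of the nodal-analysis step, under made-up coefficients that are not the field's: a gas backpressure inflow relationship is intersected with a simplified tubing performance curve to locate the natural operating rate for a given wellhead pressure.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative gas-well deliverability and tubing-performance curves; the
# coefficients below are made up and not those of the Thrace Basin field.
p_res = 3000.0            # average reservoir pressure, psia
C, n_exp = 1.0e-5, 0.85   # backpressure coefficient/exponent (q in MMscf/d)
p_wh = 500.0              # wellhead pressure limit, psia

def ipr_rate(p_wf):
    """Inflow: q = C (p_res^2 - p_wf^2)^n, the gas backpressure equation."""
    return C * max(p_res ** 2 - p_wf ** 2, 0.0) ** n_exp

def tpr_pwf(q):
    """Outflow: required flowing bottomhole pressure at rate q
    (hydrostatic head plus a rate-squared friction term, very simplified)."""
    return np.sqrt(1.3 * p_wh ** 2 + 4.0e4 * q ** 2)

# Operating point: the rate at which inflow and outflow agree on p_wf
q_opt = brentq(lambda q: q - ipr_rate(tpr_pwf(q)), 0.01, 20.0)
print(f"natural operating rate ~ {q_opt:.1f} MMscf/d "
      f"at p_wf ~ {tpr_pwf(q_opt):.0f} psia")
```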
