1

Uncertainty quantification for spatial field data using expensive computer models: refocussed Bayesian calibration with optimal projection

Salter, James Martin January 2017
In this thesis, we present novel methodology for emulating and calibrating computer models with high-dimensional output. Computer models for complex physical systems, such as climate, are typically expensive and time-consuming to run. Because the computer model cannot be run cheaply, statistical models ('emulators') are used as fast approximations, fitted to a small number of runs of the expensive model, allowing more of the input parameter space to be explored. Common choices for emulators are regressions and Gaussian processes. The input parameters of the computer model that lead to output most consistent with observations of the real-world system are generally unknown, hence computer models require careful tuning. Bayesian calibration and history matching are two methods that can be combined with emulators to search for the best input parameter setting of the computer model (calibration), or to remove regions of parameter space unlikely to give output consistent with the observations if the computer model were run at those settings (history matching).

When calibrating computer models, it has been argued that fitting regression emulators is sufficient, due to the large, sparsely-sampled input space. We examine this for a range of examples with different features and input dimensions, and find that fitting a correlated residual term in the emulator is beneficial, both for more accurately removing regions of the input space and for identifying parameter settings that give output consistent with the observations. We demonstrate and advocate for multi-wave history matching followed by calibration for tuning.

In order to emulate computer models with large spatial output, projection onto a low-dimensional basis is commonly used. The standard approach is to use n runs of the computer model to compute principal components via the singular value decomposition (the SVD basis), and to emulate the coefficients given by this projection. We show that when the n runs used to define the basis do not contain important patterns found in the real-world observations of the spatial field, linear combinations of the SVD basis vectors will not generally be able to represent these observations. The results of a calibration exercise are then meaningless, as we converge to incorrect parameter settings, likely assigning zero posterior probability to the correct region of input space. We show that this inadequacy of the SVD basis is common, and was present in every climate model field we examined. We develop a method for combining important patterns from the observations with signal from the model runs, giving a calibration-optimal rotation of the SVD basis that allows a search of the output space for fields consistent with the observations. We illustrate this method by performing two iterations of history matching on a climate model, CanAM4. We also develop a method for beginning to assess model discrepancy for climate models, where modellers would first like to see whether the model can achieve a certain accuracy before specific model structural errors are accounted for. We show that calibrating using the basis coefficients often leads to poor results, with fields consistent with the observations ruled out in history matching. We therefore develop a method for adjusting for basis projection when history matching, so that an efficient and more accurate implausibility bound can be derived that is consistent with history matching using the computationally prohibitive spatial field.
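The basis-projection step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the thesis code: it builds an SVD basis from an ensemble of model runs, projects the runs to obtain the low-dimensional coefficients that would be emulated, and computes the reconstruction error of the observations in that basis, the diagnostic that motivates the rotated basis. All data here are random placeholders for the expensive model output and the observed field.

```python
# A minimal sketch, assuming an ensemble of n model runs, each a flattened
# spatial field of length p, stacked in a matrix F (n x p), and a single
# observed field y_obs of length p. All names and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 30, 2000, 5                 # runs, grid cells, retained basis vectors

F = rng.normal(size=(n, p))           # placeholder for expensive model output
y_obs = rng.normal(size=p)            # placeholder for the observed field

mean_field = F.mean(axis=0)
A = F - mean_field                    # centred ensemble

# SVD basis: rows of Vt are orthonormal spatial patterns (principal components)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
B = Vt[:q]                            # (q x p) truncated SVD basis

# Coefficients for each run: these low-dimensional vectors are what gets emulated
coeffs = A @ B.T                      # (n x q)

# Key diagnostic from the thesis: can linear combinations of the basis
# reproduce the observations? If this error is large, calibration in the
# reduced space cannot find the observed field, motivating a rotated basis.
anomaly = y_obs - mean_field
recon = (anomaly @ B.T) @ B           # projection of the observations onto the basis
recon_error = np.linalg.norm(anomaly - recon) / np.linalg.norm(anomaly)
print(f"relative reconstruction error of observations: {recon_error:.3f}")
```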
2

An efficient Bayesian formulation for production data integration into reservoir models

Leonardo, Vega Velasquez 17 February 2005
Current techniques for production data integration into reservoir models can be broadly grouped into two categories: deterministic and Bayesian. The deterministic approach relies on imposing parameter smoothness constraints using spatial derivatives to ensure large-scale changes consistent with the low resolution of the production data. The Bayesian approach is based on prior estimates of model statistics such as parameter covariance and data errors and attempts to generate posterior models consistent with the static and dynamic data. Both approaches have been successful for field-scale applications although the computational costs associated with the two methods can vary widely. This is particularly the case for the Bayesian approach that utilizes a prior covariance matrix that can be large and full. To date, no systematic study has been carried out to examine the scaling properties and relative merits of the methods. The main purpose of this work is twofold. First, we systematically investigate the scaling of the computational costs for the deterministic and the Bayesian approaches for realistic field-scale applications. Our results indicate that the deterministic approach exhibits a linear increase in the CPU time with model size compared to a quadratic increase for the Bayesian approach. Second, we propose a fast and robust adaptation of the Bayesian formulation that preserves the statistical foundation of the Bayesian method and at the same time has a scaling property similar to that of the deterministic approach. This can lead to orders of magnitude savings in computation time for model sizes greater than 100,000 grid blocks. We demonstrate the power and utility of our proposed method using synthetic examples and a field example from the Goldsmith field, a carbonate reservoir in west Texas. The use of the new efficient Bayesian formulation along with the Randomized Maximum Likelihood method allows straightforward assessment of uncertainty. The former provides computational efficiency and the latter avoids rejection of expensive conditioned realizations.
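As a rough illustration of the Randomized Maximum Likelihood idea mentioned at the end of the abstract, the sketch below minimizes a Bayesian objective with perturbed data and a perturbed prior draw to generate conditioned realizations. The linear forward operator, covariances, and dimensions are illustrative stand-ins, not the reservoir problem from the thesis.

```python
# A minimal sketch of randomized maximum likelihood (RML): each conditioned
# realization minimizes a Bayesian objective with perturbed data and a
# perturbed prior mean. G, C_M, and C_D are toy stand-ins for the simulator
# and the (generally large and full) prior covariance.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
m, d = 50, 20                              # model size, number of data

G = rng.normal(size=(d, m)) / np.sqrt(m)   # stand-in forward operator
m_true = rng.normal(size=m)
C_M = np.eye(m)                            # prior covariance
C_D = 0.01 * np.eye(d)                     # data-error covariance
d_obs = G @ m_true + rng.multivariate_normal(np.zeros(d), C_D)

C_M_inv, C_D_inv = np.linalg.inv(C_M), np.linalg.inv(C_D)

def rml_realization():
    m_pr = rng.multivariate_normal(np.zeros(m), C_M)            # perturbed prior draw
    d_pert = d_obs + rng.multivariate_normal(np.zeros(d), C_D)  # perturbed data

    def objective(x):
        r_d = G @ x - d_pert
        r_m = x - m_pr
        return 0.5 * r_d @ C_D_inv @ r_d + 0.5 * r_m @ C_M_inv @ r_m

    return minimize(objective, m_pr, method="L-BFGS-B").x

samples = np.array([rml_realization() for _ in range(10)])
print("posterior std of first parameter:", samples[:, 0].std())
```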
3

Performance of Assisted History Matching Techniques When Utilizing Multiple Initial Geologic Models

Aggarwal, Akshay 14 March 2013
History matching is a process wherein changes are made to an initial geologic model of a reservoir so that the predicted reservoir performance matches the known production history. Changes are made to the model parameters, which include rock and fluid parameters (viscosity, compressibility, relative permeability, etc.) or properties within the geologic model. Assisted History Matching (AHM) provides an algorithmic framework for minimizing the mismatch between simulated and observed performance, and helps accelerate this process. The changes made by AHM techniques, however, cannot ensure a geologically consistent reservoir model; in fact, the performance of these techniques depends on the initial starting model. In order to understand the impact of the initial model, this project explored the performance of the AHM approach using a specific field case, but working with multiple distinct geologic scenarios. The project involved an integrated seismic-to-simulation study, wherein I interpreted the seismic data, assembled the geological information, and performed petrophysical log evaluation along with well test data calibration. The ensemble of static models obtained was carried through the AHM methodology. I used sensitivity analysis to determine the most important dynamic parameters that affect the history match. These parameters govern the large-scale changes in the reservoir description and are optimized using an Evolutionary Strategy algorithm. Finally, streamline-based techniques were used for local modifications to match the water cut well by well. The following general conclusions were drawn from this study: (a) The use of multiple simple geologic models is extremely useful in screening possible geologic scenarios, and especially for discarding unreasonable alternative models; this was especially true for the large-scale architecture of the reservoir. (b) The AHM methodology was very effective in exploring a large number of parameters, running the simulation cases, and generating the calibrated reservoir models. The calibration step consistently worked better if the models had more spatial detail, instead of the simple models used for screening. (c) The AHM methodology implemented a sequence of pressure and water cut history matching. An examination of specific models indicated that a better geologic description minimized the conflict between these two match criteria.
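A minimal sketch of the evolution-strategy optimization step used in such an AHM workflow is given below. The mismatch function is a synthetic stand-in for the simulator-versus-history error, and the parameter names and values are hypothetical.

```python
# A minimal sketch of a (mu, lambda) evolution strategy searching a handful of
# global parameters (e.g. regional permeability and pore-volume multipliers)
# to minimize a production mismatch. The mismatch function is a toy stand-in
# for the reservoir simulator comparison against history.
import numpy as np

rng = np.random.default_rng(2)
true_params = np.array([1.8, 0.6, 1.2])          # hypothetical multipliers

def mismatch(params):
    """Stand-in for simulated-vs-observed pressure/water-cut error."""
    return float(np.sum((params - true_params) ** 2))

mu, lam, sigma, n_gen = 5, 20, 0.3, 40
parents = rng.uniform(0.5, 2.0, size=(mu, 3))    # initial guesses

for _ in range(n_gen):
    # each offspring mutates a randomly chosen parent
    idx = rng.integers(0, mu, size=lam)
    offspring = parents[idx] + sigma * rng.normal(size=(lam, 3))
    scores = np.array([mismatch(x) for x in offspring])
    parents = offspring[np.argsort(scores)[:mu]]  # select the best mu
    sigma *= 0.95                                 # simple step-size decay

best = parents[0]
print("best multipliers:", np.round(best, 3), "mismatch:", round(mismatch(best), 5))
```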
4

Rapid assessment of redevelopment potential in marginal oil fields: application to the Cut Bank field

Chavez Ballesteros, Luis Eladio 17 February 2005
Quantifying infill potential in marginal oil fields often involves several challenges. These include highly heterogeneous reservoir quality both horizontally and vertically, incomplete reservoir databases, considerably large amounts of data involving numerous wells, and different production and completion practices. The most accurate way to estimate infill potential is to conduct a detailed integrated reservoir study, which is often time-consuming and expensive for operators of marginal oil fields. Hence, there is a need for less-demanding methods that characterize and predict heterogeneity and production variability. As an alternative approach, various authors have used empirical or statistical analyses to model variable well performance. Many of the methods are based solely on the analysis of well location, production and time data. My objective is to develop an enhanced method for rapid assessment of infill-drilling potential that would combine increased accuracy of simulation-based methods with times and costs associated with statistical methods. My proposed solution is to use reservoir simulation combined with automatic history matching to regress production data to determine the permeability distribution. Instead of matching on individual cell values of reservoir properties, I match on constant values of permeability within regions around each well. I then use the permeability distribution and an array of automated simulation predictions to determine infill drilling potential throughout the reservoir. Infill predictions on a single-phase synthetic case showed greater accuracy than results from statistical techniques. The methodology successfully identified infill well locations on a synthetic case derived from Cut Bank field, a water-flooded oil reservoir. Analysis of the actual production and injection data from Cut Bank field was unsuccessful, mainly because of an incomplete production database and limitations in the commercial regression software I used. In addition to providing more accurate results than previous empirical and statistical methods, the proposed method can also incorporate other types of data, such as geological data and fluid properties. The method can be applied in multiphase fluid situations and, since it is simulation based, it provides a platform for easy transition to more detailed analysis. Thus, the method can serve as a valuable reservoir management tool for operators of stripper oil fields.
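The region-based parameterization described above can be illustrated with a short sketch: one permeability multiplier per well region is regressed against production data, rather than matching individual cell values. The "simulator" is a toy linear response and all dimensions are illustrative.

```python
# A minimal sketch of matching on constant permeability multipliers within
# regions around each well instead of on individual grid cells. The linear
# "production response" matrix is a stand-in for the reservoir simulator.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
n_cells, n_regions = 400, 8
region_of_cell = rng.integers(0, n_regions, size=n_cells)   # well/region map
base_perm = np.full(n_cells, 50.0)                          # mD, initial model

sensitivity = rng.random((25, n_cells))                     # toy production response
true_mult = rng.uniform(0.3, 3.0, size=n_regions)
obs = sensitivity @ (base_perm * true_mult[region_of_cell])

def residuals(log_mult):
    perm = base_perm * np.exp(log_mult)[region_of_cell]     # cell permeabilities
    return sensitivity @ perm - obs

fit = least_squares(residuals, x0=np.zeros(n_regions))
print("recovered region multipliers:", np.round(np.exp(fit.x), 2))
```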
5

Application of the Ensemble Kalman Filter to Estimate Fracture Parameters in Unconventional Horizontal Wells by Downhole Temperature Measurements

Gonzales, Sergio Eduardo 16 December 2013
The increase in energy demand throughout the world has forced the oil industry to develop and expand current technologies to optimize well productivity. Distributed temperature sensing has become a current and fairly inexpensive way to monitor the performance of hydraulically fractured wells in real time with the aid of fiber optics. However, no applications have yet been attempted to describe or estimate fracture parameters using distributed temperature sensing as the observed data. The Ensemble Kalman Filter, a recursive filter, has proved to be an effective tool for inverse problems that estimate the parameters of non-linear models. Even though large amounts of data are acquired, the Ensemble Kalman Filter keeps the operation time manageable by using only "snapshots" of the ensembles collected from various simulations, with the estimate updated continuously and calibrated by comparison to a reference model. A reservoir model that measures temperature throughout the wellbore is constructed in ECLIPSE; this model represents what distributed temperature sensing measures in real time along the wellbore. Reservoir and fracture parameters are selected in this model with properties and values similar to an unconventional well, although certain parameters, such as fracture width, are adjusted to significantly reduce the computation time. A sensitivity study is performed for all the reservoir and fracture parameters in order to understand which parameters require more or less data for the Ensemble Kalman Filter to arrive at an acceptable estimate. Two fracture parameters are selected, based on their low sensitivity and their importance in fracture design, for estimation with the Ensemble Kalman Filter over various simulations. Fracture permeability has very low sensitivity, yet the Ensemble Kalman Filter still arrives at an acceptable estimate. Similarly, fracture half-length, with medium sensitivity, arrives at an acceptable estimate in around the same number of integration steps. The true effectiveness of the Ensemble Kalman Filter is shown when both parameters are estimated jointly and an acceptable estimate is reached without the procedure being computationally expensive. The effectiveness of the Ensemble Kalman Filter is directly connected to the quantity of data acquired: the more data available for the simulations, the better and faster the filter performs.
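A single Ensemble Kalman Filter analysis step of the kind described above can be sketched as follows. The state holds fracture permeability and half-length, the observations stand in for distributed-temperature snapshots, and a random linear operator replaces the ECLIPSE temperature model, so the numbers are purely illustrative.

```python
# A minimal sketch of one EnKF analysis step with a two-parameter state
# [log10 fracture permeability, fracture half-length] and a toy linear
# observation operator standing in for the wellbore-temperature model.
import numpy as np

rng = np.random.default_rng(4)
n_ens, n_state, n_obs = 100, 2, 30

# prior ensemble of [log10 k_f, x_f]
X = np.column_stack([rng.normal(3.0, 0.5, n_ens),      # log10 k_f
                     rng.normal(150.0, 40.0, n_ens)])  # x_f, ft

H = rng.normal(size=(n_obs, n_state))                  # toy temperature response
x_true = np.array([3.5, 200.0])
R = 0.25 * np.eye(n_obs)                               # measurement-error covariance
d_obs = H @ x_true + rng.multivariate_normal(np.zeros(n_obs), R)

# forecast observations for each member, then the standard EnKF update
Y = X @ H.T                                            # (n_ens x n_obs)
X_a = X - X.mean(axis=0)
Y_a = Y - Y.mean(axis=0)
C_xy = X_a.T @ Y_a / (n_ens - 1)
C_yy = Y_a.T @ Y_a / (n_ens - 1) + R
K = C_xy @ np.linalg.inv(C_yy)                         # Kalman gain

D = d_obs + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens)  # perturbed obs
X_post = X + (D - Y) @ K.T

print("posterior mean:", np.round(X_post.mean(axis=0), 2), "truth:", x_true)
```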
6

Production Optimization Of A Gas Condensate Reservoir Using A Black Oil Simulator And Nodal System Analysis: A Case Study

Mindek, Cem 01 June 2005
In a natural gas field, determining the life of the field and selecting the production technique that best meets economic considerations are the key decisions. In this study, a field in the Thrace Basin was chosen. Available reservoir data were compiled to characterize the field and then formatted for use in the commercial simulator IMEX, a subprogram of CMG (Computer Modeling Group). These data were used to perform a history match between the field production data and the simulator results for the three-year period between May 2002 and January 2005. After a satisfactory history match was obtained, it was used as a base for future scenarios. Four new scenarios were designed and run to predict future production of the field, with two new wells defined for the scenarios after determining the best region during history matching. Scenario 1 continues production with the existing wells; Scenario 2 adds a new well, W6; Scenario 3 adds another new well, W7; and Scenario 4 includes both new wells, W6 and W7. All scenarios were allowed to continue until 2010 unless the wellhead pressure dropped to 500 psi. None of the existing wells reached 2010, but the newly defined wells remained on production in 2010. Comparing all scenarios, Scenario 4, production with the two new wells W6 and W7, gave the best performance until 2010. In Scenario 4, between January 2005 and January 2010, 7,632 MMscf of gas was produced, 372 MMscf more than Scenario 2, the second-best scenario, which has a total production of 7,311 MMscf. Scenario 3 produced 7,260 MMscf and Scenario 1 produced 6,821 MMscf. A nodal system analysis was performed to check whether the initial flow rates of the wells are close to their optimum flow rates: W1 is found to have an optimum production rate of 6.9 MMscf/d, while W2 has 3.2 MMscf/d, W3 has 8.3 MMscf/d, W4 has 4.8 MMscf/d, and W5 has 0.95 MMscf/d.
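The nodal system analysis referenced above finds a well's operating rate at the intersection of the inflow-performance (IPR) and tubing/vertical-lift (VLP) curves. The sketch below uses a back-pressure IPR and a toy VLP with made-up coefficients, so the resulting rate is illustrative only.

```python
# A minimal sketch of nodal analysis: the operating rate of a gas well is where
# the inflow-performance curve (IPR) meets the vertical-lift curve (VLP).
# All coefficients are illustrative assumptions, not field data.
import numpy as np

q = np.linspace(0.1, 12.0, 500)                    # gas rate, MMscf/d

p_res, C, n = 2500.0, 2.0e-5, 0.85                 # back-pressure IPR: q = C*(pr^2 - pwf^2)^n
pwf_ipr = np.sqrt(np.maximum(p_res**2 - (q / C) ** (1.0 / n), 0.0))

p_wh = 500.0                                       # wellhead pressure limit, psi
pwf_vlp = p_wh + 900.0 + 6.0 * q**1.8              # toy hydrostatic + friction head

i = np.argmin(np.abs(pwf_ipr - pwf_vlp))           # intersection = operating point
print(f"operating rate ~ {q[i]:.2f} MMscf/d at pwf ~ {pwf_ipr[i]:.0f} psi")
```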
7

History matching of surfactant-polymer flooding

Pratik Kiranrao Naik (5930765) 17 January 2019
This thesis presents a framework for history matching and model calibration of surfactant-polymer (SP) flooding. First, a high-fidelity mechanistic SP flood model is constructed by performing extensive lab-scale experiments on Berea cores. Then, incorporating Sobol-based sensitivity analysis, polynomial chaos expansion based surrogate modelling (PCE-proxy), and genetic-algorithm-based inverse optimization, an optimized model parameter set is determined by minimizing the misfit between the PCE-proxy response and experimental observations for quantities of interest such as cumulative oil recovery and the pressure profile. The epistemic uncertainty in the PCE-proxy is quantified using Gaussian process regression (Kriging). The framework is then extended to Bayesian calibration, where the posterior of the model parameters is inferred by sampling from it directly using Markov chain Monte Carlo (MCMC). Finally, a stochastic multi-objective optimization problem is posed under uncertainty in the model parameters and the oil price, and solved using a variant of a Bayesian global optimization routine.
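The Bayesian-calibration step can be sketched with a random-walk Metropolis sampler, using a cheap closed-form proxy in place of the PCE surrogate of the SP flood model. The proxy, priors, and noise level below are assumptions made for illustration.

```python
# A minimal sketch of MCMC calibration: a random-walk Metropolis sampler draws
# from the posterior of two model parameters, with a toy closed-form proxy
# standing in for the PCE surrogate of the flood simulator.
import numpy as np

rng = np.random.default_rng(5)

def proxy(theta):
    """Stand-in for the surrogate: predicted cumulative oil recovery curve."""
    a, b = theta
    t = np.linspace(0.0, 1.0, 20)
    return a * (1.0 - np.exp(-b * t))

theta_true = np.array([0.6, 4.0])
sigma = 0.02
obs = proxy(theta_true) + rng.normal(0.0, sigma, 20)

def log_post(theta):
    # uniform priors on (0, 1] x (0, 20], Gaussian likelihood
    if np.any(theta <= 0.0) or theta[0] > 1.0 or theta[1] > 20.0:
        return -np.inf
    resid = obs - proxy(theta)
    return -0.5 * np.sum(resid**2) / sigma**2

theta = np.array([0.3, 2.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.02, 0.2])        # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:            # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)

chain = np.array(chain[5000:])                          # discard burn-in
print("posterior means:", np.round(chain.mean(axis=0), 3), "truth:", theta_true)
```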
8

The integration of seismic anisotropy and reservoir performance data for characterization of naturally fractured reservoirs using discrete feature network models

Will, Robert A. 30 September 2004
This dissertation presents the development of a method for quantitative integration of seismic (elastic) anisotropy attributes with reservoir performance data as an aid in characterization of systems of natural fractures in hydrocarbon reservoirs. This new method incorporates stochastic Discrete Feature Network (DFN) fracture modeling techniques, DFN model based fracture system hydraulic property and elastic anisotropy modeling, and non-linear inversion techniques, to achieve numerical integration of production data and seismic attributes for iterative refinement of initial trend and fracture intensity estimates. Although DFN modeling, flow simulation, and elastic anisotropy modeling are in themselves not new technologies, this dissertation represents the first known attempt to integrate advanced models for production performance and elastic anisotropy in fractured reservoirs using a rigorous mathematical inversion. The following new developments are presented:
• Forward modeling and sensitivity analysis of the upscaled hydraulic properties of realistic DFN fracture models through use of effective permeability modeling techniques.
• Forward modeling and sensitivity analysis of azimuthally variant seismic attributes based on the same DFN models.
• Development of a combined production and seismic data objective function and computation of sensitivity coefficients.
• Iterative model-based non-linear inversion of DFN fracture model trend and intensity through minimization of the combined objective function.
This new technique is demonstrated on synthetic models with single and multiple fracture sets as well as differing background (host) reservoir hydraulic and elastic properties. Results on these synthetic control models show that, given a well conditioned initial DFN model and good quality field production and seismic observations, the integration procedure results in convergence of both fracture trend and intensity in models with both single and multiple fracture sets. Tests show that for a single fracture set convergence is accelerated when the combined objective function is used as compared to a similar technique using only production data in the objective function. Tests performed on multiple fracture sets show that, without the addition of seismic anisotropy, the model fails to converge. These tests validate the importance of the new process for use in more realistic reservoir models.
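The combined objective function and sensitivity-coefficient computation can be illustrated with a small Gauss-Newton sketch: production and seismic-anisotropy residuals are stacked into one weighted objective in the fracture trend and intensity, with the Jacobian taken by finite differences. Both forward models below are toy stand-ins for the DFN-based flow and elastic-anisotropy simulators.

```python
# A minimal sketch of a combined production + seismic objective minimized over
# fracture trend and intensity with finite-difference sensitivity coefficients.
# Both forward models are toy stand-ins, purely for illustration.
import numpy as np

def g_prod(x):      # toy production response to [trend_deg, intensity]
    trend, intens = x
    return np.array([intens * np.cos(np.radians(trend)), intens * 2.0])

def g_seis(x):      # toy azimuthal-anisotropy response
    trend, intens = x
    return np.array([np.radians(trend) * intens, intens * 0.5])

x_true = np.array([35.0, 1.4])
d_prod, d_seis = g_prod(x_true), g_seis(x_true)
w_prod, w_seis = 1.0, 1.0                        # relative data weights

def residuals(x):
    return np.concatenate([w_prod * (g_prod(x) - d_prod),
                           w_seis * (g_seis(x) - d_seis)])

x = np.array([60.0, 0.8])                        # initial DFN trend/intensity guess
for _ in range(15):
    r = residuals(x)
    # finite-difference sensitivity coefficients (Jacobian)
    J = np.column_stack([(residuals(x + dx) - r) / 1e-4
                         for dx in 1e-4 * np.eye(2)])
    x = x - np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step

print("recovered trend/intensity:", np.round(x, 2), "truth:", x_true)
```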
9

Predicting the migration of CO₂ plume in saline aquifers using probabilistic history matching approaches

Bhowmik, Sayantan 20 August 2012
During the operation of a geological carbon storage project, verifying that the CO₂ plume remains within the permitted zone is of particular interest both to regulators and to operators. However, the cost of many monitoring technologies, such as time-lapse seismic, limits their application. For adequate predictions of plume migration, proper representation of heterogeneous permeability fields is imperative. Previous work has shown that injection data (pressures, rates) from wells might provide a means of characterizing complex permeability fields in saline aquifers. Thus, given that injection data are readily available and cheap to collect, they might provide an inexpensive alternative for monitoring; combined with a flow model like the one developed in this work, these data could even be used for predicting plume migration. These predictions of plume migration pathways can then be compared to field observations, such as time-lapse seismic or satellite measurements of surface deformation, to ensure the containment of the injected CO₂ within the storage area. In this work, two novel methods for creating heterogeneous permeability fields constrained by injection data are demonstrated. The first method is an implementation of a probabilistic history matching algorithm to create models of the aquifer for predicting the movement of the CO₂ plume. The geologic property of interest, for example hydraulic conductivity, is updated conditioned to geological information and injection pressures. The resulting aquifer model, which is geologically consistent, can be used to reliably predict the movement of the CO₂ plume in the subsurface. The second method is a model selection algorithm that refines an initial suite of subsurface models representing the prior uncertainty to create a posterior set of subsurface models that reflect injection performance consistent with that observed. Such posterior models can be used to represent uncertainty in the future migration of the CO₂ plume. The applicability of both methods is demonstrated using a field data set from central Algeria.
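The model-selection method described above can be sketched as likelihood-weighted resampling of a prior suite of models against observed injection pressures. Each model is summarized by a single effective-conductivity value and the pressure response is a toy function, so everything below is illustrative.

```python
# A minimal sketch of model selection: an initial suite of aquifer models (the
# prior) is scored against observed injection pressures, and likelihood weights
# are used to resample a posterior set that honours the injection performance.
import numpy as np

rng = np.random.default_rng(6)
n_models, n_obs, sigma = 200, 15, 5.0

# prior suite: each model summarized here by one effective conductivity value
prior_models = rng.lognormal(mean=0.0, sigma=0.5, size=n_models)

def injection_pressure(k_eff):
    """Toy response: higher conductivity -> lower injection pressure buildup."""
    t = np.linspace(1.0, 15.0, n_obs)
    return 100.0 + 40.0 * np.log(t) / k_eff

k_true = 1.3
obs = injection_pressure(k_true) + rng.normal(0.0, sigma, n_obs)

# likelihood weight of each prior model given the injection data
misfits = np.array([np.sum((injection_pressure(k) - obs) ** 2) for k in prior_models])
log_w = -0.5 * misfits / sigma**2
w = np.exp(log_w - log_w.max())
w /= w.sum()

# posterior set: resample models in proportion to their weights
posterior_models = prior_models[rng.choice(n_models, size=n_models, p=w)]
print("prior mean k:", round(prior_models.mean(), 2),
      "posterior mean k:", round(posterior_models.mean(), 2),
      "truth:", k_true)
```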
