51

Modelling and simulation of turbulence subject to system rotation

Grundestam, Olof January 2006 (has links)
Simulation and modelling of turbulent flows under the influence of streamline curvature and system rotation have been considered. Direct numerical simulations have been performed for fully developed rotating turbulent channel flow using a pseudo-spectral code. The rotation numbers considered are larger than unity. For the range of rotation numbers studied, an increase in rotation number has a damping effect on the turbulence. DNS data obtained from previous simulations are used to perform a priori tests of different pressure-strain and dissipation rate models. Furthermore, the ideal behaviour of the coefficients of different model formulations is investigated. The main part of the modelling is focused on explicit algebraic Reynolds stress models (EARSMs). An EARSM based on a pressure-strain rate model including terms that are tensorially nonlinear in the mean velocity gradients is proposed. The new model is tested for a number of flows, including a high-lift aeronautics application. The extensions are demonstrated to have a significant effect on the predictions. Representation techniques for EARSMs based on incomplete sets of basis tensors are also considered. It is shown that a least-squares approach is favourable compared to the Galerkin method. The corresponding optimality aspects are considered, and it is deduced that Galerkin-based EARSMs are not optimal in a stricter sense. EARSMs derived with the least-squares method are, on the other hand, optimal in the sense that the error of the underlying implicit relation is minimized. It is further demonstrated that the predictions of the least-squares EARSMs are in significantly better agreement with the corresponding complete EARSMs when tested for fully developed rotating turbulent pipe flow. / QC 20100825
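The distinction drawn above between least-squares and Galerkin representations on an incomplete tensor basis can be illustrated with a toy projection that is not taken from the thesis: given a target anisotropy tensor, the least-squares coefficients minimize the Frobenius-norm error over the span of the retained basis tensors (all tensors and numbers below are made up for illustration):

```python
import numpy as np

# Hypothetical 2x2 stand-ins for mean strain- and rotation-rate tensors.
S = np.array([[0.0, 1.0], [1.0, 0.0]])
W = np.array([[0.0, -0.5], [0.5, 0.0]])
basis = [S, S @ W - W @ S]  # an *incomplete* set of basis tensors

def lstsq_coefficients(target, basis):
    """Least-squares representation: minimize ||target - sum_k c_k T_k||_F."""
    A = np.column_stack([T.ravel() for T in basis])
    c, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    return c

# A target tensor with a component (0.05 * I) outside the basis span.
b_target = 0.3 * S + 0.1 * (S @ W - W @ S) + 0.05 * np.eye(2)
c = lstsq_coefficients(b_target, basis)
residual = np.linalg.norm(
    b_target - sum(ci * Ti for ci, Ti in zip(c, basis)))
```

Because the two retained basis tensors happen to be mutually orthogonal here, the least-squares coefficients recover 0.3 and 0.1 exactly, and the residual equals the norm of the out-of-span component; this minimal-error property is what the thesis establishes for least-squares EARSMs in general.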
52

Optimal Waterflood Management under Geologic Uncertainty Using Rate Control: Theory and Field Applications

Alhuthali, Ahmed Humaid H. 16 January 2010 (has links)
Waterflood optimization via rate control is receiving increased interest because of rapid developments in smart well completions and intelligent-field (i-field) technology. The use of inflow control valves (ICVs) allows us to optimize the production/injection rates of various segments along the wellbore, thereby maximizing sweep efficiency and delaying water breakthrough. It is well recognized that field-scale rate optimization problems are difficult because they often involve highly complex reservoir models, production and facilities related constraints, and a large number of unknowns. Some aspects of the optimization problem have been studied before using mainly optimal control theory. However, the applications to date have been limited to rather small problems because of the computation time and the complexities associated with the formulation and solution of adjoint equations. Field-scale rate optimization for maximizing waterflood sweep efficiency under realistic field conditions has remained largely unexplored. We propose a practical and efficient approach for computing optimal injection and production rates, thereby managing the waterflood front to maximize sweep efficiency and delaying the arrival time to minimize water cycling. Our work relies on equalizing the arrival times of the waterfront at all producers within selected sub-regions of a waterflood project. The arrival time optimization has favorable quasi-linear properties, and the optimization proceeds smoothly even if our initial conditions are far from the solution. We account for geologic uncertainty using two optimization schemes. The first formulates the objective function in a stochastic form that relies on a combination of the expected value and standard deviation together with a risk attitude coefficient. The second minimizes the worst-case scenario using a min-max problem formulation.
The optimization is performed under operational and facility constraints using a sequential quadratic programming approach. A major advantage of our approach is the analytical computation of the gradient and Hessian of the objective which makes it computationally efficient and suitable for large field cases. Multiple examples are presented to support the robustness and efficiency of the proposed optimization scheme. These include several 2D synthetic examples for validation purposes and 3D field applications.
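The core idea of equalizing waterfront arrival times can be sketched with a deliberately simplified model (all names and numbers below are illustrative, not from the dissertation): assume producer i sees the front after t_i = d_i / q_i for a fixed streamline "distance" d_i, and choose rates q_i under a total-rate constraint to minimize the spread of the t_i:

```python
import numpy as np
from scipy.optimize import minimize

d = np.array([10.0, 20.0, 30.0])   # hypothetical streamline "distances"
Q_total = 6.0                      # total field rate (facility constraint)

def misfit(q):
    t = d / q                      # toy arrival-time model
    return np.sum((t - t.mean()) ** 2)  # squared deviation from mean arrival

res = minimize(misfit, x0=np.full(3, Q_total / 3), method="SLSQP",
               bounds=[(0.1, Q_total)] * 3,
               constraints={"type": "eq",
                            "fun": lambda q: q.sum() - Q_total})
q_opt = res.x
```

In this toy model the optimum allocates rates in proportion to distance, so every producer breaks through at the same time; the dissertation achieves this with analytically computed gradients and Hessians inside an SQP loop rather than a generic solver.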
53

Continuous reservoir model updating using an ensemble Kalman filter with a streamline-based covariance localization

Arroyo Negrete, Elkin Rafael 25 April 2007 (has links)
This work presents a new approach that combines the comprehensive capabilities of the ensemble Kalman filter (EnKF) with the flow path information from streamlines to eliminate and/or reduce some of the problems and limitations of using the EnKF for history matching reservoir models. The recent use of the EnKF for data assimilation and assessment of uncertainties in future forecasts in reservoir engineering seems promising. The EnKF provides an efficient way of incorporating any type of production data or time-lapse seismic information. However, its use in history matching comes with its share of challenges and concerns. The overshooting of parameters leading to loss of geologic realism, a possible increase in the material balance errors of the updated phase(s), and limitations associated with non-Gaussian permeability distributions are some of the most critical problems of the EnKF. Using a larger ensemble size may mitigate some of these problems but is prohibitively expensive in practice. We present a streamline-based conditioning technique that can be implemented with the EnKF to eliminate or reduce the magnitude of these problems, allowing a reduced ensemble size and thereby leading to significant savings in time during field-scale implementation. Our approach involves no extra computational cost and is easy to implement. Additionally, the final history-matched model tends to preserve most of the geological features of the initial geologic model. An overview of the procedure is provided to enable its incorporation into current EnKF implementations. Our procedure uses the streamline path information to condition the covariance matrix in the Kalman update. We demonstrate the power and utility of our approach with synthetic examples and a field case.
Our results show that, using the conditioning technique presented in this thesis, the overshooting/undershooting problems disappear and the limitations of working with non-Gaussian distributions are reduced. Finally, an analysis of the scalability of a parallel implementation of our computer code is given.
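The role of the streamline-derived conditioning can be pictured as a mask applied to the cross-covariance in the Kalman update. The sketch below is a generic localized EnKF analysis step with a given 0/1 mask, not the thesis code (there the mask is built from streamline paths rather than supplied by hand):

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y_obs, obs_op, R, loc):
    """One EnKF analysis step; `loc` masks the state-observation covariance."""
    Y = obs_op(X)                              # predicted data, shape (m, Ne)
    Xp = X - X.mean(axis=1, keepdims=True)     # state anomalies
    Yp = Y - Y.mean(axis=1, keepdims=True)     # data anomalies
    Ne = X.shape[1]
    Cxy = loc * (Xp @ Yp.T) / (Ne - 1)         # localized cross-covariance
    Cyy = (Yp @ Yp.T) / (Ne - 1) + R
    K = Cxy @ np.linalg.inv(Cyy)               # localized Kalman gain
    pert = y_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(y_obs)), R, size=Ne).T    # perturbed observations
    return X + K @ (pert - Y)

X = rng.normal(size=(4, 50))                   # 4 parameters, 50 members
loc = np.array([[1.0, 1.0], [1.0, 1.0],
                [0.0, 0.0], [0.0, 0.0]])       # last two cells "off-streamline"
Xa = enkf_update(X, np.array([0.5, -0.5]),
                 lambda X: X[:2], 0.01 * np.eye(2), loc)
```

Cells whose mask rows are zero receive no update at all, which is how streamline-based conditioning suppresses the spurious long-range updates that cause overshooting.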
54

Simulação por linhas de corrente com compressibilidade e variação espacial e dinamica de composição de oleo / Streamline based simulation with compressibility and spatial and dynamic variation of oil composition

Beraldo, Valcir Tadeu 13 August 2018 (has links)
Orientadores: Denis Jose Schiozer, Martin Julian Blunt / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecanica e Instituto de Geociencias / Previous issue date: 2008 / Abstract: Spatial variation of the initial oil composition is a phenomenon found in some reservoirs that must be accounted for in simulations. The goal of this thesis is to implement a formulation that considers this variation in streamline simulators, which in many situations can be faster than finite-difference simulators. One important limitation of streamline simulation is the treatment of rock and fluid compressibility; therefore, a formulation that handles compressibility together with oil-quality variation has also been implemented. First, a two-phase streamline simulator for incompressible systems was modified to work with two components in the oleic phase, allowing property variations of that phase to be considered. The simulator was then extended with the formulation for compressible systems with oil-quality variation; at this stage, several procedures were introduced to keep the program stable across the situations tested. The implementations were validated by comparison with commercial finite-difference simulators on a series of models representing commonly encountered situations, and both formulations reproduced the reference results satisfactorily. For the compressible formulation, a sensitivity analysis of execution time and solution quality with respect to several numerical control parameters defined in the code was performed.
In heterogeneous, finely gridded models of compressible systems with oil-property variation, the results showed that an appropriate combination of parameters allows streamline simulation to run in noticeably less time than finite-difference simulation while maintaining the quality of the results. / Doutorado / Reservatórios e Gestão / Doutor em Ciências e Engenharia de Petróleo
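A basic building block of any streamline simulator is the time of flight, τ = ∫ ds / |v| along a streamline, onto which the 1D transport equations are mapped. A minimal sketch with synthetic points and speeds (not from this thesis) using cumulative trapezoidal integration:

```python
import numpy as np

def time_of_flight(points, speeds):
    """Cumulative time of flight along a sampled streamline, tau = ∫ ds/|v|."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)    # segment lengths
    avg_slowness = 0.5 * (1.0 / speeds[:-1] + 1.0 / speeds[1:])
    return np.concatenate([[0.0], np.cumsum(seg * avg_slowness)])

# Three collinear sample points with speed doubling on the last segment.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
v = np.array([1.0, 1.0, 2.0])
tau = time_of_flight(pts, v)
```

With these numbers the first unit-length segment takes 1.0 time units and the second, averaging slownesses 1.0 and 0.5, takes 0.75, giving τ = [0, 1, 1.75].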
55

PENSADO A MANO, FABRICADO EN SERIE. Pioneros del Diseño Industrial. Transformación y adaptabilidad de las profesiones creativas

Silvestre Navarro, Francisco Miguel 05 April 2016 (has links)
The consolidation of the figure of the industrial designer in the United States during the middle decades of the twentieth century is the subject of this investigation. This process marks a transition between craft and industry that had begun centuries earlier. American designers specialized in a new profession and set in motion a shared, multidisciplinary intelligence. Perhaps unwittingly, they pioneered the blurring of the boundaries between the creative disciplines. Theirs is a valuable model of transformation and adaptability from which conclusions and parallels with the current period can be drawn, an example with which to prepare for and understand the acceleration produced by the third industrial revolution and the changes it brings to the creative professions. There have always been more or less artisanal ways to mass-produce everything around us, from everyday objects to architecture. But it was the second industrial revolution, with the emergence of Taylorism and the Ford revolution, that introduced a turning point in this development through new machines and the systematization of manufacturing processes. In Europe, institutions, schools and movements such as the Arts and Crafts succeeded one another; collectives and associations such as the Wiener Werkstätte and the Werkbunds were established, and later the Bauhaus was founded, all of them connected and all partly influencing the evolution of design in the young American nation. The United States offered an exceptional context: a unified market of unprecedented scale and a society enjoying novel technical comforts in energy, materials and media gave rise to a way of life, the American way of life, with its lights and shadows. It was precisely this distance from the European tradition that was among the elements favoring the consolidation of industrial design as a profession on American territory. The historical singularity lies in the fact that the level of technical specialization had grown until the knowledge required to produce a good exceeded the capacity of a single person.
The pioneers of American design coordinated the work of multidisciplinary teams of engineers, architects, manufacturers, advertisers and others, while accompanying the product through to the communication process, driven by the needs of large companies. This allowed them to work on a variety of projects of a breadth not seen since the notable examples of the Renaissance. They worked in fields such as graphic design, industrial design, interior design, architectural design, urban design, automotive design, railway design, naval design, aircraft design and even aerospace design, creating new types and extending the scope of their work. The most important designers of the period were Raymond Loewy, Norman Bel Geddes, Walter Dorwin Teague and Henry Dreyfuss, who came from the worlds of stage design, illustration and window dressing. Guided by an intuition for beauty, they aided industry from the 1920s onward. To speak of the birth of industrial design is to examine the relations between art and industry, between man and machine. It invites us to ask the questions that will shape what is to come. / Silvestre Navarro, FM. (2016). PENSADO A MANO, FABRICADO EN SERIE. Pioneros del Diseño Industrial. Transformación y adaptabilidad de las profesiones creativas [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/62207 / TESIS
56

Stabilization of POD-ROMs

Wells, David Reese 17 June 2015 (has links)
This thesis describes several approaches for stabilizing POD-ROMs (that is, reduced order models based on basis functions derived from the proper orthogonal decomposition) for both the CDR (convection-diffusion-reaction) equation and the NSEs (Navier-Stokes equations). Stabilization is necessary because standard POD-ROMs of convection-dominated problems usually display numerical instabilities. The first stabilized ROM investigated is a streamline-upwind Petrov-Galerkin ROM (SUPG-ROM). I prove error estimates for the SUPG-ROM and derive optimal scalings for the stabilization parameter. I test the SUPG-ROM with the optimal parameter in the numerical simulation of a convection-dominated CDR problem. The SUPG-ROM yields more accurate results than the standard Galerkin ROM (G-ROM) by eliminating the inherent numerical artifacts (noise) in the data and dampening spurious oscillations. I next propose two regularized ROMs (Reg-ROMs) based on ideas from large eddy simulation and turbulence theory: the Leray ROM (L-ROM) and the evolve-then-filter ROM (EF-ROM). Both Reg-ROMs use explicit POD spatial filtering to regularize (smooth) some of the terms in the standard G-ROM. I propose two different POD spatial filters: one based on the POD projection and a novel POD differential filter. These two new Reg-ROMs and the two spatial filters are investigated in the numerical simulation of the three-dimensional flow past a circular cylinder problem at Re = 100. The numerical results show that EF-ROM-DF is the most accurate Reg-ROM and filter combination and the differential filter generally yields better results than the projection filter. The Reg-ROMs perform significantly better than the standard G-ROM and decrease the CPU time (compared against the direct numerical simulation) by orders of magnitude (from about four days to four minutes). / Ph. D.
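For readers unfamiliar with the starting point, the POD basis itself is simply the set of leading left singular vectors of a snapshot matrix. A minimal sketch with synthetic low-rank snapshot data (not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)
n, ns, r = 100, 20, 3                 # dofs, snapshots, ROM dimension
# Synthetic rank-2 snapshot matrix standing in for saved solution states.
snapshots = rng.normal(size=(n, 2)) @ rng.normal(size=(2, ns))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :r]                      # POD modes (orthonormal columns)
a = basis.T @ snapshots               # reduced (ROM) coordinates
reconstruction = basis @ a            # lift back to the full space
```

A G-ROM then evolves only the r coordinates a(t); the Reg-ROMs described above additionally filter terms of that small system, for example via a POD projection or a differential filter.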
57

Supervised Learning for White Matter Bundle Segmentation

Bertò, Giulia 03 June 2020 (has links)
Accurate delineation of anatomical structures in the white matter of the human brain is of paramount importance for multiple applications, such as neurosurgical planning, characterization of neurological disorders, and connectomic studies. Diffusion Magnetic Resonance Imaging (dMRI) techniques can provide, in vivo, a mathematical representation of the thousands of fibers composing such anatomical structures, in the form of 3D polylines called streamlines. Given this representation, a task of invaluable interest is white matter bundle segmentation, whose aim is to virtually group together streamlines sharing a similar pathway into anatomically meaningful structures, called white matter bundles. Obtaining a good and reliable bundle segmentation is, however, not trivial, mainly because of the intrinsic complexity of the data. Most current methods for bundle segmentation require extensive neuroanatomical knowledge, are time consuming, or are unable to adapt to different data settings. To overcome these limitations, the main goal of this thesis is to develop a new automatic method for accurate white matter bundle segmentation by exploiting, combining and extending multiple up-to-date supervised learning techniques. The main contribution of the project is the development of a novel streamline-based bundle segmentation method based on binary linear classification, which simultaneously combines information from atlases, bundle geometries, and connectivity patterns. We prove that the proposed method reaches unprecedented quality of segmentation and that it is robust to a multitude of diverse settings, such as differences in bundle size, tracking algorithm, and/or quality of the dMRI data. In addition, we show that some of the state-of-the-art bundle segmentation methods are deeply affected by a geometrical property of the shape of the bundles to be segmented, their fractal dimension.
Important factors involved in the task of streamline classification are: (i) the need for an effective streamline distance function and (ii) the definition of a proper feature space. To this end, we compare some of the most common streamline distance functions available in the literature and we provide some guidelines on their practical use for the task of supervised bundle segmentation. Moreover, we investigate the possibility to include, in a streamline-based segmentation method, additional information to the typically employed streamline distance measure. Specifically, we provide evidence that considering additional anatomical information regarding the cortical terminations of the streamlines and their proximity to specific Regions of Interest (ROIs) helps to improve the results of bundle segmentation. Lastly, significant attention is paid to reproducibility in neuroscience. Following the FAIR (Findable, Accessible, Interoperable and Reusable) Data Principles, we have integrated our pipelines of analysis into an online open platform devoted to promoting reproducibility of scientific results and to facilitating knowledge discovery.
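One widely used streamline distance of the kind compared in the thesis is the mean direct-flip (MDF) distance, which averages pointwise distances over both orientations of one streamline, assuming both streamlines are resampled to the same number of points. A minimal version:

```python
import numpy as np

def mdf(s1, s2):
    """Mean direct-flip distance between two equal-length streamlines."""
    direct = np.mean(np.linalg.norm(s1 - s2, axis=1))
    flipped = np.mean(np.linalg.norm(s1 - s2[::-1], axis=1))
    return min(direct, flipped)     # orientation-invariant distance

# Toy 2D streamlines with three points each.
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a[::-1]                         # same curve, opposite orientation
```

Because of the flip, `mdf(a, b)` is zero even though the curves are traversed in opposite directions, while a rigid unit shift of `a` yields a distance of one; this orientation invariance is one reason MDF is popular for supervised streamline classification.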
58

Solving Optimal Control Time-dependent Diffusion-convection-reaction Equations By Space Time Discretizations

Seymen, Zahire 01 February 2013 (has links) (PDF)
Optimal control problems (OCPs) governed by convection-dominated diffusion-convection-reaction equations arise in many science and engineering applications, such as shape optimization of technological devices, identification of parameters in environmental processes, and flow control problems. A characteristic feature of convection-dominated optimization problems is the presence of sharp layers; in this case, the Galerkin finite element method performs poorly and leads to oscillatory solutions. Hence, these problems require stabilization techniques to resolve boundary and interior layers accurately. The Streamline Upwind Petrov-Galerkin (SUPG) method is one of the most popular stabilization techniques for solving convection-dominated OCPs. The focus of this thesis is the application and analysis of the SUPG method for distributed and boundary OCPs governed by evolutionary diffusion-convection-reaction equations. There are two approaches for solving these problems: optimize-then-discretize and discretize-then-optimize. In the optimize-then-discretize approach, the time-dependent OCP is transformed into a biharmonic equation in which space and time are treated equally; the resulting optimality system is solved with the finite element package COMSOL. In the discretize-then-optimize approach, we use the so-called all-at-once method, in which the fully discrete optimality system is solved as a saddle point problem for all time steps at once. A priori error bounds are derived for the state, adjoint, and controls by applying linear finite element discretization with the SUPG method in space and using backward Euler, Crank-Nicolson, and semi-implicit methods in time. The stabilization parameter is chosen for the convection-dominated problem so that the error bounds are balanced to obtain L2 error estimates.
Numerical examples with and without control constraints for distributed and boundary control problems confirm the effectiveness of both approaches and confirm a priori error estimates for the discretize-then-optimize approach.
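A common choice for the SUPG stabilization parameter in the 1D convection-diffusion setting is driven by the local Péclet number; the sketch below uses the classical textbook formula, which is not necessarily the scaling derived in the thesis:

```python
import numpy as np

def supg_tau(h, b, eps):
    """Classical 1D SUPG parameter: tau = h/(2|b|) * (coth(Pe) - 1/Pe)."""
    Pe = abs(b) * h / (2.0 * eps)          # local (element) Peclet number
    return h / (2.0 * abs(b)) * (1.0 / np.tanh(Pe) - 1.0 / Pe)

# Convection-dominated case: h = 0.1, b = 1, eps = 1e-4 gives Pe = 500.
tau = supg_tau(h=0.1, b=1.0, eps=1e-4)
```

In the convection-dominated limit (Pe much greater than 1) this tends to h/(2|b|), while for small Pe it shrinks like O(Pe), switching the stabilization off where diffusion dominates.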
59

A Hierarchical History Matching Method and its Applications

Yin, Jichao December 2011 (has links)
Modern reservoir management typically involves simulations of geological models to predict future recovery estimates, providing the economic assessment of different field development strategies. Integrating reservoir data is a vital step in developing reliable reservoir performance models. Currently, the most effective strategies for traditional manual history matching follow a structured approach with a sequence of adjustments from global to regional parameters, followed by local changes in model properties. In contrast, many of the recent automatic history matching methods utilize parameter sensitivities or gradients to directly update the fine-scale reservoir properties, often ignoring geological consistency. Therefore, there is a need to combine elements of all of these scales in a seamless manner. We present a hierarchical streamline-assisted history matching framework of global-local updates. A probabilistic approach, consisting of design of experiments, response surface methodology and a genetic algorithm, is used to understand the uncertainty in the large-scale static and dynamic parameters. This global update step is followed by a streamline-based model calibration for high-resolution reservoir heterogeneity; this local update step assimilates dynamic production data. We apply the genetic global calibration to an unconventional shale gas reservoir; specifically, we include the stimulated reservoir volume (SRV) as a constraint term in the data integration to improve history matching and reduce prediction uncertainty. We introduce a novel approach for efficiently computing well drainage volumes for shale gas wells with multistage fractures and fracture clusters, and we filter stochastic shale gas reservoir models by comparing the computed drainage volume with the measured SRV within specified confidence limits.
Finally, we demonstrate the value of integrating downhole temperature measurements as a coarse-scale constraint during streamline-based history matching of dynamic production data. We first derive coarse-scale permeability trends in the reservoir from temperature data. The coarse-scale information is then downscaled into fine-scale permeability by sequential Gaussian simulation with block kriging and updated by local-scale streamline-based history matching. The power and utility of our approaches have been demonstrated using both synthetic and field examples.
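The global step's design-of-experiments plus response-surface idea can be reduced to a one-parameter cartoon (entirely synthetic; the thesis uses multi-parameter designs and a genetic algorithm): evaluate the history-match error at a few design levels, fit a quadratic proxy, and read off its minimizer.

```python
import numpy as np

# Hypothetical design-of-experiments levels for one global parameter.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
# Stand-in "history-match error" returned by the simulator at each level.
y = (x - 1.2) ** 2 + 0.3

coef = np.polyfit(x, y, deg=2)        # quadratic response surface
x_best = -coef[1] / (2 * coef[0])     # vertex of the fitted parabola
```

Here the proxy recovers the minimizing parameter value (1.2) exactly because the synthetic response is itself quadratic; with real simulator output the proxy is only an approximation that guides where to sample next.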
60

Automatic history matching in Bayesian framework for field-scale applications

Mohamed Ibrahim Daoud, Ahmed 12 April 2006 (has links)
Conditioning geologic models to production data and assessing uncertainty are generally done in a Bayesian framework. The current Bayesian approach suffers from three major limitations that make it impractical for field-scale applications. These are: first, the CPU time of the Bayesian inverse problem using the modified Gauss-Newton algorithm with full covariance as regularization scales quadratically with model size; second, the cost of sensitivity calculation using finite differences with the forward model depends on the number of model parameters or the number of data points; and third, the high CPU time and memory required for covariance matrix calculation. Previous attempts to alleviate the third limitation used analytically derived stencils, but these are limited to exponential covariance models only. We propose a fast and robust adaptation of the Bayesian formulation for inverse modeling that overcomes many of the current limitations. First, we use a commercial finite difference simulator, ECLIPSE, as the forward model, which is general and can account for the complex physical behavior that dominates most field applications. Second, the production data misfit is represented by a single generalized travel time misfit per well, thus effectively reducing the number of data points to one per well while ensuring the matching of the entire production history. Third, we use both the adjoint method and a streamline-based sensitivity method for sensitivity calculations. The cost of the adjoint method depends on the number of wells integrated, which is generally an order of magnitude smaller than the number of data points or model parameters. The streamline method is more efficient and faster, as it requires only one simulation run per iteration regardless of the number of model parameters or data points.
Fourth, for solving the inverse problem, we utilize an iterative sparse matrix solver, LSQR, along with an approximation of the square root of the inverse of the covariance calculated using a numerically-derived stencil, which is broadly applicable to a wide class of covariance models. Our proposed approach is computationally efficient and, more importantly, the CPU time scales linearly with respect to model size. This makes automatic history matching and uncertainty assessment using a Bayesian framework more feasible for large-scale applications. We demonstrate the power and utility of our approach using synthetic cases and a field example. The field example is from Goldsmith San Andres Unit in West Texas, where we matched 20 years of production history and generated multiple realizations using the Randomized Maximum Likelihood method for uncertainty assessment. Both the adjoint method and the streamline-based sensitivity method are used to illustrate the broad applicability of our approach.
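The regularized update step can be sketched as a stacked least-squares problem handed to the iterative LSQR solver. All matrix values below are made up, and `L` merely stands in for the numerically derived square root of the inverse covariance:

```python
import numpy as np
from scipy.sparse import csr_matrix, vstack
from scipy.sparse.linalg import lsqr

# min || G dm - dd ||^2 + || L dm ||^2, solved as one stacked system.
G = csr_matrix(np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]))  # sensitivities
L = csr_matrix(0.1 * np.eye(2))   # square-root inverse covariance stand-in
dd = np.array([1.0, 2.0, 3.0])    # generalized travel-time misfits

A = vstack([G, L])
rhs = np.concatenate([dd, np.zeros(2)])
dm = lsqr(A, rhs)[0]              # model update from the iterative solver
```

Because LSQR only needs matrix-vector products with A and its transpose, the cost per iteration stays linear in the number of nonzeros, which is what underlies the linear scaling of CPU time with model size claimed above.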
