
An efficient Bayesian formulation for production data integration into reservoir models

Leonardo, Vega Velasquez 17 February 2005
Current techniques for production data integration into reservoir models can be broadly grouped into two categories: deterministic and Bayesian. The deterministic approach relies on imposing parameter smoothness constraints using spatial derivatives to ensure large-scale changes consistent with the low resolution of the production data. The Bayesian approach is based on prior estimates of model statistics, such as parameter covariance and data errors, and attempts to generate posterior models consistent with the static and dynamic data. Both approaches have been successful for field-scale applications, although the computational costs associated with the two methods can vary widely. This is particularly true of the Bayesian approach, which utilizes a prior covariance matrix that can be large and full. To date, no systematic study has been carried out to examine the scaling properties and relative merits of the methods. The purpose of this work is twofold. First, we systematically investigate the scaling of the computational costs of the deterministic and Bayesian approaches for realistic field-scale applications. Our results indicate that the deterministic approach exhibits a linear increase in CPU time with model size, compared to a quadratic increase for the Bayesian approach. Second, we propose a fast and robust adaptation of the Bayesian formulation that preserves the statistical foundation of the Bayesian method while having a scaling property similar to that of the deterministic approach. This can lead to orders-of-magnitude savings in computation time for model sizes greater than 100,000 grid blocks. We demonstrate the power and utility of the proposed method using synthetic examples and a field example from the Goldsmith field, a carbonate reservoir in West Texas. The use of the new efficient Bayesian formulation together with the Randomized Maximum Likelihood method allows straightforward assessment of uncertainty: the former provides computational efficiency, and the latter avoids rejection of expensive conditioned realizations.
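The Randomized Maximum Likelihood idea can be illustrated on a toy linear-Gaussian problem: each conditioned realization is obtained by conditioning a prior draw on perturbed observations, and for a linear forward operator the minimizer of the perturbed objective has a closed form. A minimal numpy sketch, with a random matrix standing in for the reservoir simulator (all sizes and values are illustrative, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
n_m, n_d = 4, 3                          # model / data dimensions (toy sizes)
G = rng.normal(size=(n_d, n_m))          # linear forward operator (stands in for the simulator)
C_M = np.eye(n_m)                        # prior model covariance
C_D = 0.01 * np.eye(n_d)                 # data-error covariance
m_pr = np.zeros(n_m)                     # prior mean
m_true = rng.normal(size=n_m)
d_obs = G @ m_true + rng.multivariate_normal(np.zeros(n_d), C_D)

def rml_sample(rng):
    """One RML realization: condition a prior draw on perturbed data."""
    m_unc = rng.multivariate_normal(m_pr, C_M)    # unconditioned prior realization
    d_unc = rng.multivariate_normal(d_obs, C_D)   # perturbed observations
    # linear-Gaussian case: the perturbed-objective minimizer has a closed form
    K = C_M @ G.T @ np.linalg.inv(G @ C_M @ G.T + C_D)
    return m_unc + K @ (d_unc - G @ m_unc)

samples = np.array([rml_sample(rng) for _ in range(500)])
```

In this linear-Gaussian setting the RML samples reproduce the exact posterior; with a nonlinear simulator each sample instead requires an expensive minimization, which is where the computational-efficiency argument above comes in.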

Climate change impact assessment and uncertainty analysis of the hydrology of a northern, data-sparse catchment using multiple hydrological models

Bohrn, Steven 17 December 2012
The objective of this research was to determine the impact of climate change on the hydrology of the Churchill River basin and to analyze the uncertainty associated with that impact. Three hydrological models, calibrated to approximately equivalent levels of efficiency, were used: WATFLOOD, a semi-physically based, distributed model; HBV-EC, a semi-distributed, conceptual model; and HMETS, a lumped, conceptual model. The models achieved Nash-Sutcliffe calibration values ranging from 0.51 to 0.71. Averaged across the climate change simulations, flow is projected to increase slightly in the 2050s and to decrease slightly in the 2080s. Each hydrological model predicted earlier freshets and a shift in the timing of low-flow events. Uncertainty analysis indicated that the largest contributor of uncertainty was the selection of GCM, followed by the choice of hydrological model, with hydrological model parameterization and the selection of emissions scenario being less significant sources.
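The Nash-Sutcliffe efficiency used to judge the calibrations compares the model's squared error against the variance of the observations. A minimal sketch:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of observations."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

A perfect simulation scores 1.0 and simulating the observed mean everywhere scores 0.0, so the 0.51 to 0.71 range above indicates models that clearly outperform a mean-flow baseline.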

Analysis of main parameters in adaptive ES-MDA history matching

Ranazzi, Paulo Henrique 06 June 2019
In reservoir engineering, history matching is the technique that revises the uncertain parameters of a reservoir simulation model in order to obtain a response consistent with the observed production data. Reservoir properties carry uncertainties due to the indirect methods by which they are acquired, which results in discrepancies between observed data and the reservoir simulator response. One history matching method is the Ensemble Smoother with Multiple Data Assimilation (ES-MDA), in which an ensemble of models is used to quantify parameter uncertainties. In ES-MDA, the number of iterations must be defined by the user prior to the application, and it is a determinant parameter for a good-quality match. One way to handle this is to implement adaptive methodologies in which the algorithm keeps iterating until it reaches a good match. In addition, in large-scale reservoir models it is necessary to apply a localization technique in order to mitigate spurious correlations and excessive uncertainty reduction in the posterior models. The main objective of this dissertation is to evaluate two main parameters of history matching with an adaptive ES-MDA: localization and ensemble size, verifying the impact of these parameters on the adaptive scheme. The adaptive ES-MDA used in this work defines the number of iterations and the inflation factors automatically, and distance-based Kalman gain localization was used to evaluate the influence of localization. The influence of the parameters was analyzed by applying the methodology to the benchmark UNISIM-I-H, a synthetic large-scale reservoir model based on an offshore Brazilian field. The experiments showed a considerable reduction of the objective function in all cases, demonstrating the ability of the adaptive methodology to keep iterating until a desirable outcome is obtained. Regarding the parameters evaluated, a relationship between localization and the number of iterations required to complete the adaptive algorithm was verified, whereas no such influence was observed as a function of the ensemble size.
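A single ES-MDA assimilation step updates every ensemble member with a Kalman-type gain built from ensemble covariances, with the data-error covariance inflated by a factor alpha; the standard (non-adaptive) scheme requires the reciprocals of the inflation factors to sum to one. A minimal numpy sketch with a toy linear "simulator" standing in for the reservoir model (sizes, values, and the fixed alphas are illustrative; the adaptive variant studied above chooses the number of steps and the factors automatically):

```python
import numpy as np

def esmda_update(M, d_obs, forward, C_D, alpha, rng):
    """One ES-MDA step: update ensemble M (n_m x n_e) against d_obs
    with the data-error covariance inflated by the factor alpha."""
    D = forward(M)                                   # predicted data, n_d x n_e
    n_e = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_MD = dM @ dD.T / (n_e - 1)                     # cross-covariance (localization would taper this)
    C_DD = dD @ dD.T / (n_e - 1)
    K = C_MD @ np.linalg.inv(C_DD + alpha * C_D)     # Kalman-type gain
    pert = rng.multivariate_normal(d_obs, alpha * C_D, size=n_e).T  # perturbed obs per member
    return M + K @ (pert - D)

# toy linear "simulator" standing in for the reservoir model
rng = np.random.default_rng(0)
G = rng.normal(size=(2, 3))
m_true = np.array([1.0, -0.5, 2.0])
C_D = 0.01 * np.eye(2)
d_obs = G @ m_true
forward = lambda M: G @ M

M = rng.normal(size=(3, 50))                         # prior ensemble of 50 members
alphas = [4.0, 4.0, 4.0, 4.0]                        # fixed inflation factors, sum(1/alpha) = 1
misfit = lambda M: float(np.mean(np.sum((d_obs[:, None] - forward(M)) ** 2, axis=0)))
misfit_prior = misfit(M)
for alpha in alphas:
    M = esmda_update(M, d_obs, forward, C_D, alpha, rng)
```

The data misfit of the ensemble drops across the four assimilation steps; distance-based localization, as evaluated in the dissertation, would enter as an element-wise taper on C_MD.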

Experimental analysis of thermal mixing at reactor conditions

Bergagio, Mattia January 2016
High-cycle thermal fatigue arising from the turbulent mixing of non-isothermal flows is a key issue in the life management and extension of nuclear power plants, and the induced thermal loads and damage are not yet fully understood. With the aim of acquiring extensive data sets for the validation of codes modeling thermal mixing at reactor conditions, thermocouples recorded temperature time series at the inner surface of a vertical annular volume in which turbulent mixing occurred. There, a stream at either 333 K or 423 K flowed upwards and mixed with two streams at 549 K; pressure was set at 7.2 MPa (72 × 10^5 Pa). The annular volume was formed between two coaxial stainless-steel tubes. Since the thermocouples could only cover limited areas of the mixing region, the inner tube to which they were soldered was lifted, lowered, and rotated around its axis to extend the measurement region both axially and azimuthally. Trends stemming from the variation of the experimental boundary conditions over time were subtracted from the collected inner-surface temperature time series. An estimator assessing the intensity and inhomogeneity of the mixing process in the annulus was also computed. In addition, a frequency analysis of the detrended inner-surface temperature time series was performed; in the cases examined, frequencies between 0.03 Hz and 0.10 Hz were detected in the subregion where mixing inhomogeneity peaked. The uncertainty affecting these measurements was then estimated, and a preliminary assessment of the radial heat flux at the inner surface was conducted.
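The processing chain described, subtracting a trend driven by drifting boundary conditions and then examining the spectrum of the detrended temperatures, can be sketched on a synthetic record (the sampling rate, record length, and the 0.05 Hz oscillation are assumptions for illustration, not the experimental values):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 10.0                                 # sampling rate in Hz (assumed)
t = np.arange(0, 600, 1 / fs)             # a 10-minute record
# synthetic inner-surface temperature: slow boundary-condition drift
# plus a 0.05 Hz mixing oscillation and measurement noise
x = 450.0 + 0.01 * t + 2.0 * np.sin(2 * np.pi * 0.05 * t) + rng.normal(0.0, 0.3, t.size)

coef = np.polyfit(t, x, 1)                # linear trend from drifting boundary conditions
detrended = x - np.polyval(coef, t)

spec = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_peak = freqs[np.argmax(spec[1:]) + 1]   # dominant frequency, skipping the DC bin
```

On this synthetic record the spectral peak of the detrended series falls at 0.05 Hz, inside the 0.03 to 0.10 Hz band reported above.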

Estimating and Modeling Transpiration of a Mountain Meadow Encroached by Conifers Using Sap Flow Measurements

Marks, Simon Joseph 01 December 2021
Mountain meadows in the western USA are experiencing increased rates of conifer encroachment due to climate change and land management practices. Past research has focused on conifer removal as a meadow restoration strategy, but there has been limited work on conifer transpiration in a pre-restoration state. Meadow restoration by conifer removal has the primary goal of recovering sufficient growing season soil moisture necessary for endemic, herbaceous meadow vegetation. Therefore, conifer water use represents an important hydrologic output toward evaluating the efficacy of this active management approach. This study quantified and evaluated transpiration of encroached conifers in a mountain meadow using sap flow prior to restoration by tree removal. We report results of lodgepole pine transpiration estimates for an approximate 1-year period and an evaluation of key environmental variables influencing water use during a dry growing season. The study was conducted at Rock Creek Meadow (RCM) in the southern Cascade Range near Chester, CA, USA. Sap flow data were collected in a sample of lodgepole pine and scaled on a per-plot basis to the larger meadow using tree survey data within a stratified random sampling design (simple scaling). These estimates were compared to a MODIS evapotranspiration (ET) estimate for the meadow. The 1-year period for transpiration estimates overlapped each of the 2019 and 2020 growing seasons partially. The response of lodgepole pine transpiration to solar radiation, air temperature, vapor pressure deficit, and volumetric soil water content was investigated by calibrating a modified Jarvis-Stewart (MJS) model to hourly sap flow data collected during the 2020 growing season, which experienced below average antecedent winter precipitation. The model was validated using spatially different sap flow data in the meadow from the 2021 growing season, also part of a dry year. 
Calibration and validation were completed using an MCMC approach via the DREAM(ZS) algorithm and a generalized likelihood (GL) function, enabling assessment of model parameter and total uncertainty. We also used the model to inform transpiration scaling for the calibration period in select plots in the meadow, which allowed comparison with the simple-scaling transpiration estimates. Average total lodgepole pine transpiration at RCM was estimated at between 220.57 ± 25.28 and 393.39 ± 45.65 mm for the entire campaign (mid-July 2019 to mid-August 2020) and between 100.22 ± 11.49 and 178.75 ± 20.74 mm for the 2020 partial growing season (April to mid-August). The magnitude and seasonal timing were similar to the MODIS ET estimate. The model showed good agreement between observed and predicted sap velocity for the 2020 partial growing season (RMSE = 1.25 cm h^-1), with meteorological variables modulating early-growing-season sap flow and declining volumetric soil water content driving the transpiration decrease in the late growing season. The model validation performed similarly to calibration in terms of performance metrics and the influence of meteorological variables; the consistency of the declining soil-water-content effect in the late growing season could not be evaluated between periods because of the abridged validation period. Overall, the GL-DREAM(ZS) implementation shows promise for future use in MJS models. Lastly, the model-derived transpiration estimates for the 2020 partial growing season show some of the potential utility of using the MJS model to scale sap flow at the study locale, while also highlighting some key limitations of the approach as executed in the present study.
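A modified Jarvis-Stewart model of the kind calibrated above expresses sap velocity as a maximum velocity scaled by response functions (each between 0 and 1) of the environmental drivers listed: solar radiation, vapour pressure deficit, temperature, and soil water content. The specific functional forms and parameter values below are illustrative assumptions, not those fitted in the study:

```python
import numpy as np

def mjs_sap_velocity(Rs, D, T, theta, p):
    """Modified Jarvis-Stewart sketch: max velocity scaled by 0-1 response functions.
    Rs: solar radiation, D: vapour pressure deficit, T: air temperature,
    theta: volumetric soil water content. Forms and parameters are illustrative."""
    f_Rs = Rs / (p["k_Rs"] + Rs)                              # saturating light response
    f_D = np.exp(-p["k_D"] * D)                               # VPD limitation
    f_T = np.exp(-(((T - p["T_opt"]) / p["k_T"]) ** 2))       # temperature optimum
    f_th = np.clip((theta - p["th_wilt"]) / (p["th_crit"] - p["th_wilt"]), 0.0, 1.0)
    return p["V_max"] * f_Rs * f_D * f_T * f_th

p = dict(V_max=10.0, k_Rs=200.0, k_D=0.5, T_opt=20.0, k_T=10.0,
         th_wilt=0.05, th_crit=0.25)                          # hypothetical parameters
v_dry = mjs_sap_velocity(800.0, 1.0, 20.0, 0.05, p)           # soil at the wilting point
v_wet = mjs_sap_velocity(800.0, 1.0, 20.0, 0.30, p)           # unstressed soil
```

The soil-moisture term reproduces the behaviour reported above: velocity shuts down as theta approaches the wilting point late in a dry growing season, while the meteorological terms modulate the early season. Calibrating p against hourly sap flow data is what the GL-DREAM(ZS) machinery is for.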

Depth estimation from monocular images by deep learning

Moukari, Michel 01 July 2019
Computer vision is a branch of artificial intelligence whose purpose is to enable a machine to analyze, process, and understand the content of digital images. Scene understanding in particular is a major issue in computer vision. It requires a characterization of the image that is both semantic and structural: on the one hand to describe its content, and on the other to understand its geometry. However, while real space is three-dimensional, the image representing it is two-dimensional. Part of the 3D information is thus lost during the process of image formation, and it is therefore non-trivial to describe the geometry of a scene from 2D images of it. There are several ways to retrieve the depth information lost in the image. In this thesis we are interested in estimating a depth map given a single image of the scene. In this case, the depth information corresponds, for each pixel, to the distance between the camera and the object represented at that pixel. The automatic estimation of a distance map of the scene from an image is a critical algorithmic building block in a very large number of domains, in particular that of autonomous vehicles (obstacle detection, navigation aids). Although estimating depth from a single image is a difficult and inherently ill-posed problem, we know that humans can judge distances with one eye. This capacity is not innate but acquired, and it is made possible largely by identifying cues that reflect prior knowledge of the surrounding objects. Moreover, we know that learning algorithms can extract these cues directly from images. We are particularly interested in statistical learning methods based on deep neural networks, which have recently enabled major breakthroughs in many fields, and we study the case of monocular depth estimation.
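Because monocular depth is only recoverable up to cues rather than absolute geometry, deep networks for this task are commonly trained with a scale-invariant loss on log depths (in the style of Eigen et al.): with lam = 1 the loss fully discounts a global scale offset and penalizes only relative depth errors. A minimal numpy sketch of this standard component (not necessarily the exact loss used in this thesis):

```python
import numpy as np

def scale_invariant_loss(pred_log_d, gt_log_d, lam=0.5):
    """Scale-invariant loss on log depths; lam=1 fully discounts a global scale."""
    d = pred_log_d - gt_log_d
    return (d ** 2).mean() - lam * d.mean() ** 2

gt = np.log(np.array([[1.0, 2.0], [3.0, 4.0]]))   # a tiny 2x2 "depth map" for illustration
loss_perfect = scale_invariant_loss(gt, gt)
loss_scaled = scale_invariant_loss(gt + np.log(2.0), gt, lam=1.0)  # prediction off by 2x everywhere
```

A prediction that is wrong by a constant factor everywhere incurs zero loss at lam = 1, which matches the intuition that a single image constrains relative, not absolute, depth.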

Seismic to facies inversion using a convolved hidden Markov model

Erick Costa e Silva Talarico 07 January 2019
The oil and gas industry uses seismic data to unravel the distribution of rock types (facies) in the subsurface. However, despite its widespread use, seismic data is noisy, and the inversion from seismic data to the underlying rock distribution is an ill-posed problem. For this reason, many authors have studied the topic in a probabilistic formulation, in order to provide uncertainty estimates for the solution of the inversion problem. The objective of the present thesis is to develop a quantitative method to estimate the probability of a hydrocarbon-bearing reservoir, given a seismic reflection profile, integrating geological prior knowledge with geophysical forward modelling. One of the newest methods for facies inversion is used: the Convolved Hidden Markov Model (more specifically, the projection approximation of (1)). It is demonstrated how the convolved HMM can be reformulated as an ordinary Hidden Markov Model problem (which models geological prior knowledge). Seismic AVA theory is introduced and used together with convolved HMM theory to solve the seismic-to-facies problem. The performance of the inversion technique is measured with common machine learning scores on a broad set of realistic experiments, and a technique for measuring the algorithm's ability to estimate reliable probability values is presented. In the tests performed, the projection approximation exhibits probability distortions smaller than 5 percent, making it a useful tool for risk management in oil and gas applications that integrate geological and geophysical knowledge.
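The forward model underlying a convolved HMM can be sketched generatively: a Markov chain over facies supplies the geological prior, each facies maps to an acoustic impedance, reflectivity comes from log-impedance contrasts, and the seismic trace is that reflectivity convolved with a wavelet plus noise. All numerical values below (transition matrix, impedances, Ricker wavelet parameters, noise level) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],                 # facies transition matrix: 0 = shale, 1 = sand
              [0.2, 0.8]])
imp = np.array([6000.0, 4500.0])          # acoustic impedance per facies (assumed values)

n = 100
facies = np.zeros(n, dtype=int)           # sample a facies profile from the Markov chain
for k in range(1, n):
    facies[k] = rng.choice(2, p=P[facies[k - 1]])

r = np.diff(np.log(imp[facies]))          # reflectivity from log-impedance contrasts

def ricker(f0, dt, nt=33):
    """Ricker wavelet with peak frequency f0 (Hz) sampled at dt (s)."""
    t = (np.arange(nt) - nt // 2) * dt
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

w = ricker(30.0, 0.004)
seismic = np.convolve(r, w, mode="same") + rng.normal(0.0, 0.005, r.size)
```

The convolution is exactly what makes the hidden chain non-Markovian given the data: each seismic sample mixes several neighbouring facies transitions, which is the difficulty the projection approximation addresses.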

Stochastic hydrothermal scheduling with parameter uncertainty in the streamflow models

Bernardo Vieira Bezerra 26 October 2015
The objective of the medium- and long-term hydrothermal scheduling problem is to define operational targets for each hydroelectric and thermal plant in order to meet the load at the lowest expected cost while respecting operational constraints. Stochastic Dynamic Programming (SDP) and Stochastic Dual Dynamic Programming (SDDP) algorithms have been widely applied to determine the optimal operating policy for the hydrothermal dispatch. In both approaches, the stochasticity of the inflows is usually produced by periodic auto-regressive models of lag p, PAR(p), whose parameters are estimated from the available historical data. As the estimators are functions of random phenomena, there is, besides the inflow uncertainty, statistical parameter uncertainty, which is not captured by the standard PAR(p) model. The existence of parameter uncertainty means there is a risk that the hydrothermal operating policy will not be optimal. This thesis presents a methodology to incorporate PAR(p) parameter uncertainty into stochastic hydrothermal scheduling and to assess the resulting impact on the computation of a hydro operations policy. Case studies illustrate the impact of parameter uncertainty on system operating costs and show how an operating policy that incorporates this uncertainty can reduce that impact.
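A PAR(p) model fits one autoregression per calendar month; the simplest case, PAR(1), can be sketched as below. The functional form and normalization are a common textbook variant, not necessarily the exact formulation used in the thesis:

```python
import numpy as np

def simulate_par1(phi, mu, sigma, n_years, rng):
    """Simulate monthly inflows z[t] from a PAR(1) model: one AR(1) coefficient
    phi[m], mean mu[m], and std sigma[m] per calendar month m (textbook variant)."""
    z = np.empty(12 * n_years)
    prev = 0.0                                        # standardized inflow of previous month
    for t in range(z.size):
        m = t % 12
        cur = phi[m] * prev + np.sqrt(1.0 - phi[m] ** 2) * rng.normal()
        z[t] = mu[m] + sigma[m] * cur                 # de-standardize with this month's moments
        prev = cur
    return z

rng = np.random.default_rng(0)
mu = np.arange(12.0)                                  # illustrative monthly means
z = simulate_par1(np.full(12, 0.5), mu, np.zeros(12), n_years=2, rng=rng)
```

Parameter uncertainty enters because phi, mu, and sigma are themselves estimated from a finite historical record; resampling that record (e.g. by bootstrap) yields an ensemble of parameter sets, and propagating that ensemble into the SDP/SDDP policy is the kind of incorporation the thesis proposes.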
