  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

MATHEMATICAL MODELLING OF SOIL DIVERSITY INDICES UNDER DIFFERENT USES AND MANAGEMENTS

SILVA, Raimunda Alves 06 March 2017 (has links)
Fundação de Amparo à Pesquisa e ao Desenvolvimento Científico e Tecnológico do Maranhão / ABSTRACT: Soil is the habitat of a number of living organisms that perform functions essential to the ecosystem. The present work aimed to determine the edaphic diversity of large groups under different soil uses and managements in the Cerrado biome. The study was carried out in the municipality of Mata Roma (3°70'80.88"S, 43°18'71.27"W), in the eastern region of Maranhão state, Brazil. A total of 130 pitfall traps were installed in five areas under different management (millet, soybean, maize, eucalyptus and pasture) and two reference areas of natural vegetation with different uses (anthropized Cerrado and preserved Cerrado). The traps remained in the field for seven days, after which their contents were transferred to plastic bottles and taken to the laboratory, where the organisms were sorted and identified to large groups (orders and families). After identification, the biodiversity indices were determined: Shannon index, Pielou evenness, mean and total richness, and abundance. The data were analyzed using descriptive statistics and multivariate techniques based on group dissimilarity. The geostatistical analysis was evaluated by semivariograms adjusted to spherical, Gaussian or exponential models. Multifractality was analyzed in successive segments of sizes 2^k, k = 0 to k = 7, over the range q = -10 to q = +10. A total of 20,995 arthropods were collected throughout the study.
The highest abundance was found for millet (9,974 individuals), and the lowest abundances were recorded for soybean (222) and maize (824). The highest diversity index was found in the soybean area (2.69): despite its lower abundance, the groups there are evenly distributed owing to the homogeneous management of the area. The first axis of the principal component analysis (PCA) explained 50.9% of the correlation of the groups with the sampled areas. The dendrogram showed that the soybean and maize areas are similar, and isolated the millet area as the most dissimilar. Soil use and management in the study areas determine the occurrence of soil arthropods as a function of food availability. For the millet, maize, eucalyptus, anthropized Cerrado and pasture areas, the Shannon diversity index showed a pure nugget effect. For the millet, maize, anthropized Cerrado and pasture areas, the total diversity index was adjusted to the Gaussian model. Only for the soybean and pasture areas did the scaled semivariograms show similar spatial variability of the indices, indicating that they behave similarly. The multifractal analysis produced a generalized dimension D0 for all indices in the millet area, with invariant values D0 = 1.000 ± 0.000. The singularity spectra were concave parabola-like curves with greater or lesser asymmetry for all sampled areas. In general, the soil fauna presented spatial variability and multifractal behaviour.
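The diversity measures named in the abstract can be sketched in a few lines; the group counts below are hypothetical, not taken from the study:

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over group proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def pielou_evenness(counts):
    """Pielou evenness J' = H' / ln(S), where S is the number of groups present."""
    s = sum(1 for c in counts if c > 0)
    return shannon_index(counts) / math.log(s) if s > 1 else 0.0

# Hypothetical pitfall-trap counts for four arthropod groups in one area
counts = [120, 45, 30, 5]
h = shannon_index(counts)   # higher when abundance is spread evenly
j = pielou_evenness(counts) # in (0, 1]; 1 means perfectly even groups
```

This also illustrates the abstract's point about the soybean area: a site with few individuals can still score a high Shannon index if those individuals are evenly spread across groups.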
42

Contributions of geostatistics to improving the estimation of air quality

Beauchamp, Maxime 10 December 2018 (has links)
Geostatistical methods are commonly used in air quality mapping. In France, since 2003 the PREV'AIR system has been broadcasting daily three-day forecasts as well as national and European air quality maps. These maps are built by kriging the observations, using the outputs of the deterministic chemistry-transport model CHIMERE as an external drift. However, many issues and developments remain unresolved. The first part of the thesis aims at improving the local, national and European maps of the main regulated pollutants. Regarding the analysis (mapping from past, hence known, observations), we return to the question of the use of explanatory variables in kriging: what are the best options for a covariate-based modelling of the underlying non-stationarity, one that also enables downscaling the chemistry-transport model outputs to a higher spatial resolution? The statistical modelling of the deterministic component of the stochastic process is investigated, as well as the modelling of the covariance of the residual. Particular attention is paid to the spatial sampling of the monitoring network. The use of PM10 observations to map PM2.5, whose monitoring network is less dense, is explored, and thought is also given to estimation in poorly informed areas. Last, we discuss how to extend these methods to the forecasting problem (mapping of the future, where no observations are available), in order to improve the PREV'AIR system, which currently dissociates the temporal and spatial components of the prediction.
A review of spatio-temporal estimation methods is carried out, and several of them are evaluated. In the second part, some pragmatic though justified approximations are presented to make the best use of the maps produced daily or on demand, so as to satisfy regulatory requirements. At both local and national levels, the analysed (hourly, daily) PREV'AIR maps can be used to derive probability maps of exceedances of regulatory thresholds. This work leads us to reconsider the question of the spatial representativeness of the monitoring stations. Last, the pragmatic approximations are compared with non-linear estimation methods, theoretically better suited to non-linear functions of the concentrations, and we show the need to use appropriate estimation methods when computing the area in exceedance of a limit value.
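The probability-of-exceedance maps described above reduce, cell by cell, to a simple computation once a Gaussian assumption is made for the kriging error; the mean, standard deviation and threshold below are hypothetical:

```python
import math

def exceedance_probability(kriging_mean, kriging_std, threshold):
    """P(Z > threshold) at one location, assuming a Gaussian kriging error
    with the given estimate and kriging standard deviation."""
    if kriging_std <= 0:
        return 1.0 if kriging_mean > threshold else 0.0
    z = (threshold - kriging_mean) / kriging_std
    # standard normal survival function via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical PM10 analysis at one grid cell: estimate 42 ug/m3,
# kriging std 8 ug/m3, daily limit value 50 ug/m3
p = exceedance_probability(42.0, 8.0, 50.0)
```

Mapping this function over every cell of an analysed concentration field and its kriging standard deviation field yields the threshold-exceedance probability map.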
43

A study of the ordinary kriging smoothing effect using different statistical distributions

Souza, Anelise de Lima 12 July 2007 (has links)
This dissertation presents the results of an investigation into the effectiveness of a post-processing algorithm for correcting the smoothing effect of ordinary kriging estimates. Three statistical distributions were considered: Gaussian, lognormal and inverted lognormal. Among these, the lognormal distribution is the most difficult to handle, because it presents a great number of low values and a few high values, and those high values are responsible for the great variability of the data set. Besides the statistical distribution, other parameters were considered: the influence of the sample size and of the number of neighbouring data points. For Gaussian and inverted lognormal distributions the post-processing algorithm worked well in all situations. For lognormal data, however, a loss of accuracy was observed. Thus, for these data the technique of ordinary lognormal kriging was applied, along with a recently proposed approach for back-transforming lognormal kriging estimates. This approach is based on correcting the histogram of the lognormal kriging estimates and then back-transforming them to the original scale of measurement. The back-transformed lognormal kriging estimates were always better than those of the traditional approach; furthermore, the lognormal kriging estimates proved superior to the ordinary kriging ones.
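The back-transform issue discussed above can be illustrated with the classical simple-kriging bias correction (this is the textbook form, not the histogram-correction approach the dissertation proposes): exponentiating the estimate in log space underestimates the conditional mean unless half the kriging variance is added in the exponent.

```python
import math

def backtransform_lognormal(y_est, kriging_var):
    """Back-transform a kriging estimate made in log space.

    Returns the naive transform exp(y) and the classical bias-corrected
    transform exp(y + var/2); the correction factor exp(var/2) > 1 grows
    with the kriging variance, i.e. with local uncertainty.
    """
    naive = math.exp(y_est)
    corrected = math.exp(y_est + kriging_var / 2.0)
    return naive, corrected

# Hypothetical log-space estimate 2.0 with kriging variance 0.5
naive, corrected = backtransform_lognormal(2.0, 0.5)
```

The dissertation's histogram-based approach addresses the fact that even this corrected estimator remains smoothed relative to the true distribution of values.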
44

A probabilistic approach to the value chain of underground iron ore mining : From deposit to product

Ellefmo, Steinar Løve January 2005 (has links)
Mining activities will eventually deplete any deposit. From a sustainability perspective, the deposit should therefore be utilised optimally during production. A prerequisite for achieving this is the deliberate and consistent utilisation of the variations in the deposit.

In an ideal world everything is certain. In the real world nothing is certain; everything is more or less probable.

Therefore, the question asked is how an underground iron ore mining company like Rana Gruber AS can benefit from knowing and exploiting the uncertainty and variability of decisive ore parameters. The perspective is the value chain from in-situ ore to product, while the focus is on deposit characterisation and production.

To answer this question, the existing database of geodata from the Kvannevann Iron Ore is reviewed, and estimation techniques based on kriging and geostatistical simulation algorithms (turning bands) are implemented to identify and assess the uncertainties, variations and associated risks of the ore deposit. Emphasis is on total iron in the ore (FeTot), total iron originating from magnetite (FeMagn), manganese oxide (MnO) and joint parameters. Owing to an insufficient number of MnO assays, a geochemical MnO signature is developed using cluster analysis. This signature is applied as input to the kriging-with-inequalities procedure, which is based on soft data (lithologies) and a conditional expectation of the MnO level in the different lithologies.

A cut-off based on both hematite and magnetite is estimated. A process analysis, based on the IDEF process modelling methodology, is performed to visualise the working processes and the related inputs, outputs and controlling, supporting and risk elements. Given the identified deposit uncertainties and variations, systems to evaluate potential mining stope performance are developed and implemented for one of the mining stopes. To test the possibility of decreasing the ore-related uncertainty, a method for collecting drill cuttings has been developed and tested. The correlations between magnetic susceptibility and FeMagn and between ore density and FeTot are both investigated.

The results show that an illustrative and useful overview can be gained using the IDEF-based process modelling methodology. A non-linear relationship between density and FeTot is established, showing that density can be used as a FeTot indicator; this relationship is also used in the reserve and resource estimation. As expected, a positive correlation between FeMagn and magnetic susceptibility measured on cores could be established, although the deviation from other reported relationships is considerable. The importance of magnetite is emphasised and quantified by the cut-off estimation. The cluster analysis reveals that the MnO levels in the different lithologies are significantly different; this result is implemented in the kriging-with-inequalities procedure, with immediately observable effects.

The development of the geodata collector and the collection of drill cuttings show that precise analyses of collected drill-cutting material can be obtained. Although high and low assay values have been correlated with geological observations in the mine, the accuracy has been difficult to assess.

The estimation and simulation of the ore properties illustrate and quantify the uncertainties and variations in the ore deposit well. The structural analysis performed prior to estimation and simulation reveals anisotropies for all decisive ore parameters. The quantification of ore variations provides useful input to the a priori assessment of stope performance. It is also shown that the probability that a selective mining unit (SMU) is above or below some cut-off value can be assessed using the simulation results and systems developed in standard software.

It is concluded that the process analysis approach offers valuable input for gaining an overview of the mining value chain, and constitutes an important step in identifying and assessing IT requirements, bottlenecks, input and output requirements, and role and skill requirements along the value chain. However, it requires sufficient organisational resources, as does the implementation of the grade and stability issues presented. It is further concluded that the ore variations can be utilised to some extent using standard software.

The ore in question is a Neoproterozoic (600 to 700 Ma) metasedimentary magnetite-hematite ore deposited under shallow marine conditions; the primary precipitate was probably ferric hydroxide.

The applied methods were chosen to handle the uncertainty along the value chain of Rana Gruber AS. Not every aspect of these methods may therefore be directly applicable to other mining operations, but the general aspects have a broad area of use.
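The probability that an SMU lies above a cut-off, assessed from geostatistical simulation results as described above, amounts to counting realizations; the FeTot values below are hypothetical, not from the Kvannevann data:

```python
def prob_above_cutoff(simulated_grades, cutoff):
    """Fraction of simulated realizations in which the SMU grade exceeds
    the cut-off; with many realizations this approximates P(grade > cutoff)."""
    return sum(1 for g in simulated_grades if g > cutoff) / len(simulated_grades)

# Hypothetical FeTot realizations (%) for one SMU, e.g. from turning bands
realizations = [33.1, 34.8, 31.2, 35.6, 32.9, 34.0, 30.5, 36.2]
p = prob_above_cutoff(realizations, 33.0)
```

In practice hundreds of realizations per SMU would be used; eight values are shown only to keep the sketch readable.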
45

Spatial prediction of soil properties: the Bayesian Maximum Entropy approach

D'Or, Dimitri 13 May 2003 (has links)
Soil properties play important roles in many environmental issues such as diffuse pollution, erosion hazards and precision agriculture. With the development of soil process models and geographical information systems, the need for accurate knowledge of soil properties becomes more acute. However, while the sources of information become more numerous and diversified each year, they rarely provide data that simultaneously have the required levels of spatial and attribute accuracy. An important challenge thus consists in combining those data sources as well as possible so as to meet high accuracy requirements. The Bayesian Maximum Entropy (BME) approach appears to be a strong candidate for this task: it is specifically designed for managing simultaneously data of various nature and quality ("hard" and "soft" data, continuous or categorical). It relies on a two-step procedure involving an objective way of obtaining a prior distribution in accordance with the general knowledge at hand (the ME part), and a Bayesian conditionalization step for updating this prior probability distribution function (pdf) with the specific data collected at the study site. At each prediction location an entire pdf is obtained, subsequently allowing the easy computation of elaborate statistics chosen for their adequacy to the objectives of the study. In this thesis, the theory of BME is explained in a simplified way using standard probabilistic notation. The recent developments towards categorical variables are incorporated, and an attempt is made to formulate a unified framework for both categorical and continuous variables, emphasizing the generality and flexibility of the BME approach. The potential of the method for predicting continuous variables is then illustrated by a series of studies dealing with the soil texture fractions (sand, silt and clay).
For the categorical variables, a case study focusing on the prediction of the status of the water table is presented. The use of multiple and sometimes contradictory data sources is also analyzed. Throughout the document, BME is compared to classic geostatistical techniques such as simple, ordinary and indicator kriging. Thorough discussions point out the inconsistencies of those methods and explain how BME solves the problems. Rather than being just another geostatistical technique, BME has to be considered as a knowledge-processing approach. With BME, practitioners will find a valuable tool for analyzing their spatio-temporal data sets and for providing stakeholders with accurate information about the environmental issues they face. One of the articles extracted from Chapter V: D'Or D., Bogaert P. and Christakos G. (2001). Application of the BME Approach to Soil Texture Mapping. Stochastic Environmental Research and Risk Assessment 15(1): 87-100.
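The BME conditionalization step can be sketched for a discrete variable as a Bayesian update of a prior pdf with site-specific soft evidence; the water-table classes, prior and likelihood values below are hypothetical illustrations, not the thesis's actual case-study numbers:

```python
def bayes_update(prior, likelihood):
    """Update a discrete prior pdf with soft evidence.

    prior and likelihood are dicts mapping category -> probability/weight;
    the posterior is renormalized, mimicking the conditionalization step
    that turns the ME prior into a site-specific posterior pdf.
    """
    posterior = {k: prior[k] * likelihood.get(k, 0.0) for k in prior}
    z = sum(posterior.values())
    return {k: v / z for k, v in posterior.items()}

# Hypothetical prior over water-table status classes (the "ME part")
prior = {"shallow": 0.2, "intermediate": 0.5, "deep": 0.3}
# Hypothetical soft evidence from an imprecise field observation
likelihood = {"shallow": 0.7, "intermediate": 0.2, "deep": 0.1}
posterior = bayes_update(prior, likelihood)
```

Because a full posterior pdf is returned rather than a single value, any statistic adapted to the study objectives (mode, expected loss, exceedance probability) can be read off afterwards, which is the practical appeal of BME over a point estimator.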
46

Multivariate Analysis of Diverse Data for Improved Geostatistical Reservoir Modeling

Hong, Sahyun 11 1900 (has links)
Improved numerical reservoir models are constructed when all available diverse data sources are accounted for to the maximum extent possible. Integrating diverse data is not a simple problem, because the data differ in precision and relevance to the primary variables being modeled, exhibit nonlinear relations, and are of different quality. Previous approaches rely on a strong Gaussian assumption or on combining source-specific probabilities that are individually calibrated from each data source. This dissertation develops different approaches to integrating diverse earth science data. The first approach is based on combining probabilities: each data source is calibrated to generate an individual conditional probability, and these are merged by a combination model. Some existing models are reviewed, and a combination model with a new weighting scheme is proposed. Weaknesses of probability combination schemes (PCS) are addressed. As an alternative to PCS, this dissertation develops a multivariate analysis technique. The method models the multivariate distribution without a parametric distribution assumption and without ad hoc probability combination procedures, and it accounts for nonlinear features and different data types. Once the multivariate distribution is modeled, the marginal distribution constraints are evaluated. A sequential iteration algorithm is proposed for this evaluation: it compares the marginal distributions extracted from the modeled multivariate distribution with the known marginal distributions and corrects the multivariate distribution accordingly. Ultimately, the corrected distribution satisfies all axioms of probability distribution functions as well as the complex features among the given data.
The methodology is applied to several problems: (1) integration of continuous data for categorical attribute modeling, (2) integration of continuous data and a discrete geologic map for categorical attribute modeling, and (3) integration of continuous data for continuous attribute modeling. Results are evaluated against defined criteria such as the fairness of the estimated probability or probability distribution and reasonable reproduction of the input statistics. / Mining Engineering
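The sequential iteration that corrects a modeled joint distribution toward known marginals is closely related to iterative proportional fitting; a minimal two-variable sketch with hypothetical distributions (not the dissertation's algorithm in full, which handles higher dimensions and nonparametric densities):

```python
def fit_marginals(joint, row_marg, col_marg, iters=50):
    """Iteratively rescale a 2-D joint distribution so its marginals match
    known targets, while preserving the interaction structure (odds ratios)
    of the starting joint distribution."""
    p = [row[:] for row in joint]
    nrows, ncols = len(p), len(p[0])
    for _ in range(iters):
        for i in range(nrows):               # match row marginals
            s = sum(p[i])
            if s > 0:
                p[i] = [v * row_marg[i] / s for v in p[i]]
        for j in range(ncols):               # match column marginals
            cs = sum(p[i][j] for i in range(nrows))
            if cs > 0:
                for i in range(nrows):
                    p[i][j] *= col_marg[j] / cs
    return p

# Hypothetical modeled joint distribution and known target marginals
joint = [[0.3, 0.2], [0.1, 0.4]]
p = fit_marginals(joint, row_marg=[0.6, 0.4], col_marg=[0.5, 0.5])
```

The fitted table honors both sets of marginal constraints while keeping the dependence encoded in the starting joint distribution, which is the spirit of the correction step described above.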
48

Characterization and modeling of paleokarst reservoirs using multiple-point statistics on a non-gridded basis

Erzeybek Balan, Selin 25 February 2013 (has links)
Paleokarst reservoirs consist of complex cave networks, which are formed by various mechanisms and associated collapsed cave facies. Traditionally, cave structures are defined using variogram-based methods in flow models and this description does not precisely represent the reservoir geology. Algorithms based on multiple-point statistics (MPS) are widely used in modeling complex geologic structures. Statistics required for these algorithms are inferred from gridded training images. However, structures like modern cave networks are represented by point data sets. Thus, it is not practical to apply rigid and gridded templates and training images for the simulation of such features. Therefore, a quantitative algorithm to characterize and model paleokarst reservoirs based on physical and geological attributes is needed. In this study, a unique non-gridded MPS analysis and pattern simulation algorithms are developed to infer statistics from modern cave networks and simulate distribution of cave structures in paleokarst reservoirs. Non-gridded MPS technique is practical by eliminating use of grids and gridding procedure, which is challenging to apply on cave network due to its complex structure. Statistics are calculated using commonly available cave networks, which are only represented by central line coordinates sampled along the accessible cave passages. Once the statistics are calibrated, a cave network is simulated by using a pattern simulation algorithm in which the simulation is conditioned to sparse data in the form of locations with cave facies or coordinates of cave structures. To get an accurate model for the spatial extent of the cave facies, an algorithm is also developed to simulate cave zone thickness while simulating the network. The proposed techniques are first implemented to represent connectivity statistics for synthetic data sets, which are used as point-set training images and are analogous to the data typically available for a cave network. 
Once the applicability of the algorithms is verified, non-gridded MPS analysis and pattern simulation are conducted for the Wind Cave in South Dakota. The developed algorithms successfully characterize and model cave networks that can only be described by point sets. Subsequently, a cave network system is simulated for the Yates Field in West Texas, which is a paleokarst reservoir. Well locations with cave facies and identified cave-zone thickness values are used to condition the pattern simulation, which utilizes the MP-histograms calibrated for the Wind Cave. The simulated cave network is then implemented in flow simulation models to understand the effects of cave structures on fluid flow. Calibration of the flow model against primary production data is attempted to demonstrate that the pattern simulation algorithm yields a detailed description of the spatial distribution of cave facies. Moreover, the impact of accurately representing network connectivity on flow responses is explored with a water injection case. Fluid flow responses are compared for models with cave networks constructed by non-gridded MPS and by a traditional modeling workflow using sequential indicator simulation. Applications on the Yates Field show that the cave network and the corresponding cave facies are successfully modeled using non-gridded MPS. A detailed description of cave facies in the reservoir yields accurate flow simulation results and better future predictions.
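The grid-free flavor of this statistics inference can be illustrated with a minimal sketch: rather than scanning a gridded training image with a rigid template, spatial statistics are computed directly from scattered centerline coordinates. The function below is a simplified, assumed two-point lag histogram (the thesis works with multi-point configurations); `lag_connectivity`, its lag distances, and its tolerance are illustrative, not from the source.

```python
import numpy as np

def lag_connectivity(points, lags, tol):
    """Fraction of point pairs whose separation falls within +/- tol of
    each lag distance -- a simple two-point spatial statistic computed
    directly from ungridded 3-D centerline coordinates (no gridding)."""
    points = np.asarray(points, dtype=float)
    # all pairwise Euclidean distances, each unordered pair counted once
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)[np.triu_indices(len(points), k=1)]
    return np.array([np.mean(np.abs(dists - h) <= tol) for h in lags])

# Toy "cave passage": four centerline points spaced 1 m apart along x
passage = [[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]]
stats = lag_connectivity(passage, lags=[1.0, 2.0], tol=0.1)
```

For real data the point set would be a surveyed centerline such as Wind Cave's, and the statistic would extend to multi-point configurations rather than single lags.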
49

Geostatistical data integration in complex reservoirs

Elahi Naraghi, Morteza 03 February 2015 (has links)
One of the most challenging issues in reservoir modeling is integrating information coming from different sources at disparate scales and precision. The primary data are borehole measurements, but in most cases these are too sparse to construct accurate reservoir models, so the information from borehole measurements has to be supplemented with other, secondary data. The secondary data for reservoir modeling can be static, such as seismic data, or dynamic, such as production history, well test data, or time-lapse seismic data. Several algorithms for integrating different types of data have been developed. A novel method for data integration based on the permanence of ratio hypothesis was proposed by Journel in 2002. The premise of the permanence of ratio hypothesis is to assess the information from each data source separately and then merge the information while accounting for the redundancy between the information sources. The redundancy between the information from different sources is accounted for using parameters (tau or nu parameters; Krishnan, 2004). The primary goal of this thesis is to derive a practical expression for the tau parameters and demonstrate the procedure for calibrating these parameters using the available data. This thesis presents two new algorithms for data integration in reservoir modeling. The algorithms proposed in this thesis overcome some of the limitations of current methods for data integration. We present an extension to the direct-sampling-based multiple-point statistics method, along with a methodology for integrating secondary soft data in that framework. The algorithm is based on a direct pattern search through an ensemble of realizations. We show that the proposed methodology is suitable for modeling complex channelized reservoirs and reduces the uncertainty associated with production performance due to the integration of secondary data.
We subsequently present the permanence of ratio hypothesis for data integration in greater detail. We present analytical equations for calculating the redundancy factor for discrete or continuous variable modeling, and we show how this factor can be inferred from the available data under different scenarios. We implement the method to model a carbonate reservoir in the Gulf of Mexico and show that it performs better than the traditional geostatistical framework in which primary hard and secondary soft data are used.
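The permanence-of-ratio combination itself is compact enough to sketch. Below is a minimal, assumed implementation of the tau model in the form given by Journel (2002): each probability is turned into a distance x = (1 − p)/p, the relative distances contributed by the different sources are combined with tau exponents that encode redundancy, and the result is mapped back to a probability. The function name and the example numbers are illustrative, not from the thesis.

```python
def tau_combine(p_prior, conditionals, taus):
    """Combine single-source conditional probabilities P(A|D_i) into an
    estimate of P(A | all data) under the permanence-of-ratio (tau) model.
    tau_i = 1 for every source treats the sources as non-redundant
    (standardized conditional independence); other values encode redundancy."""
    x0 = (1.0 - p_prior) / p_prior            # prior distance to the event
    x = x0
    for p_i, tau_i in zip(conditionals, taus):
        x_i = (1.0 - p_i) / p_i               # distance given source i alone
        x *= (x_i / x0) ** tau_i              # scale by the source's relative contribution
    return 1.0 / (1.0 + x)

# Illustrative numbers: prior P(sand) = 0.3; seismic alone gives 0.6,
# a well test alone gives 0.7; taus of 1 assume no redundancy
p = tau_combine(0.3, [0.6, 0.7], [1.0, 1.0])  # ~0.89: the sources reinforce each other
```

With a single source and tau = 1 the formula returns that source's conditional probability unchanged, which is a convenient sanity check.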
50

The Relative Importance of Head, Flux and Prior Information in Hydraulic Tomography Analysis

Tso, Chak Hau Michael January 2015 (has links)
Using cross-correlation analysis, we demonstrate that flux measurements at observation locations during hydraulic tomography (HT) surveys carry non-redundant information about heterogeneity that is complementary to head measurements at the same locations. We then hypothesize that a joint interpretation of head and flux data can enhance the resolution of HT estimates. Subsequently, we use numerical experiments to test this hypothesis and investigate the impact of stationary and non-stationary hydraulic conductivity fields, and of prior information such as correlation lengths and initial mean models (uniform or distributed means), on HT estimates. We find that the flux and head data from HT already carry sufficient information about aquifer heterogeneity. While prior information (such as uniform or layered means and correlation scales) can be useful, its influence on the estimates diminishes as more non-redundant data are used in the HT analysis (see Yeh and Liu [2000]). Lastly, some recommendations for conducting HT surveys and analysis are presented.
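The cross-correlation analysis referred to above can be illustrated with a toy Monte Carlo experiment: generate random log-conductivity fields, solve a deliberately simple steady 1-D flow problem between fixed heads, and correlate the head observed at one location with the conductivity in every cell. The flow model, field statistics, and observation location here are all assumptions made for illustration; the study itself uses full HT numerical experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_real = 20, 500
h_left, h_right = 1.0, 0.0           # fixed boundary heads

def heads_1d(lnK):
    """Steady 1-D flow between fixed heads (unit cell length):
    uniform flux through cells in series, head at each cell face."""
    r = np.exp(-lnK)                 # hydraulic resistance of each cell
    q = (h_left - h_right) / r.sum() # Darcy flux, the same through every cell
    return h_left - q * np.cumsum(r)

lnK = rng.normal(0.0, 1.0, size=(n_real, n_cells))        # random lnK fields
obs = np.array([heads_1d(k)[n_cells // 2] for k in lnK])  # head at mid-domain

# cross-correlation of the observed head with lnK in every cell
rho = np.array([np.corrcoef(obs, lnK[:, j])[0, 1] for j in range(n_cells)])
```

In this toy setting, upstream conductivities correlate positively with the observed head and downstream ones negatively; the non-zero correlations away from the observation point are what make head (and, analogously, flux) measurements informative about heterogeneity throughout the domain.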
