211 |
[en] 3D ROCK MASS GEOMECHANICAL MODELING TO EXCAVATED SLOPE BI-DIMENSIONAL STABILITY ANALYSIS AT AHE-SIMPLÍCIO POWER HOUSE / [pt] MODELAGEM GEOMECÂNICA TRIDIMENSIONAL DE MACIÇOS ROCHOSOS PARA ANÁLISE BIDIMENSIONAL DA ESTABILIDADE DOS TALUDES DE ESCAVAÇÃO DA CASA DE FORÇA DO AHE-SIMPLÍCIO. 02 December 2008 (has links)
[pt] Na engenharia geotécnica, o conhecimento das propriedades geomecânicas de subsuperfície é fundamental aos cálculos de estabilidade. Na prática cotidiana, a estimativa dessa distribuição é realizada pelo geólogo/geotécnico responsável, que traça seções geológico-geotécnicas em função de sua experiência e da observação dos dados das investigações de campo e laboratório. Tais estimativas bidimensionais, porém, não contemplam a distribuição espacial. Assim, o objetivo deste trabalho é enriquecer a compreensão global do maciço com a previsão da distribuição tridimensional dessas propriedades. Utilizou-se o modelador geológico PETREL(TM) para a estimativa geoestatística da variação espacial dos valores de RMR e grau de fraturamento ao longo do maciço rochoso destinado à implantação da casa de força do Aproveitamento Hidrelétrico de Simplício. Essa distribuição foi estimada em função da disposição dos dados ao longo dos furos de sondagem localizados espacialmente no modelo. Também foi lançada a geometria da futura superfície de escavação, para que se pudesse avaliar qual a seção crítica de qualidade do maciço em relação aos taludes de corte. Essa seção foi então extraída do modelo com a distribuição ponto a ponto do valor de RMR, de onde foram obtidos os parâmetros de resistência c e Ø para a análise de estabilidade. Analisou-se então a estabilidade desse talude em um programa que permite essa variação espacial de parâmetros de resistência e, em paralelo, também foram realizadas as análises convencionais de estabilidade de taludes rochosos, de forma a apresentar a importância da modelagem tridimensional para tal estudo. / [en] In geotechnical engineering, knowledge of subsurface geomechanical properties is fundamental to stability analyses. In usual practice, this distribution is estimated by the responsible geologist/geotechnical engineer, who draws geologic-geotechnical sections based on his/her experience and on observation of field and laboratory investigation data. These 2D sections, however, cannot capture the full spatial distribution. To address this limitation and provide a better global comprehension of the rock mass, this work presents a study of 3D property distribution. The geological modeler PETREL(TM) was used to produce a geostatistical estimate of the spatial variation of RMR and fracture density in the rock mass where the power house of the AHE Simplício hydroelectric project will be installed. This distribution was estimated from data linked to the boreholes, spatially located in the model. The geometry of the future excavation surface was also introduced in the model in order to identify the critical rock-quality section relative to the excavated slopes. This section was extracted with its point-by-point RMR distribution, and before the stability analysis the strength parameters c and Ø were calculated at every point of the section from RMR correlations. The slope stability of the critical section was then computed in a program in which the strength parameters can vary point by point; conventional rock slope stability analyses were performed as well. Finally, the two analyses were compared, demonstrating the importance of three-dimensional subsurface modeling for such studies.
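The abstract derives the strength parameters c and Ø at each point from RMR correlations. The record does not say which correlation was used, so the sketch below assumes a commonly cited linear reading of Bieniawski's RMR guidelines (cohesion in kPa roughly 5 x RMR, friction angle in degrees roughly 0.5 x RMR + 5); the thesis may well use different relations.

```python
# Hedged sketch: rough strength parameters from an RMR value, using an
# assumed linear approximation of Bieniawski's guideline ranges.
def strength_from_rmr(rmr):
    """Approximate cohesion (kPa) and friction angle (deg) from RMR."""
    if not 0 <= rmr <= 100:
        raise ValueError("RMR must lie in [0, 100]")
    cohesion_kpa = 5.0 * rmr          # assumed: c ~ 5 * RMR (kPa)
    friction_deg = 0.5 * rmr + 5.0    # assumed: phi ~ RMR/2 + 5 (deg)
    return cohesion_kpa, friction_deg

print(strength_from_rmr(60))   # fair-to-good quality rock
```

Applied point by point over an extracted section, a mapping like this turns an RMR distribution into the spatially varying c and Ø fields the stability program consumes.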
|
212 |
Comparing Random Forest and Kriging Methods for Surrogate Modeling / Asritha, Kotha Sri Lakshmi Kamakshi. January 2020 (has links)
Conducting real experiments in design engineering is costly when searching for an optimal design that fulfills all design requirements and constraints. An alternative to physical experiments is computer-aided design modeling combined with computer-simulated experiments. These simulations are conducted to understand functional behavior and to predict possible failure modes in design concepts. However, a single simulation may take minutes, hours, or days to finish. To reduce the time and the number of simulations required for design space exploration, surrogate modeling is used.

The motive of surrogate modeling is to replace the original simulation with an approximation function that is quick to compute. The process of surrogate model generation includes sample selection, model generation, and model evaluation. Using surrogate models in design engineering can help reduce design cycle times and cost by enabling rapid analysis of alternative designs.

A suitable surrogate modeling method for a given function with specific requirements can be selected by comparing different methods across application problems and evaluation metrics. In this thesis, we compare the random forest model and the kriging model on prediction accuracy, using mathematical test functions. Quantitative experiments were conducted to investigate the performance of both methods. The experimental analysis found that the kriging models have higher accuracy than the random forests, while the random forest models have shorter execution times than kriging for the studied mathematical test problems.
|
213 |
Produtividade da soja correlacionada com atributos físicos de um latossolo sob dois sistemas de manejo / Videira, Ligia Maria Lucas. January 2017 (has links)
Advisor: Rafael Montanari / Abstract: Studies of soil physical attributes, crop production components, and crop productivity, combined with geostatistics and the generation of kriging maps, help rationalize resources and inputs in decision-making for more adequate management. The objective of this work was to evaluate the effect of soil chiseling on physical attributes and its implication for soybean productivity on a dystrophic Red Latosol (Latossolo Vermelho distrófico), and to identify the soil physical attributes that best relate to productivity, aiming to implement adequate management and delineate site-specific management zones. The experiment was carried out in the 2015/16 growing season at the Teaching, Research and Extension Farm of the Faculdade de Engenharia - UNESP, Câmpus de Ilha Solteira/SP, and consisted of two cultivation areas, one under no-tillage (SPD) and the other under minimum tillage (CM). For data collection, two geostatistical grids were laid out, one in each planting area, each consisting of 51 equidistant points spaced 10 m apart. The evaluated variables were plant population (POP), number of pods per plant (NVP), number of grains per plant (NGP), number of grains per pod (NGV), 100-grain mass (MCG), and grain productivity (PRG), as well as soil physical attributes such as macroporosity (MA), microporosity (MI), total porosity (PT), bulk density (DS), particle density (DP), clay content (Arg), sand (Are), silt (Sil), moist... (For the complete abstract, click the electronic access link below.) / Master
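The kriging maps mentioned above rest on an empirical semivariogram of the grid data. A minimal sketch of its computation (isotropic, fixed lag bins) is given below; the coordinates and attribute values are synthetic stand-ins, not the thesis grids of 51 points at 10 m spacing.

```python
# Sketch: empirical semivariogram, the first step toward a kriging map.
import numpy as np

def empirical_semivariogram(xy, z, lags):
    """Mean squared half-difference of z for point pairs in each lag bin."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        mask = (d > lo) & (d <= hi)          # pairs whose separation is in (lo, hi]
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(51, 2))                  # placeholder grid points
z = xy[:, 0] * 0.05 + rng.normal(scale=0.5, size=51)    # spatial trend plus noise
lags = np.array([0, 20, 40, 60, 80])
print(np.round(empirical_semivariogram(xy, z, lags), 3))
```

Fitting a model (spherical, exponential, Gaussian) to these binned values is what then supplies the kriging weights used to interpolate each attribute across a management zone.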
|
214 |
Geostatistical techniques for predicting bird species occurrences / Shahiruzzaman, Mohammad; Rauf, Adnan. January 2011 (has links)
Habitat loss and fragmentation are major threats to biodiversity. Geostatistical methods, especially kriging, are widely used in ecology. Bird count data often fail to show the normal distribution over an area that most kriging methods require, so choosing an interpolation method without understanding its implications may lead to biased results. The United Kingdom's Exprodat Consulting Ltd had set out an Exploratory Spatial Data Analysis (ESDA) workflow for optimising interpolation of petroleum datasets. In this study, that workflow was applied to predict capercaillie occurrences over the whole of Sweden. No trend was found in the dataset, and the dataset was not spatially auto-correlated. A completely regularized spline surface model was created with an RMSE of 1.336. Medium to high occurrences (8-16) were found over two very small areas, within Västerbotten County and Västra Götaland County; low occurrences (1-3) were found all over Sweden, and urban areas such as Stockholm and Malmö had low occurrences. Another kriging prediction surface, with an RMSE of 1.314, was created to compare the results; it contained no prediction values from 5 to 16. In-depth studies of three selected areas showed that the results of local kriging surfaces did not match the results of the global surface. Uncertainty in GIS may exist at any level, and a low RMSE value does not always mean a good result. Performing ESDA before choosing an interpolation method is therefore an effective approach, and a post-result field investigation could validate it further. Regression analysis is also widely used in ecology, and several methods are available. Ordinary Least Squares (OLS) was the first method tested on the bird count dataset. The adjusted R-squared value of 0.008616 indicated that the explanatory variables pine, spruce, roads, urban areas, and wetlands explained just 0.8% of the variation in the dependent variable, bird counts.
It was also found that there was no linear relationship between the dependent and explanatory variables. Logistic regression was the next step, as it can also handle nonlinear data. The Spatial Data Modeller (SDM) tool was used to perform logistic regression in ArcGIS 9.3. The initial results of logistic regression were unexpected, so focal statistics were computed for all the independent variables. Logistic regression with these new independent variables generated meaningful results: the probability of bird occurrence had a weak positive relationship with all the independent variables, with coefficients of 0.39, 0.23, 0.13, 0.24, and 0.14 for pine, spruce, roads, urban areas, and wetlands respectively. Pine and spruce are natural attractors for birds, so these results were quite acceptable, but overall model performance remained poor. The positive coefficients for roads, urban areas, and wetlands may well be due to redundancy in these datasets or to observer bias in bird species reporting. IDRISI Andes produced almost the same results when logistic regression was performed with the same dependent and independent variables; its output included a pseudo R-squared value of 0.0416, a further indication of bias in the dataset. The in-depth studies of the three selected areas also showed that logistic regression with focal statistics gave better results than without, though overall performance remained poor. The SDM tool is a good choice for performing logistic regression on small-scale datasets, given its limitations. Comparison of the two geostatistical approaches, interpolation and regression, shows similarity at discrete places; an unbiased dataset might have yielded a better comparison of the two methods.
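The OLS step reported above (adjusted R-squared of 0.008616) can be illustrated with a minimal sketch of how that statistic is computed. The counts and covariates below are random placeholders, not the Swedish capercaillie data, and deliberately have no real relationship, which is why the adjusted R-squared comes out near zero.

```python
# Sketch: OLS fit of counts on habitat covariates and adjusted R-squared.
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 5                                 # observations, covariates
X = rng.normal(size=(n, p))                   # stand-ins for pine, spruce, ...
y = rng.poisson(2.0, size=n).astype(float)    # counts unrelated to X

Xd = np.column_stack([np.ones(n), X])         # design matrix with intercept
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta
r2 = 1 - resid.var() / y.var()
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(f"adjusted R^2: {adj_r2:.4f}")
```

An adjusted R-squared this close to zero is exactly the signal the thesis acted on: the linear model explains essentially none of the variation, motivating the switch to logistic regression.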
|
215 |
Accuracy and precision of bedrock surface prediction using geophysics and geostatistics / Örn, Henrik. January 2015 (has links)
In underground construction and foundation engineering, uncertainties associated with subsurface properties are inevitable. Site investigations are expensive to perform, but a limited understanding of the subsurface may cause major problems, which often lead to an unexpected increase in the overall cost of a construction project. This study aims to optimize the pre-investigation program so that as much correct information as possible is obtained from a limited input of resources, making the program as cost-effective as possible. To optimize site investigation using soil-rock sounding, three different sampling techniques, a varying number of sample points, and two different interpolation methods (inverse distance weighting and point kriging) were tested on four modeled reference surfaces. The accuracy of the rock surface predictions was evaluated using a 3D gridding and modeling software (Surfer 8.02®). Samples with continuously distributed data, resembling profile lines from geophysical surveys, were used to evaluate how such data could improve prediction accuracy compared to adding more sampling points. The study explains the correlation between the number of sampling points and the accuracy of the prediction obtained with different interpolators. Most importantly, it shows that continuous data significantly improve the accuracy of the rock surface predictions, and it therefore concludes that geophysical measurements should be combined with traditional soil-rock sounding to optimize the pre-investigation program.
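The inverse distance weighting interpolator compared above has a compact definition: each unknown point is a distance-weighted average of the known points. A minimal sketch with synthetic borehole data follows; the coordinates, planar bedrock level, and power parameter are illustrative, not the study's reference surfaces.

```python
# Sketch: inverse distance weighting (IDW) on synthetic borehole data.
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Predict z at query points as distance-weighted means of known points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # avoid division by zero at data points
    w = 1.0 / d ** power
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(30, 2))             # borehole locations
z = 50 + 0.1 * pts[:, 0] - 0.05 * pts[:, 1]         # planar bedrock level
grid = np.array([[10.0, 20.0], [50.0, 50.0]])       # query points
print(idw(pts, z, grid))
```

Because the weights are positive and normalized, IDW predictions always stay within the range of the observed values; unlike kriging it carries no spatial-correlation model, which is one reason the two interpolators can rank differently across surfaces.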
|
216 |
Optimisation de structures viscoplastiques par couplage entre métamodèle multi-fidélité et modèles réduits / Structural design optimization by coupling multi-fidelity metamodels and reduced-order models / Nachar, Stéphane. 11 October 2019 (has links)
Les phases de conception et de validation de pièces mécaniques nécessitent des outils de calculs rapides et fiables, permettant de faire des choix technologiques en un temps court. Dans ce cadre, il n'est pas possible de calculer la réponse exacte pour l'ensemble des configurations envisageables. Les métamodèles sont alors couramment utilisés mais nécessitent un grand nombre de réponses, notamment dans le cas où celles-ci sont non-linéaires. Une solution est alors d'exploiter plusieurs sources de données de qualité diverses pour générer un métamodèle multi-fidélité plus rapide à calculer pour une précision équivalente. Ces données multi-fidélité peuvent être extraites de modèles réduits. Les travaux présentés proposent une méthode de génération de métamodèles multi-fidélité pour l'optimisation de structures mécaniques par la mise en place d'une stratégie d'enrichissement adaptatif des informations sur la réponse de la structure, par utilisation de données issues d'un solveur LATIN-PGD permettant de générer des données de qualités adaptées, et d'accélérer le calcul par la réutilisation des données précédemment calculées. Un grand nombre de données basse-fidélité sont calculées avant un enrichissement intelligent par des données haute-fidélité. Ce manuscrit présente les contributions aux métamodèles multi-fidélité et deux approches de la méthode LATIN-PGD avec la mise en place d'une stratégie multi-paramétrique pour le réemploi des données précédemment calculées. Une implémentation parallèle des méthodes a permis de tester la méthode sur trois cas-tests, pour des gains pouvant aller jusqu'à 37x. / Engineering simulation enables better design products by allowing many design options to be quickly explored and tested, but fast time-to-results remains a critical factor in meeting aggressive time-to-market requirements. In this context, using a high-fidelity direct-resolution solver is not suitable for generating (virtual) charts for engineering design and optimization. Metamodels are commonly used to explore design options without computing every possibility, but if the behavior is nonlinear, a large amount of data is still required. One possibility is to exploit additional data sources to generate a multi-fidelity surrogate model by means of model reduction. Model-reduction techniques are one way to work within a limited calculation budget, by seeking the solution of a problem on a reduced-order basis (ROB). The purpose of the present work is an online method for generating a multi-fidelity metamodel in which the quantity of interest is computed from bases generated on the fly within the LATIN-PGD framework for elasto-viscoplastic problems. Low-fidelity fields are obtained by stopping the solver before convergence, and high-fidelity information is obtained from the converged solution. In addition, the solver's ability to reuse information from previously computed PGD bases is exploited. This manuscript presents contributions to multi-fidelity metamodels and to the LATIN-PGD method, with the implementation of a multi-parametric strategy. The coupling strategy was tested on three test cases, with computation-time savings of up to 37x.
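The multi-fidelity idea described above, many cheap low-fidelity evaluations corrected by a few expensive high-fidelity ones, can be sketched with an additive-discrepancy surrogate. Everything below is an assumption for illustration: the low/high-fidelity function pair is a standard textbook example, and a scikit-learn Gaussian process stands in for the thesis's kriging of the LATIN-PGD outputs.

```python
# Sketch: additive multi-fidelity correction, LF prediction + kriged
# LF-to-HF discrepancy learned from a few HF samples.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def hf(x):   # stand-in for the "converged" high-fidelity response
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def lf(x):   # stand-in for a cheap "stopped-early" approximation
    return 0.5 * hf(x) + 10 * (x - 0.5) - 5

X_hf = np.array([[0.0], [0.4], [0.6], [1.0]])      # few expensive samples
disc = GaussianProcessRegressor(normalize_y=True).fit(
    X_hf, hf(X_hf).ravel() - lf(X_hf).ravel())     # model the discrepancy

X = np.linspace(0, 1, 101).reshape(-1, 1)
mf_pred = lf(X).ravel() + disc.predict(X)          # LF + learned correction
err_mf = np.max(np.abs(mf_pred - hf(X).ravel()))
print(f"max multi-fidelity error: {err_mf:.2f}")
```

At the high-fidelity sample locations the corrected surrogate reproduces the expensive response exactly; elsewhere its quality depends on how smooth the LF-to-HF discrepancy is, which is precisely why adaptive enrichment chooses where to spend the next high-fidelity call.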
|
217 |
COVID-19 Disease Mapping Based on Poisson Kriging Model and Bayesian Spatial Statistical Model / Mu, Jingrui. 25 January 2022 (has links)
Since the start of the COVID-19 pandemic in December 2019, much research has been done to develop spatial-temporal methods to track the virus and to predict its spread. In this thesis, a COVID-19 dataset containing the number of bi-weekly infected cases registered in Ontario from the start of the pandemic to the end of June 2021 is analysed using Bayesian spatial-temporal models and area-to-area (area-to-point) Poisson kriging models. With the Bayesian models, spatial-temporal effects on infection risk are examined, and the ATP Poisson kriging models show how the virus spreads over space and its spatial clustering features. Based on these models, a Shiny app website, https://mujingrui.shinyapps.io/covid19, was developed to present the results.
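The models above treat area counts as Poisson draws with spatially varying risk. The sketch below is not ATA/ATP Poisson kriging itself, only the simpler gamma-Poisson shrinkage of crude area rates toward the global mean, which illustrates the kind of small-area stabilization such models provide; counts, populations, and the prior strength are invented.

```python
# Sketch: gamma-Poisson shrinkage of crude incidence rates (illustrative).
import numpy as np

cases = np.array([2, 50, 8, 120])                    # hypothetical bi-weekly counts
pop = np.array([1_000, 40_000, 2_000, 90_000], dtype=float)

crude = cases / pop                                  # raw rates, noisy for small pop
m = cases.sum() / pop.sum()                          # global rate
a = 2.0                                              # assumed prior strength
b = a / m                                            # gamma(a, b) prior with mean m
shrunk = (cases + a) / (pop + b)                     # posterior mean rate per area
print(np.round(shrunk / m, 2))                       # smoothed relative risks
```

Each smoothed rate is a population-weighted compromise between the area's crude rate and the global rate, so sparsely populated areas are pulled hardest toward the mean; Poisson kriging refines this by borrowing strength from spatial neighbours via a covariance model rather than from the global mean alone.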
|
218 |
Surrogate-based global optimization of composite material parts under dynamic loading / Valladares Guerra, Homero Santiago. 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The design optimization of laminated composite structures is relevant in the automobile, naval, aerospace, construction, and energy industries. While several optimization methods have been applied to the design of laminated composites, most are only applicable to linear or simplified nonlinear models that cannot capture multi-body contact. Furthermore, approaches that consider composite failure remain scarce. This work presents an optimization approach based on design and analysis of computer experiments (DACE), in which smart sampling and continuous metamodel enhancement drive the design process toward a global optimum. A kriging metamodel is used in the optimization algorithm. This metamodel enables the definition of an expected improvement function that is maximized at each iteration in order to locate new designs with which to update the metamodel and find optimal designs. This work uses explicit finite element analysis, as available in the commercial code LS-DYNA, to study the crash behavior of composite parts. The optimization algorithm is implemented in MATLAB. Single- and multi-objective optimization problems are solved. The design variables considered in the optimization include the orientation of the plies as well as the size of zones that control the collapse of the composite parts. For ease of manufacturing, the fiber orientation is defined as a discrete variable. Objective functions such as penetration, maximum displacement, and maximum acceleration are defined in the optimization problems, and constraints are included to guarantee the feasibility of the solutions provided by the algorithm. The results of this study show that despite the brittle behavior of composite parts, they can be optimized to resist and absorb impact. In the case of single-objective problems, the algorithm is able to find the global solution. When working with multi-objective problems, an enhanced Pareto front is provided by the algorithm.
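The expected improvement function maximized at each iteration has a well-known closed form for a kriging prediction with mean mu and standard deviation sigma. A minimal sketch for minimization follows; the numerical values are illustrative, and the thesis implementation is in MATLAB rather than Python.

```python
# Sketch: expected improvement (EI) for minimization under a kriging
# prediction N(mu, sigma^2), given the best observed value f_best.
import math

def expected_improvement(mu, sigma, f_best):
    """EI = (f_best - mu) * Phi(z) + sigma * phi(z), with z = (f_best - mu) / sigma."""
    if sigma <= 0.0:
        return 0.0                                       # no uncertainty, no gain
    z = (f_best - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # standard normal pdf
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))           # standard normal cdf
    return (f_best - mu) * Phi + sigma * phi

# A point predicted worse than the incumbent but with high uncertainty
# can still have positive EI (exploration):
print(expected_improvement(mu=1.2, sigma=0.5, f_best=1.0))
print(expected_improvement(mu=1.2, sigma=0.0, f_best=1.0))
```

The two terms make the exploitation/exploration trade-off explicit: the first rewards a predicted improvement over the incumbent, the second rewards uncertainty, which is what lets the DACE loop escape local optima.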
|
219 |
Clarification of saline groundwater system in sedimentary rock area by geostatistical analyses of drilling investigation data / 試錐調査データの地球統計学的解析による堆積岩域での高塩分地下水系の解明 / Lu, Lei. 23 March 2015 (has links)
京都大学 / 0048 / 新制・課程博士 / 博士(工学) / 甲第18966号 / 工博第4008号 / 新制||工||1617(附属図書館) / 31917 / 京都大学大学院工学研究科都市社会工学専攻 / (主査)教授 小池 克明, 教授 石田 毅, 准教授 水戸 義忠 / 学位規則第4条第1項該当 / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
|
220 |
INTERPOLATING HYDROLOGIC DATA USING LAPLACE FORMULATION / Tianle Xu (10802667). 14 May 2021 (has links)
Spatial interpolation techniques play an important role in hydrology, as many point observations need to be interpolated to create continuous surfaces. Despite the availability of several tools and methods for interpolating data, not all of them work consistently for hydrologic applications. One technique, the Laplace equation, which is used in hydrology for creating flownets, has rarely been used for interpolating hydrologic data. The objective of this study is to examine the efficiency of the Laplace formulation (LF) in interpolating hydrologic data and to compare it with other widely used methods such as inverse distance weighting (IDW), natural neighbor, and kriging. The comparison is performed quantitatively using root mean square error (RMSE), visually in terms of creating reasonable surfaces, and computationally in terms of ease of operation and speed. Data related to surface elevation, river bathymetry, precipitation, temperature, and soil moisture are used for different areas in the United States. The RMSE results show that LF performs better than IDW and is comparable to the other methods in accuracy. LF is easy to use, as it requires fewer input parameters than IDW and kriging. Computationally, LF is comparable to the other methods in speed when the datasets are not large. Overall, LF offers a robust alternative to existing methods for interpolating various hydrologic data. Further work is required to improve its computational efficiency for large data sizes and to determine the effects of different cell sizes.
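The Laplace formulation described above can be sketched as a discrete relaxation: observed cells are held fixed while every unknown cell is driven toward the average of its four neighbours, the discrete Laplace equation. The grid, boundary values, and plain Jacobi iteration below are illustrative assumptions, not the study's implementation.

```python
# Sketch: filling unknown grid cells by relaxing the discrete Laplace
# equation, with observations held fixed.
import numpy as np

def laplace_fill(grid, known_mask, iters=2000):
    """Fill unknown cells by Jacobi relaxation; known cells stay fixed."""
    g = grid.copy()
    g[~known_mask] = np.nanmean(grid[known_mask])   # initial guess: mean of data
    for _ in range(iters):
        padded = np.pad(g, 1, mode="edge")          # replicate edges (no-flux sides)
        nb = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
              padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        g[~known_mask] = nb[~known_mask]            # update only unknown cells
    return g

grid = np.full((5, 5), np.nan)
known = np.zeros((5, 5), dtype=bool)
grid[0, :], known[0, :] = 0.0, True     # one transect of observations
grid[4, :], known[4, :] = 100.0, True   # another transect of observations
filled = laplace_fill(grid, known)
print(filled[2, :])   # interior relaxes smoothly between the two transects
```

Two properties follow directly from the formulation: the filled surface is smooth (harmonic) away from the data, and, by the maximum principle, interpolated values never overshoot the observed range, which is part of why the method produces reasonable hydrologic surfaces with so few parameters.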
|