351

Characterization and impact of ambient air pollution measurement error in time-series epidemiologic studies

Goldman, Gretchen Tanner 28 June 2011 (has links)
Time-series studies of ambient air pollution and acute health outcomes utilize measurements from fixed outdoor monitoring sites to assess changes in pollution concentration relative to time-variable health outcome measures. These studies rely on measured concentrations as a surrogate for population exposure. The degree to which monitoring site measurements accurately represent true ambient concentrations is of interest from both an etiologic and regulatory perspective, since associations observed in time-series studies are used to inform health-based ambient air quality standards. Air pollutant measurement errors associated with instrument precision and lack of spatial correlation between monitors have been shown to attenuate associations observed in health studies. Characterization and adjustment for air pollution measurement error can improve effect estimates in time-series studies. Measurement error was characterized for 12 ambient air pollutants in Atlanta. Simulations of instrument and spatial error were generated for each pollutant, added to a reference pollutant time-series, and used in a Poisson generalized linear model of air pollution and cardiovascular emergency department visits. This method allows for pollutant-specific quantification of impacts of measurement error on health effect estimates, both the assessed strength of association and its significance. To inform on the amount and type of error present in Atlanta measurements, air pollutant concentrations were simulated over the 20-county metropolitan area for a 6-year period, incorporating several distribution characteristics observed in measurement data. The simulated concentration fields were then used to characterize the amount and type of error due to spatial variability in ambient concentrations, as well as the impact of use of different exposure metrics in a time-series epidemiologic study. Finally, methodologies developed for the Atlanta area were applied to air pollution measurements in Dallas, Texas with consideration for the impact of this error on a health study of the Dallas-Fort Worth region that is currently underway.
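As a rough illustration of the simulation approach described above (not the thesis code; the data, effect size, and error magnitudes are all synthetic placeholders), a Poisson generalized linear model of daily emergency department visits can be refit after adding increasing amounts of instrument-type error to a reference pollutant series, showing the attenuation of the health-effect estimate:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_days = 2000

# Hypothetical "true" ambient concentration series (e.g., a daily pollutant level)
true_conc = rng.lognormal(mean=2.5, sigma=0.4, size=n_days)

# Poisson counts of ED visits generated from the true series (illustrative effect size)
beta_true = 0.02
lam = np.exp(3.0 + beta_true * true_conc)
visits = rng.poisson(lam)

def fit_beta(exposure):
    """Fit a Poisson GLM of visits on the exposure series and return the slope."""
    X = sm.add_constant(exposure)
    model = sm.GLM(visits, X, family=sm.families.Poisson()).fit()
    return model.params[1]

# Add classical (instrument-type) measurement error of increasing magnitude
for error_sd in [0.0, 2.0, 5.0, 10.0]:
    measured = true_conc + rng.normal(0.0, error_sd, size=n_days)
    print(f"error sd {error_sd:4.1f}  ->  estimated beta {fit_beta(measured):.4f}")
# The estimated beta shrinks toward zero as the error grows (attenuation).
```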
352

Untersuchung von Optimierungsverfahren für rechenzeitaufwändige technische Anwendungen in der Motorenentwicklung / Investigation of optimization methods for computationally expensive technical applications in engine development

Stöcker, Martin 09 October 2007 (has links) (PDF)
Engine development gives rise to optimization problems that are difficult to solve with classical optimization methods. This thesis therefore investigates nonlinear methods of single- and multi-objective optimization that are able to find global extrema with relatively few function evaluations while satisfying nonlinear constraints. A genetic algorithm and two surrogate-model-assisted optimization methods are presented, which were integrated into the optimization module of the IAV EngineeringToolbox. The suitability of the algorithms was verified on technical examples (1D flow simulation, chain drive optimization) as well as on appropriate test functions.
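A minimal, hypothetical sketch of the surrogate-model-assisted idea (not the IAV EngineeringToolbox implementation): an interpolating surrogate is fit to a handful of expensive evaluations, candidate points are pre-screened cheaply on the surrogate, and only the most promising candidate is passed to the expensive function. The test objective, bounds, and sampling scheme are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def expensive_objective(x):
    """Stand-in for a costly engine simulation (illustrative test function)."""
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

dim, n_init, n_iter = 2, 8, 10
X = rng.uniform(0, 1, size=(n_init, dim))
y = np.array([expensive_objective(x) for x in X])

for _ in range(n_iter):
    surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")
    candidates = rng.uniform(0, 1, size=(200, dim))   # cheap pre-screening set
    best = candidates[np.argmin(surrogate(candidates))]
    X = np.vstack([X, best])
    y = np.append(y, expensive_objective(best))       # one expensive call per iteration

print("best point:", X[np.argmin(y)], "value:", y.min())
```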
353

Optimisation of liquid fuel injection in gas turbine engines

Comer, Adam Landon January 2013 (has links)
No description available.
354

An efficient approach for high-fidelity modeling incorporating contour-based sampling and uncertainty

Crowley, Daniel R. 13 January 2014 (has links)
During the design process for an aerospace vehicle, decision-makers must have an accurate understanding of how each choice will affect the vehicle and its performance. This understanding is based on experiments and, increasingly often, computer models. In general, as a computer model captures a greater number of phenomena, its results become more accurate for a broader range of problems. This improved accuracy typically comes at the cost of significantly increased computational expense per analysis. Although rapid analysis tools have been developed that are sufficient for many design efforts, those tools may not be accurate enough for revolutionary concepts subject to grueling flight conditions such as transonic or supersonic flight and extreme angles of attack. At such conditions, the simplifying assumptions of the rapid tools no longer hold. Accurate analysis of such concepts would require models that do not make those simplifying assumptions, with the corresponding increases in computational effort per analysis. As computational costs rise, exploration of the design space can become exceedingly expensive. If this expense cannot be reduced, decision-makers would be forced to choose between a thorough exploration of the design space using inaccurate models, or the analysis of a sparse set of options using accurate models. This problem is exacerbated as the number of free parameters increases, limiting the number of trades that can be investigated in a given time. In the face of limited resources, it can become critically important that only the most useful experiments be performed, which raises multiple questions: how can the most useful experiments be identified, and how can experimental results be used in the most effective manner? This research effort focuses on identifying and applying techniques which could address these questions. The demonstration problem for this effort was the modeling of a reusable booster vehicle, which would be subject to a wide range of flight conditions while returning to its launch site after staging. Contour-based sampling, an adaptive sampling technique, seeks cases that will improve the prediction accuracy of surrogate models for particular ranges of the responses of interest. In the case of the reusable booster, contour-based sampling was used to emphasize configurations with small pitching moments; the broad design space included many configurations which produced uncontrollable aerodynamic moments for at least one flight condition. By emphasizing designs that were likely to trim over the entire trajectory, contour-based sampling improves the predictive accuracy of surrogate models for such designs while minimizing the number of analyses required. The simplified models mentioned above, although less accurate for extreme flight conditions, can still be useful for analyzing performance at more common flight conditions. The simplified models may also offer insight into trends in the response behavior. Data from these simplified models can be combined with more accurate results to produce useful surrogate models with better accuracy than the simplified models but at less cost than if only expensive analyses were used. Of the data fusion techniques evaluated, Ghoreyshi cokriging was found to be the most effective for the problem at hand. Lastly, uncertainty present in the data was found to negatively affect predictive accuracy of surrogate models. Most surrogate modeling techniques neglect uncertainty in the data and treat all cases as deterministic. 
This is plausible, especially for data produced by computer analyses which are assumed to be perfectly repeatable and thus truly deterministic. However, a number of sources of uncertainty, such as solver iteration or surrogate model prediction accuracy, can introduce noise to the data. If these sources of uncertainty could be captured and incorporated when surrogate models are trained, the resulting surrogate models would be less susceptible to that noise and correspondingly have better predictive accuracy. This was accomplished in the present effort by capturing the uncertainty information via nuggets added to the Kriging model. By combining these techniques, surrogate models could be created which exhibited better predictive accuracy while selecting the most informative experiments possible. This significantly reduced the computational effort expended compared to a more standard approach using space-filling samples and data from a single source. The relative contributions of each technique were identified, and observations were made pertaining to the most effective way to apply the separate and combined methods.
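A small sketch of the nugget idea discussed above, using scikit-learn's Gaussian process regressor as a kriging stand-in; the data are synthetic and the noise level is an assumption, since the reusable-booster data and surrogate code are not given here. The WhiteKernel term plays the role of the nugget, absorbing noise so the fitted surface does not chase it.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(40, 1))
y_clean = np.sin(X).ravel()
y_noisy = y_clean + rng.normal(0, 0.15, size=40)      # e.g., solver-iteration noise

# Kriging model with a nugget: the WhiteKernel absorbs the noise variance
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y_noisy)

X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
y_pred, y_std = gp.predict(X_test, return_std=True)
print("fitted kernel:", gp.kernel_)
print("max predictive std:", y_std.max())
```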
355

Developing Efficient Strategies for Automatic Calibration of Computationally Intensive Environmental Models

Razavi, Seyed Saman January 2013 (has links)
Environmental simulation models have been playing a key role in civil and environmental engineering decision making processes for decades. The utility of an environmental model depends on how well the model is structured and calibrated. Model calibration is typically automated: the simulation model is linked to a search mechanism (e.g., an optimization algorithm) that iteratively generates many parameter sets (e.g., thousands of parameter sets) and evaluates them by running the model, in an attempt to minimize differences between observed data and the corresponding model outputs. The challenge arises when the environmental model is computationally intensive to run (with run-times of minutes to hours, for example), as any automatic calibration attempt then imposes a large computational burden. Such a challenge may lead model users to accept sub-optimal solutions and fail to achieve the best model performance. The objective of this thesis is to develop innovative strategies to circumvent the computational burden associated with automatic calibration of computationally intensive environmental models. The first main contribution of this thesis is developing a strategy called “deterministic model preemption” which opportunistically evades unnecessary model evaluations in the course of a calibration experiment and can save a significant portion of the computational budget (as much as 90% in some cases). Model preemption monitors the intermediate simulation results while the model is running and terminates (i.e., pre-empts) the simulation early if it recognizes that running the model further would not guide the search mechanism. This strategy is applicable to a range of automatic calibration algorithms (i.e., search mechanisms) and is deterministic in that it leads to exactly the same calibration results as when preemption is not applied. Another main contribution of this thesis is developing and utilizing the concept of “surrogate data”, a reasonably small but representative proportion of the full set of calibration data. This concept is inspired by existing surrogate modelling strategies, in which a surrogate model (also called a metamodel) is developed and utilized as a fast-to-run substitute for an original computationally intensive model. A framework is developed to efficiently calibrate hydrologic models to the full set of calibration data while running the original model only on surrogate data for the majority of candidate parameter sets, a strategy which leads to considerable computational savings. To this end, mapping relationships are developed to approximate the model performance on the full data based on the model performance on the surrogate data. This framework is applicable to the calibration of any environmental model for which appropriate surrogate data and mapping relationships can be identified. As another main contribution, this thesis critically reviews and evaluates the large body of literature on surrogate modelling strategies from various disciplines, as they are the most commonly used methods to relieve the computational burden associated with computationally intensive simulation models. To reliably evaluate these strategies, a comparative assessment and benchmarking framework is developed which presents a clear, computational-budget-dependent definition of the success or failure of surrogate modelling strategies.
Two large families of surrogate modelling strategies are critically scrutinized and evaluated: “response surface surrogate” modelling, which involves statistical or data-driven function approximation techniques (e.g., kriging, radial basis functions, and neural networks), and “lower-fidelity physically-based surrogate” modelling strategies, which develop and utilize simplified models of the original system (e.g., a groundwater model with a coarse mesh). This thesis raises fundamental concerns about response surface surrogate modelling and demonstrates that, although they might be less efficient, lower-fidelity physically-based surrogates are generally more reliable, as they preserve, to some extent, the physics involved in the original model. Five different surface water and groundwater models are used across this thesis to test the performance of the developed strategies and to support the discussions. However, the strategies developed are typically simulation-model-independent and can be applied to the calibration of any computationally intensive simulation model that has the required characteristics. This thesis leaves the reader with a suite of strategies for efficient calibration of computationally intensive environmental models while providing some guidance on how to select, implement, and evaluate the appropriate strategy for a given environmental model calibration problem.
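The deterministic model preemption strategy can be illustrated with a toy calibration loop (hypothetical model and objective, not the thesis code): because a sum-of-squared-errors objective accumulated over time steps can only grow, a simulation can be terminated as soon as its partial objective exceeds the best complete objective found so far, without changing the calibration result.

```python
import numpy as np

rng = np.random.default_rng(7)
observed = np.sin(np.linspace(0, 6, 300))              # stand-in for an observed series

def simulate_step(params, t):
    """Stand-in for one time step of an expensive simulation model."""
    return np.sin(params[0] * t) * params[1]

def preemptive_sse(params, best_so_far):
    """Accumulate SSE step by step; pre-empt once it cannot beat best_so_far."""
    t_grid = np.linspace(0, 6, 300)
    sse = 0.0
    for i, t in enumerate(t_grid):
        sse += (simulate_step(params, t) - observed[i]) ** 2
        if sse > best_so_far:                           # monotone objective -> safe to stop
            return sse, i + 1
    return sse, len(t_grid)

best = np.inf
saved_steps = 0
for _ in range(200):                                    # simple random-search "calibration"
    candidate = rng.uniform([0.5, 0.5], [1.5, 1.5])
    sse, steps = preemptive_sse(candidate, best)
    saved_steps += 300 - steps
    best = min(best, sse)

print(f"best SSE: {best:.3f}, model time steps avoided: {saved_steps}")
```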
356

Value-based global optimization

Moore, Roxanne Adele 21 May 2012 (has links)
Computational models and simulations are essential system design tools that allow for improved decision making and cost reductions during all phases of the design process. However, the most accurate models are often computationally expensive and can therefore only be used sporadically. Consequently, designers are often forced to choose between exploring many design alternatives with less accurate, inexpensive models and evaluating fewer alternatives with the most accurate models. To achieve both broad exploration of the alternatives and accurate determination of the best alternative with reasonable costs incurred, surrogate modeling and variable accuracy modeling are used widely. A surrogate model is a mathematically tractable approximation of a more expensive model based on a limited sampling of that model, while variable accuracy modeling involves a collection of different models of the same system with different accuracies and computational costs. As compared to using only very accurate and expensive models, designers can determine the best solutions more efficiently using surrogate and variable accuracy models because obviously poor solutions can be eliminated inexpensively using only the less expensive, less accurate models. The most accurate models are then reserved for discerning the best solution from the set of good solutions. In this thesis, a Value-Based Global Optimization (VGO) algorithm is introduced. The algorithm uses kriging-like surrogate models and a sequential sampling strategy based on Value of Information (VoI) to optimize an objective characterized by multiple analysis models with different accuracies. It builds on two primary research contributions. The first is a novel surrogate modeling method that accommodates data from any number of analysis models with different accuracies and costs. The second contribution is the use of Value of Information (VoI) as a new metric for guiding the sequential sampling process for global optimization. In this manner, the cost of further analysis is explicitly taken into account during the optimization process. Results characterizing the algorithm show that VGO outperforms Efficient Global Optimization (EGO), a similar global optimization algorithm that is considered to be the current state of the art. It is shown that when cost is taken into account in the final utility, VGO achieves a higher utility than EGO with statistical significance. In further experiments, it is shown that VGO can be successfully applied to higher dimensional problems as well as practical engineering design examples.
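An illustrative sketch of EGO-style sequential sampling with a crude cost adjustment, in the spirit of, but not identical to, the Value of Information criterion described above; the objective function, sampling cost, and kernel choice are assumptions, and only a single analysis model is used here rather than a multi-fidelity set.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(3)
f = lambda x: (x - 0.6) ** 2 + 0.1 * np.sin(15 * x)    # expensive objective (stand-in)
cost_per_sample = 0.01                                  # assumed fixed analysis cost

X = rng.uniform(0, 1, size=(5, 1))
y = f(X).ravel()

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    grid = np.linspace(0, 1, 500).reshape(-1, 1)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    net = ei - cost_per_sample                          # crude "value minus cost"
    x_next = grid[np.argmax(net)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

print("best x:", X[np.argmin(y)].item(), "best f:", y.min())
```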
357

Ανάπτυξη και αξιολόγηση μεθοδολογίας για τη δημιουργία πλεγματικών (gridded) ισοτοπικών δεδομένων / Development and evaluation of a methodology for the generation of gridded isotopic data

Σαλαμαλίκης, Βασίλειος 20 April 2011 (has links)
Several climatic, hydrological and environmental studies require accurate knowledge of the spatial distribution of stable isotopes of hydrogen and oxygen in precipitation. Since the number of precipitation sampling stations for isotope analysis is small and the stations are not evenly distributed around the globe, the global distribution of stable isotopes can be estimated by producing gridded isotopic data sets, for which several methods have been proposed. Some of these use empirical equations and geostatistical methods in order to minimize errors due to interpolation. In this work a methodology is proposed for developing 10′ × 10′ gridded isotopic data for precipitation in the Central and Eastern Mediterranean. Statistical models are developed taking geographical and meteorological parameters into account as independent variables. The initial methodology uses only the altitude and latitude of an area as independent variables. Since the isotopic composition of precipitation also depends on longitude, the existing models were extended by adding meteorological parameters as independent variables. A series of models is proposed that use some or a combination of the above-mentioned variables. The models are evaluated using the Thin Plate Smoothing Splines (TPSS) and Ordinary Kriging (OK) methods.
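A compact sketch of the two-stage approach described above, regression on geographic covariates followed by interpolation of the residuals; the station coordinates, isotope values, and covariates below are synthetic stand-ins, and thin plate splines are used for the residual surface.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(5)
n = 80
lon = rng.uniform(10, 30, n)          # degrees E (roughly Central/Eastern Mediterranean)
lat = rng.uniform(30, 45, n)
alt = rng.uniform(0, 2000, n)         # m a.s.l.

# Synthetic delta-18O with altitude and latitude effects plus noise (per mil)
d18o = -4.0 - 0.0025 * alt - 0.3 * (lat - 30) + rng.normal(0, 0.5, n)

# Stage 1: regression on geographic covariates
covars = np.column_stack([alt, lat, lon])
reg = LinearRegression().fit(covars, d18o)
resid = d18o - reg.predict(covars)

# Stage 2: thin-plate-spline interpolation of the residuals onto a grid
pts = np.column_stack([lon, lat])
tps = RBFInterpolator(pts, resid, kernel="thin_plate_spline", smoothing=1.0)

glon, glat = np.meshgrid(np.linspace(10, 30, 60), np.linspace(30, 45, 45))
grid_pts = np.column_stack([glon.ravel(), glat.ravel()])
grid_alt = np.zeros(len(grid_pts))    # a real grid would use a DEM here
grid_d18o = reg.predict(np.column_stack([grid_alt, glat.ravel(), glon.ravel()])) \
            + tps(grid_pts)
print("gridded field shape:", grid_d18o.reshape(glat.shape).shape)
```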
358

Uso de componentes de imagens de satélites na modelagem espacial do volume em povoamento de Eucalyptus sp. / Use of satellite imagery components in spatial modeling of volume in Eucalyptus sp. stands.

Aló, Lívia Lanzi 17 May 2016 (has links)
Forest inventory is an important tool for estimating the wood production of forest stands. However, some of the methodologies used in forest inventory are based on classical statistics, which disregards any spatial continuity that may exist between sample units. Geostatistical interpolators such as ordinary kriging (OK) and external drift kriging (EDK) make it possible to account for this spatial structure. In addition to the spatial variable, interpolators such as EDK use one or more auxiliary variables. Satellite images have several components that correlate with dendrometric variables and can be used as auxiliary variables to increase the precision of the estimates. The aim of this study was to assess the performance of EDK in estimating the volume of Eucalyptus sp. stands, using satellite image components as auxiliary variables, and to compare it with the performance of OK. For this purpose, a forest inventory of 210 circular plots of 500 m² was carried out to estimate the volume (m³ ha⁻¹) of each plot. The images of the study area contained the blue, green, red and near-infrared bands. From these, the gray level of each band, simple ratios between bands, vegetation indices (NDVI, SAVI and ARVI), texture measures and indices derived from the textures of the plot area were extracted. For EDK, the covariance model was fitted using the stepwise method and selected by the AIC (Akaike Information Criterion). The semivariograms for EDK and OK were fitted with different theoretical models by Ordinary Least Squares (OLS), and the best model was chosen by the lowest residual standard error. The image statistics and the correlation matrix showed the correlation of the variables with volume, as well as the autocorrelation among the variables themselves. The best covariance model selected was composed of band 2, the COR (correlation) texture measure of band 2, the MULCOR texture index of band 1, and stand age. For both semivariograms, the exponential model gave the best fit. The volume estimates generated by EDK produced better results than the OK estimates, with the lowest residual standard error and the largest area under the curve (AUC) in the receiver operating characteristic (ROC) analysis.
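The workflow can be sketched as regression-kriging, a close relative of kriging with external drift, with scikit-learn's Gaussian process regressor standing in for the kriging step; the plot coordinates, image covariates, and volumes below are synthetic and the covariate set is an assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(9)
n = 210                                  # e.g., one record per inventory plot
xy = rng.uniform(0, 5000, size=(n, 2))   # plot coordinates (m)

# Synthetic image covariates (band gray level, NDVI) and stand age
band2 = rng.uniform(50, 150, n)
ndvi = rng.uniform(0.3, 0.9, n)
age = rng.uniform(2, 7, n)               # years

# Synthetic stand volume (m3/ha) driven by the covariates plus noise
volume = 20 + 0.5 * band2 + 120 * ndvi + 25 * age + rng.normal(0, 15, n)

# Drift model on the covariates, then kriging of the residuals
covars = np.column_stack([band2, ndvi, age])
drift = LinearRegression().fit(covars, volume)
resid = volume - drift.predict(covars)
gp = GaussianProcessRegressor(
    kernel=1.0 * RBF(length_scale=500.0) + WhiteKernel(), normalize_y=True
).fit(xy, resid)

# Prediction at a new location with known covariates
new_xy = np.array([[2500.0, 2500.0]])
new_cov = np.array([[100.0, 0.7, 5.0]])
pred = drift.predict(new_cov) + gp.predict(new_xy)
print("predicted volume (m3/ha):", round(pred[0], 1))
```

True kriging with an external drift fits the drift coefficients and the variogram jointly; the two-step split above is only a convenient approximation.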
359

Avaliação dos interpoladores krigagem e Topo to Raster na geração de Modelos Digitais de Elevação a partir de dados batimétricos / Evaluation of the kriging and Topo to Raster interpolators for the generation of Digital Elevation Models from bathymetric data

Carmo, Edilson José do 25 November 2014 (has links)
With the evolution in recent years of bathymetric survey methods using acoustic sensors (echo sounders) and receivers of signals transmitted by navigation satellites, it has become possible to describe submerged relief with the same level of detail with which terrestrial surface relief is described. The graphic representation of submerged relief is based on Digital Elevation Models (DEMs) generated by interpolators that, from the measurements taken, predict the depth at unsampled locations. Of the geometric information extracted from bathymetric surveys, the volume of water or liquid mud present, for example in a reservoir, is the most relevant. This work evaluates products generated by single-beam bathymetric surveys, as well as the kriging and Topo to Raster interpolation methods. The study areas were a settling tank of a water treatment plant and a lake of the Furnas reservoir, where topographic surveys with a total station and bathymetric surveys with a single-beam echo sounder operating at 33 kHz and 210 kHz were carried out. In the settling tank, a first bathymetric survey (Lbat 1) was conducted. The tank was then emptied and cleaned and, before refilling it, a topographic survey (Ltop) was carried out with a total station. Immediately after refilling, with water and no settled mud, another bathymetric survey (Lbat 2) was performed. The survey of the Furnas reservoir lake was an automated bathymetric survey, i.e., using GNSS technology for positioning. First, using the topographic survey data, the kriging and Topo to Raster interpolation methods were evaluated for generating the DEM of the settling tank. The conclusion was that the conditioned Topo to Raster interpolator showed marked deformations at the edges and in the center of the study area and should be discarded. The next stage was to evaluate the accuracy of the DEMs generated by applying kriging to the single-beam bathymetric data of Lbat 2, at 33 kHz and 210 kHz. From the DEMs, discrepancies between the information extracted from them and the topographic survey points were computed. The results showed an accuracy of about 5 cm at mean depths of 3.21 m; since the surveys were carried out after the tank had been cleaned, with clean water, no significant difference was found between the accuracy of the DEMs generated from depths measured at 33 kHz and at 210 kHz. DEMs were then generated by kriging for the first bathymetric survey, Lbat 1, when solid residues from the settling process were still present. Volumes were computed and compared in order to assess the 33 kHz frequency of the echo sounder for determining the volume of mud in the tank. The results showed that, using only the first bathymetric survey at 33 kHz and 210 kHz, 186 m³ of mud were not detected out of a total volume of 799 m³ of water and mud, whereas with clean water the volume determined by the bathymetry is accurate, with discrepancies of 0.63% at 210 kHz and 0.12% at 33 kHz. For the Furnas lake survey, DEMs were generated by kriging and by Topo to Raster, varying the spacing between the regular survey lines and using check lines transverse to the regular survey lines. Statistical analysis of the discrepancies between the depths estimated by the interpolators and the depths observed on the check lines showed that kriging gave better results for spacings of 20 to 80 meters; the difference between the volumes determined for spacings of 20 to 40 meters was less than 2%. Comparing the isobathymetric contours of the various DEMs, the Topo to Raster interpolator produced smoother features at larger spacings than the DEMs generated by kriging.
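The check-line evaluation described above can be sketched as a hold-out comparison of interpolators on synthetic bathymetry; a smoothed thin plate spline stands in for kriging here, and the line spacings, depths, and noise level are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator, griddata

rng = np.random.default_rng(11)

def depth(x, y):
    """Synthetic lake bottom (stand-in for sounding data)."""
    return 3.0 + 1.5 * np.sin(x / 80.0) + 1.0 * np.cos(y / 60.0)

# "Regular survey lines": parallel profiles at a fixed spacing
xs = np.arange(0, 400, 40.0)                       # 40 m line spacing
ys = np.linspace(0, 300, 60)
X_lines = np.array([(x, y) for x in xs for y in ys])
z_lines = depth(X_lines[:, 0], X_lines[:, 1]) + rng.normal(0, 0.05, len(X_lines))

# "Check lines": transverse profiles held out for validation
X_check = np.array([(x, y) for y in np.arange(20, 300, 60.0)
                    for x in np.linspace(0, 400, 80)])
z_check = depth(X_check[:, 0], X_check[:, 1])

# Interpolator A: smoothed thin-plate spline (kriging-like behaviour)
tps = RBFInterpolator(X_lines, z_lines, kernel="thin_plate_spline", smoothing=0.1)
err_tps = tps(X_check) - z_check

# Interpolator B: simple linear triangulation
z_lin = griddata(X_lines, z_lines, X_check, method="linear")
err_lin = z_lin - z_check

rmse = lambda e: float(np.sqrt(np.nanmean(e ** 2)))
print(f"RMSE thin-plate spline: {rmse(err_tps):.3f} m")
print(f"RMSE linear:            {rmse(err_lin):.3f} m")
```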
360

Uso da Krigagem Indicativa na seleção de áreas propícias ao cultivo de café em consorciação ou rotação com outras culturas / Use of indicator kriging in selecting areas suitable for coffee cultivation in intercropping or rotation with other crops

Almeida, Maria de Fátima Ferreira 28 February 2013 (has links)
Geostatistics stands out mainly because it is an interdisciplinary science that allows the exchange of information among geologists, petroleum engineers, mathematicians, statisticians and other professionals, enabling a better interpretation of geological and environmental reality. Among kriging techniques, ordinary kriging and indicator kriging stand out. The former is a pointwise linear kriging predictor that treats the mean as unknown and incorporates in its formulation a moving weighted average; what distinguishes it is that the weights are obtained taking into account the spatial continuity represented by the semivariogram. Indicator kriging is a predictor that applies ordinary or simple kriging to data transformed by a nonlinear binary function composed of 0s and 1s. One of the great advantages of indicator kriging is that it is a nonparametric estimator, which makes it possible to transform qualitative variables (presence or absence) or quantitative variables (according to a cutoff value of interest) and to estimate the probability of occurrence of the variable. In agriculture, its use allows soil correction to be planned in a site-specific way and management zones to be identified for crop rotation or intercropping. This work presents a theoretical and applied study of the advantages and disadvantages of using indicator kriging to plan soil correction for the implementation of intercropping of banana with coffee, using soil chemical property data obtained from samples collected on a coffee farm in the municipality of Araponga, in the Zona da Mata region of Minas Gerais.
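A short sketch of the indicator kriging idea on synthetic soil data, with scikit-learn's Gaussian process regressor standing in for the kriging engine; the soil property, cutoff value, and spatial trend are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(13)
n = 120
xy = rng.uniform(0, 300, size=(n, 2))              # sample locations (m)

# Synthetic soil pH with a smooth spatial trend plus noise
ph = 4.5 + 0.004 * xy[:, 0] + 0.3 * np.sin(xy[:, 1] / 50.0) + rng.normal(0, 0.15, n)

cutoff = 5.5                                       # agronomic threshold of interest
indicator = (ph >= cutoff).astype(float)           # nonlinear 0/1 transform

# Kriging the indicator; clipped predictions read as P(pH >= cutoff)
gp = GaussianProcessRegressor(
    kernel=1.0 * RBF(length_scale=60.0) + WhiteKernel(), normalize_y=False
).fit(xy, indicator)

gx, gy = np.meshgrid(np.linspace(0, 300, 50), np.linspace(0, 300, 50))
prob = np.clip(gp.predict(np.column_stack([gx.ravel(), gy.ravel()])), 0.0, 1.0)
suitable = prob.reshape(gx.shape) >= 0.7           # candidate management zone
print(f"share of area with P(pH >= {cutoff}) > 0.7: {suitable.mean():.2%}")
```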
