1

Kriging radio environment map construction

Lundqvist, Erik January 2022 (has links)
With the massive increase in usage of some parts of the electromagnetic spectrum over the last decades, the ability to create real-time maps of signal coverage is more important than ever. This Master's project tests two methods of generating such maps under a one-second limit on processing time. The interpolation methods under consideration are inverse distance weighting and kriging. Several variants of kriging are considered and compared, some implemented specifically for the project and one designed by a third party. The data are acquired from an antenna array inside a laboratory room at LTU rather than being simulated, with the transmitter placed at several different positions in the room to check that the interpolation works consistently. The results show only small differences in both the mean and the median of the absolute error between inverse distance weighting and kriging, and the variation between transmitter positions is significant enough that no single variant is consistently the best by that metric. At a resolution with 25 cm² pixel size, both methods stay well below the one-second time limit; if the resolution is increased to a pixel size of 1 cm², neither method can consistently update the map at the required pace. Kriging, however, can generate values outside the range of observed values, which could make the extra implementation effort worthwhile, since that characteristic might be very useful for locating the transmitter.
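A minimal sketch of the inverse distance weighting step compared against kriging above, assuming hypothetical signal-strength samples and a 5 cm pixel grid (none of these numbers come from the thesis):

```python
import numpy as np

def idw_interpolate(xy_obs, z_obs, xy_grid, power=2.0, eps=1e-12):
    """Inverse distance weighting: each grid value is a weighted mean of the
    observations, with weights proportional to 1 / distance**power."""
    # Pairwise distances between grid points and observation points.
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power
    return (w @ z_obs) / w.sum(axis=1)

# Hypothetical received-signal-strength samples (x, y in m, level in dBm) in a 5 m x 5 m room.
xy_obs = np.array([[0.5, 0.5], [4.0, 1.0], [2.5, 4.5], [1.0, 3.0]])
z_obs = np.array([-42.0, -55.0, -60.0, -48.0])

# 5 cm pixel spacing, i.e. 25 cm² pixels, matching the coarser resolution mentioned above.
gx, gy = np.meshgrid(np.arange(0, 5, 0.05), np.arange(0, 5, 0.05))
grid = np.column_stack([gx.ravel(), gy.ravel()])
rem = idw_interpolate(xy_obs, z_obs, grid).reshape(gx.shape)
print(rem.shape, rem.min(), rem.max())
```

Because IDW estimates are convex combinations of the observations, they can never fall outside the observed range, which is exactly the limitation the abstract contrasts with kriging.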
2

COSINE: A tool for constraining spatial neighbourhoods in marine environments

Suarez, Cesar Augusto 20 September 2013 (has links)
Spatial analysis methods used for detecting, interpolating or predicting local patterns require a delineation of a neighbourhood defining the extent of spatial interaction in geographic data. The most common neighbourhood delineation techniques include fixed distance bands, k-nearest neighbours, or spatial adjacency (contiguity) matrices optimized to represent spatial dependency in data. However, these standard approaches do not take into consideration geographic or environmental constraints such as impassable mountain ranges, road networks or coastline barriers. In particular, complex marine landscapes and coastlines commonly make the standard neighbourhood matrices used in spatial analysis of marine environments problematic. The goal of our research is therefore to present a new approach to constraining spatial neighbourhoods when conducting geographical analysis in marine environments. To meet this goal, we developed methods and software (COnstraining SpatIal NEighbourhoods - COSINE) for modifying spatial neighbourhoods, and demonstrate their utility in two case studies. Our method enables delineation of neighbourhoods that are constrained by coastlines and the direction of marine currents. Our software calculates and evaluates whether neighbouring features are separated by land, or fall within a user-defined angle that excludes interaction based on directional processes. Using decision rules, a modified spatial weight matrix is created, in either binary or row-standardized format. Within open source software (R), a graphical user interface enables users to modify the standard spatial neighbourhood definitions: distance, inverse distance and k-nearest neighbour. Two case studies demonstrate the usefulness of the new approach for detecting spatial patterns: the first concerns marine mammal abundance and the second oil spill observations. Our results indicate that constraining spatial neighbourhoods in marine environments is particularly important at larger spatial scales. The COSINE tool has many applications for modelling both environmental and human processes. / Graduate
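The COSINE tool itself is R software with a graphical interface; the following Python sketch only illustrates the underlying idea of a barrier-constrained, row-standardized spatial weight matrix (the coordinates, distance band, and blocked pair are invented):

```python
import numpy as np

def constrained_weights(xy, max_dist, blocked_pairs, row_standardize=True):
    """Distance-band neighbourhood matrix with selected pairs removed.

    blocked_pairs: set of (i, j) index pairs whose link crosses a barrier
    (e.g. land between two marine sampling sites); both orientations are dropped.
    """
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    w = ((d > 0) & (d <= max_dist)).astype(float)      # binary distance-band weights
    for i, j in blocked_pairs:                          # apply the barrier constraint
        w[i, j] = w[j, i] = 0.0
    if row_standardize:
        sums = w.sum(axis=1, keepdims=True)
        w = np.divide(w, sums, out=np.zeros_like(w), where=sums > 0)
    return w

# Hypothetical coastal sampling sites; sites 0 and 2 are assumed separated by a headland.
xy = np.array([[0.0, 0.0], [1.0, 0.2], [0.3, 1.1], [2.0, 2.0]])
W = constrained_weights(xy, max_dist=1.5, blocked_pairs={(0, 2)})
print(W)
```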
3

An investigation of fuzzy modeling for spatial prediction with sparsely distributed data

Thomas, Robert 31 August 2018 (has links)
Dioxins are highly toxic, persistent environmental pollutants that occur in marine harbour sediments as a result of industrial practices around the world and pose a significant risk to human health. To adequately remediate contaminated sediments, the spatial extent of contamination must first be determined by spatial interpolation. The ability to lower the sampling frequency and perform laboratory analysis on fewer samples, yet still produce an adequate pollutant distribution map, would reduce the initial cost of new remediation projects. Fuzzy Set Theory has been shown to reduce uncertainty due to data sparsity and provides an advantageous way to quantify gradational changes, such as pollutant concentrations, through fuzzy clustering based approaches; fuzzy modelling can exploit these advantages for making spatial predictions. To assess the ability of fuzzy modeling to make spatial predictions using fewer sample points, its predictive ability was compared to Ordinary Kriging (OK) and Inverse Distance Weighting (IDW) under increasingly sparse data conditions. This research used a Takagi-Sugeno (T-S) fuzzy modelling approach with fuzzy c-means (FCM) clustering to make spatial predictions of lead concentrations in soil, in order to determine the efficacy of the fuzzy model for modeling dioxins in marine sediment. The spatial density of the data used to make the predictions was incrementally reduced to simulate increasingly sparse spatial data conditions. At each increment, the data not used for making the spatial predictions served as a validation set, which the models attempted to predict. The parameters of the T-S fuzzy model were initially determined by the optimum observed performance: the combination of parameters that produced the most accurate prediction of the validation data was retained as optimal at each increment of the data reduction. Mean Absolute Error, the Coefficient of Determination, and Root Mean Squared Error were selected as performance metrics. To give each metric equal weighting, a binned scoring system was developed in which each metric received a score from 1 to 10, and the average represented a method's score. The Akaike Information Criterion (AIC) was also employed to determine the effect of the varying validation set lengths on performance. For the T-S fuzzy model, as the amount of data used to predict the respective validation set was reduced, the number of clusters decreased and the cluster centres became more spread out, the fuzzy overlap between clusters grew, and the membership functions became wider. Although it was possible to determine an optimal number of clusters, fuzzy overlap, and membership function width that yielded an optimal prediction of the validation data, the gain in performance was minor compared to many other parameter combinations; for the data used in this study, the T-S fuzzy model was therefore insensitive to parameter choice. For OK, as the data were reduced, the range of spatial dependence obtained from variography became shorter, and for IDW the optimal power parameter became lower, giving greater weight to more widely spread points. For the T-S fuzzy model, OK, and IDW alike, the increasingly sparse data conditions resulted in increasingly poor model performance on all metrics.
This was supported by AIC values for each method that were within one point of each other at every increment of the data reduction. The ability of the methods to predict outlier points and reproduce the variance of the validation sets was very similar and overall quite poor. Based on the scoring system, IDW slightly outperformed the T-S fuzzy model, which in turn slightly outperformed OK; however, the scoring system employed in this research was overly sensitive and therefore useful only for assessing relative performance. The performance of the T-S model depended strongly on the number of outliers in the respective validation set. For modeling under sparse data conditions, the T-S fuzzy modeling approach used here, with FCM clustering and constant-width Gaussian membership functions, showed no advantage over IDW and OK for the type of data tested. It was therefore not possible to speculate on a possible reduction in sampling frequency for delineating the extent of contamination in new remediation projects. / Graduate
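For reference, the three performance metrics named above are straightforward to compute; this is a generic sketch with made-up validation values, not the thesis's scoring code:

```python
import numpy as np

def mae(y, yhat):
    """Mean Absolute Error."""
    return float(np.mean(np.abs(y - yhat)))

def rmse(y, yhat):
    """Root Mean Squared Error."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def r2(y, yhat):
    """Coefficient of Determination (1 - residual SS / total SS)."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical validation concentrations and predictions from one interpolation method.
y_true = np.array([120.0, 95.0, 300.0, 80.0, 150.0])
y_pred = np.array([110.0, 100.0, 240.0, 90.0, 160.0])
print(mae(y_true, y_pred), rmse(y_true, y_pred), r2(y_true, y_pred))
```

In the thesis, each of these metrics was then mapped onto a 1-10 binned score and the scores averaged so that no single metric dominated the comparison.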
4

Accuracy and precision of bedrock surface prediction using geophysics and geostatistics

Örn, Henrik January 2015 (has links)
In underground construction and foundation engineering, uncertainties associated with subsurface properties are unavoidable. Site investigations are expensive to perform, but a limited understanding of the subsurface may cause major problems, which often lead to an unexpected increase in the overall cost of a construction project. This study aims to optimize the pre-investigation program so as to extract as much correct information as possible from a limited input of resources, making it as cost-effective as possible. To optimize site investigation using soil-rock sounding, three different sampling techniques, a varying number of sample points, and two different interpolation methods (inverse distance weighting and point kriging) were tested on four modeled reference surfaces. The accuracy of the rock surface predictions was evaluated using 3D gridding and modeling software (Surfer 8.02®). Samples with continuously distributed data, resembling profile lines from geophysical surveys, were used to evaluate how such data could improve the accuracy of the prediction compared to adding additional sampling points. The study describes the correlation between the number of sampling points and the accuracy of the prediction obtained with the different interpolators. Most importantly, it shows that continuous data significantly improve the accuracy of the rock surface predictions, and it therefore concludes that geophysical measurements should be combined with traditional soil-rock sounding to optimize the pre-investigation program.
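As a hedged illustration of the point kriging interpolator mentioned above (not the Surfer implementation used in the study), here is a minimal ordinary kriging sketch with an assumed exponential variogram and invented sounding data:

```python
import numpy as np

def exp_variogram(h, nugget=0.1, sill=2.0, rng=15.0):
    """Exponential variogram model; nugget, sill and range would normally be
    fitted to an empirical variogram of the sounding data (values here are assumed)."""
    h = np.asarray(h, dtype=float)
    gamma = nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng))
    return np.where(h > 0.0, gamma, 0.0)

def ordinary_kriging(xy_obs, z_obs, x0):
    n = len(xy_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=2)
    d_tgt = np.linalg.norm(xy_obs - x0, axis=1)
    # Ordinary kriging system: variogram matrix plus a Lagrange multiplier
    # row/column that forces the weights to sum to one.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d_obs)
    A[n, n] = 0.0
    b = np.append(exp_variogram(d_tgt), 1.0)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z_obs)

# Hypothetical depth-to-bedrock soundings (x, y in m, depth in m below surface).
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 3.0]])
z = np.array([4.2, 6.1, 5.0, 7.3, 4.8])
print(ordinary_kriging(xy, z, np.array([4.0, 4.0])))
```

Continuous geophysical profile data enter such a scheme simply as many additional, closely spaced observation points, which is why they tighten the prediction so effectively.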
5

Comparison of heat maps showing residence price generated using interpolation methods / Jämförelse av färgdiagram för bostadspriser genererade med hjälp av interpolationsmetoder

Wong, Mark January 2017 (has links)
In this report we attempt to provide insights into how interpolation can be used to create heat maps of residence prices for different residence markets in Sweden. More specifically, three interpolation methods are implemented and applied to three Swedish residence markets of varying character, such as size and residence type. Data on residence sales and the physical definitions of the residence markets were collected. Because residence sales are never identical, the sales were preprocessed to make them comparable. A so-called external predictor was also investigated as an extra parameter for the interpolation methods; in this report, the distance to the nearest public transport stop was used as the external predictor. The interpolated heat maps were compared and evaluated using both quantitative and qualitative approaches. The results show that each interpolation method has its own strengths and weaknesses, and that using an external predictor yields better heat maps than using residence price alone. Kriging was found to be the most robust method and consistently produced the best interpolated heat maps for all residence markets; on the other hand, it was also the most time-consuming interpolation method.
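One simple way to fold an external predictor such as distance to public transport into an interpolated price surface is to regress price on the predictor and interpolate the residuals; the sketch below illustrates that idea with invented data and is not the method implemented in the thesis:

```python
import numpy as np

def distance_to_nearest(xy, stops):
    """External predictor: distance from each location to its nearest transit stop."""
    return np.linalg.norm(xy[:, None, :] - stops[None, :, :], axis=2).min(axis=1)

def price_heat_map(xy_obs, price, xy_grid, stops, power=2.0, eps=1e-12):
    """Fit a linear trend of price against the external predictor, IDW-interpolate
    the residuals, then add the trend back on the grid."""
    p_obs = distance_to_nearest(xy_obs, stops)
    beta = np.polyfit(p_obs, price, 1)                       # price ~ distance trend
    resid = price - np.polyval(beta, p_obs)
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power
    trend = np.polyval(beta, distance_to_nearest(xy_grid, stops))
    return trend + (w @ resid) / w.sum(axis=1)

# Hypothetical sales (x, y in km, price in kSEK) and transit stop locations.
xy_obs = np.array([[0.2, 0.3], [1.5, 0.8], [0.9, 1.9], [2.4, 2.1], [1.1, 0.4]])
price = np.array([3200.0, 2800.0, 2500.0, 2300.0, 3000.0])
stops = np.array([[0.0, 0.0], [2.0, 2.0]])
gx, gy = np.meshgrid(np.linspace(0, 2.5, 26), np.linspace(0, 2.5, 26))
grid = np.column_stack([gx.ravel(), gy.ravel()])
heat = price_heat_map(xy_obs, price, grid, stops).reshape(gx.shape)
print(heat.min(), heat.max())
```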
6

INVERSE-DISTANCE INTERPOLATION BASED SET-POINT GENERATION METHODS FOR CLOSED-LOOP COMBUSTION CONTROL OF A CIDI ENGINE

Maringanti, Rajaram Seshu 15 December 2009 (has links)
No description available.
7

Evaluation of Spatial Interpolation Techniques Built in the Geostatistical Analyst Using Indoor Radon Data for Ohio, USA

Sarmah, Dipsikha January 2012 (has links)
No description available.
8

An investigation of sea-breeze driven convection along the northern Gulf Coast

Ford, Caitlin 13 May 2022 (has links) (PDF)
Although sea breezes frequently initiate convection, it is often challenging to forecast the precise location of storm development. This research examines temporal and spatial characteristics of sea-breeze-driven convection and the environmental conditions that support convective or non-convective sea-breeze days along the northern Gulf Coast. Base reflectivity products were used to identify the initial time of convection (values greater than 30 dBZ) along the sea-breeze front. It was found that convective sea breezes initiated earlier in the day than non-convective sea breezes. Mapping convective cells in ArcGIS revealed favored locations of thunderstorm development, including the southeastern cusp of Mobile County, Alabama, and convex coastlines. Meteorological variables from the North American Regional Reanalysis dataset were compared between convective and non-convective sea-breeze days via a bootstrap analysis to reveal environmental characteristics pertinent to forecasting sea-breeze-driven convection. Lapse rates, CAPE, CIN, specific humidity, dew point temperature, relative humidity, and precipitable water values were statistically significant.
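A generic sketch of a bootstrap comparison of one environmental variable between convective and non-convective days (the resampling scheme and CAPE values are illustrative assumptions, not the thesis's actual setup):

```python
import numpy as np

def bootstrap_mean_diff(conv, nonconv, n_boot=10_000, seed=0):
    """Bootstrap 95% interval for the difference in means of one variable
    between convective and non-convective sea-breeze days."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        # Resample each group with replacement and record the mean difference.
        diffs[b] = (rng.choice(conv, size=len(conv)).mean()
                    - rng.choice(nonconv, size=len(nonconv)).mean())
    return np.percentile(diffs, [2.5, 97.5])

# Hypothetical CAPE samples (J/kg) on convective vs non-convective days.
cape_conv = np.array([1800.0, 2200.0, 1500.0, 2600.0, 1900.0, 2100.0])
cape_non = np.array([900.0, 1200.0, 700.0, 1100.0, 1000.0, 850.0])
print(bootstrap_mean_diff(cape_conv, cape_non))
```

An interval that excludes zero would indicate a statistically significant difference for that variable, which is the sense in which the variables listed above discriminate between the two day types.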
9

Interpolation sur les variétés grassmanniennes et applications à la réduction de modèles en mécanique / Interpolation on Grassmann manifolds and applications to reduced order methods in mechanics

Mosquera Meza, Rolando 26 June 2018 (has links)
This dissertation deals with interpolation on Grassmann manifolds and its applications to model reduction in mechanics and, more generally, to systems of evolution partial differential equations. After a description of the POD method, we introduce the theoretical tools of Grassmannian geometry that are used in the rest of the thesis. This chapter gives the dissertation both mathematical rigour for the algorithms developed, their domain of validity and an error estimate in terms of the Grassmannian distance, and a self-contained character. We then present the interpolation method on Grassmann manifolds introduced by David Amsallem and Charbel Farhat, which is the starting point of the interpolation methods developed in the following chapters. The Amsallem-Farhat method consists in choosing a reference interpolation point, mapping all interpolation points onto the tangent space at this reference point via the geodesic logarithm, performing a classical interpolation on that tangent space, and mapping the result back to the Grassmann manifold via the geodesic exponential. Numerical experiments show the influence of the reference point on the quality of the results. In our first contribution, we present a Grassmannian version of the well-known Inverse Distance Weighting (IDW) algorithm. In this method, the interpolant at a given point is a barycenter of the interpolation points, with weights inversely proportional to the distance between the considered point and the interpolation points. In our method, denoted IDW-G, the geodesic distance on the Grassmann manifold replaces the Euclidean distance of the standard Euclidean setting. The advantage of our algorithm, whose convergence we prove under fairly general assumptions, is that it does not require a reference point, unlike the Amsallem-Farhat method. To avoid the fixed-point iteration of this first method, we also propose a direct version based on the notion of a generalized barycenter. The IDW-G algorithm nevertheless depends on the choice of the weighting coefficients. Our second contribution therefore proposes an optimal choice of the weighting coefficients that takes into account the spatial autocorrelation of the whole set of interpolation points, so that each weighting coefficient depends on all interpolation points and not only on the distance between the considered point and one interpolation point. This is a Grassmannian version of the Kriging method widely used in geostatistics; like the Amsallem-Farhat method, it requires a reference point. In our last contribution, we develop a Grassmannian version of Neville's algorithm, which computes the Lagrange interpolation polynomial recursively via linear interpolation between two points. Its generalization to Grassmann manifolds is based on the extension of two-point interpolation (geodesic/straight line), which can be carried out explicitly. This algorithm requires no reference point, is easy to implement and very fast, and the numerical results obtained are markedly better than those of all the other algorithms described in this dissertation.
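As a hedged summary of the two constructions described above, with notation assumed here rather than taken from the thesis (interpolation points $Y_i$ on a Grassmann manifold, parameter points $\lambda_i$, reference point $Y_0$, power $p$):

```latex
% Sketch only: notation (Y_i, lambda_i, Y_0, p) is assumed, not the thesis's own.
\[
  \text{Amsallem--Farhat:}\qquad
  \widehat{Y}(\lambda) \;=\; \operatorname{Exp}_{Y_0}\!\Big(\sum_{i} \ell_i(\lambda)\,
  \operatorname{Log}_{Y_0}(Y_i)\Big),
\]
\[
  \text{IDW-G:}\qquad
  \widehat{Y}(\lambda) \;=\; \arg\min_{Y \in \mathrm{Gr}(k,n)}
  \sum_{i} w_i(\lambda)\, d_{\mathrm{Gr}}\!\big(Y, Y_i\big)^{2},
  \qquad
  w_i(\lambda) \;=\; \frac{d(\lambda,\lambda_i)^{-p}}{\sum_j d(\lambda,\lambda_j)^{-p}},
\]
```

where $\ell_i$ are classical interpolation weights computed in the tangent space at the reference point $Y_0$ and $d_{\mathrm{Gr}}$ is the geodesic distance on the Grassmann manifold, so the IDW-G weighted barycenter needs no reference point.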
10

Nickel Resource Estimation And Reconciliation At Turkmencardagi Laterite Deposits

Gencturk, Bilgehan 01 September 2012 (has links) (PDF)
In recent years nickel has mostly been produced from lateritic ore deposits such as nontronite, limonite, etc. Resource estimation is difficult for laterite deposits because they have a weak and heterogeneous form, whereas 3D modeling software is better suited to deposits with tabular or vein-type ores. In this study the most appropriate technique for resource estimation of nickel laterite deposits was investigated. One of the known nickel laterite deposits in Turkey is located in the Türkmençardagi - Gördes region. Since the nickel (Ni) grades recovered from drilling studies seemed very low, a reconciliation pit with dimensions of 40 m x 40 m x 15 m in the x-y-z directions was planned by Meta Nikel Kobalt Mining Company (META), the license owner of the mine, to produce nickel ore. 13 core drillholes, 13 reverse circulation (RC) drillholes, and 26 column samplings adjacent to the drillholes were located in this area. These three sets of sampling results were compared to each other and to the actual production values obtained from the reconciliation pit. In parallel, 3D computer modeling was used to model the nickel resource in the Türkmençardagi - Gördes laterites. The results obtained from both inverse distance weighting and kriging were compared to the actual production results to assess the applicability of 3D modeling to laterite deposits. Considering a 0.5% Ni cut-off value and using the drillhole data, inverse distance weighting estimates 622 tonnes at 0.553% Ni and kriging estimates 749 tonnes at 0.527% Ni for the reconciliation pit, whereas actual production yielded 4,882 tonnes of nickel ore at 0.649% Ni. These results show that the grade estimates are acceptable, but in terms of tonnage there are significant differences between the theoretical estimates and the production values.
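The tonnage and grade figures quoted above imply the following contained-metal comparison (simple arithmetic on the abstract's numbers only; no new data):

```python
# Contained nickel metal implied by the quoted tonnage/grade figures.
estimates = {
    "IDW":     (622.0,  0.553),   # tonnes of ore, % Ni
    "Kriging": (749.0,  0.527),
    "Actual":  (4882.0, 0.649),
}
for name, (tonnes, grade_pct) in estimates.items():
    metal_t = tonnes * grade_pct / 100.0
    print(f"{name:8s} ore {tonnes:7.0f} t  grade {grade_pct:.3f}% Ni  metal {metal_t:6.2f} t")
```

The output makes clear that the discrepancy lies almost entirely in the estimated tonnage rather than in the grade.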
