1

Improved modelling in finite-sample and nonlinear frameworks

Lawford, Stephen Derek Charles January 2001 (has links)
No description available.
2

Essays on wage dispersion

Davies, Stuart January 1999 (has links)
No description available.
3

Assessing the Impacts of Anthropogenic Drainage Structures on Hydrologic Connectivity Using High-Resolution Digital Elevation Models

Bhadra, Sourav 01 August 2019 (has links)
Stream flowline delineation from high-resolution digital elevation models (HRDEMs) can be problematic due to the fine representation of terrain features as well as anthropogenic drainage structures (e.g., bridges, culverts) within the grid surface. Anthropogenic drainage structures (ADS) may create digital dams when stream flowlines are delineated from HRDEMs. The study assessed the effects of ADS locations, spatial resolution (ranging from 1 m to 10 m), depression processing methods, and flow direction algorithms (D8, D-Infinity, and MFD-md) on hydrologic connectivity through digital dams using HRDEMs in Nebraska. The assessment was based on the offset distances between modeled stream flowlines and original ADS locations, using kernel density estimation (KDE) and the calculated frequency of ADS samples within offset distances. Three major depression processing techniques (i.e., depression filling, stream breaching, and stream burning) were considered for this study. Finally, an automated method, constrained burning, was proposed for HRDEMs; it utilizes ancillary datasets to create underneath stream crossings at possible ADS locations and perform DEM reconditioning. The results suggest that coarser resolution DEMs with depression filling and breaching can produce better hydrologic connectivity through ADS compared with finer resolution DEMs with different flow direction algorithms. It was also found that stream burning with known stream crossings at ADS locations outperformed depression filling and breaching techniques for HRDEMs in terms of hydrologic connectivity. Flow direction algorithms combined with depression filling and breaching techniques do not have significant effects on the hydrologic connectivity of modeled stream flowlines. However, for stream burning methods, D8 was found to be the best-performing flow direction algorithm in HRDEMs, with statistical significance. 
The stream flowlines delineated from the HRDEM using the proposed constrained burning method were found to be better than those from depression filling and breaching techniques. This method has an overall accuracy of 78.82% in detecting possible ADS locations within the study area.
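The offset-distance assessment described above can be sketched in a few lines. The helper below computes, for hypothetical inputs, the nearest offset from each ADS point to a modeled flowline, a KDE over those offsets, and the frequency of ADS within a threshold; using point-to-point nearest distance in place of true point-to-line offsets is a simplifying assumption, and the function name is illustrative, not from the thesis.

```python
import numpy as np
from scipy.stats import gaussian_kde

def ads_connectivity_stats(ads_xy, flowline_xy, threshold=10.0):
    """Sketch of the offset-distance assessment: for each anthropogenic
    drainage structure (ADS), take the nearest distance to the modeled
    flowline vertices, then summarise with a KDE over those offsets and
    the fraction of ADS within a threshold distance."""
    # pairwise distances: (n_ads, n_flowline_points)
    diffs = ads_xy[:, None, :] - flowline_xy[None, :, :]
    offsets = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)
    kde = gaussian_kde(offsets)                # density of offset distances
    within = (offsets <= threshold).mean()     # frequency within threshold
    return offsets, kde, within
```

For a densely sampled flowline, the nearest-vertex distance closely approximates the true offset to the line.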
4

Multivariate Analysis of Diverse Data for Improved Geostatistical Reservoir Modeling

Hong, Sahyun 11 1900 (has links)
Improved numerical reservoir models are constructed when all available diverse data sources are accounted for to the maximum extent possible. Integrating diverse data is not a simple problem, because the data sources differ in precision, in relevance to the primary variables being modeled, in quality, and in their (often nonlinear) relations. Previous approaches rely on a strong Gaussian assumption or on combining source-specific probabilities that are individually calibrated from each data source. This dissertation develops different approaches to integrate diverse earth science data. The first approach is based on combining probabilities: each data source is calibrated to generate an individual conditional probability, and these are combined by a combination model. Some existing models are reviewed and a combination model is proposed with a new weighting scheme. The weaknesses of probability combination schemes (PCS) are addressed. As an alternative to PCS, this dissertation develops a multivariate analysis technique. The method models the multivariate distribution without a parametric distribution assumption and without ad hoc probability combination procedures, and it accounts for nonlinear features and different data types. Once the multivariate distribution is modeled, the marginal distribution constraints are evaluated. A sequential iteration algorithm is proposed for the evaluation: it compares the marginal distributions extracted from the modeled multivariate distribution with the known marginal distributions and corrects the multivariate distribution. Ultimately, the corrected distribution satisfies all axioms of probability distribution functions as well as the complex features among the given data. 
The methodology is applied to several applications, including: (1) integration of continuous data for categorical attribute modeling, (2) integration of continuous data and a discrete geologic map for categorical attribute modeling, and (3) integration of continuous data for continuous attribute modeling. Results are evaluated based on defined criteria such as the fairness of the estimated probability or probability distribution and reasonable reproduction of input statistics. / Mining Engineering
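The sequential marginal-correction iteration resembles iterative proportional fitting. The discrete 2-D sketch below only illustrates that loop, not the dissertation's continuous multivariate method: it alternately rescales a joint distribution so its extracted marginals match the known ones.

```python
import numpy as np

def correct_marginals(joint, row_marg, col_marg, iters=100, tol=1e-10):
    """Sequential marginal correction on a discrete 2-D joint
    distribution (in the spirit of iterative proportional fitting):
    alternately rescale rows and columns until the extracted marginals
    match the known marginal constraints."""
    p = np.array(joint, dtype=float)
    p /= p.sum()                                  # start from a valid pmf
    for _ in range(iters):
        p *= (row_marg / p.sum(axis=1))[:, None]  # enforce row marginal
        p *= (col_marg / p.sum(axis=0))[None, :]  # enforce column marginal
        if np.allclose(p.sum(axis=1), row_marg, atol=tol):
            break                                 # converged
    return p
```

Each pass preserves the dependence structure of the starting joint while pulling the marginals toward the known constraints.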
5

Modelling Probability Distributions from Data and its Influence on Simulation

Hörmann, Wolfgang, Bayar, Onur January 2000 (has links) (PDF)
Generating random variates as a generalisation of a given sample is an important task for stochastic simulations. The three main methods suggested in the literature are: fitting a standard distribution, constructing an empirical distribution that approximates the cumulative distribution function, and generating variates from the kernel density estimate of the data. The last method is practically unknown in the simulation literature, although it is as simple as the other two. The comparison of the theoretical performance of the methods and the results of three small simulation studies show that a variance-corrected version of kernel density estimation performs best and should be used for generating variates directly from a sample. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
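The variance-corrected KDE generator recommended above is commonly implemented as a smoothed bootstrap with a rescaling step. A minimal sketch, assuming a Gaussian kernel and Silverman's rule-of-thumb bandwidth (the paper may make different choices):

```python
import numpy as np

def kde_variates(data, n, rng=None):
    """Variance-corrected KDE resampling ('smoothed bootstrap'):
    resample the data, add Gaussian kernel noise, then rescale so the
    generated variates retain the sample variance instead of the
    inflated variance s^2 + h^2."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data, dtype=float)
    mean, sd = data.mean(), data.std(ddof=1)
    h = 1.06 * sd * len(data) ** (-1 / 5)     # Silverman bandwidth
    picks = rng.choice(data, size=n)          # bootstrap resample
    noise = rng.normal(0.0, h, size=n)        # kernel smoothing
    # variance correction: divide the centred sum by sqrt(1 + h^2/s^2)
    return mean + (picks - mean + noise) / np.sqrt(1 + h**2 / sd**2)
```

Without the final rescaling, the generated variates would have variance roughly s² + h²; the correction removes the smoothing inflation, which is the point made in the abstract.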
6

EMPIRICAL BAYES NONPARAMETRIC DENSITY ESTIMATION OF CROP YIELD DENSITIES: RATING CROP INSURANCE CONTRACTS

Ramadan, Anas 16 September 2011 (has links)
This thesis examines a newly proposed density estimator in order to evaluate its usefulness for government crop insurance programs confronted by the problem of adverse selection. While the Federal Crop Insurance Corporation (FCIC) offers multiple insurance programs, including the Group Risk Plan (GRP), what is needed is a more accurate method of estimating actuarially fair premium rates in order to eliminate adverse selection. The Empirical Bayes Nonparametric Kernel Density Estimator (EBNKDE) showed a substantial efficiency gain in estimating crop yield densities. The objective of this research was to apply EBNKDE empirically by means of a simulated game wherein I assumed the role of a private insurance company in order to test for profit gains from the greater efficiency and accuracy promised by EBNKDE. Employing EBNKDE as well as parametric and nonparametric methods, insurance premium rates for 97 Illinois counties for the years 1991 to 2010 were estimated using corn yield data from 1955 to 2010 taken from the National Agricultural Statistics Service (NASS). The results of this research revealed substantial efficiency gains from using EBNKDE as opposed to other estimators such as the Normal, Weibull, and standard Kernel Density Estimator (KDE). Still, further research using yield data for other crops and other states will provide greater insight into EBNKDE and its performance in other situations.
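An actuarially fair premium rate from an estimated yield density can be sketched as expected indemnity divided by liability. The conventions below (yield guarantee as a fraction of mean yield, a plain Gaussian KDE rather than EBNKDE, unit price) are illustrative assumptions, not the GRP contract's exact terms.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fair_premium_rate(yields, coverage=0.75, price=1.0, n_grid=2000):
    """Actuarially fair premium rate sketch: expected indemnity under a
    KDE-estimated yield density, divided by the insured liability."""
    kde = gaussian_kde(yields)
    guarantee = coverage * np.mean(yields)      # yield guarantee
    lo = np.min(yields) - 3 * np.std(yields)    # cover the left tail
    grid = np.linspace(lo, guarantee, n_grid)
    shortfall = guarantee - grid                # indemnity per unit price
    density = kde(grid)
    dx = grid[1] - grid[0]
    expected_indemnity = price * np.sum(shortfall * density) * dx
    liability = price * guarantee
    return expected_indemnity / liability       # the premium rate
```

A more accurate density in the left tail directly changes the expected indemnity, which is why estimator choice matters for adverse selection.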
7

Multivariate Analysis of Diverse Data for Improved Geostatistical Reservoir Modeling

Hong, Sahyun Unknown Date
No description available.
8

Bayesian kernel density estimation

Rademeyer, Estian January 2017 (has links)
This dissertation investigates the performance of two-class classification on credit scoring data sets with low default ratios. The standard two-class parametric Gaussian and naive Bayes (NB) classifiers, as well as the non-parametric Parzen classifiers, are extended, using Bayes' rule, to include either a class imbalance or a Bernoulli prior. This is done with the aim of addressing the low default probability problem. Furthermore, the performance of Parzen classification with Silverman and Minimum Leave-one-out Entropy (MLE) Gaussian kernel bandwidth estimation is also investigated. It is shown that the non-parametric Parzen classifiers yield superior classification power. However, it is desirable for these non-parametric classifiers to possess a predictive quantity such as the odds ratio found in logistic regression (LR). The dissertation therefore dedicates a section to, amongst other things, studying the paper entitled "Model-Free Objective Bayesian Prediction" (Bernardo 1999). Since this approach to Bayesian kernel density estimation is only developed for the univariate and the uncorrelated multivariate case, the section develops a theoretical multivariate approach to Bayesian kernel density estimation. This approach is theoretically capable of handling both correlated and uncorrelated features in data. This is done through the assumption of a multivariate Gaussian kernel function and the use of an inverse Wishart prior. / Dissertation (MSc)--University of Pretoria, 2017. / The financial assistance of the National Research Foundation (NRF) towards this research is hereby acknowledged. Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF. / Statistics / MSc / Unrestricted
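A two-class Parzen classifier with Silverman bandwidths, of the kind studied above, can be sketched with SciPy. The MLE bandwidth and the class-imbalance/Bernoulli priors are not shown; `prior1` here is a simple class prior, an illustrative stand-in.

```python
import numpy as np
from scipy.stats import gaussian_kde

def parzen_classify(X0, X1, X_test, prior1=0.5):
    """Two-class Parzen classifier sketch: estimate each class-conditional
    density with a Gaussian KDE (Silverman bandwidth), then assign each
    test point to the class with the larger posterior probability."""
    kde0 = gaussian_kde(X0.T, bw_method="silverman")  # class 0 density
    kde1 = gaussian_kde(X1.T, bw_method="silverman")  # class 1 density
    p0 = (1.0 - prior1) * kde0(X_test.T)              # unnormalised posteriors
    p1 = prior1 * kde1(X_test.T)
    return (p1 > p0).astype(int)
```

For low-default credit data, class 1 (default) would have few observations, which is exactly where the prior and bandwidth choices investigated in the dissertation matter most.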
9

Development of a robbery prediction model for the City of Tshwane Metropolitan Municipality

Kemp, Nicolas James January 2020 (has links)
Crime is not spread evenly over space or time. This suggests that offenders favour certain areas and/or certain times. People base their daily activities on this notion and make decisions to avoid certain areas or feel the need to be more alert in some places rather than others. Even when making choices of where to stay, shop, and go to school, people take into account how safe they feel in those places. Crime in relation to space and time has been studied over several centuries; however, the era of the computer has brought new insight to this field. Indeed, computing technology, and in particular geographic information systems (GIS) and crime mapping software, has increased the interest in explaining criminal activities. It is the ability to combine the type, time and spatial occurrences of crime events that makes the use of these computing technologies attractive to crime analysts. This study predicts robbery crime events in the City of Tshwane Metropolitan Municipality. By combining GIS and statistical models, a method was developed to predict future robbery hotspots. More specifically, a robbery probability model was developed for the City of Tshwane Metropolitan Municipality based on robbery events that occurred during 2006, and this model was evaluated using actual robbery events that occurred in 2007. This novel model was based on the social disorganisation, routine activity, crime pattern and temporal constraint crime theories. The efficacy of the model was tested by comparing it to a traditional hotspot model. The robbery prediction model was developed using both built and social environmental features. Features in the built environment were divided into two main groups: facilities and commuter nodes.
The facilities used in the current study included cadastre parks, clothing stores, convenience stores, education facilities, fast food outlets, filling stations, office parks and blocks, general stores, restaurants, shopping centres and supermarkets. The key commuter nodes consisted of highway nodes, main road nodes and railway stations. The social environment was built using demographics obtained from the 2001 census data. The selection of features that may impact the occurrence of robbery was guided by spatial crime theories housed within the school of environmental criminology. Theories in this discipline argue that neighbourhoods experiencing social disorganisation are more prone to crime, while different facilities act as crime attractors or generators. Some theories also include a time element, suggesting that criminals are constrained by time, leaving little time to explore areas far from commuting nodes. The current study combines these theories using GIS and statistics. A programmatic approach in R was used to create kernel density estimations (hotspots), select relevant features, compute regression models with the caret and mlr packages, and predict crime hotspots. R was further used for the majority of spatial queries and analyses. The outcome consisted of various hotspot raster layers predicting future robbery occurrences. The accuracy of the model was tested using 2007 robbery events. Therefore, this current study not only provides a novel statistical predictive model but also showcases R's spatial capabilities. The current study found strong supporting evidence for the routine activity and crime pattern theories in that robberies tended to cluster around facilities within the City of Tshwane, South Africa. The findings also show a strong spatial association between robberies and neighbourhoods that experience high social disorganisation.
Support was also found for the time constraint theory in that a large portion of robberies occur in the immediate vicinity of highway nodes, main road nodes and railway stations. When tested against the traditional hotspot model, the robbery probability model was found to be slightly less effective in predicting future events. However, the current study showcases the effectiveness of the robbery probability model, which can be improved upon and used in future studies to determine the effect that future urban development will have on crime. / Dissertation (MSc)--University of Pretoria, 2020. / Geography, Geoinformatics and Meteorology / MSc / Unrestricted
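The KDE-hotspot step described above can be sketched as follows. This is a minimal stand-alone illustration in Python rather than the thesis's R workflow with caret/mlr; the grid size and top-density fraction are arbitrary choices.

```python
import numpy as np
from scipy.stats import gaussian_kde

def hotspot_grid(xy, grid_size=50, top_fraction=0.1):
    """Minimal hotspot sketch: kernel density estimate over event
    coordinates, evaluated on a regular grid; the densest cells
    (top_fraction of the grid) are flagged as hotspots."""
    kde = gaussian_kde(xy.T)                 # 2-D KDE over event points
    xs = np.linspace(xy[:, 0].min(), xy[:, 0].max(), grid_size)
    ys = np.linspace(xy[:, 1].min(), xy[:, 1].max(), grid_size)
    gx, gy = np.meshgrid(xs, ys)
    dens = kde(np.vstack([gx.ravel(), gy.ravel()]))
    dens = dens.reshape(grid_size, grid_size)
    cut = np.quantile(dens, 1.0 - top_fraction)
    return dens, dens >= cut                 # density surface, hotspot mask
```

A prediction model of the kind the thesis builds would then regress future events on this surface plus the built- and social-environment covariates.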
10

A Novel Data-based Stochastic Distribution Control for Non-Gaussian Stochastic Systems

Zhang, Qichun, Wang, H. 06 April 2021 (has links)
This note presents a novel data-based approach to investigate the non-Gaussian stochastic distribution control problem. As motivation, the existing methods are summarised with regard to their drawbacks, for example the need to train neural network weights for an unknown stochastic distribution. To overcome these disadvantages, a new transformation for the dynamic probability density function is given by kernel density estimation using interpolation. Based upon this transformation, a representative model has been developed and the stochastic distribution control problem has been transformed into an optimisation problem. Then, data-based direct optimisation and identification-based indirect optimisation have been proposed. In addition, the convergence of the presented algorithms is analysed and the effectiveness of these algorithms has been evaluated by numerical examples. In summary, the contributions of this note are as follows: 1) a new data-based probability density function transformation is given; 2) the optimisation algorithms are given based on the presented model; and 3) a new research framework is demonstrated as a potential extension of the existing stochastic distribution control results.
