441

Some non-standard statistical dependence problems

Bere, Alphonce January 2016 (has links)
Philosophiae Doctor - PhD / The major result of this thesis is the development of a framework for the application of pair-mixtures of copulas to model asymmetric dependencies in bivariate data. The main motivation is the inadequacy of mixtures of bivariate Gaussian models that are commonly fitted to data. Mixtures of rotated single-parameter Archimedean and Gaussian copulas are fitted to real data sets. The method of maximum likelihood is used for parameter estimation. Goodness-of-fit tests performed on the models giving the highest log-likelihood values show that the models fit the data well. We use mixtures of univariate Gaussian models and mixtures of regression models to investigate the existence of bimodality in the distribution of the widths of autocorrelation functions in a sample of 119 gamma-ray bursts. Contrary to previous findings, our results do not reveal any evidence of bimodality. We extend a study by Genest et al. (2012) of the power and significance levels of tests of copula symmetry to two copula models which have not been considered previously. Our results confirm that for small sample sizes, these tests fail to maintain their 5% significance level and that the Cramér-von Mises-type statistics are the most powerful.
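As one hedged illustration of the pair-mixture idea (not the thesis's exact models), the sketch below fits a two-component mixture of a bivariate Gaussian copula and a 180°-rotated (survival) Clayton copula by maximum likelihood; the pseudo-observations u, v and the starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def gaussian_copula_pdf(u, v, rho):
    # bivariate Gaussian copula density at (u, v)
    x, y = norm.ppf(u), norm.ppf(v)
    return np.exp((2 * rho * x * y - rho**2 * (x**2 + y**2)) /
                  (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)

def clayton_copula_pdf(u, v, theta):
    # single-parameter Clayton copula density, theta > 0
    return ((1 + theta) * (u * v) ** (-theta - 1) *
            (u ** -theta + v ** -theta - 1) ** (-2 - 1 / theta))

def survival_clayton_pdf(u, v, theta):
    # 180-degree rotated (survival) Clayton copula density
    return clayton_copula_pdf(1 - u, 1 - v, theta)

def neg_log_lik(params, u, v):
    rho, theta, w = params
    mix = w * gaussian_copula_pdf(u, v, rho) + (1 - w) * survival_clayton_pdf(u, v, theta)
    return -np.sum(np.log(mix + 1e-300))

def fit_pair_mixture(u, v):
    """Maximum-likelihood fit of a Gaussian / rotated-Clayton pair mixture
    to pseudo-observations (u, v) in (0, 1)^2."""
    res = minimize(neg_log_lik, x0=[0.3, 1.0, 0.5], args=(u, v),
                   bounds=[(-0.99, 0.99), (0.05, 20.0), (0.01, 0.99)])
    return res.x, -res.fun   # fitted (rho, theta, w) and maximised log-likelihood

# hypothetical usage with pseudo-observations (ranks / (n + 1)) of a bivariate sample:
# (rho_hat, theta_hat, w_hat), loglik = fit_pair_mixture(u_obs, v_obs)
```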
442

Improved tree species discrimination at leaf level with hyperspectral data combining binary classifiers

Dastile, Xolani Collen January 2011 (has links)
The purpose of the present thesis is to show that hyperspectral data can be used for discrimination between different tree species. The data set used in this study contains the hyperspectral measurements of leaves of seven savannah tree species. The data is high-dimensional and shows large within-class variability combined with small between-class variability, which makes discrimination between the classes challenging. We employ two classification methods: k-nearest neighbour and feed-forward neural networks. For both methods, direct 7-class prediction results in high misclassification rates. However, binary classification works better. We construct binary classifiers for all possible binary classification problems and combine them with Error Correcting Output Codes. In particular, we show that the use of 1-nearest neighbour binary classifiers results in no improvement compared to a direct 1-nearest neighbour 7-class predictor. In contrast to this negative result, the use of neural network binary classifiers improves accuracy by 10% compared to a direct neural network 7-class predictor, and error rates become acceptable. This can be further improved by choosing only suitable binary classifiers for combination.
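As a hedged sketch of the combining scheme described above (exhaustive error-correcting output codes over all binary dichotomies of the seven classes, with scikit-learn base learners), the following illustration can be read alongside the abstract; the feature matrix X, labels y, and network size are hypothetical, and the thesis's final step of selecting only suitable binary classifiers is not reproduced.

```python
import numpy as np
from itertools import product
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

def exhaustive_code_matrix(n_classes):
    # all 2^(k-1) - 1 non-trivial class dichotomies for k classes
    cols = []
    for bits in product([0, 1], repeat=n_classes - 1):
        col = np.array((1,) + bits)      # fix class 0 to '1' to avoid mirrored duplicates
        if col.min() == 0:               # drop the trivial all-ones column
            cols.append(col)
    return np.array(cols).T              # shape: (n_classes, n_binary_problems)

def fit_ecoc(X, y, make_estimator):
    classes = np.unique(y)
    M = exhaustive_code_matrix(len(classes))
    class_idx = np.searchsorted(classes, y)
    # one binary classifier per dichotomy (column of the code matrix)
    models = [make_estimator().fit(X, M[class_idx, j]) for j in range(M.shape[1])]
    return classes, M, models

def predict_ecoc(X, classes, M, models):
    # Hamming decoding: pick the class whose code word is closest to the binary predictions
    preds = np.column_stack([m.predict(X) for m in models])
    dists = (preds[:, None, :] != M[None, :, :]).sum(axis=2)
    return classes[dists.argmin(axis=1)]

# hypothetical usage for the 7-class leaf-spectra problem:
# classes, M, models = fit_ecoc(X_train, y_train,
#                               lambda: MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000))
# y_hat = predict_ecoc(X_test, classes, M, models)
# 1-NN base learners: lambda: KNeighborsClassifier(n_neighbors=1)
```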
443

Pricing options under stochastic volatility

Venter, Rudolf Gerrit 05 September 2005 (has links)
Please read the abstract in the section 00front of this document / Dissertation (MSc (Mathematics of Finance))--University of Pretoria, 2006. / Mathematics and Applied Mathematics / unrestricted
444

Causal Inference : controlling for bias in observational studies using propensity score methods

Msibi, Mxolisi January 2020 (has links)
Adjusting for baseline pre-intervention characteristics between treatment groups, through the use of propensity score matching methods, is an important step that enables researchers to draw causal inferences with confidence. This is critical largely because practical treatment allocation is non-randomized in nature, with inevitable inherent biases, and therefore requires such adjustments. Propensity score matching methods are the available tools for controlling such intrinsic biases in causal studies that lack the benefits of randomization (Lane, To, Kyna, & Robin, 2012). Certain assumptions need to be verified or met before one can estimate causal effects via propensity score matching under the Rubin causal model (Holland, 1986), the main ones being conditional independence (unconfoundedness) and common support (positivity). In particular, this dissertation elaborates the application of these matching methods in the 'strong ignorability' case (Rosenbaum & Rubin, 1983), i.e. when both the overlap and unconfoundedness properties hold. We move from explaining different experimental designs and how the treatment effect is estimated to a practical example, based on two cohorts of introductory statistics students enrolled before and after a clicker intervention at a public South African university, and the causal conclusions drawn from it. Keywords: treatment, conditional independence, propensity score, counterfactual, confounder, common support / Dissertation (MSc)--University of Pretoria, 2020. / Statistics / MSc / Unrestricted
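As a hedged, generic sketch of the propensity score matching workflow described above (not the dissertation's exact analysis), the example below estimates propensity scores with a logistic regression and computes the average treatment effect on the treated via 1:1 nearest-neighbour matching with replacement; the covariate matrix X, treatment indicator treat, and outcome y are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def att_via_psm(X, treat, y):
    """ATT via 1:1 nearest-neighbour matching (with replacement) on the propensity score."""
    treat, y = np.asarray(treat), np.asarray(y)
    # 1. estimate e(x) = P(treat = 1 | X) with a logistic regression
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    treated = np.where(treat == 1)[0]
    control = np.where(treat == 0)[0]
    # 2. match each treated unit to the control unit with the closest score
    nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    matched = control[idx.ravel()]
    # 3. ATT = mean outcome difference over the matched pairs
    return np.mean(y[treated] - y[matched]), ps

# hypothetical usage:
# att, scores = att_via_psm(X_covariates, treatment_indicator, exam_scores)
```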
445

The rank analysis of triple comparisons

Pendergrass, Robert Nixon 12 March 2013 (has links)
General extensions of the probability model for paired comparisons, which was developed by R. A. Bradley and M. E. Terry, are considered. Four generalizations to triple comparisons are discussed. One of these models is used to develop methods of analysis of data obtained from the ranks of items compared in groups of size three. / Ph. D.
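For context (and as an illustration only, since the abstract does not specify which of the four generalizations is analysed), the Bradley-Terry paired-comparison model and one natural sequential-choice extension to triples can be written with worth parameters \pi_i > 0 as follows.

```latex
% Bradley-Terry model for paired comparisons:
P(i \text{ beats } j) = \frac{\pi_i}{\pi_i + \pi_j}.

% One natural (Plackett-Luce-type) extension to a triple \{i, j, k\}:
P(i \succ j \succ k) = \frac{\pi_i}{\pi_i + \pi_j + \pi_k} \cdot \frac{\pi_j}{\pi_j + \pi_k},

% so the likelihood of the observed rankings of triples is maximised in the \pi_i
% exactly as in the paired-comparison case (subject to a normalisation such as
% \sum_i \pi_i = 1).
```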
446

Transformation-based approaches for determining the distribution of software life-cycle variables

Akhter, Farzana 01 July 2000 (has links)
No description available.
447

Using the Haar-Fisz wavelet transform to uncover regions of constant light intensity in Saturn's rings

Paulson, Courtney L. 01 January 2010 (has links)
Saturn's ring system is actually composed of a multitude of separate rings, yet each of these rings has areas with more or less constant structural properties which are hard to uncover by observation alone. By measuring stellar occultations, data are collected in the form of Poisson counts (over 6 million observations) which need to be denoised in order to find these areas with constant properties. At present, these areas are found by visual inspection or by examining moving averages, which is hard to do when the amount of data is huge. It is also impossible to do this using the changepoint analysis-based method of Scargle (1998, 2005). For the purpose of finding areas of constant Poisson intensity, a state-of-the-art Haar-Fisz algorithm for Poisson intensity estimation is employed. This algorithm is based on a wavelet-like transformation of the original data and subsequent denoising, a methodology originally developed by Nason and Fryzlewicz (2005). We apply the Haar-Fisz transform to the original data, which normalizes the noise level, then apply the Haar wavelet transform and threshold the wavelet coefficients. Finally, we apply the inverse Haar-Fisz transform to recover the Poisson intensity function. We implement the algorithm using the R programming language. The program was first tested on synthetic data and then applied to the original Saturn ring observations, resulting in a quick, easy method to resolve the data into discrete blocks with equal mean intensities.
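As a hedged sketch of the variance-stabilising step described above (following the Fryzlewicz-Nason construction, not the thesis's R implementation), the forward and inverse Haar-Fisz transforms for a vector of Poisson counts of length 2^J can be written as below; Gaussian wavelet thresholding of the transformed data would sit between the two calls.

```python
import numpy as np

def haar_fisz(x):
    """Forward Haar-Fisz transform of a length-2^J vector of Poisson counts."""
    s = np.asarray(x, dtype=float).copy()
    details = []
    while len(s) > 1:
        sm = (s[0::2] + s[1::2]) / 2.0          # Haar smooth coefficients
        d = (s[0::2] - s[1::2]) / 2.0           # Haar detail coefficients
        details.append(np.where(sm > 0, d / np.sqrt(sm), 0.0))  # Fisz stabilisation
        s = sm
    u = s
    for f in reversed(details):                 # rebuild with stabilised details
        rec = np.empty(2 * len(u))
        rec[0::2], rec[1::2] = u + f, u - f
        u = rec
    return u                                    # approximately Gaussian, near-constant variance

def inverse_haar_fisz(u):
    """Inverse transform: maps (denoised) Gaussian-domain data back to intensities."""
    s = np.asarray(u, dtype=float).copy()
    fs = []
    while len(s) > 1:
        fs.append((s[0::2] - s[1::2]) / 2.0)
        s = (s[0::2] + s[1::2]) / 2.0
    x = s
    for f in reversed(fs):
        d = f * np.sqrt(np.maximum(x, 0.0))     # undo the Fisz division
        rec = np.empty(2 * len(x))
        rec[0::2], rec[1::2] = x + d, x - d
        x = rec
    return x

# hypothetical pipeline for a block of 2^J photon counts, where haar_threshold stands in
# for any standard Gaussian wavelet shrinkage routine:
# u = haar_fisz(counts); u_denoised = haar_threshold(u)
# intensity_hat = inverse_haar_fisz(u_denoised)
```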
448

Aspects of cash-flow valuation

Armerin, Fredrik January 2004 (has links)
This thesis consists of five papers. In the first two papers we consider a general approach to cash flow valuation, focusing on dynamic properties of the value of a stream of cash flows. The third paper discusses immunization theory, where old results are shown to hold in general deterministic models but often fail to be true in stochastic models. In the fourth paper we comment on the connection between arbitrage opportunities and an immunized position. Finally, in the last paper we study coherent and convex measures of risk applied to portfolio optimization and insurance.
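For reference, the standard axioms behind the risk measures mentioned in the last paper are summarised below (a hedged restatement of textbook definitions, not of that paper's results), for a map \rho acting on financial positions X.

```latex
% Coherent risk measure (Artzner et al.): \rho satisfies
\text{monotonicity: } X \le Y \;\Rightarrow\; \rho(X) \ge \rho(Y), \qquad
\text{translation invariance: } \rho(X + m) = \rho(X) - m \text{ for cash amounts } m,
\text{positive homogeneity: } \rho(\lambda X) = \lambda\,\rho(X), \ \lambda \ge 0, \qquad
\text{subadditivity: } \rho(X + Y) \le \rho(X) + \rho(Y).

% A convex risk measure keeps the first two axioms and replaces the last two by
\rho\bigl(\lambda X + (1-\lambda)Y\bigr) \le \lambda\,\rho(X) + (1-\lambda)\,\rho(Y),
\quad \lambda \in [0, 1].
```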
449

Stochastic Modeling and Statistical Inference of Geological Fault Populations and Patterns

Borgos, Hilde Grude January 2000 (has links)
The focus of this work is on faults, and the main issue is statistical analysis and stochastic modeling of faults and fault patterns in petroleum reservoirs. The thesis consists of Parts I-V and Appendices A-C. The units can be read independently. Part III is written for a geophysical audience, and the topic of this part is fault and fracture size-frequency distributions. The remaining parts are written for a statistical audience, but can also be read by people with an interest in quantitative geology. The topic of Parts I and II is statistical model choice for fault size distributions, with a sampling algorithm for estimating Bayes factors. Part IV describes work on spatial modeling of fault geometry, and Part V is a short note on line partitioning. Parts I, II and III constitute the main part of the thesis. The appendices are conference abstracts and papers based on Parts I and IV. / Paper III: reprinted with kind permission of the American Geophysical Union. An edited version of this paper was published by AGU. Copyright [2000] American Geophysical Union
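As a hedged, generic illustration of sampling-based Bayes factor estimation for model choice between two candidate fault-size distributions (not the algorithm developed in Parts I and II), the sketch below estimates each marginal likelihood by simple Monte Carlo over the prior; the exponential and Pareto candidates, the priors, the cutoff, and the data are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X_MIN = 1.0  # hypothetical resolution cutoff for observable fault sizes

def log_marginal_mc(loglik, prior_sampler, data, n_draws=50_000):
    """Simple Monte Carlo estimate of log m(data) = log E_prior[ L(data | theta) ]."""
    thetas = prior_sampler(n_draws)
    lls = np.array([loglik(data, th) for th in thetas])
    m = lls.max()
    return m + np.log(np.mean(np.exp(lls - m)))   # log-mean-exp for stability

# candidate 1: exponential fault sizes, rate with a Gamma(2, scale=0.5) prior (hypothetical)
ll_exp = lambda x, lam: np.sum(stats.expon.logpdf(x, scale=1.0 / lam))
prior_exp = lambda n: rng.gamma(2.0, 0.5, size=n)

# candidate 2: Pareto (power-law) fault sizes above the cutoff, shape with the same prior
ll_par = lambda x, a: np.sum(stats.pareto.logpdf(x, b=a, scale=X_MIN))
prior_par = lambda n: rng.gamma(2.0, 0.5, size=n)

# hypothetical usage with a vector `sizes` of observed fault sizes (all >= X_MIN):
# log_bf = log_marginal_mc(ll_exp, prior_exp, sizes) - log_marginal_mc(ll_par, prior_par, sizes)
# log_bf > 0 favours the exponential model, log_bf < 0 the power-law model.
```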
