41

New Non-Parametric Methods for Income Distributions

Luo, Shan 26 April 2013 (has links)
Low income proportion (LIP), Lorenz curve (LC) and generalized Lorenz curve (GLC) are important indexes for describing the inequality of income distributions, and they have been widely used by governments around the world for measuring social stability. Accurate estimation of these indexes is essential for quantifying a country's economic condition. Established statistical inference methods for these indexes are based on an asymptotic normal distribution, which may perform poorly when real income data are skewed or contain outliers. Nonparametric methods, by contrast, allow researchers to avoid imposing a parametric distributional assumption on the data. For example, existing research proposes plug-in empirical likelihood (EL)-based inference for LIP, LC and GLC. However, this method is computationally intensive and mathematically complex because of the nonlinear constraints in the underlying optimization problem. Moreover, the limiting distribution of the log empirical likelihood ratio is a scaled chi-square distribution, and estimation of the scale constant affects the overall performance of the plug-in EL method. To improve on the existing inference methods, this dissertation first proposes kernel estimators for LIP, LC and GLC, with bandwidths chosen by cross-validation. The kernel estimators are shown to be asymptotically normal. Smoothed jackknife empirical likelihood (SJEL) statistics for LIP, LC and GLC are then defined, and the log jackknife empirical likelihood ratio statistics are shown to follow a standard chi-square distribution. Extensive simulation studies evaluate the kernel estimators in terms of mean squared error and asymptotic relative efficiency. Next, SJEL-based confidence intervals and smoothed-bootstrap-based confidence intervals are proposed; their coverage probabilities and interval lengths are computed and compared with those of the normal-approximation intervals. The proposed kernel estimators are found to be competitive, and the proposed inference methods show better finite-sample performance. All inference methods are illustrated through real examples.
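
The abstract does not reproduce the kernel estimators themselves. As a rough sketch of the general idea (an assumption about their form, not the dissertation's exact estimator), the code below smooths the empirical CDF with a Gaussian kernel and evaluates it at half the median income, one common definition of the low income proportion; a rule-of-thumb bandwidth stands in for the cross-validated choice.

```python
import numpy as np
from scipy.stats import norm

def smoothed_cdf(data, t, h):
    """Kernel-smoothed empirical CDF: average of Gaussian CDFs centered at the data."""
    return norm.cdf((t - data) / h).mean()

def low_income_proportion(incomes, h=None, alpha=0.5):
    """Smoothed LIP estimate: fraction of the population earning below
    alpha times the median income (alpha = 0.5 is a common convention)."""
    incomes = np.asarray(incomes, dtype=float)
    n = incomes.size
    if h is None:
        h = incomes.std(ddof=1) * n ** (-1 / 5)  # rule-of-thumb bandwidth, not the CV choice
    poverty_line = alpha * np.median(incomes)
    return smoothed_cdf(incomes, poverty_line, h)

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=10, sigma=0.8, size=500)  # skewed, income-like data
print(low_income_proportion(sample))
```
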
42

Jackknife Empirical Likelihood Method and Its Applications

Yang, Hanfang 01 August 2012 (has links)
In this dissertation, we investigate jackknife empirical likelihood methods motivated by recent research in statistics and related fields. The computational burden of empirical likelihood can be significantly reduced by jackknife empirical likelihood methods without sacrificing accuracy or stability. We demonstrate that the proposed jackknife empirical likelihood methods can handle several challenging open problems, with elegant asymptotic properties and accurate finite-sample simulation results. These problems include ROC curves with missing data, the difference of two ROC curves for correlated two-dimensional data, a novel inference for the partial AUC, and the difference of two quantiles with one or two samples. In addition, empirical likelihood methodology is successfully applied to the linear transformation model using adjusted estimating equations. Comprehensive simulation studies of coverage probabilities and average lengths for these topics demonstrate that the proposed jackknife empirical likelihood methods perform well in finite samples under various settings. Moreover, several real-data problems are studied to support our conclusions. Finally, we provide an extensive discussion of promising directions for future study based on our jackknife EL procedures.
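
The abstracts in this cluster all rest on the same jackknife empirical likelihood recipe. Stated as background (following the general construction of Jing, Yuan and Zhou, not necessarily this dissertation's exact formulation), a sketch in LaTeX:

```latex
% Jackknife pseudo-values turn an estimator \hat{\theta}_n into an
% approximately i.i.d. sample, so standard EL for a mean applies.
\[
  \hat{V}_i = n\,\hat{\theta}_n - (n-1)\,\hat{\theta}_{n-1}^{(-i)},
  \qquad i = 1,\dots,n,
\]
\[
  R(\theta) = \max\Big\{ \prod_{i=1}^n n p_i \;:\;
  p_i \ge 0,\ \sum_{i=1}^n p_i = 1,\ \sum_{i=1}^n p_i \hat{V}_i = \theta \Big\},
\]
\[
  -2 \log R(\theta_0) \xrightarrow{\;d\;} \chi^2_1
  \quad \text{under mild regularity conditions,}
\]
% which gives chi-square-calibrated confidence regions without
% estimating any scale constant.
```
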
43

Jackknife Empirical Likelihood for the Accelerated Failure Time Model with Censored Data

Bouadoumou, Maxime K 15 July 2011 (has links)
Kendall and Gehan estimating functions are used to estimate the regression parameter in the accelerated failure time (AFT) model with censored observations. The AFT model is a preferred survival analysis method because it maintains a consistent association between the covariate and the survival time. The jackknife empirical likelihood method is used because it overcomes computational difficulty by circumventing the construction of the nonlinear constraint: jackknife empirical likelihood turns the statistic of interest into a sample mean based on jackknife pseudo-values. A U-statistic approach is used to construct the confidence intervals for the regression parameter. We conduct a simulation study to compare the Wald-type procedure, the empirical likelihood, and the jackknife empirical likelihood in terms of coverage probability and average length of confidence intervals. The jackknife empirical likelihood method performs better and overcomes the under-coverage problem of the Wald-type method. A real data set is also used to illustrate the proposed methods.
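
The pseudo-value step described above can be made concrete. The sketch below is a generic illustration using the Gini mean difference as a stand-in U-statistic, since the Gehan estimating function itself is not reproduced in the abstract:

```python
import numpy as np

def gini_mean_difference(x):
    """U-statistic of degree 2: average absolute difference over ordered pairs."""
    x = np.asarray(x, dtype=float)
    return np.abs(x[:, None] - x[None, :]).sum() / (len(x) * (len(x) - 1))

def jackknife_pseudo_values(x, stat):
    """V_i = n*T(x) - (n-1)*T(x without i); the statistic becomes the sample
    mean of the V_i, so empirical likelihood for a mean can be applied."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    full = stat(x)
    leave_one_out = np.array([stat(np.delete(x, i)) for i in range(n)])
    return n * full - (n - 1) * leave_one_out

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=100)
v = jackknife_pseudo_values(x, gini_mean_difference)
print(v.mean(), gini_mean_difference(x))  # jackknife estimate vs plug-in estimate
```
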
44

Estimation of Pareto distribution functions from samples contaminated by measurement errors

Lwando Orbet Kondlo January 2010 (has links)
The intention is to draw more specific connections between certain deconvolution methods and also to demonstrate the application of the statistical theory of estimation in the presence of measurement error. A parametric methodology for deconvolution when the underlying distribution is of the Pareto form is developed. Maximum likelihood estimation (MLE) of the parameters of the convolved distributions is considered. Standard errors of the estimated parameters are calculated from the inverse Fisher's information matrix and a jackknife method. Probability-probability (P-P) plots and Kolmogorov-Smirnov (K-S) goodness-of-fit tests are used to evaluate the fit of the posited distribution. A bootstrapping method is used to calculate the critical values of the K-S test statistic, which are not available.
45

Estimation of discretely sampled continuous diffusion processes with application to short-term interest rate models

Van Appel, Vaughan 13 October 2014 (has links)
M.Sc. (Mathematical Statistics) / Stochastic differential equations (SDEs) appear throughout modern finance. In this dissertation we use SDEs to model the short-term interest rate, where the explanatory power of a particular short-term interest rate model depends largely on how well its SDE describes the real data. The challenge is that in most cases the transition density functions of these models are unknown, so reliable and accurate alternative estimation techniques are needed. This dissertation discusses estimation techniques for discretely sampled continuous diffusion processes that do not require the true transition density function to be known. The reader is introduced to the following techniques: (i) continuous-time maximum likelihood estimation; (ii) discrete-time maximum likelihood estimation; and (iii) estimating functions. We show through a Monte Carlo simulation study that the parameter estimates obtained from these techniques provide a good approximation to the estimates obtained from the true transition density. We also show that the bias in the mean-reversion parameter can be reduced by implementing the jackknife bias-reduction technique. Furthermore, the analysis carried out on South African interest rate data strongly indicates that single-factor models do not explain the variability in the short-term interest rate, which may point to distinct jumps in the South African interest rate market. We therefore leave the reader with the notion of incorporating jumps into an SDE framework.
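
As a concrete but simplified illustration of discrete-time maximum likelihood and the jackknife bias reduction the abstract mentions, the sketch below fits a Vasicek model dr = kappa(theta - r)dt + sigma dW through its exact AR(1) representation and applies a two-block jackknife (in the spirit of Phillips and Yu) to the mean-reversion estimate. The model choice and block count are illustrative assumptions, not the dissertation's exact setup.

```python
import numpy as np

def fit_kappa(r, dt):
    """Exact-discretization MLE of the Vasicek mean-reversion speed: the model
    implies an AR(1), r[t+1] = c + b*r[t] + noise, with b = exp(-kappa*dt)."""
    x, y = r[:-1], r[1:]
    b = np.polyfit(x, y, 1)[0]  # OLS slope of the AR(1) regression
    return -np.log(b) / dt

def jackknife_kappa(r, dt, m=2):
    """Block jackknife: combine the full-sample estimate with estimates
    from m consecutive subsamples to reduce the well-known upward bias."""
    full = fit_kappa(r, dt)
    blocks = np.array_split(r, m)
    sub_mean = np.mean([fit_kappa(b, dt) for b in blocks])
    return m / (m - 1) * full - sub_mean / (m - 1)

# Simulate a Vasicek path (kappa=0.5, theta=0.06, sigma=0.02) and compare.
rng = np.random.default_rng(2)
kappa, theta, sigma, dt, n = 0.5, 0.06, 0.02, 1 / 252, 2520
r = np.empty(n); r[0] = theta
for t in range(n - 1):
    r[t + 1] = r[t] + kappa * (theta - r[t]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
print(fit_kappa(r, dt), jackknife_kappa(r, dt))  # raw vs bias-reduced estimate
```
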
46

Estimation of Pareto distribution functions from samples contaminated by measurement errors

Kondlo, Lwando Orbet January 2010 (has links)
Magister Scientiae - MSc / The intention is to draw more specific connections between certain deconvolution methods and also to demonstrate the application of the statistical theory of estimation in the presence of measurement error. A parametric methodology for deconvolution when the underlying distribution is of the Pareto form is developed. Maximum likelihood estimation (MLE) of the parameters of the convolved distributions is considered. Standard errors of the estimated parameters are calculated from the inverse Fisher's information matrix and a jackknife method. Probability-probability (P-P) plots and Kolmogorov-Smirnov (K-S) goodness-of-fit tests are used to evaluate the fit of the posited distribution. A bootstrapping method is used to calculate the critical values of the K-S test statistic, which are not available. / South Africa
47

Estimation of Pareto Distribution Functions from Samples Contaminated by Measurement Errors

Kondlo, Lwando Orbet January 2010 (has links)
Magister Scientiae - MSc / Estimation of population distributions from samples that are contaminated by measurement errors is a common problem. This study considers the problem of estimating the population distribution of independent random variables Xi from error-contaminated samples Yi (i = 1, ..., n) such that Yi = Xi + εi, where ε is the measurement error, which is assumed independent of X. The measurement error ε is also assumed to be normally distributed. Since the observed distribution function is a convolution of the error distribution with the true underlying distribution, estimation of the latter is often referred to as a deconvolution problem. A thorough study of the relevant deconvolution literature in statistics is reported. We also deal with the specific case when X is assumed to follow a truncated Pareto form. If observations are subject to Gaussian errors, then the observed Y is distributed as the convolution of the finite-support Pareto and Gaussian error distributions. The convolved probability density function (PDF) and cumulative distribution function (CDF) of the finite-support Pareto and Gaussian distributions are derived. The intention is to draw more specific connections between certain deconvolution methods and also to demonstrate the application of the statistical theory of estimation in the presence of measurement error. A parametric methodology for deconvolution when the underlying distribution is of the Pareto form is developed. Maximum likelihood estimation (MLE) of the parameters of the convolved distributions is considered. Standard errors of the estimated parameters are calculated from the inverse Fisher's information matrix and a jackknife method. Probability-probability (P-P) plots and Kolmogorov-Smirnov (K-S) goodness-of-fit tests are used to evaluate the fit of the posited distribution. A bootstrapping method is used to calculate the critical values of the K-S test statistic, which are not available. Simulated data are used to validate the methodology. A real-life application of the methodology is illustrated by fitting convolved distributions to astronomical data.
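
The closed-form convolved density derived in the thesis is not reproduced here. As a hedged numerical sketch of the same idea, the code below approximates the convolution of a finite-support Pareto density with a Gaussian error density by quadrature and maximizes the resulting log-likelihood; the parameter names, the known support, and the optimizer are illustrative assumptions.

```python
import numpy as np
from scipy import optimize, stats

def pareto_pdf(x, alpha, L, U):
    """Truncated (finite-support) Pareto density on [L, U]."""
    c = alpha * L**alpha / (1.0 - (L / U) ** alpha)  # normalizing constant
    return np.where((x >= L) & (x <= U), c * np.asarray(x, float) ** (-alpha - 1), 0.0)

def convolved_pdf(y, alpha, L, U, sigma, m=400):
    """Density of Y = X + e, X ~ truncated Pareto, e ~ N(0, sigma^2),
    approximated by a Riemann sum over the Pareto support."""
    grid = np.linspace(L, U, m)
    dx = grid[1] - grid[0]
    fx = pareto_pdf(grid, alpha, L, U)
    return (fx * stats.norm.pdf(np.atleast_1d(y)[:, None] - grid, scale=sigma)).sum(axis=1) * dx

def neg_log_lik(params, y, L, U):
    alpha, sigma = params
    if alpha <= 0 or sigma <= 0:
        return np.inf
    dens = convolved_pdf(y, alpha, L, U, sigma)
    return -np.sum(np.log(np.maximum(dens, 1e-300)))

# Simulate contaminated data and fit (alpha, sigma); the support [L, U] is taken as known.
rng = np.random.default_rng(3)
L, U, alpha, sigma = 1.0, 10.0, 2.0, 0.3
u = rng.uniform(size=300)
x = (L**-alpha - u * (L**-alpha - U**-alpha)) ** (-1 / alpha)  # inverse-CDF Pareto sample
y = x + rng.normal(scale=sigma, size=x.size)
res = optimize.minimize(neg_log_lik, x0=[1.5, 0.5], args=(y, L, U), method="Nelder-Mead")
print(res.x)  # MLE of (alpha, sigma)
```
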
48

Jackknife stability of articulated tractor semitrailer vehicles with high-output brakes and jackknife detection on low coefficient surfaces

Dunn, Ashley L. 14 October 2003 (has links)
No description available.
49

物種個數的估計與寫作風格的探討 / Estimation of the Number of Species and a Study of Writing Style

李蕙帆 Unknown Date (has links)
In ecological and biological research, the number of species is an important measure of species diversity, and the count and distribution of species are closely related to diversity. The notion of a "species" is not restricted to living organisms: keywords submitted to a search engine, the categories in a library classification scheme, and international disease codes can all be treated as species. This thesis focuses on the comparison of writing styles. It studies the famous Chinese novel "The Dream of the Red Chamber", asking primarily whether the first 80 and the last 40 chapters were written by the same author, using estimation of the number of species as the criterion for comparing writing styles, with the martial arts novels of the famous writer Jin Yong serving as a control group to validate the analysis. In addition to the stochastic model of Efron and Thisted, the thesis considers block-sampling estimators of the population number of species: the jackknife, the bootstrap, and the estimator of Chao (1992). The study finds that estimates from Efron and Thisted's model tend to oscillate unstably and may fail to converge, while the bootstrap, jackknife, and Chao (1992) estimators tend to overestimate the population number of species. Using the concept of coverage probability, however, it is found that when a specific proportion of the population is sampled, the probability that the jackknife and Chao estimates cover the true number of species is very close to 1, so these estimators can provide reliable predictions for the number of species of a finite population when part of the population is observed.
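
The jackknife and Chao richness estimators named in the abstract belong to a family with standard closed forms. The sketch below implements two classical members, the first-order jackknife and the bias-corrected Chao1 estimator, from abundance counts; the exact Chao (1992) variant used in the thesis is not reproduced here, so this is background rather than the thesis's method.

```python
import numpy as np
from collections import Counter

def richness_estimates(tokens):
    """Classical species-richness estimators from a sample of tokens
    (e.g., words in a chapter). f1/f2 count species seen exactly once/twice."""
    abundance = Counter(tokens)
    s_obs = len(abundance)                 # species observed in the sample
    n = sum(abundance.values())            # sample size
    f1 = sum(1 for c in abundance.values() if c == 1)
    f2 = sum(1 for c in abundance.values() if c == 2)
    jackknife1 = s_obs + f1 * (n - 1) / n                 # first-order jackknife
    chao1 = s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))        # bias-corrected Chao1
    return {"observed": s_obs, "jackknife1": jackknife1, "chao1": chao1}

# Toy check: sample with replacement from a known 1000-species population.
rng = np.random.default_rng(4)
population = rng.zipf(1.5, size=20000) % 1000  # skewed species labels, like word frequencies
sample = rng.choice(population, size=2000)
print(richness_estimates(sample.tolist()))
```
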
50

Modelo de regressão para dados com censura intervalar e dados de sobrevivência grupados / Regression model for interval-censored data and grouped survival data

Hashimoto, Elizabeth Mie 04 February 2009 (has links)
In this work, a regression model for interval-censored data is proposed using the exponentiated Weibull distribution, whose main feature is a hazard function that can take different shapes (unimodal, bathtub-shaped, increasing and decreasing). An attractive feature of this regression model is its use for discriminating among models, since it contains as particular cases the exponential, Weibull and exponentiated exponential regression models, among others. A regression model for grouped survival data is also studied, with an approach based on discrete-time models and life tables; the regression structure, represented by a probability, is modeled through different link functions, namely the logit, complementary log-log, log-log and probit. In both studies, validation methods for the proposed statistical models are described and grounded in sensitivity analysis. To detect influential observations, diagnostic measures based on case deletion (global influence) and measures based on small perturbations of the data or of the model (local influence) were used. To assess goodness of fit and detect outliers, residual analyses were carried out for the proposed models. The results were applied to two real data sets.
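
The grouped-survival formulation in this abstract corresponds to a standard discrete-time hazard model. The sketch below, an illustration with made-up data rather than the thesis's model or data, expands grouped records into person-period form and fits a complementary log-log hazard by direct maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize

def expand(intervals, events, x, J):
    """Person-period expansion: one row per interval each subject enters;
    the response is 1 only in the interval where the event occurs."""
    X, y = [], []
    for k, d, xi in zip(intervals, events, x):
        for j in range(k):
            dummies = [1.0 if m == j else 0.0 for m in range(J)]  # per-interval baseline
            X.append(dummies + [xi])
            y.append(1.0 if (j == k - 1 and d) else 0.0)
    return np.array(X), np.array(y)

def neg_log_lik(beta, X, y):
    """Bernoulli likelihood with complementary log-log link:
    discrete hazard = 1 - exp(-exp(eta))."""
    eta = np.clip(X @ beta, -30, 30)
    p = np.clip(1.0 - np.exp(-np.exp(eta)), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Hypothetical grouped data: interval of event/censoring, event flag, one covariate.
intervals = np.array([1, 2, 3, 3, 2, 3, 1, 2, 3, 3])
events    = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
covariate = np.array([0.5, 1.2, 0.3, 2.1, 0.8, 1.9, 0.1, 0.7, 2.5, 0.4])
X, y = expand(intervals, events, covariate, J=3)
fit = minimize(neg_log_lik, x0=np.zeros(X.shape[1]), args=(X, y), method="BFGS")
print(fit.x)  # three baseline-hazard parameters and the covariate effect
```

Swapping the link (logit, log-log or probit) only changes the hazard expression inside `neg_log_lik`, which is the sense in which the thesis compares link functions.
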
