61. A Bootstrap Application in Adjusting Asymptotic Distribution for Interval-Censored Data
Chung, Yun-yuan, 20 June 2007
Comparisons of two or more failure time distributions based on interval-censored data can be tested with extensions of the log-rank test proposed by Sun (1996, 2001, 2004). Chang (2004) further verified that these test statistics are approximately chi-square with p-1 degrees of freedom after a constant-factor adjustment, where the factor is obtained from simulations. In this paper we take a different approach, estimating the adjustment factor for a given interval-censored data set by applying the bootstrap technique to the test statistics. Simulation results indicate that the bootstrap technique performs well on these test statistics, except for the one proposed in 1996. Using a chi-square goodness-of-fit test, we found that the distribution of Sun's 1996 statistic deviates significantly from any chi-square distribution.
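The abstract does not give the algorithm in detail, but the moment-matching idea behind such a bootstrap adjustment can be sketched as follows. Everything here is illustrative: the statistic is a toy surrogate, not Sun's log-rank extension, and the factor is estimated by matching the bootstrap mean of the statistic to the mean of the target chi-square distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_scale_factor(data, statistic, df, n_boot=2000, rng=rng):
    """Estimate c such that statistic / c is approximately chi-square(df).

    Matches the bootstrap mean of the statistic to the chi-square mean df
    (a simple moment-matching estimate of the adjustment factor).
    """
    n = len(data)
    boot_stats = np.empty(n_boot)
    for b in range(n_boot):
        resample = data[rng.integers(0, n, size=n)]  # sample with replacement
        boot_stats[b] = statistic(resample)
    return boot_stats.mean() / df

# Toy surrogate statistic: by construction it is 2 * chi-square(2) in
# distribution when the data are standard normal, so c should be near 2.
def toy_statistic(x):
    return 2.0 * np.sum(x[:2] ** 2)

data = rng.standard_normal(500)
c_hat = bootstrap_scale_factor(data, toy_statistic, df=2)
```

Dividing the observed statistic by `c_hat` then makes the usual chi-square critical values approximately applicable.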
62. Bootstrapping in a high dimensional but very low sample size problem
Song, Juhee, 16 August 2006
High Dimension, Low Sample Size (HDLSS) problems have received much attention
recently in many areas of science. Analysis of microarray experiments is one
such area. Numerous studies are on-going to investigate the behavior of genes by
measuring the abundance of mRNA (messenger ribonucleic acid), that is, gene expression.
HDLSS data investigated in this dissertation consist of a large number of data sets
each of which has only a few observations.
We assume a statistical model in which measurements from the same subject
have the same expected value and variance. All subjects have the same distribution
up to location and scale. Information from all subjects is shared in estimating this
common distribution.
Our interest is in testing the hypothesis that the mean of measurements from a
given subject is 0. Commonly used tests of this hypothesis, the t-test, sign test and
traditional bootstrapping, do not necessarily provide reliable results since there are
only a few observations for each data set.
We motivate a mixture model having C clusters and 3C parameters to overcome
the small sample size problem. Standardized data are pooled after assigning each
data set to one of the mixture components. To get reasonable initial parameter estimates
when density estimation methods are applied, we apply clustering methods
including agglomerative and K-means.
Bayes Information Criterion (BIC) and a new criterion, WMCV (Weighted Mean
of within Cluster Variance estimates), are used to choose an optimal number of clusters.
Density estimation methods including a maximum likelihood unimodal density
estimator and kernel density estimation are used to estimate the unknown density.
Once the density is estimated, a bootstrapping algorithm that selects samples from
the estimated density is used to approximate the distribution of test statistics. The
t-statistic and an empirical likelihood ratio statistic are used, since their distributions
are completely determined by the distribution common to all subjects. A method to
control the false discovery rate is used to perform simultaneous tests on all small data
sets.
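A smoothed bootstrap of this kind, drawing samples from a kernel density estimate of the pooled standardized data and computing the t-statistic on each draw, might be sketched as follows. The bandwidth rule and all names are illustrative assumptions, not the dissertation's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def smoothed_bootstrap_tstats(pooled, n_small, n_boot=4000, rng=rng):
    """Approximate the null distribution of the t-statistic for samples of
    size n_small by resampling from a Gaussian KDE of the pooled data
    (a smoothed bootstrap)."""
    pooled = np.asarray(pooled, float)
    pooled = pooled - pooled.mean()                 # impose the null: mean 0
    h = 1.06 * pooled.std(ddof=1) * len(pooled) ** -0.2  # Silverman's rule
    tstats = np.empty(n_boot)
    for b in range(n_boot):
        x = pooled[rng.integers(0, len(pooled), n_small)]
        x = x + h * rng.standard_normal(n_small)    # kernel smoothing noise
        tstats[b] = x.mean() / (x.std(ddof=1) / np.sqrt(n_small))
    return tstats

# Null reference distribution for tiny samples (here n = 4 per data set).
null_t = smoothed_bootstrap_tstats(rng.standard_normal(3000), n_small=4)

# Two-sided bootstrap p-value for one observed small data set.
obs = np.array([0.1, -0.2, 0.05, 0.15])
t_obs = obs.mean() / (obs.std(ddof=1) / np.sqrt(len(obs)))
p_value = np.mean(np.abs(null_t) >= abs(t_obs))
```

With only four observations the reference distribution is far heavier-tailed than the normal, which is exactly why the pooled estimate is needed.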
Simulated data sets and a set of cDNA (complementary deoxyribonucleic acid)
microarray experiment data are analyzed by the proposed methods.
63. Power Analysis of Bootstrap Methods for Testing Homogeneity of Variances with Small Sample
Shih, Chiang-Ming, 23 July 2008
Several classical tests for the homogeneity of variances are investigated. However, these statistics do not perform well with small sample sizes. In this article we discuss the use of the bootstrap technique for testing equality of variances with small samples. Two important features of the proposed resampling method are its flexibility and robustness. Both the α levels and the power of our proposed procedure are compared with those of the classical methods discussed here.
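As a rough illustration of the general idea (not the article's exact procedure), a bootstrap version of the Brown-Forsythe variant of Levene's statistic can be built by resampling from the pooled, centered data, which satisfies the null hypothesis of equal variances by construction:

```python
import numpy as np

rng = np.random.default_rng(2)

def levene_stat(groups):
    """Brown-Forsythe variant of Levene's statistic:
    a one-way ANOVA F computed on |x - group median|."""
    z = [np.abs(g - np.median(g)) for g in groups]
    k = len(z)
    n = sum(len(zi) for zi in z)
    grand = np.concatenate(z).mean()
    between = sum(len(zi) * (zi.mean() - grand) ** 2 for zi in z) / (k - 1)
    within = sum(((zi - zi.mean()) ** 2).sum() for zi in z) / (n - k)
    return between / within

def bootstrap_levene_pvalue(groups, n_boot=2000, rng=rng):
    """Bootstrap p-value: resample each group from the pooled, centered
    data, so that resampled groups have equal variances under the null."""
    observed = levene_stat(groups)
    pooled = np.concatenate([g - g.mean() for g in groups])  # remove location
    sizes = [len(g) for g in groups]
    count = 0
    for _ in range(n_boot):
        boot = [pooled[rng.integers(0, len(pooled), n)] for n in sizes]
        count += levene_stat(boot) >= observed
    return count / n_boot

# Small-sample illustration: equal vs. clearly unequal variances.
g1 = rng.standard_normal(15)
g2 = rng.standard_normal(15)
same_p = bootstrap_levene_pvalue([g1, g2])
diff_p = bootstrap_levene_pvalue([g1, 5.0 * g2])
```

The resampling step is what replaces the asymptotic F reference distribution, which is unreliable at these sample sizes.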
64. Evaluating Performance for Network Equipment Manufacturing Firms
Lin, Hong-jia, 8 July 2009
none
65. Estimating the large-scale structure of the universe using QSO carbon IV absorbers
Loh, Ji Meng, January 2001
Thesis (Ph.D.) -- University of Chicago, Department of Statistics, August 2001. Includes bibliographical references; also available on the Internet.
66. Essays in multiple comparison testing
Williams, Elliot, January 2003
Thesis (Ph.D.) -- University of California, San Diego, 2003. With vita; includes bibliographical references (leaves 106-109).
67. Resampling algorithms for improved classification and estimation
Soleymani, Mehdi, 2011
Doctoral thesis (Doctor of Philosophy), Statistics and Actuarial Science; published or final version.
68. On exact algorithms for small-sample bootstrap iterations and their applications
Chan, Yuen-fai (陳遠輝), 2000
Master's thesis (Master of Philosophy), Statistics and Actuarial Science; published or final version.
69. Kasta gris: A strategy for maximizing the expected score in a turn
Bing, Mia; Sundling, Lisa; Holmström, Åsa, January 2013
Kasta gris (Pass the Pigs) is a game in which players compete to be the first to reach 100 points. Two pig-shaped dice are thrown and, depending on how they land, yield different scores or a loss of points. For a player who has accumulated points within a turn, one more throw offers a chance at a higher score but also a risk of losing the points already collected. In this thesis we determine the highest accumulated score in a turn at which the player should still choose to keep throwing. Because the dice are pig-shaped, and hence not symmetric, the probabilities of the possible outcomes differ, and since those probabilities are also unknown, the sought score cannot be computed exactly. We carried out our own experiment with 10,517 throws divided among three pairs of pig dice. Using the collected data and methods from probability theory, we estimated the unknown probabilities and thereby the sought score. To quantify the uncertainty of the latter estimate we used two inferential methods, the delta method and the bootstrap. We found that, with at least 75 percent confidence, 21 is the highest accumulated score at which a player should continue the turn. This result lets a player maximize the expected score within a turn, but using it as a strategy throughout the game is no guarantee of winning.
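The decision rule can be illustrated with a small sketch. The outcome table below is an invented placeholder, not the thesis's data from 10,517 throws; it only demonstrates the rule "keep throwing while the expected gain of one more throw is positive" together with a percentile-bootstrap interval for the resulting threshold.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical outcome table for one throw: score gained (0 means "pig out",
# losing the turn's points) and how often each outcome was observed.
# These counts are invented placeholders, NOT the thesis's experimental data.
scores = np.array([0, 1, 5, 10, 15, 20])
counts = np.array([220, 310, 280, 120, 50, 20])

def hold_threshold(counts, scores):
    """Largest accumulated turn score k at which another throw still has
    positive expected gain: E[gain] = sum(p_i * s_i) - p_out * k > 0."""
    p = counts / counts.sum()
    p_out = p[scores == 0].sum()
    expected_points = (p * scores).sum()
    return int(np.ceil(expected_points / p_out)) - 1

def bootstrap_threshold_ci(counts, scores, n_boot=2000, level=0.75, rng=rng):
    """Percentile bootstrap interval for the threshold, resampling throws
    from the multinomial distribution defined by the observed counts."""
    n = counts.sum()
    p_hat = counts / n
    thresholds = np.empty(n_boot)
    for b in range(n_boot):
        boot_counts = rng.multinomial(n, p_hat)
        thresholds[b] = hold_threshold(boot_counts, scores)
    lo, hi = np.quantile(thresholds, [(1 - level) / 2, 1 - (1 - level) / 2])
    return lo, hi

k_star = hold_threshold(counts, scores)
lo, hi = bootstrap_threshold_ci(counts, scores)
```

With these placeholder counts the rule says to keep throwing up to an accumulated score of 18; the bootstrap interval then quantifies how sensitive that cutoff is to sampling noise in the counts.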
70. Hypothesis Testing in Finite Samples with Time-Dependent Data: Applications in Banking
Allen, Jason, 26 September 2007
This thesis is concerned with hypothesis testing in models where data exhibits
time dependence. The focus is on two cases where the dependence of observations
across time leads to non-standard hypothesis testing techniques.
This thesis first considers models estimated by Generalized Method of Moments
(GMM; Hansen (1982)) and the approach to inference. The main problem with
standard tests is size distortions in the test statistics. An innovative resampling
method, which we label Empirical Likelihood Block Bootstrapping, is proposed. The
first-order asymptotic validity of the proposed procedure is proven, and a series of
Monte Carlo experiments show it may improve test sizes over conventional block
bootstrapping. Staying in the context of GMM, this thesis also shows that the test
correction given in Hall (2000), which improves power, can distort size with time
dependent data. In this case it is of even greater importance to use a bootstrap that
can have good size in finite samples.
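A plain moving-block bootstrap, a simpler relative of the Empirical Likelihood Block Bootstrap proposed here, can be sketched as follows; the AR(1) example and the block length are illustrative assumptions, not the thesis's procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

def moving_block_bootstrap(x, statistic, block_len, n_boot=2000, rng=rng):
    """Moving-block bootstrap: concatenate randomly chosen overlapping
    blocks of length block_len, preserving short-range time dependence."""
    x = np.asarray(x, float)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts_max = n - block_len + 1          # admissible block start positions
    out = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, starts_max, n_blocks)
        series = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        out[b] = statistic(series)
    return out

# AR(1) data: dependent observations, where the i.i.d. bootstrap
# understates the sampling variability of the mean.
n, rho = 400, 0.7
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = rho * x[t - 1] + eps[t]

block_means = moving_block_bootstrap(x, np.mean, block_len=20)
iid_means = np.array([rng.choice(x, n).mean() for _ in range(2000)])
```

Because blocks keep neighboring observations together, the block bootstrap reproduces the extra variance induced by serial correlation, which is exactly what the i.i.d. resampling scheme misses.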
The empirical likelihood approach is applied to a multifactor model of U.S. bank risk estimated
by GMM. The approach to inference is found to be important to the overall
conclusion about bank risk. The results suggest U.S. bank stock returns are sensitive
to movements in market and liquidity risk.
In the context of panel data, this thesis is the first to my knowledge to consider
the estimation of cost-functions as well as conduct inference taking into account the
strong dependence of data across time. This thesis shows that standard approaches
to estimating cost-functions for a set of Canadian banks lead to a downward bias in
the estimated coefficients and therefore an upward bias in the measure of economies
of scale. When non-stationary panel techniques are applied results suggest economies
of scale of around 6 per cent in Canadian banking as well as cost-efficiency differences
across banks that are correlated with size. / Thesis (Ph.D., Economics) -- Queen's University, 2007.