
Confidence intervals in life-testing

Karch, Angela Irene 03 June 2011 (has links)
The purpose of the study was to develop a sequential test method for obtaining a confidence interval in life-testing. The problem of using a maximum likelihood estimator based upon grouped data was considered. The lifetimes investigated are described by the exponential distribution. The sequential test used the length of the confidence interval as a stopping rule. The test method and necessary calculations were described. The results of using different length values as a stopping rule were compared using a computer simulation. Results are reported in two categories: the percentage of time the estimate contained the true parameter value, and the average number of data collection times needed to obtain the estimate. It was concluded that the test method was accurate and efficient. The length value was a considerable factor in obtaining good results from the test method. It was recommended that research be continued to establish a method of choosing the best length value. / Ball State University, Muncie, IN 47306
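The stopping rule described above can be sketched in a few lines. This is a minimal illustration only, assuming individual (not grouped) lifetimes and a normal-approximation interval for the exponential mean; the function name, batch size, and confidence level are illustrative, not taken from the thesis.

```python
import random
import math

def sequential_exponential_ci(true_mean, max_length, batch=5, conf_z=1.96, seed=0):
    """Collect exponential lifetimes in batches until the normal-approximation
    CI for the mean is shorter than max_length; return (interval, n_batches)."""
    rng = random.Random(seed)
    data = []
    batches = 0
    while True:
        data.extend(rng.expovariate(1.0 / true_mean) for _ in range(batch))
        batches += 1
        n = len(data)
        mle = sum(data) / n                 # MLE of the exponential mean
        half = conf_z * mle / math.sqrt(n)  # asymptotic SE of the MLE is mean/sqrt(n)
        if 2 * half <= max_length:          # stopping rule: CI length below target
            return (mle - half, mle + half), batches

(lo, hi), k = sequential_exponential_ci(true_mean=10.0, max_length=2.0)
print(f"95% CI after {k} batches: ({lo:.2f}, {hi:.2f})")
```

Shorter target lengths force more batches, which is exactly the accuracy-versus-sampling-cost trade-off the study evaluates.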

A Study of the Mean Residual Life Function and Its Applications

Mbowe, Omar B 12 June 2006 (has links)
The mean residual life (MRL) function is an important tool for characterizing lifetimes in survival analysis, actuarial science, economics, other social sciences, and reliability. Different methods have been proposed for inference on the MRL, but their coverage probabilities for small sample sizes are not good enough. In this thesis we apply the empirical likelihood method and carry out a simulation study of the MRL function using different statistical distributions. The simulation study compares the empirical likelihood method with the normal approximation method, based on the average lengths and coverage probabilities of confidence intervals; we also compare the median lengths of confidence intervals for the MRL. We found that the empirical likelihood method gives better coverage probability and shorter confidence intervals than the normal approximation method for almost all the distributions considered. Applying the two methods to real data, we also found that the empirical likelihood method gives narrower pointwise confidence bands.
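For readers unfamiliar with the quantity being estimated: the MRL at time t is the expected remaining lifetime given survival past t. A minimal empirical estimate (a sketch with an illustrative function name, not the thesis's empirical likelihood machinery) looks like this:

```python
import random

def mean_residual_life(sample, t):
    """Empirical mean residual life: the average remaining lifetime beyond t,
    taken over the observations that survive past t."""
    survivors = [x - t for x in sample if x > t]
    return sum(survivors) / len(survivors) if survivors else float("nan")

rng = random.Random(1)
data = [rng.expovariate(0.5) for _ in range(10_000)]  # exponential, mean 2
# Memorylessness: the exponential MRL is (approximately) the mean at every t.
print(mean_residual_life(data, 0.0), mean_residual_life(data, 2.0))
```

For the exponential distribution both values hover near the mean; for other lifetime distributions the MRL rises or falls with t, which is what makes it a useful characterization.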

Empirical Likelihood Confidence Intervals for the Sensitivity of a Continuous-Scale Diagnostic Test

Davis, Angela Elaine 04 May 2007 (has links)
Diagnostic testing is essential for distinguishing non-diseased individuals from diseased individuals. More accurate tests lead to improved treatment and thus reduce medical mistakes. Sensitivity and specificity are two important measures of the accuracy of a diagnostic test. When the test results are continuous, it is of interest to construct a confidence interval for the sensitivity at a fixed level of specificity. In this thesis, we propose three empirical likelihood intervals for the sensitivity. Simulation studies are conducted to compare the empirical likelihood based confidence intervals with the existing normal approximation based confidence interval. Our studies show that the new intervals have better coverage probability than the normal approximation based interval in most simulation settings.
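The target parameter can be illustrated directly: fix the specificity by choosing the cutoff from the healthy scores, then read off the sensitivity from the diseased scores. This is a point-estimate sketch only (illustrative names and simulated Gaussian scores), not the empirical likelihood intervals the thesis proposes.

```python
import random

def sensitivity_at_specificity(healthy, diseased, specificity=0.90):
    """Set the cutoff at the `specificity` quantile of the healthy scores,
    then report the fraction of diseased scores exceeding that cutoff."""
    cutoff = sorted(healthy)[int(specificity * len(healthy)) - 1]
    return sum(d > cutoff for d in diseased) / len(diseased)

rng = random.Random(2)
healthy = [rng.gauss(0.0, 1.0) for _ in range(5000)]
diseased = [rng.gauss(2.0, 1.0) for _ in range(5000)]
print(sensitivity_at_specificity(healthy, diseased))
```

Because the cutoff is itself estimated from data, the sensitivity estimate inherits extra variability — which is why naive normal-approximation intervals can undercover and alternatives like empirical likelihood are of interest.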

Efficiency based adaptive tests for censored survival data /

Pecková, Monika. January 1997 (has links)
Thesis (Ph. D.)--University of Washington, 1997. / Vita. Includes bibliographical references (leaves [122]-125).

Extensions and application of the modified large-sample approach for constructing confidence intervals on functions of variance components /

Gilder, Kye M. January 2003 (has links)
Thesis (Ph. D.)--University of Rhode Island, 2003. / Typescript. Includes bibliographical references (leaves 154-158).

Constructing confidence regions for the locations of putative trait loci using data from affected sib-pair designs

Papachristou, Charalampos. January 2005 (has links)
Thesis (Ph. D.)--Ohio State University, 2005. / Title from first page of PDF file. Document formatted into pages; contains xv, 122 p.; also includes graphics. Includes bibliographical references (p. 117-122). Available online via OhioLINK's ETD Center

Nonparametric Confidence Intervals for the Reliability of Real Systems Calculated from Component Data

Spooner, Jean 01 May 1987 (has links)
A methodology that calculates a point estimate and confidence intervals for system reliability directly from component failure data is proposed and evaluated. This is a nonparametric approach that does not require the component times to failure to follow a known reliability distribution. The proposed methods have accuracy similar to the traditional parametric approaches, can be used when the distribution of component reliability is unknown or there is a limited amount of sample component data, are simpler to compute, and use fewer computer resources. Depuy et al. (1982) studied several parametric approaches to calculating confidence intervals on system reliability. The test systems employed by them are utilized for comparison with published results. Four systems with sample sizes per component of 10, 50, and 100 were studied. The test systems were complex systems made up of I components, each component having n observed (or estimated) times to failure. An efficient method for calculating a point estimate of system reliability is developed based on counting minimum cut sets that cause system failures. Five nonparametric approaches to calculating the confidence intervals on system reliability from one test sample of components were proposed and evaluated. Four of these were based on binomial theory and the Kolmogorov empirical cumulative distribution theory. 600 Monte Carlo simulations generated 600 new sets of component failure data from the population, with corresponding point estimates of system reliability and confidence intervals. The accuracy of these confidence intervals was determined by the fraction that included the true system reliability. The bootstrap method was also studied to calculate confidence intervals from one sample. The bootstrap method is computer intensive and involves generating many sets of component samples using only the failure data from the initial sample.
The empirical cumulative distribution function of 600 bootstrapped point estimates was examined to calculate the confidence intervals for the 68, 80, 90, 95, and 99 percent confidence levels. The accuracy of the bootstrap confidence intervals was determined by comparison with the distribution of 600 point estimates of system reliability generated from the Monte Carlo simulations. The confidence intervals calculated from the Kolmogorov empirical distribution function and the bootstrap method were very accurate. Sample sizes of 10 were not always sufficient for systems with reliabilities close to one.
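The bootstrap procedure described above — resampling each component's failure data, re-estimating system reliability, and reading percentiles off the empirical distribution of estimates — can be sketched for a simple series system. This is an illustrative reduction (a two-component series system rather than the thesis's minimum-cut-set point estimator; function names are hypothetical):

```python
import random

def series_reliability(samples_by_component, t):
    """Nonparametric point estimate: product over components of the fraction
    of observed failure times exceeding mission time t (series system)."""
    est = 1.0
    for times in samples_by_component:
        est *= sum(x > t for x in times) / len(times)
    return est

def bootstrap_percentile_ci(samples_by_component, t, level=0.90, B=600, seed=3):
    """Resample each component's failure data with replacement, re-estimate
    system reliability B times, and take percentiles of the estimates."""
    rng = random.Random(seed)
    boot = []
    for _ in range(B):
        resampled = [rng.choices(times, k=len(times)) for times in samples_by_component]
        boot.append(series_reliability(resampled, t))
    boot.sort()
    alpha = (1 - level) / 2
    return boot[int(alpha * B)], boot[int((1 - alpha) * B) - 1]

rng = random.Random(4)
components = [[rng.expovariate(1.0) for _ in range(50)] for _ in range(2)]
print(series_reliability(components, 0.2), bootstrap_percentile_ci(components, 0.2))
```

Note that nothing here assumes a parametric lifetime distribution — exactly the property the thesis argues for — and the B = 600 resamples mirror the 600 bootstrapped point estimates used in the study.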

Local Distance Correlation: An Extension of Local Gaussian Correlation

Hamdi, Walaa Ahmed 06 August 2020 (has links)
No description available.

Improved confidence intervals for a small area mean under the Fay-Herriot model

Shiferaw, Yegnanew Alem January 2016 (has links)
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the Degree of Doctor of Philosophy. Johannesburg, August 2016. / There is a growing demand for small area estimates for policy and decision making, local planning and fund distribution. Surveys are generally designed to give representative estimates at the national or regional level, but estimates of variables of interest are often also needed at the small area level. These cannot be reliably obtained from the survey data, as the sample sizes at these levels are too small. This problem is addressed by using small area estimation techniques. The main aim of this thesis is to develop confidence intervals (CIs) that are accurate to terms of order O(m^(-3/2)) under the Fay-Herriot (FH) model, using the Taylor series expansion. Rao (2003a), among others, notes that in mixed model estimation the estimate of the variance component of the random effect, A, can take negative values. In this case, Prasad and Rao (1990) set Â = 0. Under this situation, the contribution of the mean squared error (MSE) estimate, assuming all parameters are known, becomes zero. As a solution, Rao (2003a), among others, proposed a weighted estimator with fixed weights (i.e., w_i = 1/2). In addition, if the MSE estimate is negative, we cannot construct CIs based on the empirical best linear unbiased predictor (EBLUP) estimates. Datta, Kubokawa, Molina and Rao (2011) derived the MSE estimator for the weighted estimator with fixed weights, which is always positive. We use their MSE estimator to derive CIs based on this estimator to overcome the above difficulties. Another criticism of the MSE estimator is that it is not area-specific, since it does not involve the direct estimator in its expression. Following Rao (2001), we propose area-specific MSE estimators and use them to construct CIs.
The performance of the proposed CIs is investigated via simulation studies and compared with the Cox (1975) and Prasad and Rao (1990) methods. Our simulation results show that the proposed CIs have higher coverage probabilities. These methods are applied to standard poverty and percentage-of-food-expenditure measures estimated from the 2010/11 Household Consumption Expenditure survey and the 2007 census data sets. Keywords: Small area estimation, weighted estimator with fixed weights, EBLUP, FH model, MSE, CI, poverty, percentage of food expenditure
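The distinction between the EBLUP shrinkage weight and the fixed-weight fallback discussed in the abstract can be made concrete. A minimal sketch (hypothetical numbers; `synthetic` stands in for the regression-synthetic estimate x_i'β̂):

```python
def eblup_weight(A_hat, D_i):
    """Shrinkage weight gamma_i = A / (A + D_i) under the Fay-Herriot model,
    where D_i is the sampling variance of the direct estimator for area i."""
    return A_hat / (A_hat + D_i)

def small_area_estimate(direct, synthetic, A_hat, D_i):
    """EBLUP combination when A_hat > 0; fall back to the fixed-weight
    (w_i = 1/2) composite when A_hat has been truncated to zero."""
    w = eblup_weight(A_hat, D_i) if A_hat > 0 else 0.5
    return w * direct + (1 - w) * synthetic

print(small_area_estimate(42.0, 38.0, A_hat=0.0, D_i=4.0))   # fixed-weight fallback
print(small_area_estimate(42.0, 38.0, A_hat=12.0, D_i=4.0))  # EBLUP, gamma = 0.75
```

When Â = 0 the EBLUP weight collapses to zero and the estimate ignores the direct survey value entirely; the fixed w_i = 1/2 composite avoids that degeneracy, which is the motivation cited from Rao (2003a).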

Performance of bootstrap confidence intervals for L-moments and ratios of L-moments.

Glass, Suzanne 06 May 2000 (has links) (PDF)
L-moments are defined as linear combinations of expected values of order statistics of a variable (Hosking 1990). L-moments are estimated from samples using functions of weighted means of order statistics. The advantages of L-moments over classical moments are that they can characterize a wider range of distributions; they are more robust to the presence of outliers in the data when estimated from a sample; and they are less subject to bias in estimation and approximate their asymptotic normal distribution more closely. Hosking (1990) obtained an asymptotic result specifying that the sample L-moments have a multivariate normal distribution as n approaches infinity. The standard deviations of the estimators, however, depend on the distribution of the variable, so in order to build confidence intervals we would need to know that distribution. Bootstrapping is a resampling method that takes samples of size n with replacement from a sample of size n. The idea is to use the empirical distribution obtained with the subsamples as a substitute for the true distribution of the statistic, which is unknown. The most common application of bootstrapping is building confidence intervals without knowing the distribution of the statistic. The research question dealt with in this work was: how well do bootstrap confidence intervals behave in terms of coverage and average width for estimating L-moments and ratios of L-moments? Since Hosking's results about the normality of the estimators of L-moments are asymptotic, we are particularly interested in knowing how well bootstrap confidence intervals behave for small samples. There are several ways of building confidence intervals using bootstrapping. The simplest are the standard and percentile confidence intervals. The standard confidence interval assumes normality for the statistic and only uses bootstrapping to estimate the standard error of the statistic.
The percentile methods work with the (α/2)th and (1-α/2)th percentiles of the empirical sampling distribution. Comparing the performance of the three methods was of interest in this work. The research question was answered by doing simulations in Gauss. The true coverage of the nominal 95% confidence intervals for the L-moments and ratios of L-moments was found by simulation.
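The percentile method described above can be sketched for the second L-moment. This is a hedged illustration, not the thesis's Gauss code: l2 is computed here via the pairwise-difference identity (λ2 is half the expected absolute difference of two independent draws), a standard unbiased estimator, and the sample size is kept small to match the small-sample focus.

```python
import random
from itertools import combinations

def l_moments_12(sample):
    """First two sample L-moments: l1 is the mean; l2 is half the average
    absolute pairwise difference (an unbiased estimator of lambda_2)."""
    n = len(sample)
    l1 = sum(sample) / n
    l2 = sum(abs(a - b) for a, b in combinations(sample, 2)) / (n * (n - 1))
    return l1, l2

def percentile_ci(sample, stat, level=0.95, B=1000, seed=5):
    """Bootstrap percentile CI: resample with replacement, recompute the
    statistic B times, and read off the (alpha/2, 1-alpha/2) quantiles."""
    rng = random.Random(seed)
    boot = sorted(stat(rng.choices(sample, k=len(sample))) for _ in range(B))
    alpha = (1 - level) / 2
    return boot[int(alpha * B)], boot[int((1 - alpha) * B) - 1]

rng = random.Random(6)
data = [rng.expovariate(1.0) for _ in range(30)]      # small sample, n = 30
lo, hi = percentile_ci(data, lambda s: l_moments_12(s)[1])
print(f"95% percentile CI for l2: ({lo:.3f}, {hi:.3f})")
```

Repeating this over many simulated samples and counting how often the interval covers the true λ2 (1/2 for the unit exponential) reproduces, in miniature, the coverage study the thesis carries out.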
