1

Significant or Not: What Does the "Magic" P-Value Tell Us?

Nelson, Mary January 2016 (has links)
The p-value is widely taught and frequently used to determine statistical significance and, by extension, to guide decisions. It is not without limitations, however, and its role as the primary marker of a worthwhile conclusion has recently come under increased scrutiny. This paper explains some lesser-known properties of the p-value, including its distribution under the null and alternative hypotheses, and clearly presents its limitations along with some straightforward alternatives.
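The distributional property mentioned here is easy to see by simulation. A minimal Python sketch (not from the thesis): two-sample t-test p-values are uniform on (0, 1) when the null holds, and pile up near zero under an alternative.

```python
# Simulate two-sample t-test p-values under the null (equal means)
# and under an alternative (shifted mean).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 30, 10_000

def pvalues(shift):
    ps = np.empty(reps)
    for i in range(reps):
        x = rng.normal(0.0, 1.0, n)
        y = rng.normal(shift, 1.0, n)
        ps[i] = stats.ttest_ind(x, y).pvalue
    return ps

p_null = pvalues(0.0)   # null true: p-values ~ Uniform(0, 1)
p_alt = pvalues(0.5)    # alternative true: p-values concentrate near 0

print(f"null: P(p < 0.05) = {np.mean(p_null < 0.05):.3f}")  # close to 0.05
print(f"alt:  P(p < 0.05) = {np.mean(p_alt < 0.05):.3f}")   # the test's power
```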
2

The robustness of confidence intervals for effect size in one way designs with respect to departures from normality

Hembree, David January 1900 (has links)
Master of Science / Department of Statistics / Paul Nelson / Effect size is a concept developed to bridge the gap between practical and statistical significance. In the setting considered here, completely randomized one-way designs, inference for effect size has been developed only under normality. This report is a simulation study investigating the robustness of nominal 0.95 confidence intervals for effect size, in terms of their coverage rates and lengths, with respect to departures from normality. In addition to the normal distribution, data are generated from four non-normal distributions: logistic, double exponential, extreme value, and uniform. The report finds that coverage rates under the logistic, double exponential, and extreme value distributions drop as effect size increases, while, as expected, the coverage rate under the normal distribution stays steady at 0.95. Interestingly, the uniform distribution produces coverage rates above 0.95 that increase with effect size. Overall, within the scope of the settings considered, normal theory confidence intervals for effect size are robust for small effect sizes and not robust for large ones. Since the magnitude of the effect size is typically unknown, researchers are advised to check the assumption of normality before constructing normal theory confidence intervals for effect size.
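The report's intervals for effect size rest on noncentral distribution theory; as a sketch of the coverage-rate methodology only (not the report's code), here is a Monte Carlo coverage check for the simpler nominal 0.95 t-interval for the mean under the same five distributions.

```python
# Monte Carlo coverage of a nominal 0.95 t-interval for the mean under
# the normal and four non-normal distributions. Sketch of the
# coverage-rate methodology only; the report's intervals are for
# effect size and require noncentral distribution theory.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps = 25, 20_000
tcrit = stats.t.ppf(0.975, df=n - 1)

samplers = {
    "normal":   lambda: rng.normal(size=n),                    # mean 0
    "logistic": lambda: rng.logistic(size=n),                  # mean 0
    "laplace":  lambda: rng.laplace(size=n),                   # double exponential
    "gumbel":   lambda: rng.gumbel(size=n) - np.euler_gamma,   # extreme value, centered
    "uniform":  lambda: rng.uniform(-1, 1, size=n),            # mean 0
}

for name, draw in samplers.items():
    hits = 0
    for _ in range(reps):
        x = draw()
        half = tcrit * x.std(ddof=1) / np.sqrt(n)
        hits += abs(x.mean()) <= half   # true mean is 0 in every case
    print(f"{name:8s} coverage = {hits / reps:.3f}")
```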
3

Jackknife Empirical Likelihood Inference for the Absolute Mean Deviation

Meng, Xueping 15 July 2013 (has links)
In statistics it is of interest to find a better interval estimator of the mean absolute deviation. In this thesis, we focus on using the jackknife, the adjusted, and the extended jackknife empirical likelihood methods to construct confidence intervals for the mean absolute deviation of a random variable. The empirical log-likelihood ratio statistic is derived, and its asymptotic distribution is shown to be a standard chi-square distribution. A simulation study compares the average length and coverage probability of intervals from the jackknife empirical likelihood methods with those from the normal approximation method. The proposed adjusted and extended jackknife empirical likelihood methods perform better than the other methods for both symmetric and skewed distributions. Real data sets are used to illustrate the proposed methods.
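A sketch of the jackknife ingredient (not the thesis's code): Tukey pseudo-values for the mean absolute deviation, combined here with a simple normal-approximation interval. The thesis instead feeds such pseudo-values into empirical likelihood, a maximization step omitted here.

```python
# Jackknife pseudo-values for the mean absolute deviation, with a
# normal-approximation CI built from the pseudo-value variance.
import numpy as np
from scipy import stats

def mad(x):
    """Mean absolute deviation about the sample mean."""
    return np.mean(np.abs(x - x.mean()))

def jackknife_ci(x, level=0.95):
    n = len(x)
    theta = mad(x)
    # Leave-one-out estimates and Tukey pseudo-values.
    loo = np.array([mad(np.delete(x, i)) for i in range(n)])
    pseudo = n * theta - (n - 1) * loo
    est = pseudo.mean()
    se = pseudo.std(ddof=1) / np.sqrt(n)
    z = stats.norm.ppf(0.5 + level / 2)
    return est, (est - z * se, est + z * se)

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=100)  # skewed example data
est, (lo, hi) = jackknife_ci(x)
print(f"MAD = {est:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```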
4

Improved interval estimation of comparative treatment effects

Van Krevelen, Ryne Christian 01 May 2015 (has links)
Comparative experiments, in which subjects are randomized to one of two treatments, are performed often. There is no shortage of papers testing whether a treatment effect exists and providing confidence intervals for the magnitude of this effect. While it is well understood that the object and scope of inference for an experiment depend on what assumptions are made, these are not always clearly stated. We propose one possible method, based on the ideas of Jerzy Neyman, for constructing confidence intervals in a comparative experiment. The resulting intervals, referred to as Neyman-type confidence intervals, apply in a wide range of cases. Special care is taken to note which assumptions are made and which object and scope of inference are being investigated. We present a notation that highlights which parts of a problem are treated as random, which helps keep the focus on the appropriate scope of inference. The Neyman-type confidence intervals are compared to possible alternatives in two inference settings: one in which inference is made about the units in the sample and one in which inference is made about units in a fixed population. A third setting, in which inference is made about a process distribution, is also discussed. It is stressed that certain assumptions underlying this third type of inference are unverifiable; when they are not met, the resulting confidence intervals may cover their intended target well below the desired rate. Through simulation, we demonstrate that the Neyman-type intervals have good coverage properties when inference is being made about a sample or a population, while in some cases the alternative intervals are much wider than necessary on average. We therefore recommend that researchers consider Neyman-type confidence intervals when carrying out inference about a sample or a population, as they may be more precise while still covering at the desired rate.
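One classical ingredient behind randomization-based intervals of this kind is the conservative Neyman variance estimator for a difference in means. A minimal sketch with made-up data (the thesis's own construction may differ in detail):

```python
# Difference-in-means estimate with the conservative Neyman variance
# estimator s1^2/n1 + s0^2/n0, the classical randomization-based
# ingredient. Data below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
treated = rng.normal(1.0, 2.0, 40)   # hypothetical treated outcomes
control = rng.normal(0.0, 2.0, 40)   # hypothetical control outcomes

tau_hat = treated.mean() - control.mean()
v_neyman = (treated.var(ddof=1) / len(treated)
            + control.var(ddof=1) / len(control))
z = stats.norm.ppf(0.975)
lo, hi = tau_hat - z * np.sqrt(v_neyman), tau_hat + z * np.sqrt(v_neyman)
print(f"tau_hat = {tau_hat:.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```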
5

Accuracy of Computer Simulations that use Common Pseudo-random Number Generators

Dusitsin, Krid, Kosbar, Kurt 10 1900 (has links)
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California / In computer simulations of communication systems, linear congruential generators and shift registers are typically used to model noise and data sources. These generators are often assumed to be close to ideal (i.e., delta-correlated) and an insignificant source of error in the simulation results. In fact, the samples generated by these algorithms have non-ideal autocorrelation functions, which may cause a non-uniform distribution in the data or noise signals. This error may cause the simulated bit-error-rate (BER) to be artificially high or low. In this paper, the problem is described through the use of confidence intervals. Tests are performed on several pseudo-random generators to assess which ones are acceptable for computer simulation.
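A sketch of the kind of check involved (not the paper's code): estimate the lag-k autocorrelation of a small linear congruential generator's output. A delta-correlated source would show near-zero correlation at every nonzero lag. The LCG constants below are the classic glibc-style parameters, chosen for illustration only.

```python
# Lag-k autocorrelation of a linear congruential generator's output.
import numpy as np

def lcg(seed, n, a=1103515245, c=12345, m=2**31):
    """Classic (and weak) LCG parameters; illustrative only."""
    out = np.empty(n)
    x = seed
    for i in range(n):
        x = (a * x + c) % m
        out[i] = x / m          # map to (0, 1)
    return out

u = lcg(seed=42, n=100_000)
u = u - u.mean()                # center before correlating
for k in (1, 2, 5, 10):
    r = np.dot(u[:-k], u[k:]) / np.dot(u, u)
    print(f"lag {k:2d}: autocorrelation = {r:+.5f}")
```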
6

A Bayesian method to improve sampling in weapons testing

Floropoulos, Theodore C. 12 1900 (has links)
Approved for public release; distribution is unlimited / This thesis describes a Bayesian method to determine the number of samples needed to estimate a proportion or probability with 95% confidence when prior bounds are placed on that proportion. It uses the Uniform [a,b] distribution as the prior, and develops a computer program and tables to find the sample size. Tables and examples are also given to compare these results with other approaches to finding sample size. The improvement this method offers is that fewer samples, and consequently lower weapons-testing cost, are needed to meet a desired confidence level for a proportion or probability. / http://archive.org/details/bayesianmethodto00flor / Lieutenant Commander, Hellenic Navy
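A hedged sketch of the idea for the special case a = 0, b = 1 (the thesis's Uniform[a,b] prior requires a truncated-posterior variant): under a Uniform(0,1) prior, the posterior for a proportion after x successes in n trials is Beta(x+1, n-x+1), so the smallest n whose worst-case 95% posterior interval meets a target width can be found by direct search.

```python
# Smallest n so that the 95% posterior interval for a proportion is no
# wider than a target, under a Uniform(0,1) prior. The thesis's
# Uniform[a,b] prior would replace the Beta posterior with a truncated one.
from scipy import stats

def posterior_width(n, x, level=0.95):
    a, b = x + 1, n - x + 1          # Beta posterior under Uniform(0,1) prior
    lo = stats.beta.ppf((1 - level) / 2, a, b)
    hi = stats.beta.ppf(1 - (1 - level) / 2, a, b)
    return hi - lo

def sample_size(target):
    n = 1
    # Require the target width even in the worst case over outcomes x.
    while max(posterior_width(n, x) for x in range(n + 1)) > target:
        n += 1
    return n

print(sample_size(0.10))   # n needed for worst-case interval width <= 0.10
```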
7

The performance and robustness of confidence intervals for the median of a symmetric distribution constructed assuming sampling from a Cauchy distribution

Cao, Jennifer Yue January 1900 (has links)
Master of Science / Department of Statistics / Paul Nelson / Trimmed means are robust estimators of location for distributions having heavy tails. Theory and simulation indicate that little efficiency is lost under normality when using appropriately trimmed means and that their use with data from distributions with heavy tails can result in improved performance. This report uses the principle of equivariance applied to trimmed means sampled from a Cauchy distribution to form a discrepancy function of the data and parameters whose distribution is free of the unknown median and scale parameter. Quantiles of this discrepancy function are estimated via asymptotic normality and simulation and used to construct confidence intervals for the median of a Cauchy distribution. A nonparametric approach based on the distribution of order statistics is also used to construct confidence intervals. The performance of these intervals in terms of coverage rate and average length is investigated via simulation when the data are actually sampled from a Cauchy distribution and when sampling is from normal and logistic distributions. The intervals based on simulation estimation of the quantiles of the discrepancy function are shown to perform well across a range of sample sizes and trimming proportions when the data are actually sampled from a Cauchy distribution and to be relatively robust when sampling is from the normal and logistic distributions.
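A minimal sketch of the simulation approach (the report's discrepancy function and scale estimator may differ; the interquartile range is used here as a scale-equivariant choice): since (trimmed mean - median) / IQR is pivotal under Cauchy sampling, its quantiles can be simulated once from the standard Cauchy and then inverted for any data set.

```python
# Simulation-based CI for the Cauchy median using the trimmed mean.
import numpy as np
from scipy import stats

def discrepancy_quantiles(n, trim=0.2, reps=20_000, level=0.95, seed=4):
    rng = np.random.default_rng(seed)
    d = np.empty(reps)
    for i in range(reps):
        z = rng.standard_cauchy(n)            # median 0, scale 1
        iqr = np.subtract(*np.percentile(z, [75, 25]))
        d[i] = stats.trim_mean(z, trim) / iqr  # pivotal discrepancy
    return np.quantile(d, [(1 - level) / 2, (1 + level) / 2])

def cauchy_median_ci(x, trim=0.2):
    qlo, qhi = discrepancy_quantiles(len(x), trim)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    tm = stats.trim_mean(x, trim)
    return tm - qhi * iqr, tm - qlo * iqr      # invert the pivot

rng = np.random.default_rng(5)
x = 3.0 + 2.0 * rng.standard_cauchy(50)        # median 3, scale 2
print(cauchy_median_ci(x))
```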
8

Empirical Likelihood Confidence Intervals for ROC Curves with Missing Data

An, Yueheng 25 April 2011 (has links)
The receiver operating characteristic (ROC) curve is widely used to evaluate the diagnostic performance of a test, in other words, the accuracy of a test in discriminating normal cases from diseased cases. In biomedical studies we often encounter missing data, to which regular inference procedures cannot be applied directly. In this thesis, random hot deck imputation is used to obtain a 'complete' sample, and empirical likelihood (EL) confidence intervals are then constructed for ROC curves. The empirical log-likelihood ratio statistic is derived, and its asymptotic distribution is proved to be a weighted chi-square distribution. Simulation results show that the EL confidence intervals perform well in terms of coverage probability and average length for various sample sizes and response rates.
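A minimal sketch of random hot deck imputation in its simplest marginal form (the thesis may draw donors within imputation classes rather than from the whole observed sample):

```python
# Random hot deck imputation: replace each missing value with a draw
# (with replacement) from the observed values of the same variable.
import numpy as np

def hot_deck_impute(x, rng):
    x = np.asarray(x, dtype=float).copy()
    miss = np.isnan(x)
    donors = x[~miss]                     # observed values act as donors
    x[miss] = rng.choice(donors, size=miss.sum(), replace=True)
    return x

rng = np.random.default_rng(6)
scores = np.array([0.9, np.nan, 0.4, 0.7, np.nan, 0.6])
print(hot_deck_impute(scores, rng))
```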
9

A Study of FM-Band Radio Wave Propagation Prediction Curves and the Broadcasting Service Criterion in Taiwan

Hsieh, Chi-Hsuan 15 June 2000 (has links)
The field strength prediction chart is a set of statistical curves obtained by analyzing a large amount of field strength measurement data for a specific radio band in a given area. It reflects the natural and artificial effects, such as terrain, atmospheric conditions, and buildings, that influence radio wave propagation. One advantage is that the rough relationship between field strength and distance can be predicted easily, so field measurements need not be performed for every radio planning exercise. With a prediction chart and a field strength interference/protection ratio standard, a minimum distance separation criterion between co-channel and adjacent-channel broadcasting stations can be suggested, which also gives the regulatory authority a reference for examining broadcasting service applications. The FCC developed the F(50,50) charts and minimum station separations based on data collected in the U.S., and the regulations governing broadcasting applications in Taiwan presently still follow the FCC's recommendations. In general, however, the field strength distribution is affected by two main factors, terrain and atmospheric conditions, which can differ from those in the U.S. With digital terrain data for Taiwan now available, the terrain profile along a given path can be generated. In this thesis, we use the Deygout model and a database of existing broadcasting stations to generate a field strength distribution database for each station, and we analyze it to develop a prediction chart suited to the propagation environment in Taiwan. Combined with the field strength interference/protection ratio standard, this yields a minimum distance separation criterion for co-channel and adjacent-channel FM broadcasting stations. Our study can help the authority achieve more effective spectrum management in the FM band.
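The Deygout model builds path loss by applying a single knife-edge diffraction calculation recursively to the dominant obstacles along the terrain profile. A sketch of that building block, using the ITU-R P.526 approximation with illustrative values (not the thesis's implementation):

```python
# Single knife-edge diffraction loss (ITU-R P.526 approximation), the
# building block that Deygout-type terrain models apply recursively.
import math

def knife_edge_loss_db(h, d1, d2, freq_hz):
    """h: obstacle height above the line of sight (m, negative if below);
    d1, d2: distances from each terminal to the obstacle (m)."""
    lam = 3e8 / freq_hz
    # Fresnel-Kirchhoff diffraction parameter.
    v = h * math.sqrt(2 * (d1 + d2) / (lam * d1 * d2))
    if v <= -0.78:
        return 0.0   # obstacle well below the path: negligible loss
    return 6.9 + 20 * math.log10(math.sqrt((v - 0.1) ** 2 + 1) + v - 0.1)

# A 30 m ridge midway along a 20 km path at 100 MHz (FM band):
print(f"{knife_edge_loss_db(30, 10_000, 10_000, 100e6):.1f} dB")
```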
10

Semi-Parametric Inference for the Partial Area Under the ROC Curve

Sun, Fangfang 19 November 2008 (has links)
Diagnostic tests are central to modern medicine. One of the main factors in interpreting a diagnostic test is its discriminatory accuracy. For a continuous-scale diagnostic test, the area under the receiver operating characteristic (ROC) curve, the AUC, is a useful one-number summary of the test's diagnostic accuracy. When only a particular region of the ROC curve is of interest, the partial AUC (pAUC) is a more appropriate index. In this thesis, we develop seven confidence intervals for the pAUC under semi-parametric models for the diseased and non-diseased populations, using normal approximation, bootstrap, and empirical likelihood methods. We also conduct simulation studies to compare the finite-sample performance of the proposed confidence intervals for the pAUC, and a real example illustrates the application of the recommended intervals.
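A minimal sketch of the empirical pAUC itself, the quantity these intervals target, computed from hypothetical scores by trapezoidal integration of the empirical ROC curve (not the thesis's semi-parametric estimator):

```python
# Empirical partial AUC over a false-positive-rate range.
import numpy as np

def partial_auc(diseased, healthy, fpr_max=0.1, grid=1000):
    fpr = np.linspace(0.0, fpr_max, grid)
    # Thresholds achieving each target FPR, from healthy-score quantiles.
    thresholds = np.quantile(healthy, 1.0 - fpr)
    tpr = np.array([(diseased > t).mean() for t in thresholds])
    # Trapezoidal integration of TPR over the FPR grid.
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

rng = np.random.default_rng(7)
healthy = rng.normal(0.0, 1.0, 500)    # non-diseased scores
diseased = rng.normal(1.5, 1.0, 500)   # diseased scores, higher on average
print(f"pAUC over FPR in [0, 0.1]: {partial_auc(diseased, healthy):.4f}")
```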
