  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Significant or Not: What Does the "Magic" P-Value Tell Us?

Nelson, Mary January 2016 (has links)
The use of the p-value in determination of statistical significance—and by extension in decision making—is widely taught and frequently used.  It is not, however, without limitations, and its use as a primary marker of a worthwhile conclusion has recently come under increased scrutiny.  This paper attempts to explain some lesser-known properties of the p-value, including its distribution under the null and alternative hypotheses, and to clearly present its limitations and some straightforward alternatives.
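As a rough illustration of the distributional point made in this abstract (not drawn from the thesis itself), the following Python sketch simulates repeated one-sample t-tests: when the null hypothesis is true the p-values are approximately uniform on (0, 1), and when it is false they concentrate near zero. The sample size, effect magnitude, and replication count are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def pvalue_sample(true_mean, n=30, reps=10_000):
    """Collect p-values from one-sample t-tests of H0: mu = 0."""
    pvals = np.empty(reps)
    for i in range(reps):
        x = rng.normal(loc=true_mean, scale=1.0, size=n)
        pvals[i] = stats.ttest_1samp(x, popmean=0.0).pvalue
    return pvals

p_null = pvalue_sample(true_mean=0.0)   # H0 true: p-values ~ Uniform(0, 1)
p_alt  = pvalue_sample(true_mean=0.5)   # H0 false: p-values pile up near 0

print(f"P(p < 0.05) under H0: {np.mean(p_null < 0.05):.3f}")  # close to 0.05 by construction
print(f"P(p < 0.05) under H1: {np.mean(p_alt  < 0.05):.3f}")  # the test's power at this effect
```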
2

The robustness of confidence intervals for effect size in one way designs with respect to departures from normality

Hembree, David January 1900 (has links)
Master of Science / Department of Statistics / Paul Nelson / Effect size is a concept that was developed to bridge the gap between practical and statistical significance. In the context of completely randomized one-way designs, the setting considered here, inference for effect size has only been developed under normality. This report is a simulation study investigating the robustness of nominal 0.95 confidence intervals for effect size with respect to departures from normality, in terms of their coverage rates and lengths. In addition to the normal distribution, data are generated from four non-normal distributions: logistic, double exponential, extreme value, and uniform. The report finds that the coverage rates under the logistic, double exponential, and extreme value distributions drop as effect size increases, while, as expected, the coverage rate under the normal distribution remains steady at 0.95. Interestingly, the uniform distribution produced coverage rates above 0.95, which increased with effect size. Overall, within the scope of the settings considered, normal theory confidence intervals for effect size are robust for small effect sizes and not robust for large effect sizes. Since the magnitude of effect size is typically not known, researchers are advised to investigate the assumption of normality before constructing normal theory confidence intervals for effect size.
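The coverage-rate simulation design described in this abstract can be sketched in a few lines. The example below is only a hedged illustration: it checks the ordinary t-interval for a mean rather than the effect-size intervals studied in the report, and the sample size, replication count, and the particular distributions are illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Generators for some of the distributions named in the abstract,
# each centered at 0 so the target parameter is known exactly.
distributions = {
    "normal":             lambda n: rng.normal(size=n),
    "double exponential": lambda n: rng.laplace(size=n),
    "uniform":            lambda n: rng.uniform(-1, 1, size=n),
}

def coverage(draw, n=20, reps=20_000, conf=0.95):
    """Fraction of nominal 95% t-intervals for the mean that cover the true mean 0."""
    tcrit = stats.t.ppf(0.5 + conf / 2, df=n - 1)
    hits = 0
    for _ in range(reps):
        x = draw(n)
        half = tcrit * x.std(ddof=1) / np.sqrt(n)
        hits += (x.mean() - half <= 0.0 <= x.mean() + half)
    return hits / reps

for name, draw in distributions.items():
    print(f"{name:>18}: estimated coverage {coverage(draw):.3f}")
```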
3

Improved interval estimation of comparative treatment effects

Van Krevelen, Ryne Christian 01 May 2015 (has links)
Comparative experiments, in which subjects are randomized to one of two treatments, are performed often. There is no shortage of papers testing whether a treatment effect exists and providing confidence intervals for the magnitude of this effect. While it is well understood that the object and scope of inference for an experiment will depend on what assumptions are made, these entities are not always clearly presented. We have proposed one possible method, which is based on the ideas of Jerzy Neyman, that can be used for constructing confidence intervals in a comparative experiment. The resulting intervals, referred to as Neyman-type confidence intervals, can be applied in a wide range of cases. Special care is taken to note which assumptions are made and what object and scope of inference are being investigated. We have presented a notation that highlights which parts of a problem are being treated as random. This helps ensure the focus on the appropriate scope of inference. The Neyman-type confidence intervals are compared to possible alternatives in two different inference settings: one in which inference is made about the units in the sample and one in which inference is made about units in a fixed population. A third inference setting, one in which inference is made about a process distribution, is also discussed. It is stressed that certain assumptions underlying this third type of inference are unverifiable. When these assumptions are not met, the resulting confidence intervals may cover their intended target well below the desired rate. Through simulation, we demonstrate that the Neyman-type intervals have good coverage properties when inference is being made about a sample or a population. In some cases the alternative intervals are much wider than necessary on average. Therefore, we recommend that researchers consider using our Neyman-type confidence intervals when carrying out inference about a sample or a population as it may provide them with more precise intervals that still cover at the desired rate.
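For readers unfamiliar with intervals in the Neyman tradition, the sketch below shows the textbook difference-in-means interval with Neyman's conservative variance estimate for a completely randomized comparison of two treatments. It is not claimed to be the exact Neyman-type interval proposed in this thesis, and the outcome values are made up for illustration.

```python
import numpy as np
from scipy import stats

def neyman_interval(y_treat, y_ctrl, conf=0.95):
    """Difference-in-means estimate with Neyman's conservative variance.

    Under complete randomization, s1^2/n1 + s0^2/n0 is a (weakly) conservative
    estimator of the variance of the estimated average treatment effect, so the
    resulting interval covers at or above the nominal rate.
    """
    y_treat, y_ctrl = np.asarray(y_treat, float), np.asarray(y_ctrl, float)
    n1, n0 = len(y_treat), len(y_ctrl)
    est = y_treat.mean() - y_ctrl.mean()
    se = np.sqrt(y_treat.var(ddof=1) / n1 + y_ctrl.var(ddof=1) / n0)
    z = stats.norm.ppf(0.5 + conf / 2)
    return est, (est - z * se, est + z * se)

# Illustrative outcomes from a small randomized comparison
est, (lo, hi) = neyman_interval([7.1, 6.4, 8.0, 7.7], [5.9, 6.2, 5.5, 6.8])
print(f"ATE estimate {est:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```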
4

Accuracy of Computer Simulations that use Common Pseudo-random Number Generators

Dusitsin, Krid, Kosbar, Kurt 10 1900 (has links)
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California / In computer simulations of communication systems, linear congruential generators and shift registers are typically used to model noise and data sources. These generators are often assumed to be close to ideal (i.e., delta correlated) and an insignificant source of error in the simulation results. The samples generated by these algorithms have non-ideal autocorrelation functions, which may cause a non-uniform distribution in the data or noise signals. This error may cause the simulated bit-error-rate (BER) to be artificially high or low. In this paper, the problem is described through the use of confidence intervals. Tests are performed on several pseudo-random generators to assess which ones are acceptable for computer simulation.
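A minimal sketch of the confidence-interval framing (not taken from the paper): estimate the BER of BPSK over AWGN by Monte Carlo and attach a normal-approximation binomial confidence interval. The interval is only as trustworthy as the noise samples feeding the simulation, which is the paper's point; here NumPy's default generator simply stands in for whatever generator is under test, and the Eb/N0 and bit count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ber(ebn0_db, n_bits=200_000):
    """Monte Carlo BER of BPSK over AWGN with a 95% binomial confidence interval."""
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                                 # BPSK mapping: 0 -> -1, 1 -> +1
    noise = rng.normal(scale=np.sqrt(1 / (2 * ebn0)), size=n_bits)
    decisions = (symbols + noise) > 0                      # hard decision at 0
    errors = np.count_nonzero(decisions != bits)
    ber = errors / n_bits
    half = 1.96 * np.sqrt(ber * (1 - ber) / n_bits)        # normal-approximation half-width
    return ber, (max(ber - half, 0.0), ber + half)

ber, (lo, hi) = simulate_ber(ebn0_db=6.0)
print(f"BER estimate {ber:.2e}, 95% CI ({lo:.2e}, {hi:.2e})")
```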
5

A Bayesian method to improve sampling in weapons testing

Floropoulos, Theodore C. 12 1900 (has links)
Approved for public release; distribution is unlimited / This thesis describes a Bayesian method to determine the number of samples needed to estimate a proportion or probability with 95% confidence when prior bounds are placed on that proportion. It uses the Uniform [a,b] distribution as the prior, and develops a computer program and tables to find the sample size. Tables and examples are also given to compare these results with other approaches for finding sample size. The improvement this method offers is fewer samples, and consequently lower cost in weapons testing, to meet a desired confidence level for a proportion or probability. / http://archive.org/details/bayesianmethodto00flor / Lieutenant Commander, Hellenic Navy
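The Bayesian mechanics can be sketched as follows, under the Uniform[a, b] prior stated in the abstract: the posterior for the proportion is a Beta density truncated to [a, b], from which a 95% interval can be read off numerically. The sample-size criterion itself is the thesis's contribution and is not reproduced here; the counts and prior bounds below are purely illustrative.

```python
import numpy as np
from scipy import stats

def truncated_posterior_interval(successes, n, a, b, conf=0.95, grid_size=10_001):
    """95% credible interval for a proportion under a Uniform[a, b] prior.

    The posterior is a Beta(successes + 1, n - successes + 1) density truncated
    to [a, b]; the interval endpoints are read off a grid-based posterior CDF.
    """
    p = np.linspace(a, b, grid_size)
    dens = stats.beta.pdf(p, successes + 1, n - successes + 1)
    cdf = np.cumsum(dens)
    cdf /= cdf[-1]
    lo = p[np.searchsorted(cdf, (1 - conf) / 2)]
    hi = p[np.searchsorted(cdf, 1 - (1 - conf) / 2)]
    return lo, hi

# Illustrative example: 18 successes in 25 trials, prior bounds 0.5 <= p <= 0.9
print(truncated_posterior_interval(18, 25, a=0.5, b=0.9))
```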
6

A Study of FM-Band Radio Wave Propagation Prediction Curves and the Broadcasting Service Criterion in Taiwan

Hsieh, Chi-Hsuan 15 June 2000 (has links)
The field strength prediction chart is a set of statistical curves obtained through the analysis of a large amount of field strength measurement data for a specific radio band in some area. It reflects the natural and artificial effects, such as geography, atmospheric conditions, and buildings, that affect radio wave propagation. One advantage is that the rough relationship between field strength and distance can be predicted easily, so simulation field measurements do not have to be performed for every radio planning exercise. With a prediction chart and a field strength interference/protection ratio standard, we can suggest a minimum distance separation criterion between co-channel and adjacent-channel broadcasting stations. It also provides a reference for the authority when examining broadcasting service applications. The FCC developed the F(50,50) charts and minimum separations between radio stations based on data collected in the U.S. Presently, the regulations concerning broadcasting applications in Taiwan still follow the FCC's suggestions. In general, the field strength distribution is affected by two main factors, geography and atmospheric conditions, which can differ from those in the U.S. With the acquisition of digital terrain data for Taiwan, the terrain profile for a given path can be generated. In this thesis, we use the Deygout model and the database of existing broadcasting stations to generate a field strength distribution database for each station, and analyze it to develop a prediction chart suitable for the propagation environment in Taiwan. Combined with the field strength interference/protection ratio standard, this provides a minimum distance separation criterion for co-channel and adjacent-channel FM-band broadcasting stations. Our study can help the authority achieve more effective spectrum management in the FM band.
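As a toy illustration of what a prediction curve encodes (this is not the Deygout model or the F(50,50) methodology used in the thesis), the sketch below fits a straight line in log-distance to hypothetical field-strength measurements and uses it to predict the median field strength at a new distance. All numbers are made up for illustration.

```python
import numpy as np

# Hypothetical median field-strength measurements (dBuV/m) at known distances (km);
# real prediction charts are built from large measurement campaigns, not seven points.
dist_km = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
field_db = np.array([95.0, 88.0, 79.0, 71.0, 62.0, 53.0, 44.0])

# Fit E(d) = E0 - 10*n*log10(d): a straight line in log-distance, the basic
# shape behind statistical field-strength-versus-distance prediction curves.
slope, intercept = np.polyfit(np.log10(dist_km), field_db, deg=1)

def predict_field(d_km):
    """Predicted median field strength (dBuV/m) at distance d_km."""
    return intercept + slope * np.log10(d_km)

print(f"Fitted path-loss exponent n = {-slope / 10:.2f}")
print(f"Predicted field strength at 30 km = {predict_field(30):.1f} dBuV/m")
```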
7

Empirical Likelihood Confidence Intervals for the Difference of Two Quantiles with Right Censoring

Yau, Crystal Cho Ying 21 November 2008 (has links)
In this thesis, we study two independent samples under right censoring. Using a smoothed empirical likelihood method, we investigate the difference of quantiles in the two samples and construct pointwise confidence intervals from it. The empirical log-likelihood ratio is proposed and its asymptotic limit is shown to be a chi-squared distribution. In simulation studies, we compare the empirical likelihood and normal approximation methods in terms of coverage accuracy and average length of confidence intervals, and conclude that the empirical likelihood method performs better. Finally, real clinical trial data are used to illustrate the efficacy of the method.
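The smoothed empirical likelihood method itself is beyond a short sketch, but the kind of comparator such methods are benchmarked against can be outlined: Kaplan-Meier quantile estimates under right censoring combined with a percentile bootstrap for the quantile difference. Everything below (the hand-rolled estimator, the simulated censoring setup, the bootstrap settings) is an illustrative assumption, not the thesis's procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def km_quantile(times, events, q=0.5):
    """Kaplan-Meier estimate of the q-th quantile under right censoring (assumes no tied times)."""
    order = np.argsort(times)
    t, d = np.asarray(times)[order], np.asarray(events)[order]
    at_risk = len(t) - np.arange(len(t))
    surv = np.cumprod(np.where(d == 1, 1 - 1 / at_risk, 1.0))
    below = np.nonzero(surv <= 1 - q)[0]
    return t[below[0]] if below.size else np.nan   # undefined if survival never drops far enough

def bootstrap_quantile_diff(t1, e1, t2, e2, q=0.5, reps=2000, conf=0.95):
    """Percentile-bootstrap CI for the difference of the q-th quantiles of two censored samples."""
    diffs = []
    for _ in range(reps):
        i = rng.integers(0, len(t1), len(t1))
        j = rng.integers(0, len(t2), len(t2))
        diff = km_quantile(t1[i], e1[i], q) - km_quantile(t2[j], e2[j], q)
        if not np.isnan(diff):
            diffs.append(diff)
    alpha = 100 * (1 - conf) / 2
    return tuple(np.percentile(diffs, [alpha, 100 - alpha]))

# Illustrative data: exponential event times with independent exponential censoring
ev1, cen1 = rng.exponential(10, 80), rng.exponential(25, 80)
ev2, cen2 = rng.exponential(14, 80), rng.exponential(25, 80)
t1, e1 = np.minimum(ev1, cen1), (ev1 <= cen1).astype(int)
t2, e2 = np.minimum(ev2, cen2), (ev2 <= cen2).astype(int)
print("95% bootstrap CI for median difference:", bootstrap_quantile_diff(t1, e1, t2, e2))
```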
8

A Review of Uncertainty Quantification of Estimation of Frequency Response Functions

Majba, Christopher 11 October 2012 (has links)
No description available.
9

Calculating confidence intervals for the cumulative incidence function while accounting for competing risks: comparing the Kalbfleisch-Prentice method and the Counting Process method

Iljon, Tzvia 10 1900 (has links)
Subjects enrolled in a clinical trial may experience a competing risk event which alters the risk of the primary event of interest. This differs from when subject information is censored, which is non-informative. In order to calculate the cumulative incidence function (CIF) for the event of interest, competing risks and censoring must be treated appropriately; otherwise estimates will be biased. There are two commonly used methods of calculating a confidence interval (CI) for the CIF for the event of interest which account for censoring and competing risk: the Kalbfleisch-Prentice (KP) method and the Counting Process (CP) method. The goal of this paper is to understand the variances associated with the two methods to improve our understanding of the CI. This will allow for appropriate estimation of the CIF CI for a single-arm cohort study that is currently being conducted. Previous work has failed to address this question because researchers typically focus on comparing two treatment arms using statistical tests that compare cause-specific hazard functions and do not require a CI for the CIF. The two methods were compared by calculating CIs for the CIF using data from a previous related study, using bootstrapping, and a simulation study with varying event rates and competing risk rates. The KP method usually estimated a larger CIF and variance than the CP method. When event rates were low (5%), the CP method is recommended as it yields more consistent results than the KP method. The CP method is recommended for the proposed study since event rates are expected to be moderate (5-10%). / Master of Science (MS)
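For context, the CIF point estimate that both variance methods are built around can be computed with the Aalen-Johansen formula, sketched below for untied event times. The KP and CP variance formulas compared in the thesis are not implemented here, and the toy data are illustrative.

```python
import numpy as np

def cumulative_incidence(times, status, cause=1):
    """Aalen-Johansen estimate of the cumulative incidence function (assumes no tied times).

    status: 0 = censored, 1 = event of interest, 2 = competing event.
    Returns the sorted times and the CIF for `cause` evaluated at those times.
    """
    order = np.argsort(times)
    t, s = np.asarray(times)[order], np.asarray(status)[order]
    at_risk = len(t) - np.arange(len(t))
    # All-cause Kaplan-Meier survival just *before* each time point
    surv_prev = np.concatenate(
        ([1.0], np.cumprod(np.where(s > 0, 1 - 1 / at_risk, 1.0))[:-1]))
    jumps = np.where(s == cause, surv_prev / at_risk, 0.0)
    return t, np.cumsum(jumps)

# Toy data: 0 = censored, 1 = event of interest, 2 = competing risk
t = np.array([2.0, 3.5, 4.1, 5.0, 6.3, 7.7, 8.2, 9.9])
s = np.array([1, 0, 2, 1, 1, 0, 2, 1])
times, cif = cumulative_incidence(t, s, cause=1)
print(np.column_stack([times, cif]))
```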
10

Confidence Intervals on Cost Estimates When Using a Feature-based Approach

Iacianci, Bryon C. January 2012 (has links)
No description available.
