21. Exact unconditional tests for 2x2 contingency tables / Suissa, Samy Salomon. January 1982
Thesis (Ph. D.)--University of Florida, 1982. / Description based on print version record. Typescript. Vita. Includes bibliographical references (leaves 98-100).
22. Hypothesis testing problems with the alternative restricted by a number of inequalities / Schaafsma, Willem. January 1966
Proefschrift--Groningen University. / Bibliography: p. 134.
23. Studies in inferential techniques for model building / Bailey, Steven Paul. January 1979
Thesis--University of Wisconsin--Madison. / Typescript. Description based on print version record. Includes bibliographical references.
24. Statistical depth functions and depth-based robustness diagnosis / Lok, Wing-sze. January 2005
Thesis (M. Phil.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.
25. An Investigation of False Discovery Rates in Multiple Testing under Dependence / Tao, Hui. January 2005
No description available.
26. Blessing of Dependence and Distribution-Freeness in Statistical Hypothesis Testing / Deb, Nabarun. January 2022
Statistical hypothesis testing is one of the most powerful and interpretable tools for arriving at real-world conclusions from empirical observations. In the classical set-up, the practitioner is given a sequence of 𝑛 independent and identically distributed observations, and the goal is to test the null hypothesis that they are drawn from a particular family of distributions, say 𝐹. This is achieved by constructing a test statistic 𝑇_n (a function of the observations) and rejecting the null hypothesis if 𝑇_n exceeds some resampling/permutation-based, or often asymptotic, threshold. In this thesis, we deviate from this standard framework in the following two ways:
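The classical resampling threshold described above can be illustrated with a minimal sketch. This is not the thesis's method; the function name and the difference-of-means statistic are illustrative assumptions chosen for concreteness:

```python
import random

def permutation_test(x, y, n_perm=2000, seed=0):
    """Two-sided permutation test for equality of means.

    Recomputes the statistic on shuffled group labels -- exactly the
    resampling cost that distribution-free tests later avoid -- and
    returns an approximate p-value."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # one resampled labeling of the pooled data
        xs, ys = pooled[:len(x)], pooled[len(x):]
        stat = abs(sum(xs) / len(xs) - sum(ys) / len(ys))
        if stat >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one correction for finite-sample validity
```

Note that the threshold here is implicit in the permutation distribution: the null is rejected when the p-value falls below the desired level.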
1. Often, in real-world applications, observations are not expected to be independent and identically distributed. This is particularly relevant for network data, where the dependence between observations is governed by an underlying graph. In Chapters 1 and 2, the focus is on a widely popular network-based model for binary outcome data, namely the Ising model, which has also attracted significant attention from the statistical physics community. We obtain precise estimates for the intractable normalizing constants in this model, which in turn enable us to establish new weak laws and fluctuation results exhibiting a certain sharp phase-transition behavior. From a testing viewpoint, we address a structured signal-detection problem in the context of Ising models. Our findings illustrate that the presence of network dependence can indeed be a blessing for inference. In particular, we show that at the sharp phase-transition point, it is possible to detect much weaker signals than when the data are drawn independently of one another.
2. Accepting or rejecting hypotheses using resampling-based or asymptotic thresholds can be unsatisfactory: the former requires recomputing the test statistic for every set of resampled observations, while the latter guarantees only asymptotic validity of the type I error. In Chapters 3 and 4, the goal is to do away with these shortcomings. We propose a general strategy for constructing exactly distribution-free tests for two celebrated nonparametric multivariate testing problems: (a) two-sample testing and (b) independence testing. Distribution-freeness ensures rejection thresholds that do not rely on resampling yet still yield exact finite-sample type I error guarantees. Our proposal relies on a notion of multivariate ranks constructed using the theory of optimal transport. These tests require no moment assumptions (making them attractive for heavy-tailed data) and are more robust to outliers. Under some structural assumptions, we also prove that these tests can be more efficient against a broad class of alternatives than other popular tests that are not distribution-free.
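The optimal-transport construction of multivariate ranks can be sketched in miniature: match the sample points to a fixed reference grid so that total transport cost is minimized, and read off each point's matched grid location as its multivariate rank. Everything here is an expository assumption rather than the thesis's implementation: the function name, the squared-distance cost, and the brute-force solver (practical versions use assignment algorithms such as the Hungarian method):

```python
import itertools

def ot_ranks(points, grid):
    """Empirical multivariate ranks via optimal transport (toy sketch).

    Assigns each 2-D sample point to one reference grid point so that the
    total squared Euclidean distance is minimized; the matched grid point
    plays the role of the sample point's multivariate rank.  Brute force
    over all permutations, so only feasible for tiny samples."""
    n = len(points)
    def cost(perm):
        return sum((px - gx) ** 2 + (py - gy) ** 2
                   for (px, py), (gx, gy) in zip(points, (grid[i] for i in perm)))
    best = min(itertools.permutations(range(n)), key=cost)
    return [grid[i] for i in best]
```

The distribution-free property comes from the fact that, under the null, the vector of matched grid points is a uniformly random permutation of the fixed grid regardless of the underlying sampling distribution, so rank-based test statistics have a null distribution that can be tabulated once and for all.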
From a mathematical standpoint, the proofs rely on Stein's method of exchangeable pairs for concentration inequalities and (non-)normal approximations, large-deviation and correlation-decay arguments, convex analysis, Le Cam's regularity theory, and change of measure via contiguity, to name a few.
27. The analysis of two-way cross-classified unbalanced data / Bartlett, Sheryl Anne. January 1980
No description available.
28. A new method of testing hypotheses in linear models. January 1996
by Tsz-Kit Keung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (leaf 81). / Contents:
Chapter 1 -- Introduction (p. 1)
Chapter 2 -- Testing Testable Hypotheses in Linear Models (p. 8)
  2.1 A General Theory (p. 9)
  2.2 The Method of Peixoto (p. 17)
  2.3 The Method of Chan and Li (p. 23)
Chapter 3 -- A New Method of Obtaining Equivalent Hypotheses (p. 32)
Chapter 4 -- Constrained Linear Models (p. 44)
  4.1 Hypothesis Testing in Constrained Linear Models (p. 44)
  4.2 Linear Models with Missing Observations (p. 50)
Chapter 5 -- Conclusions (p. 71)
Appendix (p. 74)
References (p. 81)
29. A comparison of the power of the Wilcoxon test to that of the t-test under Lehmann's alternatives / Hwang, Chern-Hwang. January 2010
Typescript (photocopy). / Digitized by Kansas Correctional Industries
30. Sharpening the Boundaries of the Sequential Probability Ratio Test / Krantz, Elizabeth. 01 May 2012
In this thesis, we present an introduction to Wald's Sequential Probability Ratio Test (SPRT) for binary outcomes. Previous researchers have investigated modifications of the stopping boundaries that reduce the expected sample size of the test. In this research, we investigate ways to improve these boundaries further. For a given maximum allowable sample size, we develop a method intended to generate all possible sets of boundaries, and we then find the set that minimizes the maximum expected sample size while still preserving the nominal error rates. Once these optimal boundaries have been identified, we present the results of simulation studies conducted on them, analyzing both the expected number of observations and the variability in the sample size required to reach a decision.
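The classical SPRT that serves as the baseline here can be sketched as follows. This shows only Wald's approximate boundaries, not the sharpened boundaries developed in the thesis, and the function name is a hypothetical chosen for illustration:

```python
import math

def sprt_binary(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on a Bernoulli stream.

    Uses Wald's approximate stopping boundaries on the log-likelihood
    ratio; returns ('H0' | 'H1' | 'continue', number of observations used)."""
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood-ratio increment for one Bernoulli observation
        llr += math.log(p1 / p0) if x == 1 else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", len(samples)
```

Because Wald's boundaries are open-ended, the test can in principle run indefinitely; imposing a maximum allowable sample size and searching over truncated boundary sets, as the thesis does, is what makes the expected-sample-size optimization well posed.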