31

DETERMINING A SUFFICIENT LEVEL OF INTER-RATER RELIABILITY (POWER ANALYSIS, MISCLASSIFICATION, SAMPLE SIZE)

Unknown Date (has links)
The reliability of a test or measurement procedure is, generally speaking, an index of the consistency of its results. Inter-rater reliability assesses the consistency of judgements among a set of raters. We model the observation taken on a subject by an unreliable procedure as the sum of a true score with mean $\mu$ and variance $\sigma_T^2$ and an error term with mean 0 and variance $\sigma_E^2$. The reliability coefficient is then $\rho = \sigma_T^2/(\sigma_T^2 + \sigma_E^2)$. / The reliability of an instrument or rating procedure is generally evaluated in an initial experiment (or series of experiments) known as a "reliability study." Once an instrument is established as having some degree of reliability, it is then used as a measurement tool in subsequent research, known as "decision studies." / An unreliable procedure measures imperfectly. The impact of the error in measurement is investigated as it relates to three broad areas of statistical procedures: estimation, hypothesis testing, and decision-making. / An unreliable measurement decreases the precision of estimates. The effect of an unreliable measurement on the width of a confidence interval for the population mean is examined. Also, an expression is developed to facilitate estimation of the reliability of a test or measurement in a decision study when the populations of interest may differ from those in the reliability study. / An unreliable instrument weakens hypothesis tests. The extent to which lack of reliability attenuates the power of the two-sample t-test, the F-test in the analysis of variance, and the t-test for statistically significant correlation between two variables is investigated. / An unreliable measurement engenders false classifications. A dichotomous decision is considered, and expressions for the probability of misclassifying a subject by a rating procedure with a given reliability are developed. Overall as well as directional misclassification rates are found under the model of true scores and errors distributed as independent normals. Effects of departures from this model, by heavy-tailed and skewed true score and error distributions, and by errors whose variance is a function of the true score, are considered. A general expression for this misclassification probability is found. A confidence interval for the misclassification probability is developed. / These results provide tools that help a researcher make better decisions concerning the design of an experiment. They permit the costs of increased reliability to be more knowledgeably compared with the consequences of using an unreliable measurement procedure in a given situation. / Source: Dissertation Abstracts International, Volume: 45-04, Section: B, page: 1232. / Thesis (Ph.D.)--The Florida State University, 1984.
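As a rough illustration of the two quantities at the heart of this work, the sketch below computes the reliability coefficient and a Monte Carlo estimate of the misclassification probability under the independent-normal true-score/error model; the function names and numerical values are illustrative only, not taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def reliability(sigma_T2, sigma_E2):
    """Reliability coefficient rho = sigma_T^2 / (sigma_T^2 + sigma_E^2)."""
    return sigma_T2 / (sigma_T2 + sigma_E2)

def misclassification_rate(mu, sigma_T2, sigma_E2, cutoff, n=1_000_000):
    """Monte Carlo estimate of the probability that a subject falls on
    different sides of a cutoff according to the true score T and the
    observed score X = T + E, with T and E independent normals."""
    T = rng.normal(mu, np.sqrt(sigma_T2), size=n)
    E = rng.normal(0.0, np.sqrt(sigma_E2), size=n)
    X = T + E
    return float(np.mean((T >= cutoff) != (X >= cutoff)))

rho = reliability(sigma_T2=1.0, sigma_E2=0.25)                 # rho = 0.8
p_mis = misclassification_rate(mu=0.0, sigma_T2=1.0,
                               sigma_E2=0.25, cutoff=1.0)
print("reliability:", round(rho, 2), " misclassification rate:", round(p_mis, 3))
```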
32

Generalized Pearson-Fisher chi-square goodness of fit tests, with applications to models with life history data

Unknown Date (has links)
Suppose that $X_1,\ldots,X_n$ are i.i.d. $\sim F$, and we wish to test the null hypothesis that $F$ is a member of the parametric family ${\cal F}=\{F_\theta(x);\ \theta\in\Theta\}$ where $\Theta\subset\mathbb{R}^q$. The classical Pearson-Fisher chi-square test involves partitioning the real axis into $k$ cells $I_1,\ldots,I_k$ and forming the chi-square statistic $X^2=\sum_{i=1}^{k}(O_i - nF_{\hat\theta}(I_i))^2/nF_{\hat\theta}(I_i)$, where $O_i$ is the number of observations falling into cell $i$ and $\hat\theta$ is the value of $\theta$ minimizing $\sum_{i=1}^{k}(O_i - nF_\theta(I_i))^2/nF_\theta(I_i)$. We obtain a generalization of this test to any situation for which there is available a nonparametric estimator $\hat F$ of $F$ for which $n^{1/2}(\hat F - F)\stackrel{d}{\to}W$, where $W$ is a continuous zero-mean Gaussian process satisfying a mild regularity condition. We allow the cells to be data dependent. Essentially, we estimate $\theta$ by the value $\hat\theta$ that minimizes a "distance" between the vectors $(\hat F(I_1),\ldots,\hat F(I_k))$ and $(F_\theta(I_1),\ldots,F_\theta(I_k))$, where distance is measured through an arbitrary positive definite quadratic form, and then form a chi-square type test statistic based on the difference between $(\hat F(I_1),\ldots,\hat F(I_k))$ and $(F_{\hat\theta}(I_1),\ldots,F_{\hat\theta}(I_k))$. We prove that this test statistic has asymptotically a chi-square distribution with $k-q-1$ degrees of freedom, and point out some errors in the literature on chi-square tests in survival analysis. Our procedure is very general and applies to a number of well-known models in survival analysis, such as right censoring and left truncation. We apply our method to deal with questions of model selection in the problem of estimating the distribution of the length of the incubation period of the AIDS virus using the CDC's data on blood-transfusion related AIDS. Our analysis suggests some models that seem to fit better than those used in the literature. / Source: Dissertation Abstracts International, Volume: 53-07, Section: B, page: 3576. / Major Professor: Hani Doss. / Thesis (Ph.D.)--The Florida State University, 1992.
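The classical Pearson-Fisher construction described above can be sketched directly; the code below is a minimal illustration using a normal location family and minimum-chi-square estimation of $\theta$, not the generalized test developed in the dissertation (which replaces the cell counts with a general nonparametric estimator $\hat F$).

```python
import numpy as np
from scipy import stats, optimize

def pearson_fisher_chi2(x, cells, cdf, theta0):
    """Classical Pearson-Fisher chi-square test of H0: F is in {F_theta}.
    `cells` are cut points partitioning the real line into k intervals;
    `cdf(edges, theta)` is the parametric CDF; theta0 is a starting value."""
    n = len(x)
    edges = np.concatenate(([-np.inf], cells, [np.inf]))
    observed, _ = np.histogram(x, bins=edges)          # O_i
    k, q = len(observed), np.size(theta0)

    def chi2(theta):
        expected = n * np.diff(cdf(edges, theta))      # n * F_theta(I_i)
        return np.sum((observed - expected) ** 2 / expected)

    res = optimize.minimize(chi2, theta0, method="Nelder-Mead")
    x2 = res.fun                                       # statistic at the minimum-chi-square theta-hat
    pval = stats.chi2.sf(x2, df=k - q - 1)
    return x2, pval

# Example: test a normal(theta, 1) model on simulated data
rng = np.random.default_rng(1)
data = rng.normal(0.3, 1.0, size=200)
x2, p = pearson_fisher_chi2(data, cells=np.linspace(-2, 2, 7),
                            cdf=lambda e, th: stats.norm.cdf(e, loc=th, scale=1.0),
                            theta0=np.array([0.0]))
print(x2, p)
```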
33

Individual Patient-Level Data Meta-Analysis: A Comparison of Methods for the Diverse Populations Collaboration Data Set

Unknown Date (has links)
DerSimonian and Laird define meta-analysis as "the statistical analysis of a collection of analytic results for the purpose of integrating their findings." One alternative to classical meta-analytic approaches is known as Individual Patient-Level Data, or IPD, meta-analysis. Rather than depending on summary statistics calculated for individual studies, IPD meta-analysis analyzes the complete data from all included studies. Two potential approaches to incorporating IPD data into the meta-analytic framework are investigated. A two-stage analysis is first conducted, in which individual models are fit for each study and summarized using classical meta-analysis procedures. Secondly, a one-stage approach that singularly models the data and summarizes the information across studies is investigated. Data from the Diverse Populations Collaboration (DPC) data set are used to investigate the differences between these two methods in a specific example. The bootstrap procedure is used to determine if the two methods produce statistically different results in the DPC example. Finally, a simulation study is conducted to investigate the accuracy of each method in given scenarios. / A Dissertation submitted to the Department of Statistics in partial fulfillment of the requirements for the degree of Ph.D. / Degree Awarded: Spring Semester, 2011. / Date of Defense: December 2, 2010. / Individual Patient-level Data, IPD meta-analysis, Meta-analysis / Includes bibliographical references. / Daniel McGee, Professor Directing Dissertation; Betsy Becker, University Representative; Xufeng Niu, Committee Member; Jinfeng Zhang, Committee Member.
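A minimal sketch of the two-stage approach, assuming a simple linear model within each study and DerSimonian-Laird random-effects pooling of the study-level slopes; the data and model are illustrative and are not the DPC analysis.

```python
import numpy as np

def two_stage_ipd_meta(studies):
    """Two-stage IPD meta-analysis sketch: fit a simple linear regression
    within each study, then pool the study-level slopes with a
    DerSimonian-Laird random-effects model.
    `studies` is a list of (x, y) arrays of individual patient data."""
    betas, variances = [], []
    for x, y in studies:
        X = np.column_stack([np.ones_like(x), x])
        coef, res_ss, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = res_ss[0] / (len(y) - 2)
        cov = sigma2 * np.linalg.inv(X.T @ X)
        betas.append(coef[1])
        variances.append(cov[1, 1])
    b, v = np.array(betas), np.array(variances)

    w = 1.0 / v                                        # fixed-effect weights
    q = np.sum(w * (b - np.sum(w * b) / np.sum(w)) ** 2)
    tau2 = max(0.0, (q - (len(b) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

    w_re = 1.0 / (v + tau2)                            # random-effects weights
    pooled = np.sum(w_re * b) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

rng = np.random.default_rng(2)
studies = []
for n in (80, 120, 60):
    x = rng.normal(size=n)
    studies.append((x, 0.5 * x + rng.normal(size=n)))
print(two_stage_ipd_meta(studies))
```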
34

RANKING AND SELECTION PROCEDURES FOR EXPONENTIAL POPULATIONS WITH CENSORED OBSERVATIONS

Unknown Date (has links)
Let $\Pi_1, \Pi_2, \ldots, \Pi_k$ be $k$ exponential populations. The problem of the ranking and selection for these $k$ populations is formulated in order to accommodate censored observations. The data under study are assumed to be generated from three types of censoring mechanisms--Type-I, Type-II and random censoring. / Let $X_{i(1)}$ be the minimum order statistic in the sample of size $n$ from the population $\Pi_i$, $i = 1, 2, \ldots, k$. A selection procedure for selecting the largest location parameter, $\lambda_{(k)}$, under Type-I censoring is defined in terms of a set of minima $Y_i = \min(X_{i(1)}, T)$, $i = 1, 2, \ldots, k$, where $T$ is a fixed time. A procedure with respect to the largest location parameter under Type-II censoring is proposed based on $X_{i(1)}$. These two procedures are shown to be asymptotically equivalent. / The ranking and selection for scale parameters based on Type-II censored data are investigated under two formulations, i.e., Bechhofer's indifference zone approach and Gupta's subset selection approach. The selection rule proposed under Gupta's formulation is the same as the procedure studied independently by Huang and Huang (1980, Proc. of Conference on Recent Developments in Statistical Methods and Applications. Academia Sinica, Taipei, Taiwan). It is noted that this procedure is equivalent to the procedure investigated by Gupta (1963, Ann. Inst. Statist. Math. 14, 199-216) for gamma populations with the complete data. / The scale parameter problem, subjected to Type-I censoring, is also examined. We introduce the idea of using the total time on test (TTOT) statistic as the selection statistic. The exact distribution of the TTOT statistic is found and several properties of the selection rule proposed for Type-I censored data are discussed. / Finally, the selection problem under random censorship is studied. The maximum likelihood estimate (MLE) $T_i$ of the scale parameter $\theta_i$ is obtained from the randomly censored data. A selection procedure is proposed based on $T_i$, $i = 1, 2, \ldots, k$. Under certain conditions we show that Gupta's (1963) constants can be used in the rule proposed under the random censoring model. The bound on the $P^*$ probability below which the procedure is well-defined is given. / Source: Dissertation Abstracts International, Volume: 43-07, Section: B, page: 2260. / Thesis (Ph.D.)--The Florida State University, 1982.
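A toy sketch of the censored statistics mentioned above: the Type-I censored minima $Y_i = \min(X_{i(1)}, T)$ used for location selection, and the total time on test statistic. The selection constants and probability guarantees derived in the dissertation are not reproduced here; the rule shown simply picks the largest $Y_i$.

```python
import numpy as np

rng = np.random.default_rng(3)

def select_largest_location(samples, T):
    """Toy selection rule for the largest location parameter under Type-I
    censoring: compute Y_i = min(X_i(1), T) for each population and select
    the population with the largest Y_i."""
    y = np.array([min(np.min(x), T) for x in samples])
    return int(np.argmax(y)), y

def total_time_on_test(sample, T):
    """Total time on test (TTOT) statistic for one Type-I censored sample:
    each observation contributes min(X_ij, T)."""
    return float(np.sum(np.minimum(sample, T)))

# Three exponential populations differing in their location (shift) parameter
locations = [0.0, 0.5, 1.0]
samples = [loc + rng.exponential(scale=1.0, size=20) for loc in locations]
best, y = select_largest_location(samples, T=3.0)
print("selected population:", best, "  Y_i values:", np.round(y, 3))
print("TTOT for population 1:", round(total_time_on_test(samples[0], T=3.0), 3))
```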
35

Minimax Tests for Nonparametric Alternatives with Applications to High Frequency Data

Unknown Date (has links)
We present a general methodology for developing asymptotically distribution-free, asymptotically minimax tests. The tests are constructed via a nonparametric density-quantile function and the limiting distribution is derived by a martingale approach. The procedure can be viewed as a novel nonparametric extension of the classical parametric likelihood ratio test. The proposed tests are shown to be omnibus within an extremely large class of nonparametric global alternatives characterized by simple conditions. Furthermore, we establish that the proposed tests provide better minimax distinguishability. The tests have much greater power for detecting high-frequency nonparametric alternatives than existing classical tests such as the Kolmogorov-Smirnov and Cramer-von Mises tests. The good performance of the proposed tests is demonstrated by Monte Carlo simulations and applications in High Energy Physics. / A Dissertation submitted to the Department of Statistics in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Degree Awarded: Summer Semester, 2006. / Date of Defense: April 24, 2006. / Nonparametric Alternatives, Nonparametric Likelihood Ratio, Minimaxity, Kullback-Leibler / Includes bibliographical references. / Kai-Sheng Song, Professor Directing Dissertation; Jack Quine, Professor, Outside Committee Member; Fred Huffer, Professor, Committee Member; Dan McGee, Professor, Committee Member.
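To make the notion of a high-frequency alternative concrete, the sketch below simulates data from a density $1 + \varepsilon\cos(2\pi jx)$ on $[0,1]$ and estimates the power of the classical Kolmogorov-Smirnov and Cramer-von Mises tests against it; the proposed density-quantile tests are not implemented here, and all settings are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def sample_high_frequency_alt(n, j=8, eps=0.5):
    """Draw from f(x) = 1 + eps*cos(2*pi*j*x) on [0, 1] -- a 'high-frequency'
    departure from the uniform -- by rejection sampling."""
    out = []
    while len(out) < n:
        x = rng.uniform(0, 1, size=2 * n)
        u = rng.uniform(0, 1 + eps, size=2 * n)
        out.extend(x[u <= 1 + eps * np.cos(2 * np.pi * j * x)])
    return np.array(out[:n])

def power(test, n=200, reps=500, alpha=0.05):
    """Monte Carlo power of a goodness-of-fit test of uniformity."""
    rejections = 0
    for _ in range(reps):
        rejections += test(sample_high_frequency_alt(n)) < alpha
    return rejections / reps

ks_pvalue = lambda x: stats.kstest(x, "uniform").pvalue
cvm_pvalue = lambda x: stats.cramervonmises(x, "uniform").pvalue
print("KS power:", power(ks_pvalue), " CvM power:", power(cvm_pvalue))
```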
36

PIECEWISE GEOMETRIC ESTIMATION OF A SURVIVAL FUNCTION AND SOME RESULTS IN TOTAL POSITIVITY ORDERINGS (NONPARAMETRIC, PERCENTILE, CONFIDENCE INTERVALS)

Unknown Date (has links)
In the first topic we describe a procedure that uses incomplete data to estimate failure rate and survival functions. Although the procedure is designed for discrete distributions, it applies in the continuous case also. / The procedure is based on the assumption of a piecewise constant failure rate. The resultant survival function estimator is a piecewise geometric function, denoted the Piecewise Geometric Estimator (PEGE). It is the discrete version of the piecewise exponential estimators proposed independently by Kitchin, Langberg and Proschan (1983) and Whittemore and Keller (1983), and it generalizes Umholtz's (1984) estimator designed for complete Exponential data. / The PEGE is attractive to users because it is computationally simple and realistic in that it decreases at every possible failure time: it therefore not only has the appearance of a survival function, but also provides realistic estimates of the failure rate function and the percentiles of the underlying distribution. The widely used Kaplan-Meier estimator (KME), being a step function, is not suited to estimating these quantities. / The PEGE is consistent and asymptotically normal under conditions more general than those of the standard model of random censorship. Although the PEGE and the KME are asymptotically equivalent, simulation studies show that in small samples the PEGE compares favourably with the KME in terms of efficiency but not in terms of bias. (Since the two estimators generally interlace, the PEGE's bias is not a disadvantage in practice). A variant of the PEGE is less biased than it and is even more efficient. The geometric percentile estimators perform better than do the Kaplan-Meier counterparts in terms of both bias and efficiency. A pilot study indicates that the small sample behaviour of bootstrap confidence interval procedures for both survival probabilities and percentiles is considerably improved when a geometric estimator is used instead of the KME. / In the second topic we study preservation of total positivity orderings under integration in general, and, more specifically, under convolution, mixing and the formation of coherent systems. / Source: Dissertation Abstracts International, Volume: 47-02, Section: B, page: 0689. / Thesis (Ph.D.)--The Florida State University, 1985.
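One plausible construction of a piecewise-geometric survival estimate from discrete right-censored data is sketched below: a constant failure rate is estimated on each grid interval and the survival curve decreases geometrically across it. The dissertation's PEGE may differ in how intervals and rates are chosen; this is an assumed form for illustration only.

```python
import numpy as np

def piecewise_geometric_survival(times, events, grid):
    """Sketch of a piecewise-geometric survival estimator for discrete,
    right-censored data: within each grid interval the failure rate is
    taken to be constant (failures per unit of time at risk), so the
    estimated survival curve decreases geometrically across the interval."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events)
    surv, s = [], 1.0
    for lo, hi in zip(grid[:-1], grid[1:]):
        person_time = np.sum(np.clip(times, lo, hi) - lo)   # time at risk in [lo, hi)
        failures = np.sum((events == 1) & (times >= lo) & (times < hi))
        rate = failures / person_time if person_time > 0 else 0.0
        s *= (1.0 - rate) ** (hi - lo)    # geometric decrease; assumes rate < 1 per time unit
        surv.append(s)
    return np.array(surv)

rng = np.random.default_rng(5)
true_t = rng.geometric(p=0.10, size=100)         # discrete failure times
cens_t = rng.geometric(p=0.05, size=100)         # discrete censoring times
obs = np.minimum(true_t, cens_t)
evt = (true_t <= cens_t).astype(int)
grid = np.arange(0, obs.max() + 2, 2)            # constant rate on width-2 bins
print(np.round(piecewise_geometric_survival(obs, evt, grid), 3))
```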
37

THE COMPARISON OF SENSITIVITIES OF EXPERIMENTS (MAXIMUM LIKELIHOOD, RANDOM, FIXED, ANALYSIS OF VARIANCE)

Unknown Date (has links)
The sensitivity of a measurement technique is defined to be its ability to detect differences among the treatments in a fixed effects design, or the presence of a between treatments component of variance in a random effects design. Consider an experiment, consisting of two identical subexperiments, designed specifically for the purpose of comparing two measurement techniques. It is assumed that the techniques of analysis of variance are applicable in analyzing the data obtained from the two measurement techniques. The subexperiments may have either fixed or random treatment effects in either one-way or general block designs. It is assumed that the experiment yields bivariate observations from the two measurement methods which may or may not be independent. Likelihood ratio tests are used in the various settings of this dissertation to both extend current techniques and provide alternative methods for comparing the sensitivities of experiments. / Source: Dissertation Abstracts International, Volume: 46-09, Section: B, page: 3117. / Thesis (Ph.D.)--The Florida State University, 1985.
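As a bare-bones illustration of what "sensitivity" means here (not the likelihood ratio tests developed in the dissertation), the sketch below simulates one experiment measured by two correlated techniques with different error variances and compares the one-way ANOVA F statistics each technique yields.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# One-way fixed-effects layout: t treatments, r replicates per treatment,
# the same experimental material measured by two techniques whose errors differ.
t, r = 4, 10
treatment_means = np.array([0.0, 0.3, 0.6, 0.9])
true_resp = np.repeat(treatment_means, r) + rng.normal(0, 0.5, t * r)
tech_A = true_resp + rng.normal(0, 0.2, t * r)   # smaller measurement error
tech_B = true_resp + rng.normal(0, 0.8, t * r)   # larger measurement error

def anova_F(y):
    """One-way ANOVA F statistic for detecting treatment differences."""
    groups = [y[i * r:(i + 1) * r] for i in range(t)]
    return stats.f_oneway(*groups).statistic

print("F with technique A:", round(anova_F(tech_A), 2),
      "  F with technique B:", round(anova_F(tech_B), 2))
```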
38

NONPARAMETRIC TESTS FOR BIASED COIN DESIGNS (RANDOMIZATION)

Unknown Date (has links)
Consider a clinical trial where treatments A and B are assigned to $n$ patients via Efron's (1971) biased coin design. Randomization tests of the null hypothesis $H_0$ of no treatment difference are studied. We derive a recursion procedure for obtaining the exact randomization distribution for each member of a class of test statistics. This enables one to perform exact tests of $H_0$. In accordance with Cox's (1982) suggestion, the randomization distributions of the test statistics are conditional on the terminal imbalance of the treatment allocation. Letting $T_1, \ldots, T_n$ be the treatment assignment variables, with $T_i = 1$ if the $i$th patient receives treatment B and $T_i = 0$ if the $i$th patient receives treatment A, conditional distributional properties of these variables are obtained. Recursive procedures for computing the conditional exact and approximate moments of $T_1, \ldots, T_n$ are also derived. Based on these results, test statistics are proposed for use in the randomization tests when the sample size is large. The adequacy of the normal approximations to the conditional randomization distributions of these statistics is ascertained via a computer simulation. / Methods for constructing asymptotic simultaneous confidence bands for the survival function under the proportional hazards model of random right-censorship are developed. These bands are based on the maximum likelihood estimator (MLE) of the survival function, rather than the well-known product limit estimator (PLE). In the case where the censoring parameter, denoted by $\beta$, is known the bands are asymptotically exact, while when $\beta$ is unknown the bands are asymptotically conservative. For the $\beta$ unknown case, the proposed bands are shown to be narrower than those proposed by Cheng and Chang (1985). The idea of Csorgo and Horvath (1986) of mixing bands is also employed to obtain even narrower bands. As one would expect, under the more structured model, the PLE-based band of Gillespie and Fisher (1979) is shown to be inferior to the MLE-based bands, and this inferiority is more marked as the degree of censoring increases. / Source: Dissertation Abstracts International, Volume: 47-08, Section: B, page: 3423. / Thesis (Ph.D.)--The Florida State University, 1986.
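A Monte Carlo sketch of Efron's biased coin design and of a randomization test conditioned on terminal imbalance; the dissertation derives exact recursions for this distribution, whereas the code below simply simulates assignment sequences, using the customary $p = 2/3$ coin and a difference-in-means statistic as illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)

def efron_bcd(n, p=2/3):
    """Efron's (1971) biased coin design: assign the under-represented
    treatment with probability p; toss a fair coin when balanced.
    Returns 0/1 assignment indicators (1 = treatment B)."""
    t = np.empty(n, dtype=int)
    for i in range(n):
        imbalance = 2 * t[:i].sum() - i          # (#B - #A) so far
        if imbalance == 0:
            prob_B = 0.5
        elif imbalance < 0:
            prob_B = p                           # B is behind
        else:
            prob_B = 1 - p                       # A is behind
        t[i] = rng.random() < prob_B
    return t

def randomization_pvalue(y, t_obs, p=2/3, reps=2000):
    """Monte Carlo randomization test of H0: no treatment difference,
    conditioning on the terminal imbalance by keeping only simulated
    assignment sequences with the same final number of B's."""
    stat = lambda t: y[t == 1].mean() - y[t == 0].mean()
    observed, target = stat(t_obs), t_obs.sum()
    null_stats = []
    while len(null_stats) < reps:
        t = efron_bcd(len(y), p)
        if t.sum() == target:                    # condition on terminal imbalance
            null_stats.append(stat(t))
    null_stats = np.abs(np.array(null_stats))
    return float(np.mean(null_stats >= abs(observed)))

t_obs = efron_bcd(40)
y = rng.normal(0, 1, 40) + 0.8 * t_obs           # responses with a real treatment effect
print("randomization p-value:", randomization_pvalue(y, t_obs))
```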
39

Testing for the Equality of Two Distributions on High Dimensional Object Spaces and Nonparametric Inference for Location Parameters

Unknown Date (has links)
Our view is that while some of the basic principles of data analysis are going to remain unchanged, others are to be gradually replaced with geometry and topology methods. Linear methods still make sense for functional data analysis, or in the context of tangent bundles of object spaces. Complex nonstandard data is represented on object spaces. An object space admitting a manifold stratification may be embedded in a Euclidean space. One defines the extrinsic energy distance associated with two probability measures on an arbitrary object space embedded in a numerical space, and one introduces an extrinsic energy statistic to test for homogeneity of distributions of two random objects (r.o.'s) on such an object space. This test is validated via a simulation example on the Kendall space of planar k-ads with a Veronese-Whitney (VW) embedding. One considers an application to medical imaging, to test for the homogeneity of the distributions of Kendall shapes of the midsections of the corpus callosum in a clinically normal population vs. a population of ADHD-diagnosed individuals. Surprisingly, due to the high dimensionality, these distributions are not significantly different, although they are known to have highly significant VW-means. New spread and location parameters are to be added to reflect the nontrivial topology of certain object spaces. TDA is going to be adapted to object spaces, and hypothesis testing for distributions is going to be based on extrinsic energy methods. For a random point on an object space embedded in a Euclidean space, the mean vector cannot be represented as a point on that space, except for the case when the embedded space is convex. To address this shortcoming, since the mean vector is the minimizer of the expected square distance, following Frechet (1948), on an embedded compact object space one may consider both minimizers and maximizers of the expected square distance to a given point on the embedded object space as the mean, respectively the anti-mean, of the random point. Of all distances on an object space, one considers here the chord distance associated with the embedding of the object space, since for such distances one can give a necessary and sufficient condition for the existence of a unique Frechet mean (respectively Frechet anti-mean). For such distributions these location parameters are called the extrinsic mean (respectively the extrinsic anti-mean), and the corresponding sample statistics are consistent estimators of their population counterparts. Moreover, around the extrinsic mean (anti-mean) located at a smooth point, one derives the limit distribution of such estimators. / A Dissertation submitted to the Department of Statistics in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester 2017. / June 14, 2017. / Includes bibliographical references. / Vic Patrangenaru, Professor Directing Dissertation; Washington Mio, University Representative; Adrian Barbu, Committee Member; Jonathan Bradley, Committee Member.
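A minimal sketch of a two-sample energy statistic with permutation calibration for points that have already been embedded in a Euclidean space; plain Gaussian points in $\mathbb{R}^4$ stand in for embedded shape data here, and the permutation calibration is an assumption of the sketch rather than the asymptotic theory developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(8)

def energy_statistic(x, y):
    """Two-sample energy statistic for points embedded in a Euclidean space
    (rows = observations): (mn/(m+n)) * (2*mean||X-Y|| - mean||X-X'|| - mean||Y-Y'||)."""
    def mean_dist(a, b):
        return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).mean()
    m, n = len(x), len(y)
    e = 2 * mean_dist(x, y) - mean_dist(x, x) - mean_dist(y, y)
    return m * n / (m + n) * e

def permutation_pvalue(x, y, reps=1000):
    """Permutation test of the equality of the two underlying distributions."""
    obs = energy_statistic(x, y)
    z, m = np.vstack([x, y]), len(x)
    count = 0
    for _ in range(reps):
        perm = rng.permutation(len(z))
        count += energy_statistic(z[perm[:m]], z[perm[m:]]) >= obs
    return count / reps

# Toy example: Gaussian points in R^4 stand in for embedded shape data
x = rng.normal(0.0, 1.0, size=(30, 4))
y = rng.normal(0.3, 1.0, size=(30, 4))
print("p-value:", permutation_pvalue(x, y))
```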
40

Regression Methods for Skewed and Heteroscedastic Response with High-Dimensional Covariates

Unknown Date (has links)
The rise of studies with high-dimensional potential covariates has invited a renewed interest in dimension reduction that promotes more parsimonious models, ease of interpretation, and computational tractability. However, current variable selection methods restricted to continuous response often assume Gaussian response for methodological as well as theoretical developments. In this thesis, we consider regression models that induce sparsity, gain prediction power, and accommodate response distributions beyond a Gaussian with common variance. The first part of this thesis is a transform-both-sides Bayesian variable selection model (TBS) which allows skewness, heteroscedasticity and extremely heavy-tailed responses. Our method develops a framework which facilitates computationally feasible inference in spite of inducing non-local priors on the original regression coefficients. Even though the transformed conditional mean is no longer linear with respect to covariates, we still prove the consistency of our Bayesian TBS estimators. Simulation studies and real data analysis demonstrate the advantages of our methods. Another main part of this thesis addresses the above challenges from a frequentist standpoint. This model incorporates a penalized likelihood to accommodate skewed response arising from an epsilon-skew-normal (ESN) distribution. With suitable optimization techniques to handle this two-piece penalized likelihood, our method demonstrates substantial gains in sensitivity and specificity even under high-dimensional settings. We conclude this thesis with a novel Bayesian semi-parametric modal regression method along with its implementation and simulation studies. / A Dissertation submitted to the Department of Statistics in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester 2017. / June 9, 2017. / Includes bibliographical references. / Debajyoti Sinha, Professor Directing Dissertation; Miles Taylor, University Representative; Debdeep Pati, Committee Member; Yiyuan She, Committee Member; Yun Yang, Committee Member.
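A sketch of the kind of two-piece penalized objective described above: an epsilon-skew-normal (Mudholkar-Hutson form) log-likelihood for the residuals with a lasso penalty on the coefficients, minimized with a generic derivative-free optimizer. The parameterization, penalty weight, and optimizer are illustrative assumptions, not the dissertation's estimator.

```python
import numpy as np
from scipy import optimize, stats

def esn_logpdf(r, sigma, eps):
    """Log-density of a mean-zero epsilon-skew-normal residual
    (Mudholkar-Hutson form): scale sigma*(1+eps) below 0, sigma*(1-eps) above."""
    scale = np.where(r < 0, sigma * (1 + eps), sigma * (1 - eps))
    return stats.norm.logpdf(r / scale) - np.log(sigma)

def penalized_negloglik(params, X, y, lam):
    """Two-piece (ESN) negative log-likelihood with a lasso penalty on the
    regression coefficients -- an illustrative objective only."""
    p = X.shape[1]
    beta = params[:p]
    sigma = np.exp(params[p])          # keep sigma positive
    eps = np.tanh(params[p + 1])       # keep eps in (-1, 1)
    r = y - X @ beta
    return -np.sum(esn_logpdf(r, sigma, eps)) + lam * np.sum(np.abs(beta))

rng = np.random.default_rng(9)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:2] = [1.5, -1.0]                              # sparse truth
y = X @ beta_true + rng.gamma(2.0, 1.0, size=n)          # right-skewed errors

fit = optimize.minimize(penalized_negloglik, np.zeros(p + 2),
                        args=(X, y, 5.0), method="Powell")
print(np.round(fit.x[:p], 2))
```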
