1

Empirical Likelihood Inference for Two-Sample Problems

Yan, Ying January 2010
In this thesis, we are interested in empirical likelihood (EL) methods for two-sample problems, with a focus on the difference of the two population means. A weighted empirical likelihood (WEL) method for two-sample problems is developed. We also consider a scenario where sample data on auxiliary variables are fully observed for both samples but values of the response variable are subject to missingness; for this scenario we develop an adjusted empirical likelihood method for inference on the difference of the two population means, with missing values handled by regression imputation. Bootstrap calibration for WEL is also developed. Simulation studies evaluate the performance of naive EL, WEL, and WEL with bootstrap calibration (BWEL) against the usual two-sample t-test in terms of test power and coverage accuracy. A simulation for the adjusted EL under the linear regression model with missing data is also conducted.
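
The one-sample empirical likelihood ratio is the building block behind such methods. Below is a minimal sketch in Python — not the thesis's weighted two-sample version; the function names and the recentring bootstrap are illustrative only. Replacing the chi-square(1) critical value with the bootstrap quantile mirrors the calibration idea the abstract refers to.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu0):
    """-2 log empirical likelihood ratio for H0: E[X] = mu0 (Owen-style)."""
    z = np.asarray(x, dtype=float) - mu0
    if z.min() >= 0 or z.max() <= 0:
        return np.inf               # mu0 lies outside the convex hull of the data
    # the Lagrange multiplier lam must keep 1 + lam * z_i > 0 for every i
    lo = -1.0 / z.max() + 1e-8
    hi = -1.0 / z.min() - 1e-8
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

def bootstrap_critical_value(x, alpha=0.05, B=2000, seed=0):
    """Bootstrap calibration: resample from the data and test its own mean,
    so H0 holds in the bootstrap world; use the (1 - alpha) quantile of the
    resampled statistics instead of the chi-square(1) critical value."""
    rng = np.random.default_rng(seed)
    xbar = np.mean(x)
    stats = [el_log_ratio(rng.choice(x, size=len(x), replace=True), xbar)
             for _ in range(B)]
    return np.quantile([s for s in stats if np.isfinite(s)], 1 - alpha)
```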
2

A Two Sample Test of the Reliability Performance of Equipment Components

Coleman, Miki Lynne 01 May 1972
The purpose of this study was to develop a test to compare the reliability performance of two types of equipment components and determine whether the new component satisfies a given feasibility criterion. Two types of tests were presented and compared: the fixed sample size test and the truncated sequential probability ratio test. Both involve a statistic that is approximately F-distributed. This study showed that the truncated sequential probability ratio test has good potential as a means of comparing two component types to determine whether the reliability of the new component is at least a given multiple of the reliability of the old component.
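
For intuition, here is a hedged sketch of a truncated SPRT for exponential component lifetimes. The point hypotheses lam0 and lam1 and the midpoint truncation rule are illustrative; the thesis works with an approximately F-distributed statistic comparing two component types, not this plain one-sample likelihood ratio.

```python
import numpy as np

def truncated_sprt(lifetimes, lam0, lam1, alpha=0.05, beta=0.10, max_n=50):
    """Truncated SPRT for exponential lifetimes.
    H0: failure rate = lam0 (new part no better)  vs  H1: rate = lam1 < lam0.
    Wald boundaries; if undecided after max_n observations, fall back to
    comparing the final statistic with the midpoint of the boundaries.
    """
    a = np.log(beta / (1 - alpha))        # accept-H0 boundary (log scale)
    b = np.log((1 - beta) / alpha)        # accept-H1 boundary
    llr = 0.0
    for n, t in enumerate(lifetimes[:max_n], start=1):
        llr += np.log(lam1 / lam0) - (lam1 - lam0) * t   # exponential LR term
        if llr >= b:
            return "accept H1 (new component meets criterion)", n
        if llr <= a:
            return "accept H0 (criterion not met)", n
    return ("accept H1" if llr > (a + b) / 2 else "accept H0"), max_n
```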
3

Power Studies of Multivariate Two-Sample Tests of Comparison

Siluyele, Ian John January 2007
Master of Science / The multivariate two-sample tests provide a means to test the match between two multivariate distributions. Although many tests exist in the literature, relatively little is known about their relative power. The studies reported in this thesis contrast the effectiveness, in terms of power, of seven such tests in a Monte Carlo study. The relative power of the tests was investigated against location, scale, and correlation alternatives, with samples drawn from bivariate exponential, normal, and uniform populations. The results show that no single test is the most powerful in all situations, and particular test statistics are recommended for specific alternatives. A supplementary nonparametric graphical procedure, such as the Depth-Depth plot, can be recommended for diagnosing possible differences between the multivariate samples when the null hypothesis is rejected. As an example of the utility of the procedures for real data, the multivariate two-sample tests were applied to photometric data of twenty galactic globular clusters; the results support the recommendations associated with specific test statistics.
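
A Monte Carlo power study of this kind takes only a few lines to set up. The sketch below uses Hotelling's T-squared as a stand-in for the seven tests compared in the thesis, against a bivariate-normal location alternative; all names and settings are illustrative.

```python
import numpy as np
from scipy import stats

def hotelling_t2_pvalue(x, y):
    """Two-sample Hotelling T^2 test (classical parametric benchmark)."""
    n, m, p = len(x), len(y), x.shape[1]
    d = x.mean(0) - y.mean(0)
    S = ((n - 1) * np.cov(x, rowvar=False)
         + (m - 1) * np.cov(y, rowvar=False)) / (n + m - 2)   # pooled covariance
    t2 = (n * m) / (n + m) * d @ np.linalg.solve(S, d)
    f = t2 * (n + m - p - 1) / (p * (n + m - 2))              # F transformation
    return stats.f.sf(f, p, n + m - p - 1)

def mc_power(shift, n=30, m=30, reps=2000, alpha=0.05, seed=0):
    """Estimated power against a bivariate-normal location alternative."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = rng.standard_normal((n, 2))
        y = rng.standard_normal((m, 2)) + shift    # location shift under H1
        rejections += hotelling_t2_pvalue(x, y) < alpha
    return rejections / reps

# e.g. mc_power(np.array([0.5, 0.0])) estimates power for a shift in one coordinate
```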
4

On two-sample data analysis by exponential model

Choi, Sujung 01 November 2005
We discuss two-sample problems and the implementation of a new two-sample data analysis procedure. The proposed procedure is based on the concepts of mid-distribution, design of score functions, components, comparison distribution, comparison density, and the exponential model. Assume that we have a random sample X1, ..., Xm from a continuous distribution F(y) = P(Xi ≤ y), i = 1, ..., m, and a random sample Y1, ..., Yn from a continuous distribution G(y) = P(Yi ≤ y), i = 1, ..., n, with the two samples independent. The two-sample problem tests the homogeneity of the two samples and can be stated formally as H0 : F = G. Statisticians have proposed many tests for this problem in various contexts; two typical ones are the two-sample t-test and Wilcoxon's rank-sum test. However, because they test differences in location, they do not extract as much information from the data as a test of the homogeneity of the distribution functions. The Kolmogorov-Smirnov and Anderson-Darling statistics can be used to test H0 : F = G, but they give no indication of the actual relation of F to G when H0 is rejected. Our goal is to learn why H0 was rejected, and our approach answers this with graphical tools, which is a main property of the approach. The approach is functional in the sense that the parameters to be estimated are probability density functions. Compared with other statistical tools for two-sample problems, such as the t-test or the Wilcoxon rank-sum test, density estimation lets us understand the data more fully, which is essential in data analysis. Our density estimation works with small sample sizes too, and the methodology makes almost no assumptions on the two continuous distributions F and G; in that sense, the approach is nonparametric. It supplies graphical elements for the two-sample problem, where typically few exist. Furthermore, our procedure helps researchers conclude why two populations are different when H0 is rejected and describes the relation between F and G in a graphical way.
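
The comparison distribution D(u) = G(F^{-1}(u)) at the heart of this approach is straightforward to estimate empirically: under H0 : F = G it hugs the diagonal, and the shape of its departure shows how the samples differ. A minimal sketch, ignoring the mid-distribution refinement and the exponential model for the comparison density:

```python
import numpy as np
import matplotlib.pyplot as plt

def comparison_distribution(x, y, grid=None):
    """Empirical comparison distribution D(u) = G(F^{-1}(u)).
    Close to the diagonal under H0: F = G; systematic departures reveal
    location, scale, or tail differences."""
    x, y = np.sort(x), np.sort(y)
    grid = np.linspace(0.01, 0.99, 99) if grid is None else grid
    f_inv = np.quantile(x, grid)                              # F^{-1}(u)
    g_at = np.searchsorted(y, f_inv, side="right") / len(y)   # empirical G there
    return grid, g_at

# demo with a pure location shift
u, d = comparison_distribution(np.random.normal(0.0, 1, 200),
                               np.random.normal(0.5, 1, 200))
plt.plot(u, d, label="D(u)")
plt.plot([0, 1], [0, 1], "--", label="H0: F = G")
plt.xlabel("u"); plt.ylabel("D(u)"); plt.legend(); plt.show()
```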
5

Estimation of Hazard Function for Right Truncated Data

Jiang, Yong 27 April 2011
This thesis centers on nonparametric inference for the cumulative hazard function of a right-truncated variable. We present three variance estimators for the Nelson-Aalen estimator of the cumulative hazard function and conduct a simulation study to investigate their performance. A close match between the sampling standard deviation and the estimated standard error is observed when the estimated survival probability is not close to 1; however, poor tail performance persists owing to the limitations of the proposed variance estimators. We further analyze an AIDS blood-transfusion sample in which the disease latent time is right truncated, computing the three variance estimators and the three corresponding sets of confidence intervals. This work provides insight for future research on two-sample tests for right-truncated data.
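
For right-truncated data (X observed only when X ≤ T), the natural Nelson-Aalen analogue works with the reverse hazard Pr(X = x | X ≤ x), whose risk set at an event time x is {j : x_j ≤ x ≤ t_j}. A sketch with the simplest Poisson-type variance candidate — the thesis compares three variance estimators, and this is only one plausible form, not necessarily any of them:

```python
import numpy as np

def reverse_cumulative_hazard(x, t):
    """Nelson-Aalen-type estimator for right-truncated pairs (x_i, t_i),
    observed only when x_i <= t_i.  Accumulates the reverse hazard from the
    largest event time downward; variance is the naive sum of d/m^2."""
    x, t = np.asarray(x, float), np.asarray(t, float)
    times = np.sort(np.unique(x))[::-1]       # largest event time first
    haz, var = [], []
    H = V = 0.0
    for s in times:
        d = np.sum(x == s)                    # events at s
        m = np.sum((x <= s) & (t >= s))       # right-truncation risk set
        H += d / m
        V += d / m**2
        haz.append(H); var.append(V)
    return times, np.array(haz), np.array(var)
```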
6

Bayesian Methods for Two-Sample Comparison

Soriano, Jacopo January 2015
Two-sample comparison is a fundamental problem in statistics. Given two samples of data, the interest lies in understanding whether the two samples were generated by the same distribution or not. Traditional two-sample comparison methods are not suitable for modern data where the underlying distributions are multivariate and highly multi-modal, and the differences across the distributions are often locally concentrated. The focus of this thesis is to develop novel statistical methodology for two-sample comparison which is effective in such scenarios. Tools from the nonparametric Bayesian literature are used to flexibly describe the distributions. Additionally, the two-sample comparison problem is decomposed into a collection of local tests on individual parameters describing the distributions. This strategy not only yields high statistical power, but also allows one to identify the nature of the distributional difference. In many real-world applications, detecting the nature of the difference is as important as the existence of the difference itself. Generalizations to multi-sample comparison and more complex statistical problems, such as multi-way analysis of variance, are also discussed. / Dissertation
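
A toy univariate analogue of the local-test decomposition: recursively halve the sample space and, in each window, compute a Beta-Binomial Bayes factor for whether the two samples split the window in the same proportion. This is far simpler than the nonparametric Bayesian machinery of the thesis, but it shows how local tests both detect a difference and localise it; the Beta(0.5, 0.5) prior and the stopping rule are illustrative choices.

```python
import numpy as np
from scipy.special import betaln

def log_bf_split(nl, nr, ml, mr, a=0.5):
    """Local log Bayes factor: do the two samples split a window in the same
    proportion (H0, one shared Beta-Binomial) or different ones (H1)?
    Binomial coefficients cancel in the ratio and are omitted."""
    def lbb(k, n):   # log Beta-Binomial marginal likelihood, Beta(a, a) prior
        return betaln(k + a, n - k + a) - betaln(a, a)
    h1 = lbb(nl, nl + nr) + lbb(ml, ml + mr)
    h0 = lbb(nl + ml, nl + nr + ml + mr)
    return h1 - h0

def scan(x, y, lo, hi, depth=0, max_depth=4, results=None):
    """Recursively halve [lo, hi); windows with large local log-BF show
    *where* the two distributions disagree."""
    if results is None:
        results = []
    mid = (lo + hi) / 2
    nl, nr = np.sum((x >= lo) & (x < mid)), np.sum((x >= mid) & (x < hi))
    ml, mr = np.sum((y >= lo) & (y < mid)), np.sum((y >= mid) & (y < hi))
    results.append(((lo, hi), log_bf_split(nl, nr, ml, mr)))
    if depth < max_depth and nl + nr + ml + mr > 10:
        scan(x, y, lo, mid, depth + 1, max_depth, results)
        scan(x, y, mid, hi, depth + 1, max_depth, results)
    return results
```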
7

Procedures for identifying and modeling time-to-event data in the presence of non-proportionality

Zhu, Lei 22 January 2016
For both randomized clinical trials and prospective cohort studies, the Cox regression model is a powerful tool for evaluating the effect of a treatment or an explanatory variable on a time-to-event outcome. This method assumes proportional hazards over time. Systematic approaches to efficiently evaluate non-proportionality and to model data in the presence of non-proportionality are investigated. Six graphical methods are assessed to verify the proportional hazards assumption based on characteristics of the survival function, the cumulative hazard, or features of residuals. Their performance is evaluated empirically in simulations by checking their consistency and sensitivity in detecting proportionality or non-proportionality. Two-sample data are generated in three scenarios of proportional hazards and five types of alternatives (that is, non-proportionality). The usefulness of these graphical assessment methods depends on the event rate and the type of non-proportionality. Three numerical (statistical testing) methods are compared via simulation studies to investigate the proportional hazards assumption: the goal is to test for a non-zero slope in a regression of the variable or its residuals on a specified function of time, or to apply a Kolmogorov-type supremum test. Our simulation results show that statistical test performance is affected by the number of events, the event rate, and the degree of divergence from proportionality for a given hazards scenario. Determining which test to use in practice depends on the specific situation under investigation. Both graphical and numerical approaches have benefits and costs, but they are complementary to each other. Several approaches to model and summarize non-proportional data are presented, including non-parametric measures and tests, semi-parametric models, and a parametric approach. Some illustrative examples using simulated data and real data are also presented. In summary, we present a systematic approach using both graphical and numerical methods to identify non-proportionality, and provide numerous modeling strategies when proportionality is violated in time-to-event data.
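
One of the classical graphical checks assessed in studies like this is the log(-log) survival plot: under proportional hazards, the curves log(-log S(t)) for the groups are roughly parallel in log time. A sketch from scratch — the `groups` dict of (time, event) numpy arrays is an assumed input, not from the thesis:

```python
import numpy as np
import matplotlib.pyplot as plt

def km_survival(time, event):
    """Kaplan-Meier survival estimate at each distinct event time."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    uniq = np.unique(time[event == 1])
    s, surv = 1.0, []
    for t in uniq:
        d = np.sum((time == t) & (event == 1))   # deaths at t
        r = np.sum(time >= t)                    # at risk just before t
        s *= 1 - d / r
        surv.append(s)
    return uniq, np.array(surv)

# groups = {"treatment": (time_a, event_a), "control": (time_b, event_b)}
for grp, (t, e) in groups.items():
    ut, s = km_survival(t, e)
    keep = (s > 0) & (s < 1)                     # log(-log) defined only here
    plt.step(np.log(ut[keep]), np.log(-np.log(s[keep])), where="post", label=grp)
plt.xlabel("log t"); plt.ylabel("log(-log S(t))"); plt.legend(); plt.show()
```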
8

Rank-sum test for two-sample location problem under order restricted randomized design

Sun, Yiping 22 June 2007
No description available.
9

Statistical Methods for In-session Hemodialysis Monitoring

Xu, Yunnan 17 June 2020
Motivated by real-time monitoring of dialysis, we aim to detect differences between groups of Raman spectra generated from dialysates collected at different times within one session. Baseline correction is a critical preprocessing step for Raman spectra, but existing methods may not perform well on dialysis spectra because dialysates contain numerous chemical compounds. We first developed a new baseline correction method, Iterative Smoothing-spline with Root Error Adjustment (ISREA), which automatically adjusts intensities and employs a smoothing spline to produce a baseline in each iteration; it performs better on dialysis spectra than the popular Goldindec method and gives better accuracy regardless of sample type. We then proposed a two-sample hypothesis test on groups of ISREA baseline-corrected Raman spectra. The uniqueness of the test lies in the nature of the tested data: instead of treating a Raman spectrum only as a curve, we also consider a vector whose elements are peak intensities of biomarkers, so the data are mixed data in which a spectrum curve and a vector together compose one observation. Our method tests the equality of the means of the two groups of mixed data and is based on asymptotic properties of the covariance of mixed data and functional principal component analysis (FPCA). Simulation studies show that our method is applicable to small sample sizes with proper power and size control. Meanwhile, to locate the regions that contribute most to a significant difference between two groups of univariate functional data, we developed a method to estimate a sparse coefficient function using an L1-norm penalty in functional logistic regression, and compared its performance with other methods. / Doctor of Philosophy / In the U.S., there are more than 709,501 patients with End-Stage Renal Disease (ESRD). For these patients, dialysis is a standard treatment, but it is time-consuming, expensive, and uncomfortable: patients take three sessions every week in a facility, and each session lasts four hours regardless of the patient's condition. An affordable, fast, and widely applied technique called Raman spectroscopy offers an alternative: spectral data from used dialysate samples collected at different times in one session give information on the dialysis process and thus make real-time monitoring possible. With spectral data, we want to develop a statistical method that supports real-time monitoring of dialysis. Such a method can provide physicians with statistical evidence on the dialysis process to improve their decision making, and therefore increases the efficiency of dialysis and better serves patients. On the other hand, Raman spectroscopy demands a preprocessing step called baseline correction on the raw spectra. A baseline arises from the nature of the Raman technique and its instrumentation, which adds complexity to the spectra and interferes with analysis. Despite the popularity of this technique and the many existing baseline correction methods, we found their performance on dialysate spectra below expectation. Hence, we proposed a baseline correction method called Iterative Smoothing-spline with Root Error Adjustment (ISREA) that provides better performance than existing methods. In addition, we developed a method that detects differences between two groups of ISREA baseline-corrected spectra from dialysate collected at different times. Furthermore, we proposed and applied sparse functional logistic regression on two groups to locate the regions where the significant differences come from.
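
ISREA itself is specified in the thesis; the following is only a generic iterative smoothing-spline baseline in the same spirit, to show the shape of such algorithms. The clamping rule, the smoothing parameter, and the assumption of 1-D numpy arrays with increasing wavenumbers are illustrative, not the authors' root-error adjustment.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def iterative_spline_baseline(wavenumber, intensity, n_iter=20, s=None):
    """Generic iterative smoothing-spline baseline estimate.
    wavenumber : 1-D increasing array; intensity : matching 1-D array.
    Each pass fits a smoothing spline to the working signal, then clamps the
    signal down to the fitted curve wherever it exceeds it (the peaks), so
    the spline gradually settles under the peaks onto the baseline."""
    work = np.asarray(intensity, float).copy()
    for _ in range(n_iter):
        spline = UnivariateSpline(wavenumber, work, s=s)
        baseline = spline(wavenumber)
        work = np.minimum(work, baseline)   # suppress peaks, keep baseline
    return intensity - baseline             # baseline-corrected spectrum
```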
