41. Measurement Error and Misclassification in Interval-Censored Life History Data. White, Bethany Joy Giddings, January 2007.
In practice, data are frequently incomplete in one way or another, and it can be a significant challenge to make valid inferences about the parameters of interest in this situation. In this thesis, three problems involving such data are addressed. The first two problems involve interval-censored life history data with mismeasured covariates. Data of this type are incomplete in two ways: first, the exact event times are unknown due to censoring; second, the true covariate is missing for most, if not all, individuals. This work focuses primarily on the impact of covariate measurement error in progressive multi-state models with data arising from panel (i.e., interval-censored) observation. Problems of this type arise frequently in clinical settings, e.g., when disease progression is of interest and patient information is collected during irregularly spaced clinic visits. Two- and three-state models are considered in this thesis. This work is motivated by a research program on psoriatic arthritis (PsA) in which the effects of error-prone covariates on rates of disease progression are of interest and patient information is collected at clinic visits (Gladman et al. 1995; Bond et al. 2006). Information regarding the error distributions was available from a separate study conducted to evaluate the reliability of clinical measurements used in PsA treatment and follow-up (Gladman et al. 2004). The asymptotic bias of covariate effects obtained by ignoring error in covariates is investigated and shown to be substantial in some settings. In a series of simulation studies, corrected likelihood methods and methods based on a simulation-extrapolation (SIMEX) algorithm (Cook & Stefanski 1994) were investigated as ways to address covariate measurement error. The methods implemented were shown to yield much smaller empirical biases and empirical coverage probabilities closer to the nominal levels.
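The SIMEX idea cited above (Cook & Stefanski 1994) can be illustrated on a much simpler problem than the multi-state models studied in the thesis. The sketch below is not taken from the thesis: it uses simple linear regression with additive measurement error of known variance, where the naive slope is attenuated. Extra noise is added at several levels and the trend is extrapolated back to the no-error case. A linear extrapolant is used for brevity; quadratic or rational extrapolants are more common in practice, and all numerical settings here are assumptions made for the example.

```python
import random

random.seed(1)

# Toy setup (illustrative only): y = beta*x + noise, but we observe
# w = x + u with known measurement-error variance sigma_u2.
n, beta_true, sigma_u2 = 2000, 1.0, 0.5
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [beta_true * xi + random.gauss(0.0, 0.5) for xi in x]
w = [xi + random.gauss(0.0, sigma_u2 ** 0.5) for xi in x]

def slope(u, v):
    """Least-squares slope of v on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return num / sum((a - mu) ** 2 for a in u)

naive = slope(w, y)  # attenuated toward zero by the measurement error

# SIMEX step 1: add extra error with variance lam*sigma_u2 and average
# the resulting slope over B replicates, for each lam in a grid.
lams, B, est = [0.0, 0.5, 1.0, 1.5, 2.0], 50, []
for lam in lams:
    reps = [slope([wi + random.gauss(0.0, (lam * sigma_u2) ** 0.5) for wi in w], y)
            for _ in range(B)]
    est.append(sum(reps) / B)

# SIMEX step 2: extrapolate the trend in lam back to lam = -1,
# the hypothetical "no measurement error" point.
lbar, ebar = sum(lams) / len(lams), sum(est) / len(est)
b = (sum((l - lbar) * (e - ebar) for l, e in zip(lams, est))
     / sum((l - lbar) ** 2 for l in lams))
simex = ebar + b * (-1.0 - lbar)

print(naive, simex)  # simex moves back toward beta_true
```

The naive slope sits well below the true coefficient, and the extrapolated estimate recovers part of the lost signal, which is the bias-reduction behavior the simulation studies above quantify in the multi-state setting.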
The third problem considered involves an extreme case of interval censoring known as current status data. Current status data arise when individuals are observed only at a single point in time, at which it is determined whether they have experienced the event of interest. To complicate matters, in the problem considered here, an unknown proportion of the population will never experience the event of interest. Again, data of this type are incomplete in two ways. One assessment is made on each individual to determine whether or not an event has occurred, so the exact event times are unknown for those who will eventually experience the event. In addition, for those who have not experienced the event by the assessment time, it is unknown whether they ever will. This problem was motivated by a series of orthopedic trials examining the effect of blood thinners in hip and knee replacement surgeries. These blood thinners can cause a negative serological response in some patients. This response was the outcome of interest, and the only available information regarding it was the seroconversion time under current status observation. In this thesis, latent class models with parametric, nonparametric, and piecewise-constant forms of the seroconversion time distribution are described; they account for the fact that only a proportion of the population will experience the event of interest. Estimators based on an EM algorithm were evaluated via simulation, and the orthopedic surgery data were analyzed using this methodology.
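As a hedged illustration of the latent-class idea (not code from the thesis), the sketch below fits a two-class model to simulated current status data: each subject is susceptible with probability p and, if susceptible, has an exponential event time; only an assessment time and an event indicator are observed. The EM algorithm treats both susceptibility and the exact event time as missing data. The exponential form and all parameter values are assumptions made for this example.

```python
import math
import random

random.seed(2)

# Simulate current status data with a cured fraction (illustrative values).
n, p_true, lam_true = 4000, 0.6, 1.0
data = []  # (assessment time c, indicator delta = 1{event by c})
for _ in range(n):
    susceptible = random.random() < p_true
    t = random.expovariate(lam_true) if susceptible else math.inf
    c = random.uniform(0.2, 3.0)
    data.append((c, 1 if t <= c else 0))

# EM for (p, lam), treating susceptibility s_i and exact time t_i as latent.
p, lam = 0.5, 0.5
for _ in range(300):
    n_susc = 0.0  # E[sum of s_i] given current (p, lam)
    t_sum = 0.0   # E[sum of s_i * t_i]
    for c, delta in data:
        if delta == 1:
            # Event by c: susceptible for sure; E[t | t <= c] for Exp(lam).
            F = 1.0 - math.exp(-lam * c)
            n_susc += 1.0
            t_sum += 1.0 / lam - c * math.exp(-lam * c) / F
        else:
            # No event by c: susceptible with posterior weight w, then t > c,
            # and E[t | t > c] = c + 1/lam by memorylessness.
            surv = math.exp(-lam * c)
            w = p * surv / ((1.0 - p) + p * surv)
            n_susc += w
            t_sum += w * (c + 1.0 / lam)
    p, lam = n_susc / n, n_susc / t_sum  # M-step

print(p, lam)  # close to p_true and lam_true
```

The posterior weight w for subjects with no event by their assessment time is exactly the quantity that separates "cured" from "susceptible but slow", which is why the cured fraction is identifiable here despite each subject being seen only once.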
43. The Comparison of Parameter Estimation with Application to Massachusetts Health Care Panel Study (MHCPS) Data. Huang, Yao-wen, 3 June 2004.
In this paper we propose two simple algorithms to estimate the parameter β and the baseline survival function in the Cox proportional hazards model, with application to the Massachusetts Health Care Panel Study (MHCPS) data (Chappell, 1991), which are left-truncated and interval-censored. We find that, in the estimation of β and the baseline survival function, the Kaplan-Meier algorithm is uniformly better than the Empirical algorithm. The Kaplan-Meier algorithm is also uniformly more powerful than the Empirical algorithm in testing whether two groups of survival functions are the same. We also define a distance measure D and compare the performance of the two algorithms through β and D.
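For readers unfamiliar with the estimators being compared, a minimal product-limit (Kaplan-Meier) estimator for ordinary right-censored data is sketched below. This is background only: the thesis applies a Kaplan-Meier-type algorithm to the harder left-truncated, interval-censored setting, which this sketch does not handle.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate for right-censored data.

    times  : observed times (event or censoring)
    events : 1 if the corresponding time is an event, 0 if censored
    Returns a list of (time, S(time)) at the distinct event times.
    """
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv, out, i = 1.0, [], 0
    while i < len(pairs):
        t = pairs[i][0]
        d = sum(1 for tt, e in pairs if tt == t and e == 1)  # events at t
        m = sum(1 for tt, e in pairs if tt == t)             # all leaving risk set at t
        if d > 0:
            surv *= 1.0 - d / n_at_risk
            out.append((t, surv))
        n_at_risk -= m
        i += m
    return out

# Tiny worked example: events at 1, 2, 4; censoring at 3 and 5.
km = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
print(km)  # survival steps 0.8, 0.6, 0.3 at times 1, 2, 4
```

At each event time the survival curve is multiplied by one minus the event fraction among those still at risk, so the censored observation at time 3 reduces the risk set without producing a step.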
44. Parameter estimation in the proportional hazards model with interval-censored data. Chang, Shih-hsun, 24 June 2006.
In this paper, we estimate the parameters $S_0(t)$ and $\beta$ in the Cox proportional hazards model when the data are all interval-censored. For this model, data should be either exact or right-censored; we therefore transform the interval-censored data into exact data by three different methods and then apply the Nelson-Aalen estimate to obtain $S_0(t)$ and $\beta$. The test statistic $\hat{\beta}^2 I(\hat{\beta})$ is not approximately distributed as $\chi^2_{(1)}$, but as $\chi^2_{(1)}$ times a constant $c$.
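The abstract does not name its three transformation methods, but midpoint imputation is one common way to turn interval-censored observations into exact data. The sketch below (an assumption for illustration, not necessarily one of the thesis's methods) imputes interval midpoints, treats right-open intervals as right-censored at their left endpoint, and then applies the Nelson-Aalen estimator of the cumulative hazard.

```python
import math

def midpoint_impute(intervals):
    """Turn censoring intervals (l, r) into (time, event) pairs:
    the midpoint if r is finite, otherwise right-censored at l.
    A common, if crude, transformation to exact data."""
    out = []
    for l, r in intervals:
        if math.isinf(r):
            out.append((l, 0))
        else:
            out.append(((l + r) / 2.0, 1))
    return out

def nelson_aalen(data):
    """Nelson-Aalen cumulative hazard H(t) = sum of d_i / n_i over event times."""
    data = sorted(data)
    n, H, out, i = len(data), 0.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # events at t
        m = sum(1 for tt, e in data if tt == t)             # leaving risk set at t
        if d > 0:
            H += d / n
            out.append((t, H))
        n -= m
        i += m
    return out

# Intervals (0,2), (1,3), (2,inf) become exact times 1.0 and 2.0
# plus a right-censoring at 2.0.
na = nelson_aalen(midpoint_impute([(0.0, 2.0), (1.0, 3.0), (2.0, math.inf)]))
print(na)  # H(1) = 1/3, H(2) = 1/3 + 1/2 = 5/6
```

Once the data are exact or right-censored in this way, standard counting-process estimators such as Nelson-Aalen apply directly, which is the shape of the strategy the abstract describes.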
45. Effect of Estimation in Goodness-of-fit Tests. Eren, Emrah, 1 September 2009.
In statistical analysis, distributional assumptions are needed to apply parametric procedures, and assumptions about the underlying distribution should hold for statistical inferences to be accurate. Goodness-of-fit tests are used to check the validity of distributional assumptions. To apply some goodness-of-fit tests, the unknown population parameters must be estimated; the null distributions of the test statistics then become complicated or depend on the unknown parameters, which restricts the use of the tests. Goodness-of-fit statistics that are invariant to the parameters can be used if the distribution under the null hypothesis is a location-scale distribution: for location- and scale-invariant goodness-of-fit tests, there is no need to estimate the unknown population parameters. However, approximations are used in some of those tests. Different types of estimation and approximation techniques are used in this study to compute goodness-of-fit statistics for complete and censored samples from univariate distributions, as well as complete samples from the bivariate normal distribution. Simulated power properties of the goodness-of-fit tests against a broad range of skewed and symmetric alternative distributions are examined to identify the effects of estimation in goodness-of-fit tests. The main aim of this thesis is to modify goodness-of-fit tests by using different estimators or approximation techniques and, finally, to assess the effect of estimation on the power of these tests.
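One standard answer to the estimated-parameter problem described above is to recover the null distribution of the statistic by simulation, re-estimating the parameters in every simulated sample (the Lilliefors idea). The sketch below is an illustration rather than a procedure from the thesis: it applies this to the Kolmogorov-Smirnov statistic under a normal null. Because the statistic is location-scale invariant once the parameters are estimated, the null samples can be drawn from N(0, 1), which is exactly the invariance point the abstract makes.

```python
import math
import random

random.seed(3)

def norm_cdf(x, m, s):
    return 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2.0))))

def ks_stat(xs):
    """KS distance of xs from N(mhat, shat), parameters estimated from xs."""
    n = len(xs)
    m = sum(xs) / n
    s = (sum((x - m) ** 2 for x in xs) / (n - 1)) ** 0.5
    xs = sorted(xs)
    return max(max(abs((i + 1) / n - norm_cdf(x, m, s)),
                   abs(i / n - norm_cdf(x, m, s)))
               for i, x in enumerate(xs))

def lilliefors_pvalue(xs, reps=500):
    """Monte Carlo p-value: simulate normal null samples, re-estimating
    the parameters in each one, and compare to the observed statistic."""
    d_obs, n = ks_stat(xs), len(xs)
    hits = sum(1 for _ in range(reps)
               if ks_stat([random.gauss(0.0, 1.0) for _ in range(n)]) >= d_obs)
    return hits / reps

normal_sample = [random.gauss(5.0, 2.0) for _ in range(100)]
expo_sample = [random.expovariate(1.0) for _ in range(100)]
p_norm = lilliefors_pvalue(normal_sample)
p_expo = lilliefors_pvalue(expo_sample)
print(p_norm, p_expo)  # the skewed exponential sample is rejected
```

Using the standard KS critical values here instead would be badly conservative, which is the "effect of estimation" the thesis studies across many tests and estimators.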
46. A generalization of rank tests based on interval-censored failure time data and its application to AIDS studies. Kuo, Yu-Yu, 11 July 2000.
In this paper we propose a generalized rank test based on discrete interval-censored failure time data to determine whether two lifetime populations come from the same distribution. It reduces to the log-rank test or the Wilcoxon test when the data are exact or right-censored. Simulation shows that the proposed test performs satisfactorily. An example is presented to demonstrate how the proposed test can be applied in an AIDS study.
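As context for the special case mentioned above (this is the classical test, not the generalized one from the paper), the standard two-sample log-rank test for exact or right-censored data can be sketched as follows. The p-value uses the fact that the statistic is approximately chi-squared with one degree of freedom; the simulated data and rates are assumptions for the example.

```python
import math
import random

def logrank(times1, events1, times2, events2):
    """Two-sample log-rank test; returns (chi-square statistic, p-value)."""
    pooled = ([(t, e, 0) for t, e in zip(times1, events1)]
              + [(t, e, 1) for t, e in zip(times2, events2)])
    event_times = sorted({t for t, e, _ in pooled if e == 1})
    O1 = E1 = V = 0.0
    for t in event_times:
        n1 = sum(1 for tt, _, g in pooled if tt >= t and g == 0)  # at risk, group 1
        n2 = sum(1 for tt, _, g in pooled if tt >= t and g == 1)  # at risk, group 2
        d1 = sum(1 for tt, e, g in pooled if tt == t and e == 1 and g == 0)
        d2 = sum(1 for tt, e, g in pooled if tt == t and e == 1 and g == 1)
        n, d = n1 + n2, d1 + d2
        O1 += d1
        E1 += d * n1 / n  # expected group-1 events under the null
        if n > 1:
            V += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    stat = (O1 - E1) ** 2 / V
    return stat, math.erfc(math.sqrt(stat / 2.0))  # P(chi2 with 1 df > stat)

random.seed(4)
t1 = [random.expovariate(1.0) for _ in range(150)]  # rate 1
t2 = [random.expovariate(2.5) for _ in range(150)]  # rate 2.5: shorter lifetimes
stat, p = logrank(t1, [1] * 150, t2, [1] * 150)
print(stat, p)  # a clearly significant difference
```

The proposed generalized rank test collapses to this computation when every censoring interval degenerates to an exact time or a right-censored one.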
47. The estimation of the truncation ratio and an algorithm for the parameter estimation in the random interval truncation model. Zhu, Huang-Xu, 1 August 2003.
For interval-censored and truncated failure time data, the truncation ratio is unknown. In this paper, we propose an algorithm, similar to Turnbull's, to estimate the parameters. The truncation ratio for interval-censored and truncated failure time data can also be estimated from the convergence result of the algorithm. A simulation study comparing our algorithm with that of Turnbull (1976) suggests that ours gives better results.
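The self-consistency iteration underlying Turnbull (1976) can be sketched for the untruncated interval-censored case; the truncation adjustment that this thesis addresses is omitted here. The sketch places mass on candidate support points (the interval endpoints, a simplification of Turnbull's innermost-interval construction) and redistributes each observation's unit of mass at every step.

```python
def npmle_interval(intervals, iters=500):
    """Self-consistency (EM) iteration for the NPMLE of an event-time
    distribution from closed censoring intervals [l, r]."""
    support = sorted({e for l, r in intervals for e in (l, r)})
    # inside[i][j] = 1 if support point j is compatible with interval i
    inside = [[1.0 if l <= s <= r else 0.0 for s in support] for l, r in intervals]
    p = [1.0 / len(support)] * len(support)
    for _ in range(iters):
        new = [0.0] * len(support)
        for row in inside:
            tot = sum(pj * m for pj, m in zip(p, row))
            for j, m in enumerate(row):
                # share observation i's mass across its compatible points
                new[j] += p[j] * m / tot
        p = [v / len(intervals) for v in new]
    return dict(zip(support, p))

# Degenerate check: exact observations recover the empirical distribution.
prob = npmle_interval([(1.0, 1.0), (1.0, 1.0), (2.0, 2.0)])
print(prob)  # mass 2/3 at 1.0 and 1/3 at 2.0
```

In the truncated setting studied in the thesis, each observation's contribution must additionally be renormalized by the probability of falling in its truncation region, which is where the unknown truncation ratio enters.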
48. Nonparametric tests for interval-censored failure time data via multiple imputation. Huang, Jin-long, 26 June 2008.
Interval-censored failure time data often occur in follow-up studies where subjects can only be followed periodically, so the failure time is known only to lie in an interval. In this paper we consider the problem of comparing two or more interval-censored samples. We propose a multiple imputation method for discrete interval-censored data that imputes exact failure times from interval-censored observations and then applies an existing test for exact data, such as the log-rank test, to the imputed data. The test statistic and covariance matrix are calculated by the proposed multiple imputation technique; the formula for the covariance matrix estimator is similar to the estimator used by Follmann, Proschan and Leifer (2003) for clustered data. Through simulation studies we find that the performance of the proposed log-rank-type test is comparable to that of the test proposed by Finkelstein (1986), and better than that of the two existing log-rank-type tests proposed by Sun (2001) and Zhao and Sun (2004), owing to differences in the method of multiple imputation and the covariance matrix estimation. The proposed method is illustrated by means of an example involving patients with breast cancer. We also investigate applying our method to other two-sample comparison tests for exact data, such as Mantel's test (1967) and the integrated weighted difference test.
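The mechanics of multiple imputation can be sketched with a deliberately simple statistic instead of the paper's log-rank machinery. The imputation distribution below (uniform within each interval) and the Rubin-style within/between variance combination are assumptions for illustration; the paper's imputation method and covariance estimator differ.

```python
import random

random.seed(5)

def impute_once(intervals):
    """Draw one exact failure time uniformly within each censoring interval
    (an assumed imputation distribution, for illustration only)."""
    return [random.uniform(l, r) for l, r in intervals]

def mi_mean_diff(grp_a, grp_b, B=200):
    """Multiple imputation: impute B times, compute the difference in mean
    failure times each time, and combine the results with Rubin-style
    within-imputation and between-imputation variances."""
    diffs, withins = [], []
    for _ in range(B):
        xa, xb = impute_once(grp_a), impute_once(grp_b)
        ma, mb = sum(xa) / len(xa), sum(xb) / len(xb)
        va = sum((x - ma) ** 2 for x in xa) / (len(xa) - 1) / len(xa)
        vb = sum((x - mb) ** 2 for x in xb) / (len(xb) - 1) / len(xb)
        diffs.append(mb - ma)
        withins.append(va + vb)
    qbar = sum(diffs) / B
    ubar = sum(withins) / B                               # within-imputation variance
    bvar = sum((d - qbar) ** 2 for d in diffs) / (B - 1)  # between-imputation variance
    return qbar, ubar + (1 + 1 / B) * bvar

# Group B's intervals sit well to the right of group A's.
grp_a = [(0.5, 1.5)] * 40 + [(1.0, 2.0)] * 40
grp_b = [(2.5, 3.5)] * 40 + [(3.0, 4.0)] * 40
diff, var = mi_mean_diff(grp_a, grp_b)
print(diff, var)  # difference near 2 with a small combined variance
```

Replacing the mean-difference statistic with a log-rank statistic computed on each imputed data set gives the shape of the test proposed in the paper, with the covariance handled as in Follmann, Proschan and Leifer (2003) rather than by Rubin's rules.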
49. Distributed Detection Using Censoring Schemes with an Unknown Number of Nodes. Hsu, Ming-Fong, 4 September 2008.
Energy efficiency under an energy constraint is an important issue for applications in wireless sensor networks. In the distributed detection problem considered in this thesis, each sensor makes a local decision based on its observation and transmits a one-bit message to the fusion center. We consider local sensors employing a censoring scheme, in which a sensor remains silent and transmits nothing to the fusion center if its observation is not very informative. The goal of this thesis is to achieve an energy-efficient design when distributed detection employs such a censoring scheme. Simulation results show that the same decision-fusion error probabilities can be attained while conserving energy, compared with detection without censoring. In this thesis, we also demonstrate that the error probability of decision fusion is a convex function of the censoring probability.
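The energy-saving behavior can be illustrated with a small simulation. Everything below is an assumed toy setup, not the thesis's model: sensors observe a Gaussian shift, compute a log-likelihood ratio (LLR), and transmit only when the LLR exceeds a censoring threshold, so uninformative sensors stay silent; the fusion center sums whatever it receives. Note this toy transmits a real-valued LLR rather than the one-bit message of the thesis.

```python
import random

random.seed(6)

# Assumed model: x ~ N(0,1) under H0 and x ~ N(1,1) under H1,
# so the per-sensor LLR is x - 0.5 (up to scale).
SENSORS, TRIALS = 20, 2000
TAU_FUSION = 9.0  # fusion threshold, tuned by hand for this toy setup

def trial(h1):
    """One network snapshot: returns (decide H1?, number of transmissions)."""
    llrs = [random.gauss(1.0 if h1 else 0.0, 1.0) - 0.5 for _ in range(SENSORS)]
    sent = [l for l in llrs if l > 0.0]  # censoring: silent when uninformative
    return sum(sent) > TAU_FUSION, len(sent)

fa = miss = transmissions = 0
for _ in range(TRIALS):
    decided_h1, k = trial(h1=False)
    fa += decided_h1          # false alarm under H0
    transmissions += k        # energy cost under H0
    decided_h1, _ = trial(h1=True)
    miss += not decided_h1    # missed detection under H1

print(fa / TRIALS, miss / TRIALS, transmissions / (TRIALS * SENSORS))
# low error rates while most sensors stay silent under H0
```

Under H0, each sensor transmits only about 30% of the time in this setup, yet the fused decision remains reliable, which is the qualitative trade-off the thesis quantifies.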
50. Testing for Normality of Censored Data. Andersson, Johan and Burberg, Mats, January 2015.
In order to make statistical inference, that is, to draw conclusions from a sample in order to describe a population, it is crucial to know the correct distribution of the data. This paper focuses on censored data from the normal distribution. Its purpose is to determine whether we can test if data come from a censored normal distribution, using both standard normality tests and tests designed for censored data, and to investigate whether these tests attain the correct size. This was carried out with simulations in R for left-censored data. The results indicate that, as censoring increases, standard normality tests fail to accept normality in a sample, whereas the tests designed for censored data meet the size requirements even at higher censoring levels, which is the most important conclusion of this paper.
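The reported breakdown of standard normality tests under censoring is easy to see directly: replacing every value below a detection limit by the limit itself makes a normal sample strongly right-skewed, which is precisely the departure such tests are built to detect. The simulation below is an illustration in Python rather than the paper's R study design; it left-censors a standard normal sample at its median and compares sample skewness before and after.

```python
import random

random.seed(7)

def skewness(xs):
    """Sample skewness: third central moment over the cubed standard deviation."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / n / s2 ** 1.5

full = [random.gauss(0.0, 1.0) for _ in range(5000)]
censored = [max(x, 0.0) for x in full]  # left-censor at 0: about 50% censoring

print(skewness(full), skewness(censored))
# near 0 for the full sample; strongly positive after censoring,
# which is why tests that assume normality begin to reject
```

A test that accounts for the censoring mechanism compares the observed values only against the truncated part of the normal model, so it is not fooled by the pile-up at the detection limit.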