21. Measurement Error and Misclassification in Interval-Censored Life History Data. White, Bethany Joy Giddings, January 2007.
In practice, data are frequently incomplete in one way or another. It can be a significant challenge to make valid inferences about the parameters of interest in this situation. In this thesis, three
problems involving such data are addressed. The first two problems involve interval-censored life history data with mismeasured
covariates. Data of this type are incomplete in two ways. First, the exact event times are unknown due to censoring. Second, the true covariate is missing for most, if not all, individuals. This work
focuses primarily on the impact of covariate measurement error in progressive multi-state models with data arising from panel (i.e., interval-censored) observation. These types of problems arise frequently in clinical settings (e.g., when disease progression is of interest and patient information is collected during irregularly spaced clinic visits). Two- and three-state models are considered in this thesis. This work is motivated by a research program on psoriatic arthritis (PsA) where the effects of error-prone covariates on rates of disease progression are of interest and patient information is collected at clinic visits (Gladman et al. 1995; Bond et al. 2006). Information regarding the error distributions was available from a separate study conducted to evaluate the reliability of clinical measurements used in PsA treatment and follow-up (Gladman et al. 2004). The asymptotic bias of covariate effects obtained by ignoring error in covariates is investigated and shown to be substantial in some settings. In a series of simulation studies, the performance of corrected likelihood methods and of methods based on a simulation-extrapolation (SIMEX) algorithm (Cook & Stefanski 1994) was investigated as a means of addressing covariate measurement error. The methods implemented were shown to yield much smaller empirical biases and empirical coverage probabilities closer to the nominal levels.
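To make the SIMEX idea concrete, here is a minimal sketch that is not taken from the thesis: it uses an ordinary least-squares slope as a stand-in for the naive estimator of a covariate effect, and assumes the measurement-error standard deviation sigma_u is known, for example from a separate reliability study. All data values are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: true covariate x, error-prone surrogate w = x + u.
n, sigma_u = 500, 0.5                      # sigma_u assumed known (reliability study)
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)     # true covariate effect = 2
w = x + rng.normal(scale=sigma_u, size=n)  # only w is observed in practice

def naive_estimate(cov, resp):
    """Least-squares slope of resp on cov (stand-in for the model of interest)."""
    return np.polyfit(cov, resp, 1)[0]

# SIMEX step 1: add extra error with variance lam * sigma_u**2 and re-estimate,
# averaging over B replicates for each lam on a grid.
lams, B = np.array([0.0, 0.5, 1.0, 1.5, 2.0]), 200
estimates = []
for lam in lams:
    reps = [naive_estimate(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n), y)
            for _ in range(B)]
    estimates.append(np.mean(reps))

# SIMEX step 2: extrapolate the trend in the estimates back to lam = -1,
# i.e. to the case of no measurement error (quadratic extrapolant here).
quad = np.polyfit(lams, estimates, 2)
simex_estimate = np.polyval(quad, -1.0)
print(f"naive: {estimates[0]:.3f}   SIMEX-corrected: {simex_estimate:.3f}   truth: 2.0")
```

In the thesis's setting the naive estimator would be the panel-data multi-state likelihood fit rather than a least-squares slope, but the add-error-and-extrapolate recipe is the same.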
The third problem considered involves an extreme case of interval censoring known as current status data. Current status data arise when individuals are observed only at a single point in time and it is then determined whether they have experienced the event of interest. To complicate matters, in the problem considered here, an unknown proportion of the population will never experience the event of interest. Again, this type of data is incomplete in two ways. One assessment is made on each individual to determine whether or not an event has occurred. Therefore, the exact event times are unknown for those who will eventually experience the event. In addition, whether or not the individuals will ever experience the event is unknown for those who have not experienced the event by the assessment time. This problem was motivated by a series of orthopedic trials looking at the effect of blood thinners in hip and knee replacement surgeries. These blood thinners can cause a negative serological response in some patients. This response was the outcome of interest and the only available information regarding it was the seroconversion time under current status observation. In this thesis, latent class models with parametric, nonparametric and piecewise constant forms of the seroconversion time distribution are described. They account for the fact that only a proportion of the population will experience the event of interest. Estimators based on an EM algorithm were evaluated via simulation and the orthopedic surgery data were analyzed based on this methodology.
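As an orientation to the latent class idea, here is a minimal EM sketch under simplifying assumptions of my own, not the thesis's: a two-class model in which a subject is susceptible with probability p and, if susceptible, seroconverts at an Exponential(rate) time, with a single assessment time per subject. The thesis also treats nonparametric and piecewise-constant seroconversion distributions, which are not sketched here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# Simulated current status data with a "cure" fraction (illustrative values only).
n, p_true, rate_true = 400, 0.6, 0.8
susceptible = rng.random(n) < p_true              # latent class membership
t_event = rng.exponential(1.0 / rate_true, size=n)
c = rng.uniform(0.5, 3.0, size=n)                 # single assessment time per subject
delta = susceptible & (t_event <= c)              # event observed by the assessment?

p, rate = 0.5, 1.0                                # starting values
for _ in range(200):
    # E-step: posterior probability of being susceptible given the observed data.
    surv = np.exp(-rate * c)
    w = np.where(delta, 1.0, p * surv / ((1 - p) + p * surv))

    # M-step for p: average posterior class membership.
    p = w.mean()

    # M-step for the rate: maximize the expected complete-data log-likelihood
    # (no closed form under current status observation, so optimize numerically).
    def neg_q(lam):
        return -(np.sum(np.log1p(-np.exp(-lam * c[delta])))
                 - lam * np.sum(w[~delta] * c[~delta]))
    rate = minimize_scalar(neg_q, bounds=(1e-6, 20.0), method="bounded").x

print(f"estimated susceptible fraction p = {p:.3f}, seroconversion rate = {rate:.3f}")
```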
22. The Comparison of Parameter Estimation with Application to Massachusetts Health Care Panel Study (MHCPS) Data. Huang, Yao-wen, 03 June 2004.
In this paper we propose two simple algorithms to estimate the parameter β and the baseline survival function in the Cox proportional hazards model, with application to the Massachusetts Health Care Panel Study (MHCPS) data (Chappell, 1991), which are left-truncated and interval-censored. We find that, in the estimation of β and the baseline survival function, the Kaplan-Meier algorithm is uniformly better than the Empirical algorithm. The Kaplan-Meier algorithm is also uniformly more powerful than the Empirical algorithm in testing whether two groups of survival functions are the same. We also define a distance measure D and compare the performance of the two algorithms through β and D.
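For reference, the quantities β and the baseline survival function are those of the standard Cox proportional hazards model (a textbook definition, not specific to either algorithm compared here):

```latex
\lambda(t \mid x) = \lambda_0(t)\,\exp(x^{\top}\beta), \qquad
S(t \mid x) = S_0(t)^{\exp(x^{\top}\beta)}, \qquad
S_0(t) = \exp\!\Bigl(-\int_0^t \lambda_0(u)\,du\Bigr).
```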
23. Parameter estimation in proportional hazard model with interval censored data. Chang, Shih-hsun, 24 June 2006.
In this paper, we estimate the parameters $S_0(t)$ and $\beta$ in the Cox proportional hazard model when the data are all interval-censored. Since the estimation procedure requires data that are either exact or right-censored, we transform the interval-censored data into exact data by three different methods and then apply the Nelson-Aalen estimate to obtain $S_0(t)$ and $\beta$. The test statistic $\hat{\beta}^2 I(\hat{\beta})$ is not approximately distributed as $\chi^2_{(1)}$, but as $\chi^2_{(1)}$ times a constant $c$.
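A minimal sketch of the transform-then-estimate idea, using midpoint imputation as one illustrative transformation (the three methods actually compared in the thesis are not reproduced here). The tiny data set is made up, and the risk-set computation assumes no tied times.

```python
import numpy as np

# Interval-censored observations: the event is only known to lie in (L, R];
# R = inf marks a right-censored subject. Values below are illustrative.
L = np.array([0.0, 2.0, 1.0, 3.0, 0.5, 4.0])
R = np.array([1.5, 3.0, 2.5, np.inf, 2.0, np.inf])

# Midpoint imputation: finite intervals become "exact" event times at (L + R) / 2;
# right-censored subjects keep their last observation time L.
finite = np.isfinite(R)
time = np.where(finite, (L + R) / 2.0, L)
event = finite.astype(int)

# Nelson-Aalen cumulative hazard on the imputed data, then S_0(t) = exp(-H(t)).
order = np.argsort(time)
time, event = time[order], event[order]
n_at_risk = np.arange(len(time), 0, -1)          # subjects still at risk at each time
cum_hazard = np.cumsum(event / n_at_risk)
baseline_surv = np.exp(-cum_hazard)

for t, h, s in zip(time, cum_hazard, baseline_surv):
    print(f"t = {t:4.2f}   H(t) = {h:.3f}   S_0(t) = {s:.3f}")
```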
24. Nonparametric tests for interval-censored failure time data via multiple imputation. Huang, Jin-long, 26 June 2008.
Interval-censored failure time data often occur in follow-up studies where subjects can only be followed periodically, so the failure time is known only to lie in an interval. In this paper we consider the problem of comparing two or more interval-censored samples. We propose a multiple imputation method for discrete interval-censored data that imputes exact failure times from the interval-censored observations, and we then apply existing tests for exact data, such as the log-rank test, to the imputed data. The test statistic and covariance matrix are calculated via the proposed multiple imputation technique; the covariance matrix estimator is similar to the one used by Follmann, Proschan and Leifer (2003) for clustered data. Through simulation studies we find that the performance of the proposed log-rank type test is comparable to that of the test proposed by Finkelstein (1986), and is better than that of the two existing log-rank type tests proposed by Sun (2001) and Zhao and Sun (2004), owing to differences in the method of multiple imputation and in the covariance matrix estimation. The proposed method is illustrated by means of an example involving patients with breast cancer. We also investigate applying our method to other two-sample comparison tests for exact data, such as Mantel's test (1967) and the integrated weighted difference test.
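The sketch below illustrates only the impute-then-test idea; the uniform within-interval draws and the plain averaging of per-imputation statistics are simplifications of my own, not the paper's imputation scheme or its Follmann-type covariance combination.

```python
import numpy as np

rng = np.random.default_rng(3)

def logrank(time, event, group):
    """Two-sample log-rank: returns observed-minus-expected and its variance."""
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e, var

# Two-group interval-censored data: event known only to lie in (L, R];
# R = inf marks a right-censored subject. Values are illustrative.
L = np.array([0, 1, 2, 0, 3, 1, 2, 4], dtype=float)
R = np.array([2, 3, 4, 1, np.inf, 2, np.inf, 6], dtype=float)
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
finite = np.isfinite(R)

M, chisq = 20, []
for _ in range(M):
    # Impute an exact time inside each finite interval; keep L for censored subjects.
    upper = np.where(finite, R, L + 1.0)          # dummy upper bound where R = inf
    t_imp = np.where(finite, rng.uniform(L, upper), L)
    o, v = logrank(t_imp, finite.astype(int), group)
    chisq.append(o**2 / v)

print(f"average imputed log-rank chi-square over {M} imputations: {np.mean(chisq):.3f}")
```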
25. Testing For Normality of Censored Data. Andersson, Johan; Burberg, Mats, January 2015.
In order to make statistical inference, that is, to draw conclusions from a sample in order to describe a population, it is crucial to know the correct distribution of the data. This paper focused on censored data from the normal distribution. The purpose of this paper was to answer whether we can test if data come from a censored normal distribution, by applying standard normality tests as well as tests designed for censored data, and by investigating whether these tests attained the correct size. This was carried out with simulations in the program R for left-censored data. The results indicated that, with increasing censoring, the normality tests failed to accept normality in a sample, whereas the tests designed for censored data met the size requirements as the censoring level increased; the latter was the most important conclusion of this paper.
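The paper's simulations were done in R; the Python sketch below illustrates the same kind of size check, with one naive handling assumed for illustration (left-censored values substituted by the censoring limit before applying the Shapiro-Wilk test).

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(4)

# Rejection rate of the Shapiro-Wilk test on left-censored normal samples,
# where censored values are substituted by the censoring limit (naive handling).
n, nsim, alpha = 100, 2000, 0.05
for cens_level in (0.0, 0.1, 0.3, 0.5):
    rejections = 0
    for _ in range(nsim):
        x = rng.normal(size=n)
        limit = np.quantile(x, cens_level)   # censor the lowest fraction of values
        x_obs = np.maximum(x, limit)
        if shapiro(x_obs).pvalue < alpha:
            rejections += 1
    print(f"censoring {cens_level:.0%}: rejection rate = {rejections / nsim:.3f}")
```

With no censoring the rejection rate should sit near the nominal 5% level, and it grows with the censoring level, which is the behaviour described above.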
26. Empirical Likelihood Confidence Intervals for ROC Curves Under Right Censorship. Yang, Hanfang, 16 September 2010.
In this thesis, we apply the smoothed empirical likelihood method to construct confidence intervals for the receiver operating characteristic (ROC) curve under right censoring. As a particular way of comparing the distributions of two populations, the ROC curve is constructed by combining the cumulative distribution function of one population with the quantile function of the other. Under mild conditions, the smoothed empirical likelihood ratio converges to a chi-square distribution, in analogy with the well-known Wilks theorem. The performance of the empirical likelihood method is illustrated by simulation studies in terms of coverage probability and average length of confidence intervals. Finally, a primary biliary cirrhosis data set is used to illustrate the proposed empirical likelihood procedure.
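For concreteness, the CDF/quantile-function construction referred to here is the usual one, written with F_0 for the non-diseased and F_1 for the diseased population (a standard definition, not specific to this thesis):

```latex
\mathrm{ROC}(u) \;=\; 1 - F_1\!\bigl(F_0^{-1}(1-u)\bigr), \qquad u \in (0,1).
```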
27. Empirical Likelihood Inference for the Accelerated Failure Time Model via Kendall Estimating Equation. Lu, Yinghua, 17 July 2010.
In this thesis, we study two methods for inference on the parameters of the accelerated failure time model with right-censored data. One is the Wald-type method, which involves parameter estimation. The other is the empirical likelihood (EL) method, which is based on the asymptotic distribution of the likelihood ratio. We employ a monotone, censored-data version of the Kendall estimating equation and construct confidence intervals from both methods. In the simulation studies, we compare the empirical likelihood and the Wald-type procedure in terms of coverage accuracy and average length of confidence intervals, and conclude that the empirical likelihood method performs better. We also compare the EL for Kendall's rank regression estimator with the EL for other well-known estimators and find advantages of the EL for the Kendall estimator in small samples. Finally, real clinical trial data are used for the purpose of illustration.
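As background, the accelerated failure time model with right censoring can be written as below. The rank-type estimating function shown is the monotone Gehan-weighted one, given only to illustrate the general form of such equations; the exact weights of the Kendall estimating equation used in the thesis are not reproduced here.

```latex
\log T_i = X_i^{\top}\beta + \varepsilon_i, \qquad
\tilde{T}_i = \min(T_i, C_i), \qquad
\delta_i = \mathbf{1}\{T_i \le C_i\}, \qquad
e_i(\beta) = \log\tilde{T}_i - X_i^{\top}\beta,

U(\beta) \;=\; \sum_{i=1}^{n}\sum_{j=1}^{n}
  \delta_i\,(X_i - X_j)\,\mathbf{1}\{e_i(\beta) \le e_j(\beta)\}.
```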
28. Some Contributions to the Censored Empirical Likelihood with Hazard-Type Constraints. Hu, Yanling, 01 January 2011.
Empirical likelihood (EL) is a recently developed nonparametric method of statistical inference. Owen's 2001 book contains many important results for EL with uncensored data; however, fewer results are available for EL with right-censored data. In this dissertation, we first investigate a right-censored-data extension of Qin and Lawless (1994), who studied EL with uncensored data when the number of estimating equations is larger than the number of parameters (the over-determined case). We obtain results similar to theirs for the maximum EL estimator and the EL ratio test, for the over-determined case, with right-censored data. We employ hazard-type constraints, which are better able to handle right-censored data. We then investigate EL with right-censored data and a k-sample mixed hazard-type constraint, and show that the EL ratio test statistic has a limiting chi-square distribution when k = 2. We also study the relationship between the constrained Kaplan-Meier estimator and the corresponding Nelson-Aalen estimator, and aim to prove that they are asymptotically equivalent under certain conditions. Finally, we present simulation studies and examples showing how to apply our theory and methodology to real data.
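Schematically, a hazard-type constraint ties a functional of the cumulative hazard Λ to a parameter of interest, for example (a generic single-sample form; the k-sample mixed constraints studied in the dissertation are more general):

```latex
\int_0^{\tau} g(t)\, d\Lambda(t) \;=\; \theta,
\qquad\text{with } \Lambda \text{ discrete, so that }
\int_0^{\tau} g(t)\, d\Lambda(t) = \sum_{i:\, t_i \le \tau} g(t_i)\,\Delta\Lambda(t_i).
```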
29. Predicting drug residue depletion to establish a withdrawal period with data below the limit of quantitation (LOQ). McGowan, Yan, January 1900.
Doctor of Philosophy / Department of Statistics / Christopher Vahl / Veterinary drugs are used extensively for disease prevention and treatment in food-producing animals. The residues of these drugs and their metabolites can pose risks for human health. Therefore, a withdrawal time is established to ensure consumer safety, so that tissue, milk or eggs from treated animals cannot be harvested for human consumption until enough time has elapsed for the residue levels to decrease to safe concentrations. Part of the process to establish a withdrawal time involves a linear regression to model drug residue depletion over time. This regression model is used to calculate a one-sided, upper tolerance limit for the amount of drug residue remaining in target tissue as a function of time. The withdrawal period is then determined by finding the smallest time at which the upper tolerance limit falls below the maximum residue limit. Observations with measured residue levels at or below the limit of quantitation (LOQ) of the analytical method present a special challenge in the estimation of the tolerance limit. Because values observed below the LOQ are thought to be unreliable, they add an additional source of uncertainty and, if dealt with improperly or ignored, can introduce bias into the estimation of the withdrawal time. The U.S. Food and Drug Administration (FDA) suggests excluding such data, while the European Medicines Agency (EMA) recommends replacing observations below the LOQ with a fixed number, specifically half the value of the LOQ. However, observations below the LOQ are technically left-censored, and these methods do not effectively address this fact. As an alternative, a regression method accounting for left censoring is proposed and implemented in order to adequately model residue depletion over time. Furthermore, a method based on generalized (or fiducial) inference is developed to compute a tolerance limit from the results of the proposed regression method. A simulation study is then conducted to compare the proposed withdrawal time calculation procedure to the current FDA and EMA approaches. Finally, the proposed procedures are applied to real experimental data.
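A minimal sketch of the left-censored regression step, under illustrative assumptions: the data are simulated, and the generalized/fiducial tolerance-limit construction that the dissertation builds on top of this fit is not sketched. Observations below the LOQ contribute the normal CDF to the likelihood instead of the density.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)

# Simulated depletion data: log-residue declines linearly in time; measurements
# below the LOQ are recorded only as "< LOQ" (all values are illustrative).
time = np.repeat(np.arange(1.0, 8.0), 6)
log_conc = 5.0 - 0.6 * time + rng.normal(scale=0.4, size=time.size)
log_loq = 1.5
censored = log_conc < log_loq
y = np.where(censored, log_loq, log_conc)         # exact level unknown when censored

def negloglik(theta):
    a, b, log_sigma = theta
    sigma = np.exp(log_sigma)                     # keep sigma positive
    mu = a + b * time
    ll_obs = norm.logpdf(y[~censored], mu[~censored], sigma).sum()
    ll_cens = norm.logcdf((log_loq - mu[censored]) / sigma).sum()  # P(Y < LOQ)
    return -(ll_obs + ll_cens)

fit = minimize(negloglik, x0=np.array([4.0, -0.5, 0.0]), method="Nelder-Mead")
a_hat, b_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(f"intercept {a_hat:.2f}, slope {b_hat:.2f}, residual SD {sigma_hat:.2f}")
```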
30. Análise de influência local nos modelos de riscos múltiplos / Influence diagnostics for polyhazard models in the presence of covariates. Juliana Betini Fachini, 06 February 2007.
In this work, several diagnostic methods for polyhazard models are presented. Polyhazard models are a flexible family for fitting lifetime data. Their main advantage over single-hazard models, such as the Weibull and log-logistic models, is that they accommodate a large class of hazard functions, including nonmonotone shapes such as bathtub and multimodal curves. Influence methods, such as local influence and the total local influence of an individual, are derived, analyzed and discussed. The computation of the likelihood displacement and of the normal curvature in the local influence method is also discussed. Finally, a real data set is used to illustrate the theory, and a residual analysis is performed in order to select an appropriate model.
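For reference, the polyhazard structure referred to above is the standard one, in which the overall hazard is a sum of component hazards (Weibull components are shown only as a common example):

```latex
h(t) = \sum_{k=1}^{K} h_k(t), \qquad
S(t) = \prod_{k=1}^{K} S_k(t), \qquad
h_k(t) = \frac{\gamma_k}{\alpha_k}\Bigl(\frac{t}{\alpha_k}\Bigr)^{\gamma_k - 1}.
```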