21

Prediction Performance of Survival Models

Yuan, Yan January 2008 (has links)
Statistical models are often used for the prediction of future random variables. There are two types of prediction: point prediction and probabilistic prediction. Prediction accuracy is quantified by performance measures, which are typically based on loss functions. We study estimators of these performance measures, namely the prediction error for point predictors and performance scores for probabilistic predictors. The focus of this thesis is to assess the prediction performance of survival models that analyze censored survival times. To accommodate censoring, we extend the inverse probability of censoring weighting (IPCW) method so that arbitrary loss functions can be handled. We also develop confidence interval procedures for these performance measures. We compare model-based, apparent-loss-based, and cross-validation estimators of prediction error under model misspecification and variable selection, for absolute relative error loss (in chapter 3) and misclassification error loss (in chapter 4). Simulation results indicate that cross-validation procedures typically produce reliable point estimates and confidence intervals, whereas model-based estimates are often sensitive to model misspecification. The methods are illustrated in two medical contexts in chapter 5. The apparent-loss-based and cross-validation estimators of performance scores for probabilistic predictors are discussed and illustrated with an example in chapter 6. We also draw connections among these performance measures.
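The IPCW idea extends naturally to arbitrary losses: uncensored subjects are reweighted by the inverse of the censoring survival probability at their event time. A minimal sketch, in which the function names and toy numbers are illustrative and the censoring survival probabilities are assumed known rather than estimated as they would be in practice:

```python
import numpy as np

def ipcw_prediction_error(time, event, pred, loss, censor_surv):
    """IPCW estimate of prediction error under an arbitrary loss.

    time        observed (possibly censored) survival times
    event       1 if the event time was observed, 0 if censored
    pred        point predictions of the survival time
    loss        elementwise loss function, e.g. absolute relative error
    censor_surv censoring survival probability G(T) at each observed time
    """
    time, event, pred = map(np.asarray, (time, event, pred))
    w = event / np.asarray(censor_surv)   # uncensored subjects weighted by 1/G(T)
    return np.sum(w * loss(time, pred)) / np.sum(w)

# absolute relative error loss, as studied in chapter 3 of the thesis
are = lambda t, p: np.abs(t - p) / t

err = ipcw_prediction_error(
    time=[2.0, 5.0, 3.0, 8.0],
    event=[1, 0, 1, 1],
    pred=[2.5, 4.0, 3.0, 6.0],
    loss=are,
    censor_surv=[0.9, 0.8, 0.85, 0.6],
)
```

The censored subject (event = 0) receives weight zero, and the remaining weights rescale the average so that censoring does not bias the estimate.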
22

Testing an Assumption of Non-Differential Misclassification in Case-Control Studies

Hui, Qin 01 August 2011 (has links)
One of the issues regarding misclassification in case-control studies is whether the misclassification error rates are the same for cases and controls. A common practice is to assume that the rates are the same (the "non-differential" assumption). However, it is questionable whether this assumption holds in many case-control studies. Unfortunately, no test has been available to check the validity of the non-differential assumption when validation data are unavailable. We propose the first such method to test the validity of the non-differential assumption in a case-control study with a 2 × 2 contingency table. First, the Exposure Operating Characteristic curve is defined. Next, two non-parametric methods are applied to test the assumption of non-differential misclassification. Three examples from practical applications are used to illustrate the methods, and a comparison is made.
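The consequence of the non-differential assumption can be illustrated directly: applying one common sensitivity and specificity to both rows of a 2 × 2 exposure table attenuates the odds ratio toward the null. A hypothetical sketch, with counts and error rates invented for illustration rather than taken from the thesis:

```python
import numpy as np

def misclassify_2x2(table, se, sp):
    """Expected observed 2x2 table when true exposure is misclassified
    with sensitivity se and specificity sp, identical in both rows
    (the non-differential assumption).

    table: rows = (cases, controls), cols = (exposed, unexposed)
    """
    t = np.asarray(table, dtype=float)
    obs = np.empty_like(t)
    obs[:, 0] = se * t[:, 0] + (1 - sp) * t[:, 1]        # classified exposed
    obs[:, 1] = (1 - se) * t[:, 0] + sp * t[:, 1]        # classified unexposed
    return obs

def odds_ratio(t):
    return (t[0, 0] * t[1, 1]) / (t[0, 1] * t[1, 0])

true = np.array([[60.0, 40.0], [30.0, 70.0]])  # hypothetical true counts
obs = misclassify_2x2(true, se=0.8, sp=0.9)
# non-differential error pulls the observed odds ratio toward 1
assert 1 < odds_ratio(obs) < odds_ratio(true)
```

Differential error (different se/sp per row) can bias the odds ratio in either direction, which is why the validity of the non-differential assumption matters.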
23

The Effect of Diagnostic Misclassification on Spatial Statistics for Regional Data

Scott, Christopher 01 1900 (has links)
Spatial epidemiological studies which assume perfect health status information can be biased if imperfect diagnostic tests have been used to obtain the health status of individuals in a population. This study investigates the effect of diagnostic misclassification on the spatial statistical methods commonly used to analyze regional health status data in spatial epidemiology. The methods considered here are: Moran's I to assess clustering in the data, a Gaussian random field model to estimate prevalence and the range and sill parameters of the semivariogram, and Kulldorff's spatial scan test to identify clusters. Various scenarios of diagnostic misclassification were simulated from a West Nile virus dead-bird surveillance program, and the results were evaluated. It was found that non-differential misclassification added random noise to the spatial pattern in observed data which created bias in the statistical results. However, when regional sample sizes were doubled, the effect from misclassification bias on the spatial statistics decreased.
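Moran's I, the clustering statistic named above, is straightforward to compute for regional data. A minimal sketch with a hypothetical adjacency matrix over four regions, not the West Nile surveillance data used in the study:

```python
import numpy as np

def morans_i(x, w):
    """Moran's I spatial autocorrelation statistic.

    x: regional values (e.g. observed prevalence per region)
    w: spatial weight matrix, w[i, j] > 0 if regions i and j are neighbours
    """
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    z = x - x.mean()
    n = len(x)
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

# four regions on a line; neighbours share an edge (hypothetical values)
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
x = [0.1, 0.2, 0.7, 0.8]   # spatially clustered values give positive I
I = morans_i(x, w)
```

Misclassification that adds independent noise to each region's value shrinks the numerator cross-products, which is one way to see why it biases I toward zero.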
24

Response Adaptive Designs in the Presence of Mismeasurement

LI, XUAN January 2012 (has links)
Response adaptive randomization represents a major advance in clinical trial methodology that helps balance the benefits to the collective and the benefits to the individual, and improves efficiency without undermining the validity and integrity of the clinical research. Response adaptive designs use the information accumulated so far in the trial to modify the randomization procedure and deliberately bias treatment allocation in order to assign more patients to the potentially better treatment. Little attention, however, has been paid to the problem of errors-in-variables in adaptive clinical trials. In this work, some important issues and methods of response adaptive design of clinical trials in the presence of mismeasurement are examined. We formulate response adaptive designs when the dichotomous response may be misclassified. We consider the optimal allocations under various objectives, investigate the asymptotically best response adaptive randomization procedure, and discuss effects of misclassification on the optimal allocation. We derive explicit expressions for the variance-penalized criterion with misclassified binary responses and propose a new target proportion of treatment allocation under the criterion. A real-life clinical trial and some related simulation results are also presented.
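To illustrate how misclassification distorts a target allocation, the sketch below applies a common sensitivity and specificity to the true success probabilities and recomputes the square-root-based optimal allocation of Rosenberger et al.; that particular target is a standard choice used here only for illustration, not necessarily the criterion derived in the thesis, and all numbers are hypothetical:

```python
import math

def observed_success_prob(p, se, sp):
    """Probability a response is *recorded* as a success when the true
    success probability is p, given sensitivity se and specificity sp."""
    return se * p + (1 - sp) * (1 - p)

def sqrt_allocation(pA, pB):
    """Square-root-based optimal allocation to treatment A:
    rho = sqrt(pA) / (sqrt(pA) + sqrt(pB))."""
    return math.sqrt(pA) / (math.sqrt(pA) + math.sqrt(pB))

pA, pB = 0.7, 0.4                       # true success probabilities
rho_true = sqrt_allocation(pA, pB)
rho_obs = sqrt_allocation(observed_success_prob(pA, se=0.9, sp=0.95),
                          observed_success_prob(pB, se=0.9, sp=0.95))
# misclassification pulls the observed probabilities, and hence the
# target allocation, toward equal (1:1) randomization
```

Under non-differential error both arms' observed success rates move toward each other, so an adaptive rule targeting the observed probabilities assigns fewer patients to the genuinely better arm than intended.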
26

Misclassification Probabilities through Edgeworth-type Expansion for the Distribution of the Maximum Likelihood based Discriminant Function

Umunoza Gasana, Emelyne January 2021 (has links)
This thesis covers misclassification probabilities via an Edgeworth-type expansion of the maximum likelihood based discriminant function. When deriving the misclassification errors, the population mean and variance are first assumed known, with the variance common across populations; thereafter we consider the case where these parameters are unknown. Cumulants of the discriminant function for discriminating between two multivariate normal populations are derived. Approximate probabilities of the misclassification errors are then established via an Edgeworth-type expansion based on the standard normal distribution.
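For the baseline case of known parameters and a common covariance matrix, the misclassification probability of the linear discriminant rule has the classical closed form Phi(-Delta/2), where Delta^2 is the Mahalanobis distance between the two population means. A minimal sketch of that baseline; the Edgeworth-type corrections for unknown, estimated parameters developed in the thesis are not reproduced here:

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def lda_misclassification_prob(mu1, mu2, sigma):
    """Misclassification probability of the optimal linear discriminant
    rule for two multivariate normal populations with known means and a
    common covariance matrix: Phi(-Delta/2), Delta^2 the Mahalanobis
    distance between the means."""
    d = np.asarray(mu1, float) - np.asarray(mu2, float)
    delta2 = d @ np.linalg.solve(np.asarray(sigma, float), d)
    return normal_cdf(-sqrt(delta2) / 2)

# two bivariate normals two standard deviations apart
p = lda_misclassification_prob([0, 0], [2, 0], np.eye(2))
```

When the parameters are estimated from training samples, the error probability exceeds this value, which is what higher-order expansions of the discriminant function's distribution quantify.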
27

Epidemic of Lung Cancer or Artifact of Classification in the State of Kentucky?

Simo, Beatrice 05 May 2007 (has links) (PDF)
Lung cancer remains the leading cause of cancer death in the United States despite public health campaigns aimed at reducing its mortality. Kentucky has the highest lung cancer incidence and mortality of any state. This study assesses the impact of misclassification of cause of death from lung cancer in Kentucky for the period 1979 to 2002. We examine the potential competing classification of death for two other smoking-related diseases, chronic obstructive pulmonary disease (COPD) and emphysema. Age-adjusted mortality rates for these diseases among white males were obtained from the National Center for Health Statistics. There was little evidence that misclassification involving COPD or emphysema mortality rates could account for the rising lung cancer rates in Kentucky. The long-term increase in lung cancer mortality in Kentucky is more likely due to a combination of risk effects between smoking and other risk factors for this disease.
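The age-adjusted rates referred to above are typically produced by direct standardization: stratum-specific rates weighted by a standard population's age distribution, so that comparisons across years are not confounded by an aging population. A minimal sketch with hypothetical numbers:

```python
def direct_age_adjusted_rate(stratum_rates, standard_pop):
    """Directly age-standardized rate: a weighted average of
    age-stratum-specific rates, weighted by a standard population."""
    total = sum(standard_pop)
    return sum(r * w for r, w in zip(stratum_rates, standard_pop)) / total

# deaths per 100,000 in three hypothetical age strata, with
# hypothetical standard population counts as weights
rate = direct_age_adjusted_rate(
    stratum_rates=[10.0, 60.0, 300.0],
    standard_pop=[50_000, 30_000, 20_000],
)
```

Because the same standard weights are applied in every calendar year, a trend in the adjusted rate reflects changing stratum-specific mortality rather than a shifting age structure.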
28

Correction Methods, Approximate Biases, and Inference for Misclassified Data

Shieh, Meng-Shiou 01 May 2009 (has links)
When categorical data are placed into the wrong category, we say the data are affected by misclassification, a common problem in data collection. It is well known that naive estimators of category probabilities and regression coefficients that ignore misclassification can be biased. In this dissertation, we develop methods that provide improved estimators and confidence intervals for a proportion when only a misclassified proxy is observed, and improved estimators and confidence intervals for regression coefficients when only misclassified covariates are observed. Following the introduction and literature review, we develop two estimators for a proportion: one that reduces the bias, and one with smaller mean squared error. We then give two methods to find a confidence interval for a proportion, one using optimization techniques and the other using Fieller's method. After that, we focus on developing corrected estimators for regression coefficients with misclassified covariates, with or without perfectly measured covariates, and with a known or estimated misclassification/reclassification model. These correction methods use the score function approach, regression calibration, and a mixture model. We also use Fieller's method to find a confidence interval for the slope of simple regression with misclassified binary covariates. Finally, we use simulation to demonstrate the performance of our proposed methods.
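A standard starting point for the proportion problem described above is the matrix-method (Rogan-Gladen) estimator, which inverts known misclassification rates; improved estimators such as those in the dissertation address its bias and boundary problems. A minimal sketch, where clipping to [0, 1] is one common convention and the numbers are illustrative:

```python
def corrected_proportion(p_obs, se, sp):
    """Matrix-method (Rogan-Gladen) estimate of a true proportion from a
    misclassified proxy with known sensitivity se and specificity sp:

        p = (p_obs - (1 - sp)) / (se + sp - 1)

    clipped to [0, 1], since the raw estimate can fall outside the range."""
    p = (p_obs - (1 - sp)) / (se + sp - 1)
    return min(max(p, 0.0), 1.0)

# observed proportion 0.30, with 85% sensitivity and 95% specificity
p_hat = corrected_proportion(p_obs=0.30, se=0.85, sp=0.95)
```

The denominator se + sp - 1 shrinks toward zero as the classifier degrades, inflating the variance of the corrected estimate; that instability motivates estimators with smaller mean squared error.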
29

SENSITIVITY ANALYSIS – THE EFFECTS OF GLASGOW OUTCOME SCALE MISCLASSIFICATION ON TRAUMATIC BRAIN INJURY CLINICAL TRIALS

Lu, Juan 19 April 2010 (has links)
I. EFFECTS OF GLASGOW OUTCOME SCALE MISCLASSIFICATION ON TRAUMATIC BRAIN INJURY CLINICAL TRIALS The Glasgow Outcome Scale (GOS) is the primary endpoint for efficacy analysis of clinical trials in traumatic brain injury (TBI). Accurate and consistent assessment of outcome after TBI is essential to the evaluation of treatment results, particularly in the context of multicenter studies and trials. Inconsistent measurement or interobserver variation on the GOS, or indeed on any outcome scale, may adversely affect the sensitivity to detect treatment effects in clinical trials. The objective of this study is to examine the effects of nondifferential misclassification of the widely used five-category GOS and in particular to assess the impact of this misclassification on detecting a treatment effect and on statistical power. We followed two approaches. First, outcome differences were analyzed before and after correction for misclassification using a dataset of 860 patients with severe brain injury randomly sampled from two TBI trials with known differences in outcome. Second, the effects of misclassification on outcome distribution and statistical power were analyzed in simulation studies on a hypothetical 800-patient dataset. Three potential patterns of nondifferential misclassification (random, upward, and downward) on the dichotomous GOS outcome were analyzed, and the power of finding treatment differences was investigated in detail. All three patterns of misclassification reduce the power of detecting the true treatment effect and therefore lead to an underestimate of the true efficacy. The magnitude of this influence depends not only on the size of the misclassification, but also on the magnitude of the treatment effect. In conclusion, nondifferential misclassification directly reduces the power of finding the true treatment effect.
An awareness of this procedural error and methods to reduce misclassification should be incorporated in TBI clinical trials. II. IMPACT OF MISCLASSIFICATION ON THE ORDINAL GLASGOW OUTCOME SCALE IN TRAUMATIC BRAIN INJURY CLINICAL TRIALS Methods of ordinal GOS analysis are recommended to increase efficiency and optimize future TBI trials. To further explore the utility of the ordinal GOS in TBI trials, this study extends our previous investigation of the effect of misclassification on the dichotomous GOS to examine the impact of misclassification on the 5-point ordinal scale. The impact of nondifferential misclassification on the ordinal GOS was explored via probabilistic sensitivity analyses using TBI patient datasets contained in the IMPACT database (N=9,205). Three patterns of misclassification (random, upward, and downward) were extrapolated, with pre-specified outcome classification error distributions. The conventional 95% confidence intervals and the simulation intervals, which account for the misclassification only and for the misclassification and random errors together, were reported. Our simulation results showed that given a specification of a minimum of 80%, modes of 85% and 95%, and a maximum of 100% for both sensitivity and specificity (random pattern), or given the same trapezoidal distributed sensitivity but a perfect specificity (upward pattern), the misclassification would have caused an underestimated ordinal GOS in the observed data. In another scenario, given the same trapezoidal distributed specificity but a perfect sensitivity (downward pattern), the misclassification would have resulted in an inflated GOS estimate. Thus, the probabilistic sensitivity analysis suggests that the effect of nondifferential misclassification on the ordinal GOS is likely to be small, compared with the impact in the binary GOS situation.
The results indicate that the ordinal GOS analysis may gain efficiency not only from the nature of the ordinal outcome, but also from the relatively smaller impact of potential misclassification, compared with the conventional binary GOS analysis. Nevertheless, outcome assessment following TBI is a complex problem. The assessment quality could be influenced by many factors. All possible aspects must be considered to ensure the consistency and reliability of the assessment and optimize the success of the trial. III. A METHOD FOR REDUCING MISCLASSIFICATION IN THE EXTENDED GLASGOW OUTCOME SCORE The eight-point extended Glasgow Outcome Scale (GOSE) is commonly used as the primary outcome measure in traumatic brain injury (TBI) clinical trials. The outcome is conventionally collected through a structured interview with the patient alone or together with a caretaker. Despite the fact that using the structured interview questionnaires helps reach agreement in GOSE assessment between raters, significant variation remains among different raters. We introduce an alternate GOSE rating system as an aid in determining GOSE scores, with the objective of reducing inter-rater variation in the primary outcome assessment in TBI trials. Forty-five trauma centers were randomly assigned to three groups to assess GOSE scores on sample cases, using the alternative GOSE rating system coupled with central quality control (Group 1), the alternative system alone (Group 2), or conventional structured interviews (Group 3). The inter-rater variation between an expert and untrained raters was assessed for each group and reported through raw agreement and weighted kappa (κ) statistics. Groups 2 and 3, without central review, yielded inter-rater agreements of 83% (weighted κ = 0.81; 95% CI 0.69, 0.92) and 83% (weighted κ = 0.76; 95% CI 0.63, 0.89), respectively, in GOS scores.
In GOSE, the groups had agreements of 76% (weighted κ = 0.79; 95% CI 0.69, 0.89) and 63% (weighted κ = 0.70; 95% CI 0.60, 0.81), respectively. The group using the alternative rating system coupled with central monitoring yielded the highest inter-rater agreement among the three groups in rating GOS (97%; weighted κ = 0.95; 95% CI 0.89, 1.00) and GOSE (97%; weighted κ = 0.97; 95% CI 0.91, 1.00). The alternate system is an improved GOSE rating method that reduces inter-rater variation and provides, for the first time, source documentation and structured narratives that allow a thorough central review of information. The data suggest that a collective effort can be made to minimize inter-rater variation.
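The weighted kappa statistics reported above are computed from an inter-rater agreement table, with partial credit for near-miss disagreements. A minimal sketch using linear weights and a small hypothetical 3-category table, not the study's data:

```python
import numpy as np

def weighted_kappa(table):
    """Linearly weighted kappa for an r x r agreement table between two
    raters: w[i, j] = 1 - |i - j| / (r - 1), so adjacent-category
    disagreements get partial credit."""
    t = np.asarray(table, dtype=float)
    t = t / t.sum()                       # cell proportions
    r = t.shape[0]
    i, j = np.indices((r, r))
    w = 1 - np.abs(i - j) / (r - 1)       # linear agreement weights
    p_row, p_col = t.sum(axis=1), t.sum(axis=0)
    expected = np.outer(p_row, p_col)     # chance-agreement table
    po = (w * t).sum()                    # weighted observed agreement
    pe = (w * expected).sum()             # weighted chance agreement
    return (po - pe) / (1 - pe)

# hypothetical 3-category agreement table (expert vs untrained rater)
k = weighted_kappa([[20, 5, 0],
                    [4, 15, 3],
                    [1, 2, 10]])
```

Quadratic weights (squaring the distance penalty) are the other common choice; the study's reported values depend on which weighting was prespecified.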
30

Bayesian approaches for the analysis of sequential parallel comparison design in clinical trials

Yao, Baiyun 07 November 2018 (has links)
Placebo response, an apparent improvement in the clinical condition of patients randomly assigned to the placebo treatment, is a major issue in clinical trials on psychiatric and pain disorders. Properly addressing the placebo response is critical to an accurate assessment of the efficacy of a therapeutic agent. The Sequential Parallel Comparison Design (SPCD) is one approach for addressing the placebo response. A SPCD trial runs in two stages, re-randomizing placebo patients in the second stage, and the analysis pools the data from both stages. In this thesis, we propose a Bayesian approach for analyzing SPCD data. Our primary proposed model overcomes some of the limitations of existing methods and offers greater flexibility in performing the analysis. We find that our model performs on par with, and under certain conditions better than, existing methods in preserving the type I error rate and minimizing the mean squared error. We further develop our model in two ways. First, through prior specification we provide three approaches to model the relationship between the treatment effects from the two stages, as opposed to arbitrarily specifying the relationship as was done in previous studies. Under proper specification these approaches have greater statistical power than the initial analysis and give accurate estimates of this relationship. Second, we revise the model to treat the placebo response as a continuous rather than a binary characteristic. The binary classification, which groups patients into "placebo responders" or "placebo non-responders", can lead to misclassification, which can adversely impact the estimate of the treatment effect. As an alternative, we propose to view the placebo response in each patient as an unknown continuous characteristic. This characteristic is estimated and then used to measure the contribution (or the weight) of each patient to the treatment effect.
Building upon this idea, we propose two different models which weight the contribution of placebo patients to the estimated second stage treatment effect. We show that this method is more robust against the potential misclassification of responders than previous methods. We demonstrate our methodology using data from the ADAPT-A SPCD trial.
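A common frequentist SPCD analysis pools the stage-1 effect (all randomized patients) and the stage-2 effect (re-randomized placebo non-responders) with a prespecified weight. A minimal sketch of that pooling step, where the weight 0.6 and the effect values are arbitrary illustrations; the Bayesian models proposed in the thesis replace this simple fixed weighting:

```python
def spcd_pooled_effect(d1, d2, w=0.6):
    """Pooled SPCD treatment effect: a prespecified-weight average of the
    stage-1 effect d1 (all randomized patients) and the stage-2 effect d2
    (placebo non-responders re-randomized in stage 2)."""
    return w * d1 + (1 - w) * d2

# hypothetical stage-wise treatment effect estimates
effect = spcd_pooled_effect(d1=1.2, d2=2.0)
```

Because stage 2 enrolls only patients classified as placebo non-responders, any misclassification at the end of stage 1 contaminates d2, which is the weakness the continuous-placebo-response weighting described above is designed to mitigate.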
