SMART designs tailor individual treatment by re-randomizing patients to subsequent therapies based on their response to the initial treatment. However, patients may be misclassified as responders or non-responders, which can lead to inappropriate treatment assignment. In a two-stage SMART design, assuming equal randomization and equal variances for misclassified and correctly classified patients, we evaluated the effects of misclassification on the mean, variance, and type I error/power of single sequential treatment (SST) outcomes, dynamic treatment regime (DTR) outcomes, and the overall outcome. The results showed that misclassification can bias the estimated treatment effect for every type of outcome. Although the magnitude of the bias varied across design templates, several conclusions held consistently: 1) for any fixed sensitivity, the bias of the mean of SST responders approached 0 as specificity increased to 1, and for any fixed specificity, the bias of the mean of SST non-responders approached 0 as sensitivity increased to 1; 2) for any fixed specificity, the bias of the mean of SST responders was a monotonic nonlinear function of sensitivity, and for any fixed sensitivity, the bias of the mean of SST non-responders was a monotonic nonlinear function of specificity; 3) the bias of the variance of SSTs was always a non-monotonic nonlinear function; 4) the variance of SSTs was always over-estimated under misclassification; 5) the maximum absolute relative bias of the variance of SSTs was always one quarter of the squared mean difference between misclassified and correctly classified patients divided by the true variance, although this maximum might not be attained for sensitivity and specificity in (0, 1); 6) as functions of sensitivity and specificity, the bias of the mean of DTRs or of the overall outcome was always linear, while the bias of their variances was always non-monotonic and nonlinear; 7) the relative bias of the mean/variance of DTRs or of the overall outcome could approach 0 without sensitivity or specificity being equal to 1. Furthermore, misclassification affected statistical inference: power could fall below or exceed the planned 80% and varied either monotonically or non-monotonically as sensitivity or specificity decreased.
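To make findings 4) and 5) concrete, the following is a minimal sketch of the observed-responder arm under misclassification, using the stated assumptions of equal randomization and equal variances. The notation is introduced here for illustration and need not match the thesis's own: π is the true response rate, Se and Sp the sensitivity and specificity, μ_R and μ_M the means of correctly classified and misclassified patients in that arm, and σ² their common variance.

\[
w \;=\; \frac{(1-\pi)(1-Sp)}{\pi\,Se + (1-\pi)(1-Sp)}, \qquad
\mu_{\mathrm{obs}} \;=\; (1-w)\,\mu_R + w\,\mu_M ,
\]
\[
\sigma^2_{\mathrm{obs}} \;=\; \sigma^2 + w(1-w)\,(\mu_M-\mu_R)^2 \;\ge\; \sigma^2 ,
\qquad
\max_{w\in(0,1)} \frac{\sigma^2_{\mathrm{obs}}-\sigma^2}{\sigma^2}
\;=\; \frac{(\mu_M-\mu_R)^2}{4\,\sigma^2}\quad\text{at } w=\tfrac12 .
\]

Because the maximizing value w = 1/2 corresponds to only one particular combination of π, Se, and Sp, this maximum need not be observed for sensitivity and specificity in (0, 1), consistent with finding 5).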
To mitigate these adverse effects, patient observations can be weighted by the likelihood that their response was correctly classified. We investigated normal-mixture-model (NM) and k-nearest-neighbor (KNN) strategies to reduce the bias of the mean and variance and to improve inference on the final-stage outcome. The NM strategy estimated each patient's early-stage probability of being a responder by maximizing the likelihood with an EM algorithm, while KNN estimated these probabilities from the classifications of the k nearest observations. Simulations were used to compare the two approaches. The results showed that: 1) KNN and NM produced modest reductions in the bias of the point estimates of SSTs; 2) both strategies reduced the bias of the point estimates of DTRs when misclassified and correctly classified patients from the same initial treatment had unequal means; 3) NM reduced the bias of the point estimate of the overall outcome more than KNN; 4) in general, neither strategy had much effect on power; 5) under the same response rate and the same treatment effects among responders or among non-responders, the type I error should be preserved at 0.05 regardless of misclassification, yet the observed type I error tended to fall below 0.05; 6) KNN preserved the type I error at 0.05, whereas NM could inflate it. Thus, although both KNN and NM usually improved point estimates in SMART designs when misclassification was suspected, the trade-offs were an increased type I error rate and little effect on power.
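As a rough illustration of the two weighting strategies (not the thesis's actual implementation: the function names, the one-dimensional early-stage outcome, and the convention that the higher-mean mixture component is the "responder" component are assumptions made here), the following Python sketch estimates each patient's probability of being a responder with a two-component normal mixture fit by EM and with a KNN vote over observed classifications, and then weights the final-stage outcomes by those probabilities.

import numpy as np

def em_responder_prob(x, n_iter=200, tol=1e-8):
    """Two-component normal mixture fit by EM to an early-stage outcome x.
    Returns the posterior probability that each patient belongs to the
    higher-mean component (taken here as 'responder'). A sketch only:
    starting values and convergence rules may differ from the thesis."""
    x = np.asarray(x, dtype=float)
    # crude starting values from a median split
    lo, hi = x[x <= np.median(x)], x[x > np.median(x)]
    mu = np.array([lo.mean(), hi.mean()])
    sd = np.array([max(lo.std(), 1e-3), max(hi.std(), 1e-3)])
    prop = np.array([0.5, 0.5])          # mixing proportions
    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: component responsibilities under current parameters
        dens = np.vstack([
            prop[k] * np.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
            / (sd[k] * np.sqrt(2.0 * np.pi))
            for k in range(2)
        ])
        total = dens.sum(axis=0)
        resp = dens / total
        # M-step: update mixing proportions, means, and standard deviations
        nk = resp.sum(axis=1)
        prop = nk / x.size
        mu = (resp * x).sum(axis=1) / nk
        sd = np.maximum(np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk), 1e-3)
        ll = np.log(total).sum()
        if abs(ll - ll_old) < tol:
            break
        ll_old = ll
    return resp[np.argmax(mu)]           # P(responder | x) for each patient

def knn_responder_prob(x, observed_label, k=5):
    """Estimate P(responder) as the fraction of observed responder labels
    among the k nearest neighbours on the early-stage outcome."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(observed_label, dtype=float)
    probs = np.empty(x.size)
    for i in range(x.size):
        nearest = np.argsort(np.abs(x - x[i]))[:k]   # the point itself is included
        probs[i] = y[nearest].mean()
    return probs

def weighted_final_stage_mean(final_outcome, prob_correct):
    """Weight each patient's final-stage outcome by the estimated probability
    that their responder/non-responder classification was correct."""
    w = np.asarray(prob_correct, dtype=float)
    y = np.asarray(final_outcome, dtype=float)
    return (w * y).sum() / w.sum()

In this sketch, either the EM posterior or the KNN vote would supply the per-patient weight in the weighted estimate of a final-stage mean; the simulations described above compare how well each choice reduces the bias introduced by misclassification.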
Our work shows that misclassification should be taken into account in SMART designs because it introduces bias, and that KNN or NM strategies applied at the final stage cannot fully remove the bias of point estimates or restore power. In future work, however, adjusting for covariates may allow these two strategies to improve classification accuracy for the early-stage outcomes.
Identifier | oai:union.ndltd.org:vcu.edu/oai:scholarscompass.vcu.edu:etd-6768 |
Date | 01 January 2018 |
Creators | He, Jun |
Publisher | VCU Scholars Compass |
Source Sets | Virginia Commonwealth University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Theses and Dissertations |
Rights | © The Author |