11.
Analysis of health-related quality of life data in clinical trial with non-ignorable missing based on pattern mixture model. / CUHK electronic theses & dissertations collection. January 2006.
Introduction. Health-related quality of life (HRQoL) is now included as a major endpoint in many cancer clinical trials, in addition to traditional endpoints such as tumor response and survival. It refers to how an illness or its treatment affects patients' ability to function and whether it induces symptoms. Toxicity, progression, and death are common outcomes affecting patients' QoL in cancer trials. Because the resulting missing data do not occur at random (such data are called non-ignorable missing data), conventional methods of analysis are not appropriate. General methods for this problem are needed so that treatments that improve patients' QoL, or those with serious side effects detrimental to it, can be identified. Joint models for the QoL outcomes and the indicators of drop-out have recently been used in longitudinal studies to correct for non-ignorable missingness. Two broad classes of joint models are used: selection models and pattern mixture models. Most methodological development has concerned the selection model, while the pattern mixture model has attracted less attention because of its identifiability problem. Although the pattern mixture model has its own limitations, a modified version incorporating generalized estimating equations (GEE) can be used in practice.
Methods. A GEE approach based on the modified pattern mixture model is constructed to handle the non-ignorable missing data problem. We conducted a simulation study to examine the performance of the model for different types of data. Two scenarios were examined: in the first, both groups follow quadratic trends but with different rates of change; in the second, one group follows a linear trend over time while the other follows a quadratic trend. The second methodology is multiple imputation based on the modified pattern mixture model; the main idea is to resample the data within each pattern to create complete data sets and then analyze them with standard methods. The two methods were compared in this study.
Results. When data are missing at random, GEE alone has higher power and smaller bias than the pattern mixture model; the pattern mixture model, however, performs well when data are missing not at random. The modified pattern mixture model has higher power than the standard pattern mixture model when one group follows a quadratic trend and the other a linear trend, but its power is similar to or worse than the standard model's when both groups follow quadratic trends with different rates of change. The results of multiple imputation based on the modified pattern mixture model were similar, although its power was lower than that of the GEE model.
Conclusion. Missing data are a common problem in clinical trials, and methodology is urgently needed to detect differences between two treatment drugs in patients' quality of life. The modified pattern mixture model, incorporating either the GEE method or multiple imputation, provides a solution to the non-ignorable missing data problem. Different clinical trials, with their various treatment schedules, will give rise to different missing data patterns; further studies are needed on the optimal choice of patterns under these methods.
Mo Kwok Fai. / "August 2006." / Adviser: Benny Zee. / Source: Dissertation Abstracts International, Volume 68-09, Section B, page 6051. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (p. 91-93). / Abstracts in English and Chinese.
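To make the within-pattern multiple-imputation idea of this entry concrete, here is a minimal Python sketch (ours, not from the thesis): patients are grouped by dropout pattern, missing visits are filled by resampling observed values from patients still on study at that visit (a neighboring-pattern identifying restriction we assume for illustration, since a pattern's own post-dropout visits are never observed), and per-imputation estimates are pooled with Rubin's rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy longitudinal QoL trial: n patients, T visits, monotone dropout.
# pattern[i] = number of visits patient i completed before dropping out.
n, T = 200, 4
time = np.arange(T)
y = 50 + 2 * time + rng.normal(0, 5, size=(n, T))   # true linear trend
pattern = rng.choice([2, 3, 4], size=n, p=[0.2, 0.3, 0.5])
for i in range(n):
    y[i, pattern[i]:] = np.nan                      # visits after dropout are missing

M = 20                                              # number of imputations
est, var = np.empty(M), np.empty(M)
for m in range(M):
    y_imp = y.copy()
    for k in np.unique(pattern):
        idx = np.where(pattern == k)[0]
        for t in range(k, T):
            # identifying restriction (our assumption): borrow visit-t values
            # from patients whose pattern kept them on study at visit t
            donors = y[pattern > t, t]
            y_imp[idx, t] = rng.choice(donors, size=len(idx), replace=True)
    slopes = np.polyfit(time, y_imp.T, 1)[0]        # per-patient slope estimates
    est[m], var[m] = slopes.mean(), slopes.var(ddof=1) / n

# Rubin's rules: total variance = within + (1 + 1/M) * between
qbar, ubar, b = est.mean(), var.mean(), est.var(ddof=1)
se = np.sqrt(ubar + (1 + 1 / M) * b)
print(f"pooled slope estimate {qbar:.2f} (SE {se:.2f})")
```

The thesis uses a model-based imputation within the modified pattern mixture model; the hot-deck scheme above is only a stand-in to show the stratify-impute-pool workflow.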
12.
Maximization of power in randomized clinical trials using the minimization treatment allocation technique. Marange, Chioneso Show. January 2010.
The primary goal of a randomized clinical trial (RCT) is generally to compare two or more treatments, so clinical investigators require the most appropriate treatment allocation procedure to yield reliable results, regardless of whether the ultimate data suggest a clinically important difference between the treatments under study. Although recommended by many researchers, minimization has seldom been reported in randomized trials, mainly because of controversy over its statistical efficiency in detecting treatment effects and its complexity of implementation.
Methods: A SAS simulation program was designed to allocate patients into two treatment groups. Categorical prognostic factors were used together with multi-level response variables, and ordinal logistic regression models were used to demonstrate how simulated data can help determine the power of the minimization technique.
Results: Several scenarios were simulated. Within the selected scenarios, increasing the sample size significantly increased the power to detect the treatment effect, whereas decreasing the probability of allocation reduced it. Power did not change when the probability of allocation under balanced treatment groups was increased. The probability of allocation P_k was the only factor with a significant effect on treatment balance.
Conclusion: Maximum power can be achieved with a sample size of 300, although a smaller sample of 200 can be adequate to attain at least 80% power. To maximize power, the probability of allocation should be fixed at 0.75, and set to 0.5 when the treatment groups are equally balanced.
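As a concrete illustration of the minimization technique (the thesis used SAS; this Python sketch and its function names are ours), Pocock-Simon minimization computes, for each candidate arm, the total imbalance the new patient would create across the levels of their prognostic factors, then assigns the imbalance-minimizing arm with probability p (0.75 below, echoing the conclusion) and flips a fair coin when the arms are tied:

```python
import numpy as np

rng = np.random.default_rng(1)

def minimization_assign(factor_levels, counts, p_alloc=0.75):
    """Pocock-Simon minimization for two arms (hypothetical helper).

    factor_levels : the new patient's level for each prognostic factor
    counts        : counts[f][level][arm], running tallies per factor level
    p_alloc       : probability of choosing the imbalance-minimizing arm
    """
    imbalance = [0, 0]
    for arm in (0, 1):
        for f, lvl in enumerate(factor_levels):
            c = counts[f][lvl].copy()
            c[arm] += 1                       # pretend the patient joins this arm
            imbalance[arm] += abs(c[0] - c[1])
    if imbalance[0] == imbalance[1]:
        arm = int(rng.integers(2))            # tie: fair coin
    else:
        best = int(imbalance[1] < imbalance[0])
        arm = best if rng.random() < p_alloc else 1 - best
    for f, lvl in enumerate(factor_levels):
        counts[f][lvl][arm] += 1
    return arm

# two prognostic factors with 2 and 3 levels
n_levels = [2, 3]
counts = [{lvl: [0, 0] for lvl in range(k)} for k in n_levels]
arms = [minimization_assign([int(rng.integers(k)) for k in n_levels], counts)
        for _ in range(300)]
print("arm sizes:", arms.count(0), arms.count(1))
```

Wrapping this allocation in an outcome-generating loop and fitting the ordinal regression repeatedly is how a power simulation along the thesis's lines would proceed.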
13.
Machine Learning Methods for Causal Inference with Observational Biomedical Data. Averitt, Amelia Jean. January 2020.
Causal inference, the process of drawing a conclusion about the impact of an exposure on an outcome, is foundational to biomedicine, where it is used to guide intervention. The current gold-standard approach for causal inference is randomized experimentation, such as randomized controlled trials (RCTs). Yet randomized experiments, including RCTs, often enforce strict eligibility criteria that impede the generalizability of causal knowledge to the real world. Observational data, such as the electronic health record (EHR), are often regarded as a more representative source from which to generate causal knowledge. However, observational data are non-randomized, and causal estimates from this source are therefore susceptible to bias from confounders. This weakness complicates two central tasks of causal inference: the replication or evaluation of existing causal knowledge and the generation of new causal knowledge. In this dissertation I (i) address the feasibility of using observational data to replicate existing causal knowledge and (ii) present new methods for the generation of causal knowledge with observational data, focusing on the causal tasks of comparing an outcome between two cohorts and estimating the attributable risks of exposures in a causal system.
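The confounding problem this entry targets can be seen in a few lines of Python (a generic illustration we add, not the dissertation's methods): when sicker patients are more likely to be treated, the naive outcome difference is biased, while inverse-probability weighting by the propensity score recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
conf = rng.normal(size=n)                        # confounder, e.g. disease severity
treat = rng.random(n) < 1 / (1 + np.exp(-conf))  # sicker patients treated more often
y = 1.0 * treat + 2.0 * conf + rng.normal(size=n)  # true treatment effect = 1.0

naive = y[treat].mean() - y[~treat].mean()       # biased upward by confounding

# propensity score (the true logistic model, for brevity) and IPW estimate
ps = 1 / (1 + np.exp(-conf))
w = np.where(treat, 1 / ps, 1 / (1 - ps))        # inverse-probability weights
ipw = (np.average(y[treat], weights=w[treat])
       - np.average(y[~treat], weights=w[~treat]))
print(f"naive {naive:.2f}  IPW {ipw:.2f}  truth 1.00")
```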
14.
Essays on Adaptive Experimentation: Bringing Real-World Challenges to Multi-Armed Bandits. Qin, Chao. January 2024.
Classical randomized controlled trials have long been the gold standard for estimating treatment effects. However, adaptive experimentation, especially through multi-armed bandit algorithms, aims to improve efficiency beyond traditional randomized controlled trials. While there is a vast literature on multi-armed bandits, a simple yet powerful framework in reinforcement learning, real-world challenges can hinder the successful implementation of adaptive algorithms. This thesis seeks to bridge this gap by integrating real-world challenges into multi-armed bandits.
The first chapter examines two competing priorities that practitioners often encounter in adaptive experiments: maximizing total welfare through effective treatment assignments and swiftly conducting experiments to implement population-wide treatments. We propose a unified model that simultaneously accounts for within-experiment performance and post-experiment outcomes. We provide a sharp theory of optimal performance that not only unifies canonical results from the literature on regret minimization and best-arm identification but also uncovers novel insights. Our theory reveals that familiar algorithms, such as the recently proposed top-two Thompson sampling algorithm, can optimize a broad class of objectives if a single scalar parameter is appropriately adjusted. Furthermore, we demonstrate that substantial reductions in experiment duration can often be achieved with minimal impact on total regret.
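For readers unfamiliar with the algorithm named above, here is a minimal Python sketch of top-two Thompson sampling (ours; Bernoulli arms with Beta posteriors are assumed for simplicity). The scalar beta_sel below is the kind of single tunable parameter the chapter refers to: it trades off how often the sampled leader, versus a challenger, is played.

```python
import numpy as np

rng = np.random.default_rng(3)
true_means = np.array([0.3, 0.5, 0.6])   # hypothetical Bernoulli arms
K = len(true_means)
a_post = np.ones(K)                      # Beta(1, 1) posterior parameters
b_post = np.ones(K)
beta_sel = 0.5                           # probability of playing the sampled leader

for t in range(5000):
    leader = int(np.argmax(rng.beta(a_post, b_post)))
    if rng.random() < beta_sel:
        arm = leader
    else:
        arm = leader                     # resample until a challenger tops the draw
        while arm == leader:
            arm = int(np.argmax(rng.beta(a_post, b_post)))
    reward = rng.random() < true_means[arm]
    a_post[arm] += reward
    b_post[arm] += 1 - reward

post_mean = a_post / (a_post + b_post)
print("posterior means:", post_mean.round(3),
      "-> recommend arm", int(np.argmax(post_mean)))
```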
The second chapter studies the fundamental tension between the distinct priorities of non-adaptive and adaptive experiments: robustness to exogenous variation and efficient information gathering. We introduce a novel multi-armed bandit model that incorporates nonstationary exogenous factors, and propose deconfounded Thompson sampling, a more robust variant of the prominent Thompson sampling algorithm. We provide bounds on both within-experiment and post-experiment regret of deconfounded Thompson sampling, illustrating its resilience to exogenous variation and the delicate balance it strikes between exploration and exploitation. Our proofs leverage inverse propensity weights to analyze the evolution of the posterior distribution, a departure from established methods in the literature. Hinting that new understanding is indeed necessary, we demonstrate that a deconfounded variant of the popular upper confidence bound algorithm can fail completely.
15.
Reassessment of the statistical power of published controlled clinical trials. / CUHK electronic theses & dissertations collection. January 2005.
Background. The randomized controlled clinical trial is currently the most scientific method for evaluating the effect of medical interventions. The sample size of a trial is crucial for reliably estimating this effect, yet many clinical trials may not be sufficiently large to detect the effect of the interventions assessed. Previous studies of statistical power, a relative measure of the size of a study, were typically small, mainly examined trials with statistically insignificant results, and were flawed by biased or purely hypothetical estimates of the effect used in computing power. Using meta-analysis, we conducted this study with improved methods for estimating power and included a larger number of trials.
Methods. We identified trials from systematic reviews of clinical trials with binary outcomes published in five medical journals and the Cochrane Database of Systematic Reviews. We analyzed the power of trials with significant as well as insignificant results. In estimating power, we used the combined odds ratio of the meta-analysis as the effect estimate for trials from systematic reviews with a statistically significant result, and a relative risk reduction of 25% for trials from systematic reviews with a statistically insignificant result. In addition to the conventional method of estimating power, we developed a new "counting method" that requires no assumption about the effect. Power is also expressed as the relative and absolute difference between the number of subjects required for 80% power and the number actually recruited.
Findings. A total of 2,923,912 patients from 2,872 clinical trials in 466 systematic reviews were included in the analyses of this thesis. Of the 466 systematic reviews, 24% (113) were identified from the five journals and the remaining 76% (353) from the Cochrane Library. The 113 journal reviews contributed 1,000 trials and 1,583,204 patients, of which 13.7% (95% C.I.: 11.6%, 15.8%) of trials had sufficient power and the overall power was 34.0% (95% C.I.: 33.7%, 34.3%). The 353 Cochrane reviews contributed 1,872 trials and 1,340,708 patients, of which 16.7% (95% C.I.: 15.0%, 18.4%) of trials had sufficient power and the overall power was 37.8% (95% C.I.: 37.6%, 38.0%). (Abstract shortened by UMI.)
Tsoi Kam Fai. / "July 2005." / Adviser: Jin Ling Tang. / Source: Dissertation Abstracts International, Volume 67-01, Section B, page 0161. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (p. 107-113). / Abstracts in English and Chinese.
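The conventional power computation behind estimates like those in this entry can be sketched as follows (our Python illustration of the standard normal-approximation method; the thesis's exact formulas may differ): given a control event rate, an assumed odds ratio such as the meta-analytic one, and the per-arm sample size, compute the power of a two-sided test for a difference in proportions.

```python
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p0, odds_ratio, n_per_arm, alpha=0.05):
    """Approximate power of a two-arm trial with a binary outcome
    (normal approximation, two-sided test at level alpha)."""
    nd = NormalDist()
    odds1 = odds_ratio * p0 / (1 - p0)        # treatment-arm odds
    p1 = odds1 / (1 + odds1)                  # treatment-arm event rate
    se = sqrt(p0 * (1 - p0) / n_per_arm + p1 * (1 - p1) / n_per_arm)
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(abs(p1 - p0) / se - z_crit)

# e.g. control event rate 40%, odds ratio 0.7, 150 patients per arm
print(f"{power_two_proportions(0.40, 0.7, 150):.2f}")   # about 0.32
```

Trials of this hypothetical size and effect land near the 34% overall power the thesis reports, which conveys how underpowered typical published trials were.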
16.
Multivariate semiparametric regression models for longitudinal data. Li, Zhuokai. January 2014.
Multiple-outcome longitudinal data are abundant in clinical investigations. For example, infections with different pathogenic organisms are often tested concurrently, and assessments are usually taken repeatedly over time. It is therefore natural to consider a multivariate modeling approach to accommodate the underlying interrelationship among the multiple longitudinally measured outcomes. This dissertation proposes a multivariate semiparametric modeling framework for such data. Relevant estimation and inference procedures as well as model selection tools are discussed within this modeling framework. The first part of this research focuses on the analytical issues concerning binary data. The second part extends the binary model to a more general situation for data from the exponential family of distributions. The proposed model accounts for the correlations across the outcomes as well as the temporal dependency among the repeated measures of each outcome within an individual. An important feature of the proposed model is the addition of a bivariate smooth function for the depiction of concurrent nonlinear and possibly interacting influences of two independent variables on each outcome. For model implementation, a general approach for parameter estimation is developed by using the maximum penalized likelihood method. For statistical inference, a likelihood-based resampling procedure is proposed to compare the bivariate nonlinear effect surfaces across the outcomes. The final part of the dissertation presents a variable selection tool to facilitate model development in practical data analysis. Using the adaptive least absolute shrinkage and selection operator (LASSO) penalty, the variable selection tool simultaneously identifies important fixed effects and random effects, determines the correlation structure of the outcomes, and selects the interaction effects in the bivariate smooth functions. Model selection and estimation are performed through a two-stage procedure based on an expectation-maximization (EM) algorithm. Simulation studies are conducted to evaluate the performance of the proposed methods. The utility of the methods is demonstrated through several clinical applications.
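To illustrate the adaptive LASSO penalty at the heart of the selection tool described above (our sketch on a toy linear model, not the dissertation's mixed-effects implementation), one fits an initial consistent estimate, then runs a LASSO with coefficient-specific weights, implemented here by rescaling the design matrix:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(4)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.array([2.0, 0, 0, -1.5, 0, 0, 0, 0.8, 0, 0])   # sparse truth
y = X @ beta + rng.normal(size=n)

# step 1: initial consistent estimate (plain OLS here)
b_init = LinearRegression(fit_intercept=False).fit(X, y).coef_
w = 1 / np.abs(b_init)              # adaptive weights: small |b_init| => heavy penalty

# step 2: LASSO on the column-rescaled design, then map coefficients back;
# fitting on X / w with an L1 penalty is equivalent to penalizing sum(w_j * |b_j|)
lasso = Lasso(alpha=0.05, fit_intercept=False).fit(X / w, y)
b_adapt = lasso.coef_ / w
print(np.round(b_adapt, 2))          # zeros on the truly inactive coefficients
```

The same weighting idea, applied to fixed effects, random effects, and smooth-term interactions simultaneously, is what drives the dissertation's two-stage EM-based selection procedure.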
17.
Statistical analysis of clinical trial data using Monte Carlo methods. Han, Baoguang. 11 July 2014.
Indiana University-Purdue University Indianapolis (IUPUI) / In medical research, data analysis often requires complex statistical methods for which no closed-form solutions are available. Under such circumstances, Monte Carlo (MC) methods have found many applications. In this dissertation, we proposed several novel statistical models that utilize MC methods. In the first part, we focused on semicompeting risks data, in which a non-terminal event is subject to dependent censoring by a terminal event. Based on an illness-death multistate survival model, we proposed flexible random effects models. We further extended our model to the joint-modeling setting, where semicompeting risks data and repeated marker data are analyzed simultaneously. Since the proposed methods involve high-dimensional integration, Bayesian Markov chain Monte Carlo (MCMC) methods were utilized for estimation. The use of Bayesian methods also facilitates the prediction of individual patient outcomes. The proposed methods were demonstrated in both simulation and case studies.
In the second part, we focused on the re-randomization test, a nonparametric method that makes inference based solely on the randomization procedure used in the clinical trial. For this type of inference, the Monte Carlo method is often used to generate the null distribution of the treatment difference. However, an issue was recently discovered when subjects in a clinical trial are randomized with unbalanced treatment allocation to two treatments according to the minimization algorithm, a randomization procedure frequently used in practice: the null distribution of the re-randomization test statistic is not centered at zero, which compromises the power of the test. In this dissertation, we investigated the properties of the re-randomization test and proposed a weighted re-randomization method to overcome this issue. The proposed method was demonstrated through extensive simulation studies.
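A minimal Monte Carlo re-randomization test looks like this in Python (our sketch; for brevity it uses simple 2:1 coin-flip randomization rather than minimization, under which, as the dissertation shows, the null distribution can fail to center at zero):

```python
import numpy as np

rng = np.random.default_rng(5)

# toy trial: outcomes under the realized, unbalanced 2:1 allocation
n = 150
assign = rng.random(n) < 2 / 3                   # True = treatment
y = rng.normal(size=n) + 0.4 * assign            # true effect 0.4

obs = y[assign].mean() - y[~assign].mean()       # observed test statistic

# Monte Carlo null: re-run the same randomization procedure many times,
# keeping outcomes fixed, and recompute the statistic each time
B = 10_000
null = np.empty(B)
for b in range(B):
    a = rng.random(n) < 2 / 3
    null[b] = y[a].mean() - y[~a].mean()

p_value = (np.abs(null) >= abs(obs)).mean()
print(f"observed diff {obs:.2f}, re-randomization p = {p_value:.4f}")
print(f"null mean {null.mean():.3f} (approximately zero for this procedure)")
```

Swapping the coin flip for a minimization allocator shifts the null mean away from zero, which is exactly the defect the proposed weighted re-randomization method corrects.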
18.
Joint models for longitudinal and survival data. Yang, Lili. 11 July 2014.
Indiana University-Purdue University Indianapolis (IUPUI) / Epidemiologic and clinical studies routinely collect longitudinal measures of multiple outcomes. These longitudinal outcomes can be used to establish the temporal order of relevant biological processes and their association with the onset of clinical symptoms. In the first
part of this thesis, we proposed to use bivariate change point models for two longitudinal outcomes, with a focus on estimating the correlation between the two change points. We adopted a Bayesian approach for parameter estimation and inference. In the second part, we considered the situation where a time-to-event outcome is also collected along with multiple longitudinal biomarkers measured until the occurrence of the event or censoring. Joint models for longitudinal and time-to-event data can be used to estimate the association between the characteristics of the longitudinal measures over time and survival time. We developed a maximum-likelihood method to jointly model multiple longitudinal biomarkers and a time-to-event outcome. In addition, we focused on predicting conditional survival probabilities and evaluating the predictive accuracy of multiple longitudinal biomarkers within the joint modeling framework. We assessed the performance of the proposed methods in
simulation studies and applied the new methods to data sets from two cohort studies. / National Institutes of Health (NIH) Grants R01 AG019181, R24 MH080827, P30 AG10133, R01 AG09956.
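To convey the joint-model structure behind the conditional survival predictions mentioned above, here is a small Python sketch (ours; all parameter values are made up, and estimation is omitted): a subject-specific biomarker trajectory feeds the event hazard, and conditional survival probabilities follow by numerical integration of that hazard.

```python
import numpy as np

rng = np.random.default_rng(6)

# Shared-random-effects joint model: biomarker m_i(t) = (b0+u_i) + (b1+v_i)*t,
# and an event hazard depending on the current biomarker value:
# h_i(t) = h0 * exp(gamma * m_i(t))
n = 500
b0, b1, h0, gamma = 0.0, 0.5, 0.01, 0.8
u = rng.normal(0, 0.5, n)            # random intercepts
v = rng.normal(0, 0.2, n)            # random slopes

def survival(i, t, grid=400):
    """S_i(t) = exp(-integral of h_i(s) from 0 to t), trapezoid rule."""
    s = np.linspace(0.0, t, grid)
    haz = h0 * np.exp(gamma * ((b0 + u[i]) + (b1 + v[i]) * s))
    integral = (haz[:-1] + haz[1:]).sum() / 2 * (s[1] - s[0])
    return np.exp(-integral)

# conditional survival: P(T > t2 | T > t1) for subject i
i, t1, t2 = 0, 2.0, 5.0
print(f"P(T > {t2} | T > {t1}) = {survival(i, t2) / survival(i, t1):.3f}")
```

In the thesis, the random effects and association parameter are estimated by maximum likelihood from the observed biomarkers and event times; this sketch shows only how predictions are read off once such a model is in hand.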