1

Multiple testing problems in classical clinical trial and adaptive designs

Deng, Xuan 07 November 2018 (has links)
Multiplicity issues arise in a variety of situations in clinical trials, and statistical methods for multiple testing have gradually gained importance with the increasing number of complex clinical trial designs. In general, two types of multiple testing can be performed (Dmitrienko et al., 2009): union-intersection testing (UIT) and intersection-union testing (IUT). UIT is of interest in this dissertation; thus, the familywise error rate (FWER) is required to be controlled in the strong sense. A number of methods have been developed for controlling the FWER, including single-step and stepwise procedures. In single-step approaches, such as the simple Bonferroni method, the rejection decision for a hypothesis does not depend on the decision for any other hypothesis. Single-step approaches can be improved in terms of power through stepwise approaches while still controlling the desired error rate. These procedures can be further improved by parametric approaches. In the first project, we developed a new and powerful single-step progressive parametric multiple (SPPM) testing procedure for correlated normal test statistics. Through simulation studies, we demonstrate that SPPM improves power substantially when the correlation is moderate and/or the magnitudes of effect sizes are similar.
Group sequential designs (GSDs) are clinical trials allowing interim looks with the possibility of early termination for efficacy, harm, or futility, which can reduce the overall costs and timelines of developing a new drug. However, repeated looks at the data also raise multiplicity issues and can inflate the type I error rate. Proper adjustments for this error inflation have been widely discussed (Pocock, 1977; O'Brien and Fleming, 1979; Wang and Tsiatis, 1987; Lan and DeMets, 1983). Most of the GSD literature focuses on a single endpoint; GSDs with multiple endpoints, however, have also received considerable attention.
The main focus of our second project is a GSD with multiple primary endpoints, in which the trial is designed to evaluate whether at least one of the endpoints is statistically significant. In this design, multiplicity issues arise from both the repeated interim analyses and the multiple endpoints, so appropriate adjustments must be made to control the type I error rate. Our second purpose is to show that the combination of multiple endpoints and repeated interim analyses can lead to a more powerful design. Using the multivariate normal distribution, we propose a method that allows simultaneous consideration of the interim analyses and all clinical endpoints. The new approach is derived from the closure principle and therefore controls the type I error rate in the strong sense. We evaluate the power under different scenarios and show that the approach compares favorably to other methods when the correlation among endpoints is non-zero.
Within the group sequential framework, another interesting topic is the multi-arm multi-stage (MAMS) design, in which multiple arms are involved in the trial from the beginning, with flexibility for treatment selection or stopping decisions at the interim analyses. One of the major hurdles of MAMS is the computational cost, which grows with the number of arms and interim looks. Various designs have been proposed to overcome this difficulty (Thall et al., 1988; Schaid et al., 1990; Follmann et al., 1994; Stallard and Todd, 2003; Stallard and Friede, 2008; Magirr et al., 2012; Wason et al., 2017) while still controlling the FWER against the potential inflation from multiple-arm comparisons and multiple interim tests. Here, we consider a more flexible drop-the-loser design that allows safety information to inform treatment selection without a pre-specified arm-dropping mechanism, while still retaining reasonably high power. Two different types of stopping boundaries are proposed for such a design, and the sample size is adjustable if the winning arm is dropped for safety reasons.
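The type I error inflation from repeated looks that motivates this work can be reproduced with a small Monte Carlo sketch (not from the dissertation; the number of looks, simulation size, and seed are arbitrary illustrative choices):

```python
import math
import random

def inflated_alpha(n_looks=5, n_sims=20000, crit=1.96, seed=1):
    """Monte Carlo estimate of the overall type I error rate when an
    unadjusted two-sided z-test at level 0.05 is applied at every
    interim look of a trial with independent normal increments."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        s = 0.0
        for k in range(1, n_looks + 1):
            s += rng.gauss(0.0, 1.0)   # new data increment under H0
            z = s / math.sqrt(k)       # cumulative z-statistic at look k
            if abs(z) > crit:          # naive, unadjusted test at each look
                rejections += 1
                break
        # if no look rejected, the trial correctly retains H0
    return rejections / n_sims

print(inflated_alpha(n_looks=1))  # single look: close to the nominal 0.05
print(inflated_alpha(n_looks=5))  # five looks: inflated well above 0.05
```

With five unadjusted looks the simulated error rate lands near the well-known ~0.14 figure, which is exactly the inflation that Pocock-, O'Brien-Fleming-, and Lan-DeMets-type boundaries are designed to remove.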
2

A comparison of adaptive designs in clinical trials: when multiple treatments are tested in multiple stages

Park, Sukyung 09 October 2014 (has links)
In recent times, there has been increasing interest in adaptive designs for clinical trials. As opposed to conventional designs, adaptive designs allow flexible design adaptation in the middle of a trial based on accumulated data. Although various models have been developed from both frequentist and Bayesian perspectives, the relative statistical performance of adaptive designs is somewhat controversial, and little is known about that of Bayesian adaptive designs. Most comparison studies have also focused on a single experimental treatment rather than multiple experimental treatments. In this report, frequentist and Bayesian adaptive designs were compared in terms of statistical power in a simulation study, assuming the situation in which multiple experimental treatments are tested in multiple stages. The designs included in the current study are the group sequential design (frequentist), an adaptive design based on combination tests (frequentist), and a Bayesian adaptive design. Based on the results under multiple scenarios, the Bayesian adaptive design showed the highest power, and the design based on combination tests performed better than the group sequential design when proper interim adaptation could be conducted to increase power.
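As an illustration of the combination-test idea (the report does not specify which combination function was used, so Fisher's product criterion is assumed here), stage-wise p-values can be pooled into a single test. The chi-square tail probability is computed in closed form, which is possible because the degrees of freedom are even:

```python
import math

def chi2_sf_even_df(x, df):
    """Survival function P(X > x) of a chi-square variable with even df,
    using the closed form exp(-x/2) * sum_{i<df/2} (x/2)^i / i!."""
    assert df % 2 == 0 and df > 0
    k = df // 2
    term = 1.0
    total = 1.0
    for i in range(1, k):
        term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

def fisher_combination(pvals):
    """Fisher's product test: -2 * sum(log p_i) ~ chi2 with 2m df under H0."""
    stat = -2.0 * sum(math.log(p) for p in pvals)
    return stat, chi2_sf_even_df(stat, 2 * len(pvals))

# Two hypothetical stage-wise p-values, neither significant on its own:
stat, p = fisher_combination([0.08, 0.11])
print(stat, p)  # combined p-value is close to 0.05
```

The point of the example: two individually non-significant stages can combine to near-significant overall evidence, while the pre-specified combination function keeps the overall type I error rate controlled even after interim adaptation.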
3

AN APPROACH FOR FINDING A GENERAL APPROXIMATION TO THE GROUP SEQUENTIAL BOOTSTRAP TEST

Ekstedt, Douglas January 2022 (has links)
Randomized experiments are regarded as the gold standard for estimating causal effects. Commonly, a single test is performed using a fixed sample size. However, data may also be collected sequentially, and for economic and ethical reasons it may be desirable to terminate a trial early. The group sequential design allows for interim analyses and early stopping of a trial without the need for continuous monitoring of the accumulating data. Implementing a group sequential procedure requires that the test statistic observed at each wave of testing have a known or asymptotically known sampling distribution. This thesis investigates an approach for finding a general approximation to the group sequential bootstrap test for test statistics with unknown or analytically intractable sampling distributions; there is currently no bootstrap version of the group sequential test. The approach approximates the covariance structure of the test statistics over time, but not their marginal sampling distributions, with that of a normal test statistic. The evaluation is performed in a Monte Carlo simulation study in which the achieved significance level is compared to the nominal level. Evidence from the Monte Carlo simulations suggests that the approach performs well for test statistics whose sampling distributions are close to normal.
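The single-look bootstrap test that the thesis builds on can be sketched as follows; this is a generic one-sample illustration (data, resample count, and seed are made up), not the thesis's group sequential extension:

```python
import random
import statistics

def bootstrap_pvalue(sample, null_mean=0.0, n_boot=5000, seed=7):
    """Two-sided bootstrap p-value for H0: mean == null_mean.
    The data are re-centred at the null mean, resampled with
    replacement, and the observed deviation is compared to the
    resampled deviations -- no normality assumption needed."""
    rng = random.Random(seed)
    n = len(sample)
    sample_mean = statistics.mean(sample)
    obs = abs(sample_mean - null_mean)
    centred = [x - sample_mean + null_mean for x in sample]  # impose H0
    count = 0
    for _ in range(n_boot):
        boot = [rng.choice(centred) for _ in range(n)]
        if abs(statistics.mean(boot) - null_mean) >= obs:
            count += 1
    return count / n_boot

data = [0.9, 1.4, -0.3, 2.1, 0.7, 1.1, 0.2, 1.8]
p = bootstrap_pvalue(data, null_mean=0.0)
print(p)  # small: the sample mean is far from 0 relative to its spread
```

The group sequential difficulty is that such a statistic would be recomputed at every interim look on overlapping data, so the joint distribution across looks is what must be approximated; the thesis's approach borrows only the covariance structure of the normal case for that step.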
4

Review and Extension for the O’Brien Fleming Multiple Testing procedure

Hammouri, Hanan 22 November 2013 (has links)
O'Brien and Fleming (1979) proposed a straightforward and useful multiple testing procedure (a group sequential testing procedure) for comparing two treatments in clinical trials where subject responses are dichotomous (e.g., success and failure). O'Brien and Fleming stated that their group sequential testing procedure has the same type I error rate and power as a fixed one-stage chi-square test, but offers the opportunity to terminate the trial early when one treatment is clearly performing better than the other. We studied and tested the O'Brien-Fleming procedure, specifically by correcting the originally proposed critical values. Furthermore, we extended the O'Brien-Fleming group sequential testing procedure to make it more flexible in three ways. The first extension combines the procedure with optimal allocation, in which more patients are allocated to the better-performing treatment after each interim analysis. The second extension combines the procedure with Neyman allocation, which aims to minimize the variance of the difference in sample proportions. The last extension allows different sample-size weights for different stages, as opposed to equal allocation across stages. Simulation studies showed that the O'Brien-Fleming group sequential testing procedure is relatively robust to the added features.
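O'Brien-Fleming-type boundaries follow the familiar pattern c_k = c * sqrt(K / k) on the z-scale: very conservative early looks, with the final critical value near the fixed-sample one. A minimal sketch follows; the constant c ≈ 2.024 for K = 4 equally spaced looks at overall two-sided α = 0.05 is taken from standard tables for the normal case, not from this thesis (which corrects values for the dichotomous-response setting):

```python
import math

def obf_boundaries(n_stages, c=2.024):
    """O'Brien-Fleming-type z-scale boundaries: c_k = c * sqrt(K / k),
    where c is the final-stage critical value. Early looks require
    overwhelming evidence; the last look is close to the fixed-sample
    critical value (1.96 for two-sided alpha = 0.05)."""
    return [c * math.sqrt(n_stages / k) for k in range(1, n_stages + 1)]

for k, b in enumerate(obf_boundaries(4), start=1):
    print(f"look {k}: reject if |z| > {b:.3f}")
# look 1: reject if |z| > 4.048
# look 2: reject if |z| > 2.863
# look 3: reject if |z| > 2.337
# look 4: reject if |z| > 2.024
```

The steeply decreasing boundary is why the overall power stays close to the one-stage test: almost all of the α is spent at the final analysis, and early stopping occurs only when one treatment is clearly better.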
