1

Group sequential and adaptive methods : topics with applications for clinical trials

Öhrn, Carl Fredrik January 2011 (has links)
This thesis deals with sequential and adaptive methods for clinical trials, and how such methods can be used to achieve efficient clinical trial designs. The efficiency gains that can be achieved through non-adaptive group sequential methods are well established, while the newer adaptive methods seek to combine the best of the classical group sequential framework with an approach that gives increased flexibility. Our results show that adaptive methods can provide some additional efficiency, as well as increased scope to respond to new internal and external information. Care is, however, needed when applying adaptive methods: sub-optimal rules for adaptation can lead to inefficiencies, and the logistical challenges can be considerable. Efficient non-adaptive group sequential designs are often easier to implement in practice and have, for the cases we have considered, been quite competitive in terms of efficiency. The four problems presented in this thesis are very relevant to how clinical trials are run in practice. The solutions we present are either new approaches to problems that have not previously been solved, or methods that are more efficient than those currently available in the literature. Several challenging optimisation problems are solved through numerical computation. The optimal designs that result can be used to benchmark the new methods proposed in this thesis as well as methods available in the statistical literature. The problem solved in Chapter 5 can be viewed as a natural extension of the other problems. It brings together methods that we have applied to the design of individual trials to solve the more complex problem of designing a sequence of trials that form the core of a clinical development programme. The expected utility that is maximised is motivated by how the development of new medicines works in practice.
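As a rough illustration of where the group sequential efficiency gains mentioned above come from, the following is a minimal Monte Carlo sketch, not taken from the thesis; the per-stage sample size, effect size and Pocock-type boundary are hypothetical choices. Under the alternative, a sizeable fraction of trials stops at the interim, so the expected sample size falls below the design's fixed maximum.

# Minimal sketch: two-stage group sequential design with a Pocock-type
# boundary (c = 2.178 is the standard two-stage Pocock constant for
# two-sided alpha = 0.05); n and delta are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n, delta, sims = 50, 0.4, 100_000   # per-stage n, true effect, replications
c = 2.178                           # Pocock critical value, K = 2 stages

stops_early, total_n = 0, 0
for _ in range(sims):
    x1 = rng.normal(delta, 1, n)
    z1 = x1.mean() * np.sqrt(n)
    if abs(z1) >= c:                # interim analysis: stop for efficacy
        stops_early += 1
        total_n += n
    else:                           # continue to the final analysis
        total_n += 2 * n

print("P(stop at interim):", stops_early / sims)
print("expected sample size:", total_n / sims, "vs maximum (fixed) size:", 2 * n)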
2

Multiple testing problems in classical clinical trial and adaptive designs

Deng, Xuan 07 November 2018 (has links)
Multiplicity issues arise prevalently in a variety of situations in clinical trials, and statistical methods for multiple testing have gradually gained importance with the increasing number of complex clinical trial designs. In general, two types of multiple testing can be performed (Dmitrienko et al., 2009): union-intersection testing (UIT) and intersection-union testing (IUT). The UIT is of interest in this dissertation; thus, the familywise error rate (FWER) is required to be controlled in the strong sense. A number of methods have been developed for controlling the FWER, including single-step and stepwise procedures. In single-step approaches, such as the simple Bonferroni method, the rejection decision for a hypothesis does not depend on the decision for any other hypothesis. Single-step approaches can be improved in terms of power through stepwise approaches while still controlling the desired error rate. It is also possible to improve these procedures through a parametric approach. In the first project, we developed a new and powerful single-step progressive parametric multiple (SPPM) testing procedure for correlated normal test statistics. Through simulation studies, we demonstrate that SPPM improves power substantially when the correlation is moderate and/or the magnitudes of effect sizes are similar. Group sequential designs (GSD) are clinical trial designs allowing interim looks with the possibility of early termination for efficacy, harm or futility, which can reduce the overall costs and timelines of developing a new drug. However, repeated looks at the data also raise multiplicity issues and can inflate the type I error rate. Proper treatments of this error inflation have been discussed widely (Pocock, 1977; O'Brien and Fleming, 1979; Wang and Tsiatis, 1987; Lan and DeMets, 1983). Most of the GSD literature focuses on a single endpoint; GSD with multiple endpoints, however, has also received considerable attention. The main focus of our second project is a GSD with multiple primary endpoints, in which the trial is designed to evaluate whether at least one of the endpoints is statistically significant. In this study design, multiplicity issues arise from the repeated interims and the multiple endpoints, so appropriate adjustments must be made to control the type I error rate. Our second purpose here is to show that the combination of multiple endpoints and repeated interim analyses can lead to a more powerful design. Via the multivariate normal distribution, we propose a method that allows simultaneous consideration of the interim analyses and all clinical endpoints. The new approach is derived from the closure principle, and thus controls the type I error rate strongly. We evaluate the power under different scenarios and show that it compares favorably to other methods when the correlation among endpoints is non-zero. In the group sequential design framework, another interesting topic is the multiple arm multiple stage (MAMS) design, where multiple arms are involved in the trial from the beginning, with flexibility regarding treatment selection or stopping decisions at the interim analyses. One of the major hurdles of MAMS is the computational cost, which grows with the number of arms and interim looks. Various designs have been implemented to overcome this difficulty (Thall et al., 1988; Schaid et al., 1990; Follmann et al., 1994; Stallard and Todd, 2003; Stallard and Friede, 2008; Magirr et al., 2012; Wason et al., 2017) while also controlling the FWER against the potential inflation from multiple arm comparisons and multiple interim tests. Here, we consider a more flexible drop-the-loser design that allows safety information to inform treatment selection without a pre-specified arm-dropping mechanism, while still retaining reasonably high power. Two different types of stopping boundaries are proposed for this design. The sample size is also adjustable if the winner arm is dropped due to safety considerations.
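To make the single-step versus stepwise distinction in this abstract concrete, here is a minimal Python sketch, not the dissertation's SPPM procedure, of the simple Bonferroni test next to Holm's step-down improvement. Both control the FWER strongly; Holm rejects everything Bonferroni rejects, and possibly more. The p-values in the example are made up.

# Minimal sketch: single-step Bonferroni vs. Holm's step-down procedure.
import numpy as np

def bonferroni(pvals, alpha=0.05):
    p = np.asarray(pvals)
    return p <= alpha / len(p)                  # one common threshold for all

def holm(pvals, alpha=0.05):
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)                       # indices from smallest p up
    reject = np.zeros(m, dtype=bool)
    for k, i in enumerate(order):               # step down from the smallest p
        if p[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break                               # stop at the first acceptance
    return reject

pvals = [0.011, 0.015, 0.04, 0.30]
print(bonferroni(pvals))  # [ True False False False]
print(holm(pvals))        # [ True  True False False] -- strictly more rejections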
3

A comparison of adaptive designs in clinical trials : when multiple treatments are tested in multiple stages

Park, Sukyung 09 October 2014 (has links)
In recent times, there has been increasing interest in adaptive designs for clinical trials. As opposed to conventional designs, adaptive designs allow flexible design adaptation in the middle of a trial based on the accumulated data. Although various models have been developed from both frequentist and Bayesian perspectives, the relative statistical performance of adaptive designs is somewhat controversial, and little is known about that of Bayesian adaptive designs. Most comparison studies have also focused on a single experimental treatment rather than multiple experimental treatments. In this report, frequentist and Bayesian adaptive designs are compared in terms of statistical power through a simulation study, assuming the situation where multiple experimental treatments are tested in multiple stages. The designs included in the current study are the group sequential design (frequentist), an adaptive design based on combination tests (frequentist), and a Bayesian adaptive design. Based on the results under multiple scenarios, the Bayesian adaptive design showed the highest power, and the design based on combination tests performed better than the group sequential design when proper interim adaptation could be conducted to increase power.
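For readers unfamiliar with combination tests, the following is a minimal sketch of the widely used inverse normal combination of stage-wise p-values; the report does not specify which combination function was used, and the weights shown are illustrative. Because the weights are fixed in advance, the stage-2 sample size can be adapted at the interim without inflating the type I error.

# Minimal sketch: inverse normal combination of two independent
# stage-wise one-sided p-values with pre-specified weights w1 + w2 = 1.
import numpy as np
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    """Combined one-sided p-value for two independent stage-wise p-values."""
    z = np.sqrt(w1) * norm.isf(p1) + np.sqrt(w2) * norm.isf(p2)
    return norm.sf(z)

print(inverse_normal_combination(0.10, 0.02))   # about 0.009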
4

AN APPROACH FOR FINDING A GENERAL APPROXIMATION TO THE GROUP SEQUENTIAL BOOTSTRAP TEST

Ekstedt, Douglas January 2022 (has links)
Randomized experiments are regarded as the gold standard for estimating causal effects. Commonly, a single test is performed using a fixed sample size. However, observations may also be collected sequentially, and for economic and ethical reasons it may be desirable to terminate the trial early. The group sequential design allows for interim analyses and early stopping of a trial without the need for continuous monitoring of the accumulating data. Implementing a group sequential procedure requires that the test statistic observed at each wave of testing have a known or asymptotically known sampling distribution. This thesis investigates an approach for finding a general approximation to the group sequential bootstrap test for test statistics with unknown or analytically intractable sampling distributions; there is currently no bootstrap version of the group sequential test. The approach approximates the covariance structure of the test statistics over time, but not their marginal sampling distributions, with that of a normal test statistic. The evaluation is performed with a Monte Carlo simulation study in which the achieved significance level is compared to the nominal level. Evidence from the Monte Carlo simulations suggests that the approach performs well for test statistics with sampling distributions close to a normal distribution.
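The normal covariance structure referred to here is the canonical one: statistics at information fractions t_i <= t_j have correlation sqrt(t_i/t_j). As a minimal sketch, not the thesis's procedure, a common (Pocock-type) boundary for that process can be found by simulation; the look times, seed and number of replications below are arbitrary choices.

# Minimal sketch: simulate the canonical joint normal distribution of
# group sequential test statistics and find a common critical value.
import numpy as np

rng = np.random.default_rng(2)
t = np.array([0.25, 0.5, 0.75, 1.0])          # information fractions per look
alpha, sims = 0.05, 200_000

K = len(t)
# corr(Z_i, Z_j) = sqrt(min(t_i, t_j) / max(t_i, t_j))
cov = np.sqrt(np.minimum.outer(t, t) / np.maximum.outer(t, t))
Z = rng.multivariate_normal(np.zeros(K), cov, size=sims)

max_abs = np.abs(Z).max(axis=1)               # a trial rejects if any look crosses c
c = np.quantile(max_abs, 1 - alpha)           # common boundary with overall size alpha
print("Pocock-type critical value:", round(c, 3))   # about 2.36 for K = 4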
5

On Group-Sequential Multiple Testing Controlling Familywise Error Rate

Fu, Yiyong January 2015 (has links)
The importance of multiplicity adjustment has gained wide recognition in modern scientific research. Without it, there will be too many spurious results and reproducibility becomes an issue; with it, if overly conservative, discoveries become more difficult to make. In the current literature on repeated testing of multiple hypotheses, Bonferroni-based methods are still the main vehicle carrying the bulk of multiplicity adjustment. There is room for power improvement by suitably utilizing both hypothesis-wise and analysis-wise dependencies. This research contributes to the development of a natural group-sequential extension of the classical stepwise multiple testing procedures, such as Dunnett's step-down and Hochberg's step-up procedures. It is shown that the proposed group-sequential procedures strongly control the familywise error rate while being more powerful than the recently developed class of group-sequential Bonferroni-Holm procedures. In particular, this research discovers a convexity property of the distribution of the maxima of pairwise null p-values when the underlying test statistics have distributions such as the bivariate normal, t, Gamma, F, or Archimedean copulas. This property lends itself to immediate use in improving Holm's procedure by incorporating pairwise dependencies of p-values. The improved Holm procedure, like all step-down multiple testing procedures, can also be naturally extended to the group-sequential setting.
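As a concrete reference point for the stepwise procedures this research extends, here is a minimal sketch of Hochberg's classical step-up procedure; the p-values in the example are made up. Hochberg is valid under nonnegative dependence and rejects at least as much as Holm's step-down.

# Minimal sketch: Hochberg's step-up procedure.
import numpy as np

def hochberg(pvals, alpha=0.05):
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)                    # indices from smallest to largest p
    reject = np.zeros(m, dtype=bool)
    for k in range(m - 1, -1, -1):           # step up from the largest p-value
        if p[order[k]] <= alpha / (m - k):
            reject[order[:k + 1]] = True     # reject this and all smaller p's
            break
    return reject

print(hochberg([0.03, 0.04]))   # [ True  True]; Holm would reject neither here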
6

Review and Extension for the O’Brien Fleming Multiple Testing procedure

Hammouri, Hanan 22 November 2013 (has links)
O'Brien and Fleming (1979) proposed a straightforward and useful multiple testing procedure (a group sequential testing procedure) for comparing two treatments in clinical trials where subject responses are dichotomous (e.g. success and failure). O'Brien and Fleming stated that their group sequential testing procedure has the same type I error rate and power as a fixed one-stage chi-square test, but offers the opportunity to terminate the trial early when one treatment is clearly performing better than the other. We studied and tested the O'Brien and Fleming procedure, in particular correcting the originally proposed critical values. Furthermore, we updated the O'Brien-Fleming group sequential testing procedure to make it more flexible via three extensions. The first extension combines the procedure with optimal allocation, where the idea is to allocate more patients to the better treatment after each interim analysis. The second extension combines the procedure with Neyman allocation, which aims to minimize the variance of the difference in sample proportions. The last extension allows different sample-size weights for different stages, as opposed to equal allocation across stages. Simulation studies showed that the O'Brien-Fleming group sequential testing procedure is relatively robust to the added features.
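The Neyman allocation used in the second extension has a simple closed form for two binomial arms: allocate in proportion to the arms' estimated standard deviations. A minimal sketch follows; the interim estimates are hypothetical, and this is only the allocation rule, not the full procedure from the thesis.

# Minimal sketch: Neyman allocation for two binomial arms, which minimizes
# the variance of the difference in sample proportions.
import numpy as np

def neyman_allocation(p1_hat, p2_hat):
    """Fraction of new patients to assign to arm 1, given interim estimates."""
    s1 = np.sqrt(p1_hat * (1 - p1_hat))
    s2 = np.sqrt(p2_hat * (1 - p2_hat))
    return s1 / (s1 + s2)

# after an interim analysis with estimated success rates 0.7 and 0.4:
print(neyman_allocation(0.7, 0.4))   # about 0.483 -> slightly fewer to arm 1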
