11

BLINDED EVALUATIONS OF EFFECT SIZES IN CLINICAL TRIALS: COMPARISONS BETWEEN BAYESIAN AND EM ANALYSES

Turkoz, Ibrahim January 2013 (has links)
Clinical trials are major and costly undertakings for researchers. Planning a clinical trial involves careful selection of the primary and secondary efficacy endpoints. The 2010 draft FDA guidance on adaptive designs acknowledges possible study design modifications, such as the selection and/or ordering of secondary endpoints, in addition to sample size re-estimation. It is essential for the integrity of a double-blind clinical trial that the individual treatment allocation of patients remains unknown. Methods have been proposed for re-estimating the sample size of clinical trials, without unblinding treatment arms, for both categorical and continuous outcomes. Procedures that allow a blinded estimation of the treatment effect, using knowledge of trial operational characteristics, have also been suggested in the literature. Clinical trials are designed to evaluate the effects of one or more treatments on multiple primary and secondary endpoints, and the multiplicity that arises when there is more than one endpoint requires careful handling to control the Type I error rate. A wide variety of multiplicity approaches are available to ensure that the probability of making a Type I error is controlled within acceptable pre-specified bounds. The widely used fixed-sequence gatekeeping procedures require a prospective ordering of the null hypotheses for the secondary endpoints. This prospective ordering is often based on a number of untested assumptions about expected treatment differences, the assumed population variance, and estimated dropout rates. We wish to update the ordering of the null hypotheses based on estimates of the standardized treatment effects. We show how to do so while the study is ongoing, without unblinding the treatments, without losing the validity of the testing procedure, and while maintaining the integrity of the trial. Our simulations show that we can reliably order the standardized treatment effects, also known as signal-to-noise ratios, even though we are unable to estimate the unstandardized treatment effect. To estimate the treatment difference in a blinded setting, we must define a latent variable substituting for the unknown treatment assignment. Approaches that employ the EM algorithm to estimate treatment differences in blinded settings do not provide reliable conclusions about the ordering of the null hypotheses. We developed Bayesian approaches, based on posterior estimation of signal-to-noise ratios, that enable us to order the secondary null hypotheses, and we demonstrate with simulation studies that they perform better than existing EM-algorithm counterparts for ordering effect sizes. Introducing informative priors for the latent variables, in settings where the EM algorithm has been used, typically improves the accuracy of parameter estimation for effect-size ordering. We illustrate our method with a secondary analysis of a longitudinal study of depression. / Statistics
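As a rough, hypothetical illustration of the blinded-estimation idea referenced in this abstract (not the Bayesian procedure developed in the thesis): the unknown treatment assignment is treated as a latent variable, the pooled blinded outcomes are fitted as a two-component normal mixture by EM with the mixing proportion fixed at the randomization ratio, and the standardized effect is read off from the fitted components. Function and variable names below are illustrative, and NumPy/SciPy are assumed available.

```python
import numpy as np
from scipy.stats import norm

def blinded_effect_size_em(y, alloc=0.5, max_iter=500, tol=1e-8):
    """Hypothetical sketch: blinded EM estimate of the standardized effect.

    The unknown treatment assignment is a latent variable; the pooled,
    still-blinded outcomes `y` are modeled as a two-component normal
    mixture with known allocation ratio `alloc` and a common variance.
    Returns the estimated signal-to-noise ratio (mu1 - mu0) / sigma.
    """
    y = np.asarray(y, dtype=float)
    mu0, mu1 = np.percentile(y, [25, 75])   # crude initial split of the blinded data
    sigma = np.std(y)
    prev_ll = -np.inf
    for _ in range(max_iter):
        # E-step: posterior probability that each patient received the new treatment
        d0 = (1.0 - alloc) * norm.pdf(y, mu0, sigma)
        d1 = alloc * norm.pdf(y, mu1, sigma)
        w = d1 / (d0 + d1)
        # M-step: update the component means and the common standard deviation
        mu0 = np.sum((1.0 - w) * y) / np.sum(1.0 - w)
        mu1 = np.sum(w * y) / np.sum(w)
        sigma = np.sqrt(np.sum((1.0 - w) * (y - mu0) ** 2 + w * (y - mu1) ** 2) / y.size)
        ll = np.sum(np.log(d0 + d1))
        if abs(ll - prev_ll) < tol:
            break
        prev_ll = ll
    return (mu1 - mu0) / sigma
```

Applied endpoint by endpoint to blinded interim data, estimates of this kind could in principle be sorted to update the testing order of the secondary hypotheses; the abstract reports that EM-based versions of this idea order the hypotheses unreliably, whereas Bayesian posterior estimates of the signal-to-noise ratio, with informative priors on the latent assignments, perform better.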
12

Sequential Adaptive Designs In Computer Experiments For Response Surface Model Fit

LAM, CHEN QUIN 29 July 2008 (has links)
No description available.
13

Randomized Clinical Trials in Oncology with Rare Diseases or Rare Biomarker-based Subtypes

Bayar, Mohamed Amine 29 November 2019 (has links)
Large sample sizes are required in randomized trials designed to meet the typical one-sided α-level of 0.025 with at least 80% power. This may be unachievable in a reasonable time frame, even with international collaborations, either because the medical condition is rare or because the trial focuses on an uncommon subset of patients with a rare molecular subtype for which the tested treatment is deemed relevant. We simulated a series of two-arm superiority trials over a long research horizon (15 years). Within the series of trials, the treatment selected after each trial becomes the control treatment of the next one. Different disease severities, accrual rates, and hypotheses about how treatments improve over time were considered. We showed that, compared with two larger trials run at the typical one-sided α-level of 0.025, performing a series of smaller trials with relaxed α-levels leads on average to larger survival benefits over a long research horizon, but also to a higher risk of selecting a worse treatment at the end of the research period. We then extended this framework with more flexible designs, including interim analyses for futility and/or efficacy and three-arm adaptive designs with treatment selection at interim.
We showed that including an interim analysis with a futility rule is associated with an additional survival gain and better risk control compared with series without interim analyses, whereas adding an interim analysis for efficacy yields almost no additional gain. Series based on three-arm trials systematically improve both the survival gain and the risk control compared with series of two-arm trials. In the third part of the thesis, we examined randomized trials that evaluate a treatment algorithm rather than a single drug's efficacy: the treatment received in the experimental group depends on the patient's mutation, unlike in the control group. We evaluated two methods based on the Cox model with random effects to estimate the treatment effect within each mutation: Maximum Integrated Partial Likelihood (MIPL), using the package coxme, and Maximum H-Likelihood (MHL), using the package frailtyHL. The MIPL method performs slightly better. In the presence of a heterogeneous treatment effect, both methods underestimate the treatment effect in mutations where it is large and overestimate it in mutations where it is small.
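As a rough, hypothetical illustration of estimating mutation-specific treatment effects in a trial of a treatment algorithm: the sketch below simulates heterogeneous effects across rare mutations and fits an ordinary Cox model with treatment-by-mutation interaction terms. This is a simpler fixed-effects stand-in, not the random-effects MIPL (coxme) or MHL (frailtyHL) approaches compared in the thesis; the lifelines package and all names and numbers are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)

# Hypothetical trial: patients carry one of several rare mutations and are
# randomized to an algorithm-driven experimental treatment or to control.
mutations = ["M1", "M2", "M3", "M4"]
true_log_hr = {"M1": -0.7, "M2": -0.4, "M3": -0.1, "M4": 0.0}  # heterogeneous effects
rows = []
for m in mutations:
    for _ in range(60):
        trt = int(rng.integers(0, 2))
        hazard = 0.1 * np.exp(true_log_hr[m] * trt)      # monthly event hazard
        time = rng.exponential(1.0 / hazard)
        rows.append({"time": min(time, 24.0),             # administrative censoring at 24 months
                     "event": int(time < 24.0),
                     "treatment": trt, "mutation": m})
df = pd.DataFrame(rows)

# Fixed-effects design: mutation dummies plus treatment-by-mutation interactions
design = pd.get_dummies(df["mutation"], prefix="mut", drop_first=True).astype(float)
design["treatment"] = df["treatment"]
for m in mutations[1:]:
    design[f"treatment_x_{m}"] = (df["treatment"] * (df["mutation"] == m)).astype(float)
design[["time", "event"]] = df[["time", "event"]]

cph = CoxPHFitter()
cph.fit(design, duration_col="time", event_col="event")

# Mutation-specific log hazard ratio = base treatment effect + interaction term
beta = cph.params_
effects = {mutations[0]: beta["treatment"]}
for m in mutations[1:]:
    effects[m] = beta["treatment"] + beta[f"treatment_x_{m}"]
print(effects)
```

With only a handful of patients per rare mutation, such unpooled interaction estimates are noisy, which is the motivation for random-effects models of the kind discussed in the abstract, where the mutation-specific effects borrow strength from one another.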
