21 |
Review and Extension for the O'Brien Fleming Multiple Testing Procedure. Hammouri, Hanan, 22 November 2013.
O'Brien and Fleming (1979) proposed a straightforward and useful multiple testing procedure (group sequential testing procedure) for comparing two treatments in clinical trials where subject responses are dichotomous (e.g. success and failure). O'Brien and Fleming stated that their group sequential testing procedure has the same Type I error rate and power as a fixed one-stage chi-square test, but offers the opportunity to terminate the trial early when one treatment is clearly performing better than the other. We studied and tested the O'Brien and Fleming procedure, in particular by correcting the originally proposed critical values. Furthermore, we updated the O'Brien-Fleming group sequential testing procedure to make it more flexible via three extensions. The first extension combines the procedure with optimal allocation, where the idea is to allocate more patients to the better treatment after each interim analysis. The second extension combines the procedure with Neyman allocation, which aims to minimize the variance of the difference in sample proportions. The last extension allows different sample weights for different stages, as opposed to equal allocation across stages. Simulation studies showed that the O'Brien-Fleming group sequential testing procedure is relatively robust to the added features.
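As a rough illustration of the two ingredients, here is a minimal Python sketch. The boundary constant below is the fixed-sample critical value used as a stand-in; the corrected values studied in the thesis come from recalibrating that constant so the overall Type I error is met exactly, so all numbers here are illustrative.

```python
import math

def neyman_allocation(p1, p2):
    """Fraction of patients assigned to arm 1 under Neyman allocation.

    Neyman allocation minimizes the variance of the difference in sample
    proportions by allocating in proportion to each arm's standard
    deviation sqrt(p * (1 - p)).
    """
    s1 = math.sqrt(p1 * (1 - p1))
    s2 = math.sqrt(p2 * (1 - p2))
    return s1 / (s1 + s2)

def obrien_fleming_bounds(n_stages, z_final=1.96):
    """O'Brien-Fleming-style critical values on the z-scale.

    The stage-k boundary is z_final * sqrt(K / k): very hard to cross
    early, relaxing toward the fixed-sample value at the final look.
    """
    return [z_final * math.sqrt(n_stages / k) for k in range(1, n_stages + 1)]

print(obrien_fleming_bounds(4))     # roughly [3.92, 2.77, 2.26, 1.96]
print(neyman_allocation(0.7, 0.4))  # roughly 0.48 of patients to arm 1
```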
|
22 |
Response Adaptive Design using Auxiliary and Primary Outcomes. Sinks, Shuxian, 18 November 2013.
Response adaptive designs intend to allocate more patients to better treatments without undermining the validity and the integrity of the trial. The immediacy of the primary response (e.g. death, remission) determines the efficiency of the response adaptive design, which often requires outcomes to be quickly or immediately observed. This presents difficulties for survival studies, which may require long durations to observe the primary endpoint. Therefore, we introduce auxiliary endpoints to assist the adaptation with the primary endpoint, where an auxiliary endpoint is generally defined as any measurement that is positively associated with the primary endpoint. Our proposed design (referred to as the bivariate adaptive design) is based on the classical response adaptive design framework. The connection between auxiliary and primary endpoints is established through a Bayesian method. We extend the parameter space from one dimension to two dimensions, namely primary and auxiliary efficacies, by implementing a conditional weight function on the loss function of the design. The allocation ratio is updated at each stage by optimizing the loss function subject to the information provided by both the auxiliary and primary outcomes. We demonstrate several methods of jointly modeling the auxiliary and primary outcomes. Through simulation studies, we show that the bivariate adaptive design is more effective in assigning patients to better treatments than univariate optimal and balanced designs. As hoped, this joint approach also reduces the expected number of patient failures while preserving power comparable to that of the other designs.
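The adaptation step can be illustrated with a deliberately simplified stand-in for the design's loss-function optimization: beta-binomial posteriors for each arm on both endpoints, blended by a weight that shifts from the auxiliary to the primary endpoint as the latter accrues. The weighting scheme and priors below are assumptions for illustration, not the thesis's actual loss function.

```python
def posterior_mean(successes, failures, a=1.0, b=1.0):
    """Posterior mean of a success probability under a Beta(a, b) prior."""
    return (a + successes) / (a + b + successes + failures)

def allocation_prob(primary, auxiliary, w):
    """Blend primary and auxiliary evidence into an allocation probability.

    primary / auxiliary map arm -> (successes, failures); w in [0, 1] is
    the weight on the (delayed) primary endpoint, so early in the trial
    most of the information comes from the auxiliary endpoint.
    """
    score = {}
    for arm in ("A", "B"):
        score[arm] = (w * posterior_mean(*primary[arm])
                      + (1 - w) * posterior_mean(*auxiliary[arm]))
    return score["A"] / (score["A"] + score["B"])

# Early interim look: little primary data, auxiliary favours arm A.
primary = {"A": (2, 1), "B": (1, 2)}
auxiliary = {"A": (14, 6), "B": (9, 11)}
print(f"P(assign next patient to A) = {allocation_prob(primary, auxiliary, w=0.2):.2f}")
```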
|
23 |
Taking into account the heterogeneity of the elderly population in the design of phase II clinical trials in geriatric oncology. Cabarrou, Bastien, 17 April 2019.
Cancer in the elderly is a real public health problem. With the overall aging of the population and the increasing incidence of cancer, more than half of all tumors diagnosed today are in patients aged 65 years or older. However, this heterogeneous population has long been excluded from clinical trials, and the lack of prospective data makes it difficult to manage these patients. Many publications highlight the importance and the complexity of conducting clinical trials in this population. As classical phase II designs do not take this heterogeneity into account, elderly-specific phase II clinical trials are very uncommon and are generally stratified into subgroups defined by geriatric criteria, which increases the number of patients to be included and thus reduces feasibility. The objective of this thesis is to present, compare and develop stratified adaptive phase II designs that address the heterogeneity of the elderly population. This methodology can minimize the number of patients to be included while maintaining statistical power and controlling the Type I error risk, which reduces the cost and duration of the study and thus increases feasibility. In order to improve the efficiency of clinical research in geriatric oncology, it is essential to use stratified adaptive designs that take into account the heterogeneity of the population and make it possible to identify a subgroup of interest that might benefit (or not) from the new therapeutic.
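A basic building block for such stratified designs is the exact operating characteristic of a two-stage phase II rule within each subgroup. A minimal sketch, assuming a Simon-type single-arm two-stage design; the design parameters below are illustrative, not those of the thesis:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def two_stage_reject_prob(n1, r1, n, r, p):
    """P(declare the treatment active) under true response rate p.

    Stage 1: enroll n1 patients; stop for futility if responses <= r1.
    Stage 2: enroll up to n in total; declare activity if total
    responses exceed r. Evaluating at p0 gives the Type I error,
    at p1 the power.
    """
    prob = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        tail = sum(binom_pmf(x2, n - n1, p)
                   for x2 in range(max(0, r - x1 + 1), n - n1 + 1))
        prob += binom_pmf(x1, n1, p) * tail
    return prob

# Illustrative design (n1=9, r1=0, n=17, r=2), tested at p0=0.05 vs p1=0.25.
print(f"type I error: {two_stage_reject_prob(9, 0, 17, 2, 0.05):.3f}")
print(f"power:        {two_stage_reject_prob(9, 0, 17, 2, 0.25):.3f}")
```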
|
24 |
Design of adaptive multi-arm multi-stage clinical trials. Ghosh, Pranab Kumar, 28 February 2018.
Two-arm group sequential designs have been widely used for over forty years, especially for studies with mortality endpoints. The natural generalization of such designs to trials with multiple treatment arms and a common control (MAMS designs) has, however, been implemented rarely. While the statistical methodology for this extension is clear, the main limitation has been the lack of an efficient way to perform the computations. Past efforts were hampered by algorithms that were computationally explosive. With the increasing interest in adaptive designs, platform designs, and other innovative designs that involve multiple comparisons over multiple stages, the importance of MAMS designs is growing rapidly. This dissertation proposes a group sequential approach to designing MAMS trials in which the test statistic is the maximum of the cumulative score statistics for each pairwise comparison, evaluated at each analysis time point against efficacy and futility stopping boundaries while maintaining strong control of the family-wise error rate (FWER).
In this dissertation we start with a breakthrough algorithm that enables us to compute MAMS boundaries rapidly. This algorithm makes MAMS designs a practical reality. For designs with efficacy-only boundaries, the computational effort increases linearly with the number of arms and the number of stages. For designs with both efficacy and futility boundaries, the computational effort doubles with each successive increase in the number of stages. Previous attempts to obtain MAMS boundaries were confined to smaller problems because their computational effort grew exponentially with the number of arms and stages.
We next extend our proposed group sequential MAMS design to permit adaptive changes such as dropping treatment arms and increasing the sample size at each interim analysis time point. In order to control the FWER in the presence of these adaptations, the early stopping boundaries must be re-computed by invoking the conditional error rate principle and the closed testing principle. This adaptive MAMS design is immensely useful in phase 2 and phase 3 settings.
An alternative to the group sequential approach for MAMS designs is the p-value combination approach, which has been in place for the last fifteen years. This alternative MAMS approach is based on combining independent p-values from the incremental data of each stage. Strong control of the FWER for this alternative approach is achieved by closed testing. We compare the operating characteristics of the two approaches both analytically and empirically via simulation, and we demonstrate that the MAMS group sequential approach dominates the traditional p-value combination approach in terms of statistical power.
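The computational core of such designs, the crossing probability of the maximum pairwise statistic, can be approximated by brute-force simulation. A minimal sketch, assuming normal increments and illustrative (not multiplicity-calibrated) boundaries; in practice the boundary constant would be raised until the estimated FWER equals the target, which is exactly where fast boundary computation matters:

```python
import numpy as np

def mams_fwer(n_arms=3, n_stages=2, n_per_stage=50,
              bounds=(2.80, 1.98), n_sim=20_000, seed=0):
    """Monte Carlo FWER of a MAMS efficacy rule under the global null.

    At each analysis, each treatment arm's cumulative sum is compared
    with the shared control's; the trial stops for efficacy if the
    maximum pairwise z-statistic crosses that stage's boundary.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        ctrl, trt, n = 0.0, np.zeros(n_arms), 0
        for stage in range(n_stages):
            n += n_per_stage
            ctrl += rng.normal(0.0, np.sqrt(n_per_stage))
            trt = trt + rng.normal(0.0, np.sqrt(n_per_stage), n_arms)
            z = (trt - ctrl) / np.sqrt(2 * n)  # pairwise z vs control
            if z.max() > bounds[stage]:
                rejections += 1
                break
    return rejections / n_sim

# Exceeds the per-comparison level: the constant must be raised for FWER control.
print(mams_fwer())
```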
|
25 |
Reengineering of learning management system features by analysis and modeling of learning activities: application to educational contexts with a digital divide. Lamago, Merlin Ferdinand, 17 May 2017.
This research aims to model learning activity on Learning Management Systems (LMS) in a bid to maximize users' efficiency. The idea arose from a practical concern: how to facilitate the use of LMS platforms for teachers and learners in countries affected by the digital and educational divide. Drawing from that, the following question was stated: in a given learning context, how can we deploy a Learning Management System that provides users with both easy handling and optimal conditions of use? This issue raises the problem of LMS adaptability and suggests two levels of modeling: the learning tool on one hand and the planned context of use on the other. To address this issue of adaptability, we adopt a two-pronged approach comprising the functional analysis of LMS tools and the reengineering of user interfaces. The first step is to develop an approach for analyzing teaching and learning activity on LMS platforms. This entails modeling common learning situations and cross-checking them against the features available in existing LMS solutions. This preliminary work enabled us to build a formalism for the usage analysis of platforms, referred to as the OCGPI approach, based on five functional categories: Organize-Collaborate-Guide-Produce-Inform. The second step, informed by this groundwork, proposes an adaptive reengineering of LMS based on the context of use: an embedded configurator that adapts the working environment to each use and each user. The prototype is explicitly designed to let novices get up to speed quickly while remaining as technologically undemanding as possible.
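As an illustration of the configurator idea, here is a minimal Python sketch; the feature catalogue, profile flags and filtering rules are hypothetical stand-ins, since the thesis's actual rule set is not reproduced here.

```python
# Hypothetical feature catalogue organized by the five OCGPI categories.
FEATURES = {
    "Organize": ["calendar", "course_plan"],
    "Collaborate": ["forum", "wiki", "chat"],
    "Guide": ["feedback", "progress_tracking"],
    "Produce": ["assignments", "quizzes"],
    "Inform": ["announcements", "resources"],
}

def configure(profile):
    """Return the feature set to expose for a given context of use.

    A novice gets a pared-down collaborative interface to limit
    overload; a low-bandwidth context drops real-time tools.
    """
    enabled = []
    for category, feats in FEATURES.items():
        if profile.get("novice") and category == "Collaborate":
            feats = feats[:1]  # keep only the forum
        if profile.get("low_bandwidth"):
            feats = [f for f in feats if f != "chat"]
        enabled.extend(feats)
    return enabled

print(configure({"novice": True, "low_bandwidth": True}))
```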
|
26 |
On the use of metamodeling for modeling and analysis of the radar response of forests. Piteros, Panagiotis, 15 April 2016.
In this work, a new approach to conducting radar observations of forests is proposed. It combines statistical methods for sensitivity analysis and the adaptive design of simulation experiments with a numerical code simulating forest backscattering, in order to build an approximate model (metamodel) at a lower computational cost. The introduction of these mathematical tools aims to assist the planning and execution of radar simulations and the organization and analysis of their results. On the one hand, sensitivity analysis techniques are applied in order to rank the input parameters by importance and to identify the most significant forest parameters as well as their effects on the radar signal. On the other hand, the construction of an adaptive metamodel accelerates the simulation code while preserving the physics of the phenomenon. The operational framework of this approximate model finally serves to introduce the cognitive radar principle into our strategy. In that case, a fast analysis of the received signal is necessary to design, in real time, the new signal to be emitted. In this way, the simulated radar observations account in real time for the effect of the illuminated environment, thanks to faster and more focused simulations.
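A minimal sketch of the enrichment loop behind such a metamodel, assuming a Gaussian process surrogate (via scikit-learn) and a one-dimensional toy function standing in for the electromagnetic scattering code; each new run is placed where the surrogate's predictive uncertainty is largest:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):
    """Cheap toy stand-in for the radar backscattering code."""
    return np.sin(3 * x) + 0.5 * x

# Small initial design, then enrich at the point of maximum posterior std.
rng = np.random.default_rng(1)
X = rng.uniform(0, 3, 5).reshape(-1, 1)
y = expensive_model(X).ravel()
grid = np.linspace(0, 3, 200).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-8)
    gp.fit(X, y)
    mean, std = gp.predict(grid, return_std=True)
    x_new = grid[np.argmax(std)]          # most uncertain location
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_model(x_new))

print(f"final design size: {len(X)} runs of the expensive code")
```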
|
27 |
Advanced Designs of Cancer Phase I and Phase II Clinical Trials. Cui, Ye, 13 May 2013.
The clinical trial is the most important study for the development of successful novel drugs. The aim of this dissertation is to develop innovative statistical methods to overcome three main obstacles in clinical trials: (1) lengthy trial duration and inaccurate maximum tolerated dose (MTD) estimation in phase I trials; (2) heterogeneity in drug effect when patients are given the same prescription at the same dose; and (3) high failure rates of expensive phase III confirmatory trials due to the discrepancy between the endpoints adopted in phase II and III trials. Towards overcoming the first obstacle, we develop a novel hybrid design for the time-to-event dose escalation method with overdose control using a normalized equivalent toxicity score (NETS) system. This hybrid design can substantially reduce sample size, shorten study length, and accurately estimate the MTD by employing a parametric model and an adaptive Bayesian approach. Towards overcoming the second obstacle, we propose a new phase I design that incorporates patients' characteristics, taking into account personalized information for the patients who participate in the trial. To conquer the third obstacle, we propose a novel two-stage screening design for phase II trials whereby the endpoint of percent change in tumor size is used in an initial screening to select potentially effective agents within a short time interval, followed by a second screening stage where progression-free survival is estimated to confirm the efficacy of the agents. These research projects will substantially benefit both cancer patients and researchers by improving clinical trial efficiency and reducing cost and trial duration. Moreover, they are of great practical significance, since cancer medicine development is of paramount importance to human health care.
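The flavor of the NETS idea can be sketched as follows; the grade weights, cap and escalation threshold below are invented for illustration and are not the dissertation's calibrated values:

```python
# Hypothetical grade weights; the actual NETS system assigns scores per
# toxicity type and grade, normalized so the score lies in [0, 1].
WEIGHTS = {1: 0.1, 2: 0.3, 3: 0.6, 4: 1.0}

def nets(adverse_events, cap=1.5):
    """Normalized equivalent toxicity score for one patient.

    adverse_events: list of toxicity grades observed for this patient.
    Grade weights are summed, capped, and normalized, so a patient
    contributes a fraction of a 'full' dose-limiting toxicity rather
    than the usual all-or-nothing DLT indicator.
    """
    total = sum(WEIGHTS[g] for g in adverse_events)
    return min(total, cap) / cap

cohort = [[1, 2], [3], [2, 2, 4]]          # observed grades per patient
scores = [nets(events) for events in cohort]
mean_score = sum(scores) / len(scores)
# Escalate only if the cohort's mean score stays below an assumed target.
print(scores, "escalate" if mean_score < 0.33 else "stay/de-escalate")
```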
|
28 |
Adaptive surrogate models for the reliable lightweight design of automotive body structures. Moustapha, Maliki, 27 January 2016.
One of the most challenging tasks in modern engineering is keeping the cost of manufactured goods small. With the advent of computational design, prototyping, for instance, a major source of expense, is reduced to its bare essentials. In fact, through the use of high-fidelity models, engineers can predict the behavior of the systems they design quite faithfully. To be fully realistic, such models must embed uncertainties that may affect the physical properties or operating conditions of the system. This PhD thesis deals with the constrained optimization of structures under uncertainties in the context of automotive design. The constraints are assessed through expensive finite element models. For practical purposes, such models are conveniently substituted by so-called surrogate models, which stand as cheap and easy-to-evaluate proxies. In this PhD thesis, Gaussian process modeling and support vector machines are considered. Upon reviewing state-of-the-art techniques for optimization under uncertainties, we propose a novel formulation for reliability-based design optimization which relies on quantiles. The formal equivalence of this formulation with the traditional ones is proved. This approach is then coupled with surrogate modeling. Kriging is considered thanks to its built-in error estimate, which makes it well suited to adaptive sampling strategies. Such an approach allows us to reduce the computational budget by running the true model only in regions that are of interest to the optimization. We therefore propose a two-stage enrichment scheme. The first stage is aimed at globally reducing the Kriging epistemic uncertainty in the vicinity of the limit-state surface. The second one is performed within iterations of optimization so as to locally improve the quantile accuracy. The efficiency of this approach is demonstrated through comparison with benchmark results. An industrial application featuring a car under frontal impact is considered. The crash behavior of a car is indeed particularly affected by uncertainties. The proposed approach therefore allows us to find a reliable solution within a reduced number of calls to the true finite element model. For the extreme case where uncertainties trigger various crash scenarios of the car, it is proposed to rely on support vector machines for classification so as to predict the possible scenarios before metamodeling each of them separately.
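The quantile reformulation can be illustrated with a plain Monte Carlo sketch (in practice the quantile would be read off a Kriging surrogate rather than the true model); the toy crash response, input distributions and feasibility threshold below are assumptions for illustration:

```python
import numpy as np

def quantile_constraint(design_t, alpha=0.95, n_mc=100_000, seed=0):
    """Conservative (quantile-based) evaluation of a crash constraint.

    Toy stand-in for the finite element model: a peak deceleration that
    depends on a panel thickness design_t and random inputs (material
    scatter, impact speed). The design is declared feasible if the
    alpha-quantile of the response, not its mean, meets the threshold.
    """
    rng = np.random.default_rng(seed)
    yield_scatter = rng.normal(1.0, 0.05, n_mc)   # material variability
    speed = rng.normal(56.0, 1.5, n_mc)           # impact speed (km/h)
    g = 40.0 * (speed / 56.0) ** 2 / (design_t * yield_scatter)
    return np.quantile(g, alpha)

for t in (0.9, 1.0, 1.1):  # candidate panel thicknesses (mm)
    q = quantile_constraint(t)
    status = "feasible" if q <= 45.0 else "infeasible"
    print(f"t = {t:.1f} mm -> 95% quantile {q:.1f} g ({status})")
```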
|
29 |
Targeted learning in Big Data: bridging data-adaptive estimation and statistical inference. Zheng, Wenjing, 21 July 2016.
This dissertation focuses on developing robust semiparametric methods for complex parameters that emerge at the interface of causal inference and biostatistics, with applications to epidemiological and medical research in the era of Big Data. Specifically, we address two statistical challenges that arise in bridging the disconnect between data-adaptive estimation and statistical inference. The first challenge arises in maximizing the information learned from randomized controlled trials (RCTs) through the use of adaptive trial designs. We present a framework to construct and analyze group sequential covariate-adjusted response-adaptive (CARA) RCTs that admits the use of data-adaptive approaches in constructing the randomization schemes and in estimating the conditional response model. This framework adds to the existing literature on CARA RCTs by allowing flexible options in both their design and analysis and by providing robust effect estimates even under model misspecification. The second challenge arises from obtaining a Central Limit Theorem when data-adaptive estimation is used to estimate the nuisance parameters. We consider as the target parameter of interest the marginal risk difference of the outcome under a binary treatment, and propose a Cross-validated Targeted Minimum Loss Estimator (TMLE), which augments the classical TMLE with a sample-splitting procedure. The proposed Cross-Validated TMLE (CV-TMLE) inherits the double robustness and efficiency properties of the classical TMLE, and achieves asymptotic linearity under minimal conditions by avoiding the Donsker class condition.
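The sample-splitting idea can be sketched with a cross-fitted doubly robust estimator of the risk difference; note that the targeting step of a true CV-TMLE is replaced here by the closely related one-step (AIPW) correction, and the data-generating process is simulated for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def cross_fitted_risk_difference(X, A, Y, n_splits=5, seed=0):
    """Cross-fitted doubly robust estimate of E[Y(1)] - E[Y(0)].

    Nuisances (outcome regression, propensity score) are fit on training
    folds and evaluated on held-out folds, mirroring CV-TMLE's splitting.
    """
    psi = np.zeros(len(Y))
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        ps = LogisticRegression(max_iter=1000).fit(X[train], A[train])
        out = LogisticRegression(max_iter=1000).fit(
            np.c_[X[train], A[train]], Y[train])
        g = np.clip(ps.predict_proba(X[test])[:, 1], 0.01, 0.99)
        q1 = out.predict_proba(np.c_[X[test], np.ones(len(test))])[:, 1]
        q0 = out.predict_proba(np.c_[X[test], np.zeros(len(test))])[:, 1]
        a, y = A[test], Y[test]
        psi[test] = (q1 - q0
                     + a / g * (y - q1)
                     - (1 - a) / (1 - g) * (y - q0))
    return psi.mean(), psi.std() / np.sqrt(len(Y))

# Simulated observational data with a binary treatment and outcome.
rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 3))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
Y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * A + X[:, 1]))))
est, se = cross_fitted_risk_difference(X, A, Y)
print(f"risk difference: {est:.3f} +/- {1.96 * se:.3f}")
```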
|
30 |
Real-time electrophysiology in cognitive neuroscience: towards adaptive paradigms to study perceptual learning and decision making in humans. Sanchez, Gaëtan, 27 June 2014.
Psychological as well as physiological models of perceptual learning and decision-making have recently become more biologically plausible, leading to more realistic (and more complex) generative models of psychophysiological observations. In parallel, the young but exponentially growing field of Brain-Computer Interfaces (BCI) provides new tools and methods to analyze (mostly) electrophysiological data online. The main objective of this PhD thesis was to explore how the BCI paradigm could contribute to a better understanding of perceptual learning and decision making in humans. At the empirical level, I studied decisions based on tactile stimuli, namely somatosensory frequency discrimination. More specifically, I showed how an implicit sensory context biases our decisions. Using magnetoencephalography (MEG), I was able to decipher some of the neural correlates of these perceptual adaptive mechanisms. These findings support the hypothesis that an internal perceptual reference builds up over the course of the experiment. At the theoretical and methodological levels, I propose a generic view and method of how real-time electrophysiology could be used to optimize hypothesis testing by adapting the experimental design online. I demonstrated the validity of this online adaptive design optimization (ADO) approach for maximizing design efficiency at the individual level. I also discuss the implications of this work for basic and clinical neuroscience as well as for BCI itself.
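The core of such an adaptive paradigm, choosing the next stimulus so that the expected posterior uncertainty about a model parameter is smallest, can be sketched in a few lines. A minimal example in the spirit of Psi-style ADO, assuming a logistic psychometric function with a single unknown threshold; the grids, slope and uniform prior are illustrative:

```python
import numpy as np

def psychometric(intensity, threshold, slope=8.0):
    """Probability of a 'higher' response given a comparison intensity."""
    return 1 / (1 + np.exp(-slope * (intensity - threshold)))

thresholds = np.linspace(0.0, 1.0, 101)   # candidate threshold values
prior = np.full_like(thresholds, 1 / len(thresholds))
candidates = np.linspace(0.0, 1.0, 21)    # stimuli we may present next

def expected_entropy(x, prior):
    """Posterior entropy expected if stimulus x is presented."""
    p_resp = psychometric(x, thresholds)          # P(resp = 1 | threshold)
    ent = 0.0
    for lik in (p_resp, 1 - p_resp):              # response = 1, then 0
        post = prior * lik
        m = post.sum()                            # marginal P(response)
        post = post / m
        ent += m * -np.sum(post * np.log(post + 1e-12))
    return ent

# Pick the stimulus whose answer is expected to be most informative.
x_next = min(candidates, key=lambda x: expected_entropy(x, prior))
print(f"next stimulus intensity: {x_next:.2f}")
```

After each response, the prior would be replaced by the corresponding posterior and the selection repeated, so the design adapts trial by trial.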
|