  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Testing for Efficacy for Primary and Secondary Endpoints by Partitioning Decision Paths

Liu, Yi January 2009 (has links)
No description available.
22

Nonparametric Combination Methodology : A Better Way to Handle Composite Endpoints?

Baurne, Yvette January 2015 (has links)
Composite endpoints are widely used in clinical trials. The outcome of a clinical trial can affect many individuals, so it is important that the methods used be as effective and correct as possible. Improvements to the standard method of testing composite endpoints have been proposed, and in this thesis an alternative method based on nonparametric combination methodology is compared to the standard method. In a simulation study, the power of three combining functions (Fisher, Tippett, and logistic) is compared to that of the standard method. The performance of the four methods is evaluated for different compositions of treatment effects, as well as for independent and dependent components. The results show that the nonparametric combination methodology yields higher power in both the dependent and independent cases. The combining functions suit different compositions of treatment effects, with the Fisher combining function being the most versatile. The thesis was written with support from Statisticon AB.
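To make the combination step concrete, the sketch below runs a permutation-based nonparametric combination test on two simulated endpoints and reports the combined p-value for each of the three combining functions. The two-sample mean-difference partial tests, the simulation settings, and all names are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def npc_test(x, y, n_perm=2000, combine="fisher"):
    """Permutation NPC test for a treatment effect on k endpoints.

    x, y : (n_i, k) arrays of endpoint values for the two groups.
    Returns the combined p-value for the global null of no effect.
    """
    data, n1 = np.vstack([x, y]), len(x)
    # Partial test statistics: one mean difference per endpoint (one-sided).
    stat = lambda idx: data[idx[:n1]].mean(axis=0) - data[idx[n1:]].mean(axis=0)
    idx = np.arange(len(data))
    all_t = np.vstack([stat(idx)] +
                      [stat(rng.permutation(idx)) for _ in range(n_perm)])
    # Partial p-values, estimated from the same permutation distribution for
    # the observed (row 0) and permuted statistics.
    p = (all_t[None, :, :] >= all_t[:, None, :]).mean(axis=1)
    p = np.clip(p, 1 / (n_perm + 1), n_perm / (n_perm + 1))  # avoid log(0)
    if combine == "fisher":          # Fisher: -2 * sum log p_i
        psi = -2 * np.log(p).sum(axis=1)
    elif combine == "tippett":       # Tippett: driven by the smallest partial p
        psi = 1 - p.min(axis=1)
    else:                            # logistic: -sum log(p_i / (1 - p_i))
        psi = -np.log(p / (1 - p)).sum(axis=1)
    # Combined p-value: permutation tail probability of the combined statistic.
    return (psi >= psi[0]).mean()

# Example: two correlated endpoints with a modest effect on both.
cov = [[1.0, 0.5], [0.5, 1.0]]
treated = rng.multivariate_normal([0.5, 0.4], cov, size=30)
control = rng.multivariate_normal([0.0, 0.0], cov, size=30)
for f in ("fisher", "tippett", "logistic"):
    print(f, npc_test(treated, control, combine=f))
```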
23

Development of Artificial Intelligence-based In-Silico Toxicity Models. Data Quality Analysis and Model Performance Enhancement through Data Generation.

Malazizi, Ladan January 2008 (has links)
Toxic compounds, such as pesticides, are routinely tested against a range of aquatic, avian, and mammalian species as part of the registration process. The need to reduce dependence on animal testing has led to increasing interest in alternative methods such as in silico modelling. QSAR (Quantitative Structure-Activity Relationship)-based models are already in use for predicting physicochemical properties, environmental fate, eco-toxicological effects, and specific biological endpoints for a wide range of chemicals. Data play an important role both in building QSAR models and in analysing the results of toxicity testing. This research addresses a number of issues in predictive toxicology. The first is data quality: although a large amount of toxicity data is available from online sources, it may contain unreliable samples, and its presentation is often inconsistent across sources, making the information difficult to access, interpret, and compare. To address this, we began with a detailed investigation and experimental work on the DEMETRA data, datasets produced by the EC-funded DEMETRA project. Based on this investigation, the experiments, and the results obtained, the author identified a number of data quality criteria for evaluating data in the toxicology domain, and an algorithm is proposed to assess data quality before modelling. A second issue is missing values in toxicology datasets: a least-squares method for paired datasets and serial correlation for single-version datasets provide solutions in two different situations, and a procedural algorithm using these two methods is proposed to overcome the missing-value problem. A third issue is the modelling of multi-class datasets with severely imbalanced class distributions, which degrade classifier performance. We show that, once we understand how class members are distributed in the feature space of each cluster, we can reshape the distribution and provide a richer knowledge domain for the classifier.
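As one hedged illustration of the paired-dataset imputation idea mentioned above, the sketch below fills gaps in one toxicity endpoint from a correlated paired endpoint via a least-squares fit on complete cases. The data, values, and function name are invented, and the thesis's exact procedure may differ.

```python
import numpy as np

def impute_paired_ls(a, b):
    """Fill NaNs in `b` using a least-squares fit b ~ a on complete cases."""
    a, b = np.asarray(a, float), np.asarray(b, float).copy()
    complete = ~np.isnan(a) & ~np.isnan(b)
    slope, intercept = np.polyfit(a[complete], b[complete], 1)
    missing = np.isnan(b) & ~np.isnan(a)
    b[missing] = slope * a[missing] + intercept
    return b

# Example: toxicity values for two related species, one with gaps (invented).
species_a = np.array([1.2, 2.0, 3.1, 4.2, 5.0, 6.1])
species_b = np.array([1.0, np.nan, 2.9, np.nan, 4.8, 5.9])
print(impute_paired_ls(species_a, species_b))
```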
25

Bayesian and Frequentist Approaches for the Analysis of Multiple Endpoints Data Resulting from Exposure to Multiple Health Stressors.

Nyirabahizi, Epiphanie 08 March 2010 (has links)
In risk analysis, benchmark dose (BMD) methodology is used to quantify the risk associated with exposure to stressors such as environmental chemicals. It consists of fitting a mathematical model to the exposure data; the BMD is the dose expected to result in a pre-specified response, or benchmark response (BMR). Most available exposure data come from single-chemical exposures, but living organisms are exposed to multiple sources of hazard. Furthermore, in some studies researchers may observe multiple endpoints on one subject. Statistical approaches to the multiple-endpoints problem can be partitioned into dimension-reduction methods and dimension-preserving methods. As a dimension-reduction method, composite scores based on a desirability function are used to evaluate the neurotoxic effects of a mixture of five organophosphate pesticides (OP) at a fixed mixing-ratio ray, with five endpoints observed. Then a Bayesian hierarchical model, a single unifying dimension-preserving method, is introduced to evaluate the risk associated with exposure to chemical mixtures. At a pre-specified vector of BMRs of interest, the method estimates a tolerable region, referred to as the benchmark dose tolerable area (BMDTA), in the multidimensional Euclidean plane. The endpoints defining the BMDTA are determined, and model uncertainty and model selection are addressed using the Bayesian model averaging (BMA) method.
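The following is a minimal sketch of the BMD computation described above: a logistic dose-response model is fitted to quantal data and inverted at a benchmark response defined as extra risk. The model choice and the counts are illustrative assumptions, not the models or data analyzed in the dissertation.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(dose, a, b):
    """Probability of response at a given dose under a logistic model."""
    return 1.0 / (1.0 + np.exp(-(a + b * dose)))

dose = np.array([0.0, 1.0, 2.5, 5.0, 10.0])
n = np.array([50, 50, 50, 50, 50])        # animals per dose group
affected = np.array([2, 5, 10, 21, 40])   # responders (invented counts)

params, _ = curve_fit(logistic, dose, affected / n, p0=[-3.0, 0.5])
a, b = params

# BMD for extra risk: BMR = (P(d) - P(0)) / (1 - P(0)), solved for d.
bmr = 0.10
p0 = logistic(0.0, a, b)
target = p0 + bmr * (1 - p0)
bmd = (np.log(target / (1 - target)) - a) / b
print(f"BMD at 10% extra risk: {bmd:.2f}")
```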
26

Generic properties of semi-Riemannian geodesic flows / Propriedades genéricas de fluxos geodésicos semi-Riemannianos

Bettiol, Renato Ghini 24 June 2010 (has links)
Let M be a possibly non-compact smooth manifold. We study genericity, in the C^k topology (3<=k<=+infty), of nondegeneracy properties of semi-Riemannian geodesic flows on M. Namely, we prove a new version of the Bumpy Metric Theorem for such an M, as well as the genericity of metrics that do not possess any degenerate geodesics satisfying suitable endpoint conditions. This extends results of Biliotti, Javaloyes, and Piccione for geodesics with fixed endpoints to the case where the endpoints lie on a compact submanifold P of MxM satisfying an admissibility condition. Immediate consequences are generic non-conjugacy between two points and non-focality between a point and a submanifold (or between two submanifolds).
27

Impact des critères de jugement sur l’optimisation de la détermination du nombre de sujet nécessaires pour qualifier un bénéfice clinique dans des essais cliniques en cancérologie / Impact of the endpoints on the optimization of the determination of the sample size in oncology clinical trials to qualify a clinical benefit

Pam, Alhousseiny 19 December 2017 (has links)
An overall survival (OS) benefit is the reference primary endpoint, and the most objective one, in oncology clinical trials, and it is the gold standard for the approval of new anticancer agents by regulatory agencies such as the FDA. However, with the growing number of effective treatments available for most cancers, demonstrating an improvement in OS requires larger trials and longer follow-up to achieve sufficient statistical power. The need to reduce follow-up, sample size, and cost has therefore led to the use of composite time-to-event endpoints, such as progression-free survival, as surrogates for OS in phase III trials. These composite endpoints suffer from important limitations: their definitions vary widely between trials, which makes trials hard to compare and is recognized as a major methodological problem, and their surrogacy for OS, that is, the ability of trial results on the endpoint to predict a benefit on OS, has not always been rigorously evaluated. The DATECAN-1 project addressed the first point by producing consensus recommendations that standardize the definitions of time-to-event composite endpoints across randomized clinical trials [1]. To validate surrogate endpoints, Buyse and colleagues developed a method based on individual-patient-data meta-analysis that assesses both "individual-level" and "trial-level" surrogacy and is considered the gold standard. In addition, most phase III trials now include health-related quality of life (QoL) as an endpoint to investigate the clinical benefit of new therapeutic strategies for the patient; an alternative is to use co-primary endpoints, pairing a tumour-based endpoint such as progression-free survival with QoL to establish the clinical benefit [2]. Although QoL is recognized as a second primary endpoint by ASCO (American Society of Clinical Oncology) and the FDA [3], it is still rarely used as a co-primary endpoint, and its assessment, analysis, and interpretation remain complex and little used by clinicians because of its subjective and dynamic nature [4]. When designing a trial with multiple co-primary endpoints, it is essential to determine a sample size that can establish statistical significance on all co-primary endpoints while preserving the overall power, since the type I error increases with the number of co-primary endpoints; several adjustment methods, generally based on splitting the type I error rate across the tested hypotheses, have been developed [5], and all are investigated in this project. The determination of sample size is a critical element of phase III design: too small a sample may leave important effects undetected, while too large a sample wastes resources and unethically exposes more participants to risk than necessary, and statistical power depends on the total number of events rather than on the total sample size. The objectives of this thesis are: 1) to study the impact of the DATECAN-1 consensus definitions of time-to-event endpoints on the results and conclusions of trials published in pancreatic cancer; 2) to study the properties of potential surrogate endpoints for OS; and 3) to propose a design for determining the sample size of a phase III clinical study with co-primary time-to-event endpoints, such as progression-free survival and time to QoL deterioration. The final objective is to develop an R package for calculating the number of subjects needed with co-primary time-to-event endpoints in phase III trials and to study surrogate endpoints for OS in pancreatic cancer.
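As a rough illustration of the co-primary sample-size logic discussed in the abstract, the sketch below sizes each of two time-to-event endpoints with Schoenfeld's event formula, testing each at the full alpha (both must be significant) while splitting the type II error to preserve joint power. The hazard ratios and the beta-splitting rule are illustrative assumptions, not the design or the R package developed in the thesis.

```python
from math import log
from scipy.stats import norm

def schoenfeld_events(hr, alpha=0.05, power=0.80):
    """Events needed for a two-sided log-rank test, 1:1 allocation (Schoenfeld)."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return 4 * (z_a + z_b) ** 2 / log(hr) ** 2

# Co-primary endpoints: each tested at the full alpha (both must be
# significant), with the type II error split so joint power stays near 1-beta.
hrs = {"PFS": 0.70, "time to QoL deterioration": 0.75}  # assumed hazard ratios
beta = 0.20
for name, hr in hrs.items():
    d = schoenfeld_events(hr, alpha=0.05, power=1 - beta / len(hrs))
    print(f"{name}: about {d:.0f} events required")
```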
28

Time to Diagnosis of Second Primary Cancers among Patients with Breast Cancer

Irobi, Edward Okezie 01 January 2016 (has links)
Many breast cancer diagnoses and second cancers are associated with BRCA gene mutations. Early detection of cancer is necessary to improve health outcomes, particularly with second cancers. Little is known about the influence of risk factors on the time to diagnosis of a second primary cancer after a diagnosis of BRCA-related breast cancer. The purpose of this cohort study was to examine the risk of diagnosis of second primary cancers among women diagnosed with breast cancer, adjusting for BRCA status, age, and ethnicity. The study was guided by the empirical evidence supporting the mechanism by which BRCA mutations lead to the development of cancer. A composite endpoint was used to define second primary cancer occurrences, and Kaplan-Meier survival curves were used to compare the median time to event across comparison groups and BRCA gene mutation status. A Cox proportional hazards model was used to examine the relationships between age at diagnosis, ethnicity, BRCA gene mutation status, and diagnosis of a second primary cancer. The overall median time to diagnosis of a second primary cancer was 14 years. The hazard ratios were 1.47 for BRCA2, 95% CI [1.03, 2.11]; 1.511 for White, 95% CI [1.18, 1.94]; and 1.424 for American Indian/Hawaiian, 95% CI [1.12, 1.81], showing significant positive associations between BRCA2 mutation status and the risk of diagnosis of second primary colorectal, endometrial, cervical, kidney, thyroid, and bladder cancers. Data on risk factors for the development of second cancers would allow for the identification of appropriate and timely screening procedures, determination of the best course of action for prevention and treatment, and improvement of quality of life among breast cancer survivors.
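Below is a minimal sketch of the survival workflow described above (Kaplan-Meier median time-to-event by BRCA2 status, then a Cox proportional hazards model), using the lifelines library on an invented toy dataset; the column names and values are illustrative, not the study's data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Invented toy records: time (years) from breast cancer diagnosis to a second
# primary cancer (the composite endpoint), with an event/censoring indicator.
df = pd.DataFrame({
    "years": [3.1, 14.0, 7.5, 20.2, 11.3, 5.8, 16.4, 9.9, 12.7, 18.1],
    "event": [1,   1,    0,   0,    1,    1,   0,    1,   1,    0],
    "brca2": [1,   0,    1,   0,    1,    0,   0,    1,   1,    0],
    "age":   [44,  58,   39,  61,   47,   52,  66,   41,  49,   57],
})

# Kaplan-Meier estimate of median time to event among BRCA2 carriers.
km = KaplanMeierFitter()
km.fit(df.loc[df.brca2 == 1, "years"], df.loc[df.brca2 == 1, "event"])
print("Median time to second primary cancer, BRCA2 carriers:",
      km.median_survival_time_)

# Cox model: hazard ratios (exp(coef)) for BRCA2 status and age at diagnosis.
cox = CoxPHFitter()
cox.fit(df, duration_col="years", event_col="event")
cox.print_summary()
```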
30

Multiplicité des tests, et calculs de taille d'échantillon en recherche clinique / Multiplicity of tests, and sample size determination of clinical trials

Riou, Jérémie 11 December 2013 (has links)
This work aimed to address the multiple testing problems inherent in clinical trials. Nowadays, a growing number of clinical trials aim to observe a multifactorial effect of a product and therefore define multiple co-primary endpoints. The study is declared significant if and only if at least r null hypotheses are rejected among the m null hypotheses tested. In this context, statisticians must take into account the multiplicity induced by this practice. We first devoted our work to an exact correction of the multiple testing procedure for data analysis and sample size computation when r = 1. We then worked on sample size computation for any value of r, when single-step or stepwise procedures are used. Finally, we considered the correction of the significance level induced by the search for an optimal coding of a continuous explanatory variable in a generalized linear model.
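As a concrete illustration of the error rate being controlled, the sketch below calibrates a common per-test threshold so that, under the global null with independent test statistics, the probability of rejecting at least r of the m null hypotheses equals alpha. This simplified independence case is an assumption for illustration; it is not the exact correction derived in the thesis, which also covers dependent statistics.

```python
from scipy.stats import binom
from scipy.optimize import brentq

def per_test_level(r, m, alpha=0.05):
    """Per-hypothesis threshold t with P(at least r of m p-values <= t) = alpha.

    Under the global null with independent tests, the number of p-values below
    t is Binomial(m, t), so the global type I error is a binomial tail.
    """
    def excess(t):
        return binom.sf(r - 1, m, t) - alpha   # P(X >= r) - alpha, X ~ Bin(m, t)
    return brentq(excess, 1e-12, 1.0)

# Example: m = 3 co-primary endpoints, significance requires >= r rejections.
for r in (1, 2, 3):
    print(f"r={r}, m=3: per-test level = {per_test_level(r, 3):.4f}")
```

For r = 1 this recovers the Sidak-type level 1 - (1 - alpha)^(1/m); for r = m no downward adjustment is needed, and the per-test level actually exceeds alpha.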
