11

Evaluating an increased spill regime as a management tool to improve downstream passage of salmon smolt / Utvärdering av ökat spill som en förvaltningsåtgärd för att förbättra nedströms passage för laxsmolt

Hansson, Mattias January 2022 (has links)
The anadromous Atlantic salmon (Salmo salar) is dependent on river connectivity to reach spawning and rearing habitats in rivers. Most rivers are today fragmented by artificial barriers such as hydroelectric power plants (HEPs), which impede this migratory movement. As an effort to mitigate river fragmentation, upstream passages have been built at some impediments, but passage facilitating downstream migration was commonly disregarded until recently. The increased mortality associated with downstream movement through hydroelectric turbines has caused population declines and extirpation in some areas. A common first measure to improve downstream passage is to increase spill discharge during critical time periods, such as the smolt migration in spring. In this study we aim to evaluate the efficiency of increased spill as a management tool and to investigate the effect of environmental and physiological factors on fish passage in River Mörrumsån, Sweden. This was done by tracking salmon smolt using high-resolution acoustic telemetry and time-to-event models. The results were then further explored in relation to 2D hydraulic models of different spill scenarios. The results show that increasing the spill at Upper Hemsjö HEP resulted in a relatively high impediment passage efficiency. Spill gate passage was best explained by the proportion of spill in relation to intake channel discharge and by diel period: increased spill and nights without ambient lights were positively associated with spill gate passage rate. Increased levels of spill seemed to increase the attraction to the spill gate passage zone, which could be explained by the increased area of flow directed toward the spill gate observed in the hydraulic model, showing the usefulness of hydraulic models as an additional tool in the evaluation and planning of remedial measures. The results of this study show that increased spill can be an effective way to ameliorate river fragmentation. However, this should be put in relation to the goal of the mitigation measure: increased spill benefits a few species during a limited part of the year, so it is a temporary solution to a permanent problem, and impaired connectivity remains for the rest of the year.
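The "time-to-event models" mentioned above invite a concrete illustration. Below is a minimal, hypothetical sketch (not the thesis's actual analysis) of a Cox proportional hazards model of spill-gate passage using Python and lifelines; the data, covariate names, and values are invented to mirror the covariates highlighted in the abstract (proportion of spill and diel period).

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-smolt records: hours from release until spill-gate
# passage, whether passage was observed (1) or the track was censored (0),
# the proportion of spill relative to intake-channel discharge, and a
# night-without-dam-lighting indicator.
df = pd.DataFrame({
    "hours_to_passage": [6.2, 9.8, 12.5, 20.4, 30.1, 48.0, 55.0, 72.0],
    "passed":           [1,    1,   1,    1,    1,    0,    0,    0],
    "spill_proportion": [0.55, 0.60, 0.40, 0.35, 0.30, 0.45, 0.15, 0.05],
    "night_no_lights":  [1,    0,   1,    1,    0,    0,    1,    0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="hours_to_passage", event_col="passed")
cph.print_summary()  # positive coefficients imply a higher passage rate
```

The thesis pairs such models with 2D hydraulic simulations: the regression alone attributes passage rates to covariates, while the hydraulic model explains why attraction to the spill gate grows with discharge.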
12

Joint Modeling the Relationship between Longitudinal and Survival Data Subject to Left Truncation with Applications to Cystic Fibrosis

VanderWyden Piccorelli, Annalisa January 2010 (has links)
No description available.
13

Hawkes Process Models for Unsupervised Learning on Uncertain Event Data

Haghdan, Maysam January 2017 (has links)
No description available.
14

Improving the accuracy of statistics used in de-identification and model validation (via the concordance statistic) pertaining to time-to-event data

Caetano, Samantha-Jo January 2020 (has links)
Time-to-event data are very common in medical research. Clinicians and patients therefore need analyses of such data to be accurate, as they are often used to interpret disease screening results, inform treatment decisions, and identify at-risk patient groups (i.e., by sex, race, gene expression, etc.). This thesis tackles three statistical issues pertaining to time-to-event data. The first issue arose from an Institute for Clinical and Evaluative Sciences lung cancer registry data set, which was de-identified by censoring patients at an earlier date. This resulted in an underestimate of the observed times of censored patients. Five methods were proposed to account for the underestimation incurred by de-identification. A subsequent simulation study was conducted to compare the effectiveness of each method in reducing bias and mean squared error and in improving the coverage probabilities of four different Kaplan-Meier (KM) estimates. The simulation results demonstrated that situations with relatively large numbers of censored patients required methodology with larger perturbation. In these scenarios, the fourth proposed method (which perturbed censored times such that they were censored in the final year of study) yielded estimates with the smallest bias and mean squared error and the largest coverage probability. Alternatively, when there were smaller numbers of censored patients, any manipulation of the altered data set worsened the accuracy of the estimates. The second issue arises when investigating model validation via the concordance (c) statistic. The c-statistic is intended for measuring the accuracy of statistical models which assess the risk associated with a binary outcome: it estimates the proportion of patient pairs in which the patient with the higher predicted risk experienced the event. The definition of a c-statistic cannot be uniquely extended to time-to-event outcomes, so many proposals have been made. The second project developed a parametric c-statistic which assumes the true survival times are exponentially distributed, invoking the memoryless property. A simulation study compared three different definitions of concordance in the time-to-event setting, as well as three different c-statistics, including two other time-to-event c-statistics from the literature. The c-statistic developed by the authors yielded the smallest bias when censoring is present in the data, even when the exponential parametric assumptions do not hold, and appears to be the most robust to censored data. Thus, it is recommended to use this c-statistic to validate prediction models applied to censored data. The third project developed and assessed the appropriateness of an empirical time-to-event c-statistic derived by estimating the survival times of censored patients via the EM algorithm. A simulation study was conducted for various sample sizes, censoring levels, and correlation rates. A non-parametric bootstrap was employed, and the mean and standard error of the bias of four different time-to-event c-statistics were compared, including the empirical EM c-statistic developed by the authors. The newly developed c-statistic yielded the smallest mean bias and standard error in all simulated scenarios and appears to be the most appropriate when estimating the concordance of a time-to-event model.
Thus, it is recommended to use this c-statistic to validate prediction models applied to censored data.
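The pairwise definition quoted above maps directly to code. The following is an illustrative Python implementation of the standard Harrell's c-statistic for right-censored data (the baseline the thesis builds on, not the authors' parametric or EM variants); the toy data are invented.

```python
import numpy as np

def harrell_c(time, event, risk):
    """Harrell's concordance for right-censored data: among comparable
    pairs (where the earlier time is an observed event), count how often
    the patient who failed earlier had the higher predicted risk."""
    concordant = tied = comparable = 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # pair is comparable if subject i fails before subject j's time
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

# toy data: three observed events and one censored observation
t = np.array([2.0, 5.0, 3.0, 8.0])
e = np.array([1,   1,   0,   1])
r = np.array([0.9, 0.4, 0.6, 0.1])
print(harrell_c(t, e, r))  # 1.0: risk ordering matches failure ordering
```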
15

Pharmakokinetische und pharmakodynamische Populationsanalyse von Cariporide in der Therapie der koronaren Herz-Erkrankung unter Bypass-Operation

Harnisch, Lutz 20 January 2003 (has links)
The subject of this analysis is the assessment of the effect of cariporide on the probability of a myocardial infarction (MI) or death in the setting of a coronary artery bypass graft (CABG). The sodium-hydrogen exchange (NHE) system in the myocardial cell mediates the sodium and calcium influx caused by the intracellular hydrogen overload that develops under ischaemia. Cariporide is an NHE inhibitor intended to delay the necrosis of myocardial cells caused by this ischaemia-induced calcium overload. The influence of different intravenous doses of cariporide on the frequency of MI or death in ACS/NQMI, PTCA, and CABG patient populations had been investigated in a combined phase II/III trial (GUARDIAN, n=11590). Only the highest-dosed CABG subgroup showed a significant reduction of the event rate compared to placebo, by 24.7% (p=0.027). This weak dose-effect relationship could be translated into a concentration-effect relationship by a population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Developing the population model required several sub-models. 1) Model for the time course of the event rate: a combination of two Weibull distributions made it possible to describe the observed data after CABG by means of a survival function, discriminating an acute risk, attributable to the CABG procedure itself, from a chronic risk. 2) Pharmacokinetic model: a multi-exponential population PK model was necessary to describe the PK after intravenous administration of cariporide in volunteers as well as patients. 3) Pharmacodynamic model: an empirical logistic model linked the reduction of the acute risk to the mean cariporide plasma concentration during the bypass operation. In a sub-study of the GUARDIAN main study, the population PK model from the earlier phase I development in volunteers was validated for patients. Using the individual dosing, the demographic data, and the population model, mean plasma concentrations were predicted for the period of highest risk during the CABG operation and entered into the analysis of the concentration dependence of the event probability. A lower threshold concentration (0.5 mg/l), below which no effect is to be expected, was determined. The data permitted only an imprecise estimate of the maximum effect: the maximum risk reduction was estimated at 60%, with a confidence interval from 29% to 100%. Using a linear approximation of the Hill model, an upper threshold concentration of 0.9 mg/l was determined. Only 37% of all patients in the 80 mg dose group reached mean concentrations above the lower threshold concentration, whereas in the 120 mg dose group this was already 75% of all patients. An infusion of 120 mg cariporide over one hour, followed by a maintenance dose of 20 mg/h for a further 47 hours, should produce mean concentrations above the minimal effective concentration of 0.5 mg/l in 95% of patients during the CABG operation. A dosing regimen optimized in this way by means of simulations should lead to increased protection of patients against the consequences of ischaemic events during and after the CABG operation. A further increase of the maintenance dose to up to 40 mg/h, with a corresponding adaptation of the initial dose, should shift at least 95% of patients above the upper threshold concentration of 0.9 mg/l, which so far can only be estimated imprecisely. If no dose-limiting side effects occur, this increase of both the initial dose and the maintenance dose may lead to a further improvement during the risk period and yield a further potential clinical advantage for cariporide.
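The survival sub-model above, "a combination of two Weibull distributions" separating an acute from a chronic risk, is most naturally read as additive Weibull (poly-Weibull) hazards. A minimal Python sketch under that assumption, with invented parameter values:

```python
import numpy as np

def poly_weibull_survival(t, shape_acute, scale_acute,
                          shape_chronic, scale_chronic):
    """Survival under two additive Weibull hazard components:
    S(t) = exp(-(t/scale_a)**shape_a - (t/scale_c)**shape_c)."""
    return np.exp(-(t / scale_acute) ** shape_acute
                  - (t / scale_chronic) ** shape_chronic)

# days after CABG (hypothetical grid and parameters, illustration only)
t = np.linspace(0.0, 365.0, 8)
# shape < 1: hazard concentrated shortly after surgery (acute risk);
# shape = 1: constant background hazard (chronic risk)
print(poly_weibull_survival(t, shape_acute=0.5, scale_acute=3000.0,
                            shape_chronic=1.0, scale_chronic=5000.0))
```

In a model of this form, a treatment effect such as the logistic concentration term described in the abstract would scale only the acute component's hazard.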
16

Pharmacometric Models to Improve Treatment of Tuberculosis

Svensson, Elin M January 2016 (has links)
Tuberculosis (TB) is the world's most deadly infectious disease and causes enormous public health problems. The comorbidity with HIV and the rise of multidrug-resistant TB strains impede successful therapy through drug-drug interactions and the lack of efficient second-line treatments. The aim of this thesis was to support the improvement of anti-TB therapy through development of pharmacometric models, specifically focusing on the novel drug bedaquiline, pharmacokinetic interactions, and methods for pooled population analyses. A population pharmacokinetic model of bedaquiline and its metabolite M2, linked to semi-mechanistic models of body weight and albumin concentrations, was developed and used for exposure-response analysis. Treatment response was quantified by measurements of mycobacterial load, and early bedaquiline exposure was found to significantly impact the half-life of bacterial clearance. The analysis represents the first successful characterization of a concentration-effect relationship for bedaquiline. Single-dose Phase I studies investigating potential interactions between bedaquiline and efavirenz, nevirapine, ritonavir-boosted lopinavir, rifampicin and rifapentine were analyzed with a model-based approach. Substantial effects were detected in several cases, and dose adjustments mitigating the impact were suggested after simulations. The interaction effects of nevirapine and ritonavir-boosted lopinavir were also confirmed in patients with multidrug-resistant TB on long-term treatment combining the antiretrovirals and bedaquiline. Furthermore, the outcomes from model-based analysis were compared to results from conventional non-compartmental analysis in a simulation study. Non-compartmental analysis was found to consistently underpredict the interaction effect when most of the concentration-time profile was not observed, as is commonly the case for compounds with a very long terminal half-life such as bedaquiline. To facilitate pooled analyses of individual patient data from multiple sources, a structured development procedure was outlined and a fast diagnostic tool for extensions of the stochastic model components was developed. Pooled analyses of nevirapine and rifabutin pharmacokinetics were performed, the latter generating comprehensive dosing recommendations for combined administration of rifabutin and antiretroviral protease inhibitors. The work presented in this thesis demonstrates the usefulness of pharmacometric techniques to improve treatment of TB and especially contributes evidence to inform optimized dosing regimens of new and old anti-TB drugs in various clinical contexts.
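As an illustration of the population-PK machinery the abstract refers to (lognormal between-subject variability around typical parameter values), here is a deliberately simplified one-compartment sketch in Python. The thesis's bedaquiline model is multi-compartment with covariates and a metabolite; all numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects, dose = 50, 100.0          # IV bolus dose in mg (hypothetical)
t = np.linspace(0.5, 48.0, 96)        # sampling times in hours

cl_pop, v_pop = 2.0, 60.0             # typical clearance (L/h) and volume (L)
omega_cl, omega_v = 0.3, 0.2          # between-subject SDs on the log scale

# lognormal between-subject variability around the typical values
cl = cl_pop * np.exp(rng.normal(0.0, omega_cl, n_subjects))
v = v_pop * np.exp(rng.normal(0.0, omega_v, n_subjects))

# one-compartment concentration-time profile for each subject
conc = (dose / v)[:, None] * np.exp(-(cl / v)[:, None] * t[None, :])
print(conc.mean(axis=0)[:5])          # population-mean profile, first points
```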
17

Adaptation des designs de phase II en cancérologie à un critère de jugement censuré / Adaptation of phase II oncology designs to a time-to-event endpoint

Belin, Lisa 30 May 2016 (has links)
Phase II clinical trials are a key stage in drug development. The goal of this screening stage is to identify the active drugs, which deserve further confirmatory phase III studies, and to abandon the inactive ones. The choice of the endpoint and the decision rules are major elements of a phase II trial. In oncology the endpoint is usually binary (response to treatment), but in recent years phase II trials have also considered time-to-event data as the primary endpoint, e.g. progression-free survival. In this work we studied two-stage designs with a futility stop, with binary or time-to-event endpoints. In real life, phase II trials deviate from the protocol when patient evaluation is no longer feasible (because of an intercurrent event, for example) or when the follow-up is too short. The goal of this work is to evaluate the consequences of these deviations on the planned design and to propose alternative ways to analyze or plan the trials. The work has two main parts: the first deals with binary endpoints when some patients are unevaluable; the second with time-to-event endpoints under reduced follow-up.
With binary endpoints, Simon's two-stage plan is the design most often used in oncology. Simon's plans require the response at the chosen clinical time point to be available for all included patients. How, then, should the trial be analyzed when some patients are unevaluable, and what are the consequences of unevaluable patients for the operating characteristics of the design? Several strategies were studied (exclusion, replacement, or counting unevaluable patients as treatment failures); our simulations show that none of them preserves the type I and type II error rates planned in the protocol. A "rescue" strategy was therefore proposed, which recomputes the stopping boundaries from the conditional probability of response for an evaluable patient under the null and alternative hypotheses. Although there is less information than required by the protocol, simulations show that the rescue strategy minimizes the deviations of the type I and type II error rates.
Over the last decade, several time-to-event designs have been developed. The naive approach establishes a Simon's plan with progression-free survival rates estimated by crude rates at a fixed time point. In 2003, Case and Morgan proposed a design comparing progression-free survival rates calculated from Nelson-Aalen estimates: failure times are incorporated in the estimates, but the comparison is still made at a pre-specified time point. Considering the whole follow-up, Kwak et al. (2013) proposed using the one-sample log-rank test to compare the observed progression-free survival curve to a theoretical curve. This design integrates the maximum amount of available information and therefore reduces the sample size. However, it requires all patients to be followed from inclusion to the end of the trial (no loss to follow-up). In some settings (good prognosis, long trial duration) this hypothesis seems unrealistic, so a modification of the Kwak design integrating a reduced follow-up was proposed and compared to the original design under several censoring scenarios. The two new methods are illustrated using phase II clinical trials planned at the Institut Curie and demonstrate their interest in real life.
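Kwak-style designs rest on the one-sample log-rank statistic, which is compact enough to sketch. An illustrative Python version follows, with an invented exponential reference curve (median progression-free survival of 6 months):

```python
import numpy as np
from scipy.stats import norm

def one_sample_logrank(time, event, cum_hazard_h0):
    """One-sample log-rank test against a reference survival curve:
    O = observed events; E = sum of the reference cumulative hazard
    evaluated at each subject's follow-up time. Z = (E - O)/sqrt(E) is
    positive when fewer events occur than the reference predicts."""
    observed = event.sum()
    expected = cum_hazard_h0(time).sum()
    z = (expected - observed) / np.sqrt(expected)
    return z, norm.sf(z)  # one-sided p-value, efficacy direction

# hypothetical reference: exponential with median PFS of 6 months
h0 = lambda t: np.log(2.0) / 6.0 * t

t = np.array([2.0, 7.5, 12.0, 4.0, 9.0, 15.0])  # months of follow-up
e = np.array([1,   0,   0,    1,   0,   0])     # progression/death observed
print(one_sample_logrank(t, e, h0))
```

In an actual two-stage design, the same statistic would be evaluated at the interim and final analyses against pre-specified boundaries.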
18

Customer based time-to-event models for cancellation behavior: a revenue management integrated approach

Iliescu, Dan Cristian 17 November 2008 (has links)
Low-cost carriers and escalating fuel costs are placing increased pressure on US legacy carriers to reposition traditional revenue management techniques towards more customer-centric approaches. In this context, recent critiques of revenue management models question the validity of assumptions used to describe passenger cancellation and no-show behavior. Since forecasts of cancellations and no-shows are used to determine overbooking levels, i.e., authorization levels in excess of capacity, concerns about possible missed revenue opportunities are justified. The goal of this research is to explore the impact of time-to-event forecasts of cancellations on airlines' revenue streams. To determine the intensity of the cancellation process, a discrete-time proportional odds (DTPO) model with a prospective time scale was estimated for a sample of tickets provided by the Airline Reporting Corporation. Empirical results based on 2004 data from eight domestic US markets indicate that the intensity of the cancellation process is strongly influenced both by the time from ticket purchase and by the time before flight departure, as well as by several other covariates, including departure day of week, market, and group size. To assess the potential revenue benefits associated with the DTPO formulation, a modified simulation experiment of a "single-resource capacity control" was designed. Simulation results indicate that time-to-event cancellation forecasts can generate revenue gains of up to 2%. Overall, this research provides new insights into the transitional properties of the cancellation process, which will help airlines improve their overbooking strategies.
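A discrete-time proportional odds model can be fit as a logistic regression on person-period (here, ticket-period) data; the logit link on the discrete hazard makes the hazard odds proportional across covariates. Below is a minimal Python sketch with invented data and a single illustrative covariate (group size); the thesis's model additionally uses the two time scales and market and day-of-week effects.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical tickets: periods observed before cancellation (or before
# the booking horizon ended) and whether a cancellation occurred.
tickets = pd.DataFrame({
    "periods":    [3, 5, 2, 5, 4, 1],
    "cancelled":  [1, 0, 1, 0, 1, 1],
    "group_size": [1, 2, 1, 4, 2, 1],
})

# Expand to one row per ticket-period (discrete-time survival format).
rows = []
for _, r in tickets.iterrows():
    for p in range(1, r["periods"] + 1):
        rows.append({
            "period": p,
            "group_size": r["group_size"],
            # event indicator: 1 only in the period a cancellation occurs
            "cancel_now": int(r["cancelled"] and p == r["periods"]),
        })
pp = pd.DataFrame(rows)

# Logit link on the discrete hazard gives a proportional-odds model; a
# linear term in `period` stands in for a flexible baseline over time.
X = sm.add_constant(pp[["period", "group_size"]].astype(float))
fit = sm.Logit(pp["cancel_now"], X).fit(disp=0)
print(fit.params)
```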
19

Ein klimasensitives, autoregressives Modell zur Beschreibung der Einzelbaum Mortalität / A time-discrete climate-sensitive survival model for tree mortalities resolved on single tree level

Schoneberg, Sebastian 18 August 2017 (has links)
No description available.
20

Novel methods for network meta-analysis and surrogate endpoints validation in randomized controlled trials with time-to-event data

Tang, Xiaoyu 08 February 2024 (has links)
Most statistical methods to design and analyze randomized controlled trials with time-to-event data, and to synthesize their results in meta-analyses, use the hazard ratio (HR) as the measure of treatment effect. However, the HR relies on the proportional hazards assumption, which is often violated, especially in cancer trials. In addition, the HR can be challenging to interpret and is frequently misinterpreted as a risk ratio (RR). In meta-analysis, conventional methods ignore that HRs are estimated over different time supports when the component trials have different follow-up durations. These issues also pertain to advanced statistical methods, such as network meta-analysis and surrogate endpoint validation. Novel methods that rely on the difference in restricted mean survival times (RMST) would help address these issues. In this dissertation, I first developed a Bayesian network meta-analysis model using the difference in RMST. This model synthesizes all the available evidence from multiple time points and treatment comparisons simultaneously through within-study and between-study covariances for the differences in RMST. I proposed an estimator of the within-study covariance and estimated the model under the Bayesian framework. The simulation studies showed adequate performance in terms of mean bias and mean squared error. I illustrated the model on a network of randomized trials of second-line treatments of advanced non-small-cell lung cancer. Second, I introduced a novel two-stage meta-analytical model to evaluate trial-level surrogacy. I measured trial-level surrogacy by the coefficient of determination at multiple time points based on the differences in RMST. The model borrows strength across data available at multiple time points and enables assessing how the strength of surrogacy changes over time. Simulation studies showed that the estimates of the coefficients of determination are unbiased and have high precision in almost all of the scenarios we examined. I demonstrated my model in two individual patient data meta-analyses in gastric cancer. Both methods, for network meta-analysis and surrogacy evaluation, have the advantage of not involving extrapolation beyond the observed time support in component trials and of not relying on the proportional hazards assumption. Finally, motivated by the common misinterpretation of the HR as a RR, I investigated the theoretical relationship between the HR and the RR and empirically compared the treatment effects measured by the HR and the RR in a large sample of oncology RCTs. When there is evidence of superiority for the experimental group, misinterpreting the HR as the RR leads to overestimating the benefits by about 20%.
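The RMST difference at a horizon tau (the gain in mean event-free time up to tau) underlies both methods above and is easy to compute as the area under a Kaplan-Meier curve. An illustrative Python sketch with invented two-arm data (distinct event times assumed, for brevity):

```python
import numpy as np

def km_rmst(time, event, tau):
    """Restricted mean survival time up to tau: the area under the
    Kaplan-Meier curve, i.e. expected survival time truncated at tau.
    Assumes no tied times, for brevity."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n = len(time)
    s, rmst, prev_t = 1.0, 0.0, 0.0
    for i in range(n):
        t = min(time[i], tau)
        rmst += s * (t - prev_t)   # area under the current KM step
        prev_t = t
        if time[i] > tau:
            break
        if event[i] == 1:
            s *= 1.0 - 1.0 / (n - i)  # KM drop at an event time
    rmst += s * max(0.0, tau - prev_t)  # tail up to tau, if any
    return rmst

# hypothetical two-arm trial; difference in RMST at tau = 12 months
t_ctrl = np.array([3.0, 6.0, 8.0, 14.0, 10.0])
e_ctrl = np.array([1,   1,   0,   0,    1])
t_trt = np.array([5.0, 9.0, 16.0, 12.0, 11.0])
e_trt = np.array([1,   0,   0,    1,    1])
print(km_rmst(t_trt, e_trt, 12.0) - km_rmst(t_ctrl, e_ctrl, 12.0))
```

As a side note on the HR-versus-RR question the last project addresses: under proportional hazards, the risk ratio at time t equals (1 - S0(t)^HR)/(1 - S0(t)), which approaches the HR only when events are rare; this is a standard identity, though not necessarily the dissertation's own derivation.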
