1.
EVALUATING THE IMPACTS OF ANTIDEPRESSANT USE ON THE RISK OF DEMENTIA. Duan, Ran. 01 January 2019.
Dementia is a clinical syndrome caused by neurodegeneration or cerebrovascular injury. Patients with dementia suffer from deterioration in memory, thinking, behavior and the ability to perform everyday activities. Since there are no cures or disease-modifying therapies for dementia, there is much interest in identifying modifiable risk factors that may help prevent or slow the progression of cognitive decline. Medications are a common focus of this type of research.
According to a report from the Centers for Disease Control and Prevention (CDC), 19.1% of the population aged 60 and over reported taking antidepressants during 2011-2014, and this proportion has been rising. Antidepressant use among the elderly is a concern, however, because of potentially harmful effects on cognition. To assess the impacts of antidepressants on the risk of dementia, we conducted three consecutive projects.
In the first project, a retrospective cohort study using a Marginal Structural Cox Proportional Hazards regression model with Inverse Probability Weighting (IPW) was conducted to evaluate the average causal effects of different classes of antidepressants on the risk of dementia. Potential causal effects of selective serotonin reuptake inhibitors (SSRIs), serotonin and norepinephrine reuptake inhibitors (SNRIs), atypical antidepressants (AAs), and tricyclic antidepressants (TCAs) on the risk of dementia were observed at the 0.05 significance level. Multiple sensitivity analyses supported these findings.
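As a rough illustration of the weighting step described above, the sketch below fits an inverse-probability-weighted Cox model in Python for a simplified point-treatment version of the problem. The data frame, column names (antidep, follow_up_years, dementia), the covariate list, and the use of logistic regression for the treatment model are assumptions for illustration; the dissertation's marginal structural model handles time-varying treatment and confounding, which is not shown here.

```python
# Sketch: inverse-probability-weighted Cox model for a point treatment.
# Assumes a pandas DataFrame `df` with a binary treatment column,
# baseline confounders, follow-up time, and a dementia indicator.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

confounders = ["age", "sex", "depression_severity"]  # hypothetical covariates

def ipw_cox(df: pd.DataFrame, treatment: str = "antidep") -> CoxPHFitter:
    # 1. Treatment (propensity) model: P(treated | confounders).
    ps_model = LogisticRegression(max_iter=1000).fit(df[confounders], df[treatment])
    ps = ps_model.predict_proba(df[confounders])[:, 1]

    # 2. Stabilized inverse probability of treatment weights.
    p_treat = df[treatment].mean()
    df = df.assign(ipw=df[treatment] * p_treat / ps
                       + (1 - df[treatment]) * (1 - p_treat) / (1 - ps))

    # 3. Weighted Cox model; robust (sandwich) variance because of the weights.
    cph = CoxPHFitter()
    cph.fit(df[[treatment, "follow_up_years", "dementia", "ipw"]],
            duration_col="follow_up_years", event_col="dementia",
            weights_col="ipw", robust=True)
    return cph
```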
Unmeasured confounding is a threat to the validity of causal inference methods. In evaluating the effects of antidepressants, it is important to consider how common comorbidities of depression, such as sleep disorders, may affect both the exposure to antidepressants and the onset of cognitive impairment. In this dissertation, sleep apnea and rapid eye movement sleep behavior disorder (RBD) were unmeasured, and thus uncontrolled, confounders for the association between antidepressant use and the risk of dementia. In the second project, a bias factor formula for two binary unmeasured confounders was derived in order to account for these variables. A Monte Carlo analysis was implemented to estimate the distribution of the bias factor for each class of antidepressant. The effects of antidepressants on the risk of dementia, adjusted for both measured and unmeasured confounders, were then estimated. Sleep apnea and RBD attenuated the effect estimates for SSRIs, SNRIs, and AAs on the risk of dementia.
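A schematic Monte Carlo version of the bias-factor idea is sketched below. It uses the standard single-confounder bias factor on the ratio scale and combines the two confounders multiplicatively under an assumed independence, which is a simplification rather than the two-confounder formula derived in the dissertation; the observed hazard ratio and all parameter ranges are hypothetical.

```python
# Monte Carlo sketch of a bias-factor sensitivity analysis for two binary
# unmeasured confounders (sleep apnea, RBD). The formula below is the
# standard single-confounder bias factor, combined multiplicatively under
# an assumed independence of the two confounders -- a simplification, not
# the dissertation's derived formula. All parameter ranges are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_sim = 100_000
hr_observed = 1.8  # hypothetical observed hazard ratio for one drug class

def bias_factor(rr_ud, p1, p0):
    """Bias factor for one binary confounder U on the ratio scale:
    rr_ud = U-outcome risk ratio, p1/p0 = prevalence of U among exposed/unexposed."""
    return (1 + (rr_ud - 1) * p1) / (1 + (rr_ud - 1) * p0)

# Draw sensitivity parameters from assumed ranges.
b_apnea = bias_factor(rng.uniform(1.2, 2.5, n_sim),
                      rng.uniform(0.15, 0.40, n_sim),
                      rng.uniform(0.05, 0.20, n_sim))
b_rbd = bias_factor(rng.uniform(1.2, 3.0, n_sim),
                    rng.uniform(0.05, 0.20, n_sim),
                    rng.uniform(0.01, 0.10, n_sim))

total_bias = b_apnea * b_rbd          # independence assumption
hr_adjusted = hr_observed / total_bias

print("median bias factor:", np.median(total_bias))
print("adjusted HR (2.5th, 50th, 97.5th pct):",
      np.percentile(hr_adjusted, [2.5, 50, 97.5]))
```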
In the third project, to account for potential time-varying confounding and observed time-varying treatment, a multi-state Markov chain with three transient states (normal cognition, mild cognitive impairment (MCI), and impaired but not MCI) and two absorbing states (dementia and death) was fit to estimate the probabilities of moving between these finite and mutually exclusive cognitive states. This analysis also allowed participants to recover from the mild impairment states (MCI, impaired but not MCI) to normal cognition, and accounted for the competing risk of death prior to dementia. These findings supported the results of the main analysis in the first project.
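The sketch below illustrates the state structure described above with a discrete-time transition probability matrix and multi-step state-occupancy probabilities; the dissertation's model may well be estimated in continuous time from panel assessments, and the numbers used here are placeholders only.

```python
# Sketch of the five-state structure: three transient states, two absorbing
# states, backward transitions from the mild-impairment states allowed.
# Numerical values are hypothetical placeholders, not dissertation estimates.
import numpy as np

states = ["normal", "MCI", "impaired_not_MCI", "dementia", "death"]

# Rows sum to 1; dementia and death are absorbing.
P = np.array([
    [0.85, 0.07, 0.05, 0.01, 0.02],   # normal
    [0.15, 0.65, 0.05, 0.10, 0.05],   # MCI (can recover to normal)
    [0.20, 0.05, 0.60, 0.10, 0.05],   # impaired but not MCI (can recover)
    [0.00, 0.00, 0.00, 1.00, 0.00],   # dementia (absorbing)
    [0.00, 0.00, 0.00, 0.00, 1.00],   # death (absorbing)
])
assert np.allclose(P.sum(axis=1), 1.0)

# Probability of each state after k assessment waves, starting from normal.
start = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
for k in (1, 5, 10):
    dist = start @ np.linalg.matrix_power(P, k)
    print(k, dict(zip(states, dist.round(3))))
```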
2.
Analysis of Dependently Truncated Sample Using Inverse Probability Weighted Estimator. Liu, Yang. 01 August 2011.
Many statistical methods for truncated data rely on the assumption that the failure time and the truncation time are independent, which can be unrealistic in applications. Study cohorts obtained from bone marrow transplant (BMT) registry data are commonly recognized as truncated samples, in which the time-to-failure is truncated by the transplant time. There is clinical evidence that a longer transplant waiting time implies a worse prognosis for survival. Therefore, it is reasonable to assume dependence between transplant time and failure time. To better analyze BMT registry data, we utilize a Cox analysis in which the transplant time is both a truncation variable and a predictor of the time-to-failure. An inverse-probability-weighted (IPW) estimator is proposed to estimate the distribution of transplant time. The usefulness of the IPW approach is demonstrated through a simulation study and a real application.
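A minimal sketch, in Python with the lifelines library, of the kind of Cox analysis described above: transplant time defines delayed entry into the risk set and is also duplicated as a covariate. Column names are hypothetical, and the proposed IPW estimator of the transplant-time distribution is not reproduced here.

```python
# Sketch: Cox model where transplant waiting time acts both as a left-truncation
# (delayed entry) time and as a predictor of time-to-failure. Assumes a pandas
# DataFrame with failure time measured from diagnosis, an event indicator, and
# the transplant waiting time; column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

def truncated_cox(df: pd.DataFrame) -> CoxPHFitter:
    # Duplicate the waiting time so it can serve as entry time AND covariate:
    # lifelines excludes the entry_col column from the regressors.
    df = df.assign(waiting_time_cov=df["transplant_time"])
    cph = CoxPHFitter()
    cph.fit(
        df[["failure_time", "event", "transplant_time", "waiting_time_cov", "age"]],
        duration_col="failure_time",   # time from diagnosis to failure
        event_col="event",             # 1 = failure observed, 0 = censored
        entry_col="transplant_time",   # subjects enter the risk set at transplant
    )
    return cph
```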
3.
Some Aspects of Propensity Score-based Estimators for Causal Inference. Pingel, Ronnie. January 2014.
This thesis consists of four papers that are related to commonly used propensity score-based estimators for average causal effects. The first paper starts with the observation that researchers often have access to data containing many covariates that are correlated. We therefore study the effect of correlation on the asymptotic variance of an inverse probability weighting estimator and a matching estimator. Under the assumptions of normally distributed covariates, a constant causal effect, and potential outcomes and a logit that are linear in the parameters, we show that the correlation influences the asymptotic efficiency of the estimators differently, with regard to both direction and magnitude. Further, the strength of the confounding towards the outcome and the treatment plays an important role. The second paper extends the first in that the estimators are studied under the more realistic setting of using the estimated propensity score. We also relax several assumptions made in the first paper, and include the doubly robust estimator. Again, the results show that the correlation may increase or decrease the variances of the estimators, but we also observe that several aspects influence how correlation affects the variance of the estimators, such as the choice of estimator, the strength of the confounding towards the outcome and the treatment, and whether a constant or non-constant causal effect is present. The third paper concerns estimation of the asymptotic variance of a propensity score matching estimator. Simulations show that large gains in mean squared error can be made by properly selecting the smoothing parameters of the variance estimator, and that a residual-based local linear estimator may be a more efficient estimator of the asymptotic variance. The specification of the variance estimator is shown to be crucial when evaluating the effect of right heart catheterisation, i.e., we show either a negative effect on survival or no significant effect depending on the choice of smoothing parameters. In the fourth paper, we provide an analytic expression for the covariance matrix of logistic regression with normally distributed regressors. This paper is related to the others in that logistic regression is commonly used to estimate the propensity score.
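The simulation sketch below mimics the question posed in the first paper: it generates two correlated normal confounders, assigns treatment through a logit model, and tracks how the spread of a simple IPW estimator changes with the correlation. All data-generating values are arbitrary choices for illustration and do not reproduce the paper's analytical results.

```python
# Monte Carlo sketch: effect of covariate correlation on the variability of
# an IPW estimator of the average causal effect. Coefficients, sample size,
# and the correlation grid are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 2_000, 500

def ipw_ate(rho: float) -> np.ndarray:
    """Return `reps` IPW estimates of the ATE for covariate correlation rho."""
    est = np.empty(reps)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    for r in range(reps):
        X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        # Treatment assignment: logistic in the covariates.
        p = 1.0 / (1.0 + np.exp(-(0.3 * X[:, 0] + 0.3 * X[:, 1])))
        T = rng.binomial(1, p)
        # Outcome: linear with a constant treatment effect of 1.
        Y = 1.0 * T + 0.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)
        # Horvitz-Thompson style IPW estimator using the true propensity score.
        est[r] = np.mean(T * Y / p) - np.mean((1 - T) * Y / (1 - p))
    return est

for rho in (0.0, 0.4, 0.8):
    draws = ipw_ate(rho)
    print(f"rho={rho}: mean={draws.mean():.3f}, sd={draws.std():.3f}")
```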
4.
Empirical essays on job search behavior, active labor market policies, and propensity score balancing methods. Schmidl, Ricarda. January 2014.
In Chapter 1 of the dissertation, the role of social networks is analyzed as an important determinant of the search behavior of the unemployed. Based on the hypothesis that the unemployed generate information on vacancies through their social network, search theory predicts that individuals with large social networks should experience an increased productivity of informal search and reduce their search through formal channels. Due to the higher productivity of search, unemployed individuals with a larger network are also expected to have a higher reservation wage than those with a small network. These model-theoretic predictions are tested empirically, with network size approximated by the number of close friends and the frequency of contact with former colleagues, and are confirmed. The search behavior of the unemployed is significantly affected by the presence of social contacts, with larger networks implying a stronger substitution away from formal towards informal search channels. The substitution is particularly pronounced for passive formal search methods, i.e., search methods that generate rather non-specific types of job offer information at low relative cost. We also find small but significant positive effects of an increase in network size on the reservation wage. These results have important implications for the analysis of job search monitoring or counseling measures, which are usually targeted at formal search only.
Chapter 2 of the dissertation addresses the labor market effects of vacancy information during the early stages of unemployment. Early activation through vacancy information potentially offers a double dividend: it shortens unemployment, and hence the need for later program participation, while involving smaller locking-in effects than participation in active labor market programs (ALMP). The outcomes considered are the speed of exit from unemployment, the quality of employment, and the short- and medium-term effects on ALMP participation. It is found that vacancy information significantly increases the speed of entry into employment; at the same time, the probability of participating in ALMP is significantly reduced. Whereas the long-term reduction in ALMP participation arises as a consequence of the earlier exit from unemployment, we also observe a short-run decrease for some labor market groups, which suggests that caseworkers use high- and low-intensity activation measures interchangeably, which is questionable from an efficiency point of view. For unemployed individuals who find a job through vacancy information, we observe a small negative effect on the weekly number of hours worked.
In Chapter 3, the long-term effects of participation in ALMP are assessed for unemployed youth under 25 years of age. Complementary to the analysis in Chapter 2, the effects of participation in time- and cost-intensive active labor market measures are examined. In particular, we study the effects of job creation schemes, wage subsidies, short- and long-term training measures, and measures to promote participation in vocational training. The outcome variables of interest are the probability of being in regular employment and participation in further education during the 60 months following program entry. The analysis shows that all programs except job creation schemes have positive long-term effects on the employment probability of youth. In the short run, only short-term training measures generate positive effects, as long-term training programs and wage subsidies exhibit significant "locking-in" effects. Measures to promote vocational training significantly increase the probability of attending education and training, whereas all other programs have either no effect or a negative effect on training participation. Effect heterogeneity with respect to pre-treatment education shows that young people with higher pre-treatment educational levels benefit more from participation in most programs. However, for longer-term wage subsidies we also find strong positive effects for young people with low initial education levels. The relative benefit of training measures is higher in West than in East Germany.
In the evaluation studies of Chapters 2 and 3, the semi-parametric balancing methods of Propensity Score Matching (PSM) and Inverse Probability Weighting (IPW) are used to eliminate the effects of confounding factors that influence both treatment participation and the outcome variable of interest, and to establish a causal relation between program participation and outcome differences. While PSM and IPW are intuitive and methodologically attractive in that they do not require parametric assumptions, their practical implementation can become quite challenging due to their sensitivity to various data features. Given the importance of these methods in the evaluation literature, and the vast number of recent methodological contributions in this field, Chapter 4 aims to reduce the knowledge gap between the methodological and applied literature by summarizing new findings from the empirical and statistical literature and deriving practical guidelines for future applied research. In contrast to previous publications, this study does not focus only on the estimation of causal effects, but stresses that the balancing challenge can and should be discussed independently of the question of causal identification of treatment effects in most empirical applications. Following a brief outline of the practical implementation steps required for PSM and IPW, these steps are presented in detail chronologically, with practical advice for each step. Subsequently, the topics of effect estimation, inference, sensitivity analysis, and the combination with parametric estimation methods are discussed. Finally, new extensions of the methodology and avenues for future research are presented.
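One of the practical implementation steps discussed in Chapter 4, checking covariate balance, can be sketched as follows; the propensity model, the ATE-type weights, and the 0.1 standardized-mean-difference rule of thumb are illustrative choices rather than the dissertation's prescriptions.

```python
# Sketch: estimate the propensity score and check covariate balance via
# standardized mean differences (SMD) before and after IPW reweighting.
# Data and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def weighted_smd(x, t, w):
    """Standardized mean difference of covariate x between treated (t=1)
    and controls (t=0), using weights w."""
    m1 = np.average(x[t == 1], weights=w[t == 1])
    m0 = np.average(x[t == 0], weights=w[t == 0])
    v1 = np.average((x[t == 1] - m1) ** 2, weights=w[t == 1])
    v0 = np.average((x[t == 0] - m0) ** 2, weights=w[t == 0])
    return (m1 - m0) / np.sqrt((v1 + v0) / 2)

def balance_table(df: pd.DataFrame, treatment: str, covariates: list) -> pd.DataFrame:
    t = df[treatment].to_numpy()
    ps = LogisticRegression(max_iter=1000).fit(df[covariates], t)\
             .predict_proba(df[covariates])[:, 1]
    ipw = np.where(t == 1, 1 / ps, 1 / (1 - ps))      # ATE-type weights
    raw = np.ones_like(ipw)
    rows = []
    for c in covariates:
        x = df[c].to_numpy(dtype=float)
        rows.append({"covariate": c,
                     "SMD_unweighted": weighted_smd(x, t, raw),
                     "SMD_ipw": weighted_smd(x, t, ipw)})
    # Values above ~0.1 are often read as a sign of remaining imbalance.
    return pd.DataFrame(rows)
```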
5.
Construção de ferramenta computacional para estimação de custos na presença de censura utilizando o método da Ponderação pela Probabilidade Inversa / Construction of a computational tool for cost estimation in the presence of censoring using the Inverse Probability Weighting method. Sientchkovski, Paula Marques. January 2016.
Introduction: Cost data needed in cost-effectiveness analysis (CEA) are often obtained from primary longitudinal studies. In this context, censoring is common: cost data are unavailable from a certain point onward because individuals leave the study before it is finished. The idea of Inverse Probability Weighting (IPW) has been extensively studied in the literature on this problem, but the availability of computational tools for this context is unknown. Objective: To build computational tools in Excel and R for estimating costs by the IPW method proposed by Bang and Tsiatis (2000), in order to deal with the problem of censoring in cost data. Methods: By creating spreadsheets in Excel and programs in R, and using hypothetical databases covering a range of situations, we seek to give the researcher a better understanding of the use of this estimator and of the interpretation of its results. Results: By making the IPW method straightforward to apply, the developed tools facilitate cost estimation in the presence of censoring and make it possible to calculate the incremental cost-effectiveness ratio (ICER) from censored cost data. Conclusion: Beyond providing a practical understanding of the method, the tools allow the researcher to apply it on a larger scale, and can be considered a satisfactory alternative to the difficulties posed by censoring in CEA.
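For readers unfamiliar with the estimator behind the tools, the sketch below implements the simple weighted complete-case version of the Bang and Tsiatis (2000) mean-cost estimator, in which each uncensored subject's total cost is inverse-weighted by the Kaplan-Meier probability of remaining uncensored at their follow-up time. The Python/lifelines implementation and the column names are assumptions; the thesis's own tools are in Excel and R.

```python
# Sketch of the simple (weighted complete-case) Bang & Tsiatis (2000) IPW
# estimator of mean cost under censoring. Column names are hypothetical.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter

def ipw_mean_cost(df: pd.DataFrame) -> float:
    """df columns: 'time' (follow-up time), 'delta' (1 = uncensored),
    'cost' (total accumulated cost over follow-up)."""
    # Kaplan-Meier for the censoring distribution: the "event" is being censored.
    km_cens = KaplanMeierFitter().fit(df["time"], event_observed=1 - df["delta"])
    k_hat = km_cens.survival_function_at_times(df["time"]).to_numpy()
    k_hat = np.clip(k_hat, 1e-8, None)  # guard against division by zero
    # Weighted complete-case mean: only uncensored subjects contribute.
    return float(np.mean(df["delta"].to_numpy() * df["cost"].to_numpy() / k_hat))
```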
6.
Regression modeling with missing outcomes: competing risks and longitudinal data / Contributions aux modèles de régression avec réponses manquantes : risques concurrents et données longitudinales. Moreno Betancur, Margarita. 05 December 2013.
Missing data are a common occurrence in medical studies. In regression modeling, missing outcomes limit our capability to draw inferences about the covariate effects of medical interest, which are those describing the distribution of the entire set of planned outcomes. In addition to losing precision, the validity of any method used to draw inferences from the observed data requires that some assumption about the mechanism leading to missing outcomes holds. Rubin (1976, Biometrika, 63:581-592) called the missingness mechanism MAR (for "missing at random") if the probability of an outcome being missing does not depend on missing outcomes when conditioning on the observed data, and MNAR (for "missing not at random") otherwise. This distinction has important implications regarding the modeling requirements for drawing valid inferences from the available data, but generally it is not possible to assess from these data whether the missingness mechanism is MAR or MNAR. Hence, sensitivity analyses should be routinely performed to assess the robustness of inferences to assumptions about the missingness mechanism. In the field of incomplete multivariate data, in which the outcomes are gathered in a vector for which some components may be missing, MAR methods are widely available and increasingly used, and several MNAR modeling strategies have also been proposed. On the other hand, although some sensitivity analysis methodology has been developed, this is still an active area of research. The first aim of this dissertation was to develop a sensitivity analysis approach for continuous longitudinal data with drop-outs, that is, continuous outcomes that are ordered in time and completely observed for each individual up to a certain time-point, at which the individual drops out so that all subsequent outcomes are missing. The proposed approach consists in assessing the inferences obtained across a family of MNAR pattern-mixture models indexed by a so-called sensitivity parameter that quantifies the departure from MAR. The approach was prompted by a randomized clinical trial investigating the benefits of a treatment for sleep-maintenance insomnia, from which 22% of the individuals had dropped out before the study end. The second aim was to build on the existing theory for incomplete multivariate data to develop methods for competing risks data with missing causes of failure. The competing risks model is an extension of the standard survival analysis model in which failures from different causes are distinguished. Strategies for modeling competing risks functionals, such as the cause-specific hazards (CSH) and the cumulative incidence function (CIF), generally assume that the cause of failure is known for all patients, but this is not always the case. Some methods for regression with missing causes under the MAR assumption have already been proposed, especially for semi-parametric modeling of the CSH, but other useful models have received little attention, and MNAR modeling and sensitivity analysis approaches have never been considered in this setting. We propose a general framework for semi-parametric regression modeling of the CIF under MAR using inverse probability weighting and multiple imputation ideas. Also under MAR, we propose a direct likelihood approach for parametric regression modeling of the CSH and the CIF. Furthermore, we consider MNAR pattern-mixture models in the context of sensitivity analyses. In the competing risks literature, a starting point for methodological developments for handling missing causes was a stage II breast cancer randomized clinical trial in which 23% of the deceased women had missing causes of death. We use these data to illustrate the practical value of the proposed approaches.
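A compact sketch of a delta-adjustment pattern-mixture sensitivity analysis for longitudinal drop-out, in the spirit of the sensitivity-parameter-indexed MNAR family described above. The marginal-normal imputation model, the choice of the final-visit mean as the estimand, and the delta grid are illustrative assumptions, not the dissertation's exact procedure.

```python
# Sketch: delta-adjusted multiple imputation for a continuous outcome with
# drop-out. Imputations are drawn under a simple MAR model fitted to the
# completers, then shifted by -delta to represent an MNAR departure.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

def impute_with_delta(wide: pd.DataFrame, delta: float, n_imp: int = 20) -> float:
    """wide: one row per subject, one column per visit; NaN after drop-out.
    Returns the average (over imputations) estimated mean at the last visit."""
    last = wide.columns[-1]
    estimates = []
    for _ in range(n_imp):
        y = wide[last].copy()
        obs = y.dropna()
        n_miss = y.isna().sum()
        # MAR draws from the completers' distribution, shifted down by delta.
        draws = rng.normal(obs.mean(), obs.std(ddof=1), size=n_miss) - delta
        y.loc[y.isna()] = draws
        estimates.append(y.mean())
    return float(np.mean(estimates))

# Example: tipping-point style scan over the sensitivity parameter.
visits = ["y1", "y2", "y3"]
demo = pd.DataFrame(rng.normal(0, 1, size=(200, 3)), columns=visits)
demo.loc[rng.random(200) < 0.22, "y3"] = np.nan   # ~22% drop-out before the end
for delta in (0.0, 0.5, 1.0):
    print(delta, round(impute_with_delta(demo, delta), 3))
```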