1. The Validity of Summary Comorbidity Measures. Gilbert, Elizabeth (January 2016)
Prognostic scores, and more specifically comorbidity scores, are important and widely used measures in health care and in health services research. A comorbidity is a disease an individual has in addition to a primary condition of interest, such as cancer. A comorbidity score is a summary score created from these individual comorbidities for prognostic purposes, as well as for confounding adjustment. Despite their widespread use, the properties of, and the conditions under which, comorbidity scores are valid dimension-reduction tools in statistical models are largely unknown. This dissertation explores the use of summary comorbidity measures in statistical models, examining three particular aspects. First, it is shown that, under standard conditions, summary comorbidity measures remain as accurate predictors as the individual comorbidities in regression models that may include treatment variables and additional covariates. These results hold, however, only when no interaction exists between the individual comorbidities and any additional covariate; using summary comorbidity measures in the presence of such interactions leads to biased results. Second, it is shown that these measures are also valid in the causal inference framework, where they serve to adjust for confounding in estimating treatment effects. Lastly, a time-dependent extension of summary comorbidity scores is introduced. This time-dependent score can account for changes in patients' health over time and is shown to be a more accurate predictor of patient outcomes. A data example using breast cancer data from the SEER-Medicare database is used throughout the dissertation to illustrate the application of these results in the health care field.
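As a rough illustration of the first point, the minimal Python sketch below (simulated data with hypothetical comorbidity weights, not taken from the dissertation) fits one regression on the individual comorbidities and one on a summary score, in a setting with no comorbidity-by-covariate interactions; both recover essentially the same treatment effect.

```python
# Minimal simulation sketch: a summary comorbidity score vs. individual comorbidities
# in an outcome regression. All weights and variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, p = 5000, 5                                 # patients, individual comorbidities
X = rng.binomial(1, 0.3, size=(n, p))          # comorbidity indicators
w = np.array([1.0, 2.0, 1.5, 0.5, 3.0])        # hypothetical comorbidity weights
score = X @ w                                  # summary comorbidity score
trt = rng.binomial(1, 0.5, size=n)             # treatment indicator
y = 2.0 * trt + score + rng.normal(size=n)     # outcome: no comorbidity-treatment interaction

def ols(design, outcome):
    """Ordinary least squares via numpy's least-squares solver."""
    beta, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return beta

full = np.column_stack([np.ones(n), trt, X])       # adjust for individual comorbidities
summ = np.column_stack([np.ones(n), trt, score])   # adjust for the summary score only

print("treatment effect, individual comorbidities:", round(ols(full, y)[1], 3))
print("treatment effect, summary score:           ", round(ols(summ, y)[1], 3))
# Both estimates are close to the true value of 2.0 in this setting.
```

Adding an interaction between a comorbidity and the treatment to the simulated outcome would make the two estimates diverge, matching the caveat noted above.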

2. Bayesian Methods and Computation for Large Observational Datasets. Watts, Krista Leigh (30 September 2013)
Much health-related research depends heavily on the analysis of a rapidly expanding universe of observational data. A challenge in analyzing such data is the lack of sound statistical methods and tools that can address the multiple facets of estimating treatment or exposure effects in observational studies with a large number of covariates. We sought to advance methods for analyzing large observational datasets, with the end goal of understanding the effect of treatments or exposures on health. First, we compared existing methods for propensity score (PS) adjustment, specifically Bayesian propensity scores. This concept had previously been introduced (McCandless et al., 2009), but the impact of feedback when fitting the joint likelihood for the PS and outcome models had not been rigorously evaluated. We determined that unless specific steps are taken to mitigate it, feedback can distort estimates of the treatment effect. Next, we developed a method for accounting for uncertainty in confounding adjustment in the context of multiple exposures. Our method selects confounders based on their association with both the joint exposure and the outcome, while also accounting for the uncertainty in the confounding adjustment. Finally, we developed two methods for combining heterogeneous sources of data for effect estimation, specifically a primary data source providing treatments, outcomes, and a limited set of measured confounders for a large number of people, together with smaller supplementary data sources containing a much richer set of covariates. Our methods avoid the need to specify the full joint distribution of all covariates.
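For intuition about propensity score adjustment, below is a minimal two-stage Python sketch on simulated data: the PS model is fit first and its estimates are then plugged into an inverse-probability-weighted effect estimate, so no outcome information feeds back into the PS fit. The data-generating weights, sample size, and helper names are assumptions made for illustration; this is a "cut feedback" analogue, not the dissertation's joint Bayesian method.

```python
# Two-stage propensity score sketch (plain numpy, hypothetical data-generating process).
import numpy as np

rng = np.random.default_rng(1)
n = 4000
X = rng.normal(size=(n, 3))                               # measured confounders
logit_ps = 0.8 * X[:, 0] - 0.5 * X[:, 1]
trt = rng.binomial(1, 1 / (1 + np.exp(-logit_ps)))        # treatment depends on confounders
y = 1.5 * trt + X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)

def fit_logistic(design, t, n_iter=25):
    """Plain Newton-Raphson logistic regression (no regularization)."""
    beta = np.zeros(design.shape[1])
    for _ in range(n_iter):
        p_hat = 1 / (1 + np.exp(-design @ beta))
        grad = design.T @ (t - p_hat)
        hess = design.T @ (design * (p_hat * (1 - p_hat))[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

# Stage 1: estimate propensity scores from the confounders only.
D = np.column_stack([np.ones(n), X])
ps = 1 / (1 + np.exp(-(D @ fit_logistic(D, trt))))

# Stage 2: inverse-probability-weighted estimate of the average treatment effect.
ate = np.mean(trt * y / ps) - np.mean((1 - trt) * y / (1 - ps))
print("IPW treatment effect estimate:", round(ate, 3))    # close to the true 1.5 here
```

In a joint Bayesian model, the outcome likelihood would also inform the PS posterior; that feedback is precisely what the first project of the dissertation evaluates.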