21. Coronary revascularisation in the UK: using routinely collected data to explore case trends, treatment effectiveness and outcome prediction. Mcallister, Katherine (January 2015)
Background: Coronary artery disease is a common cause of morbidity and mortality in the UK. Interventional revascularisation procedures for addressing the disease include percutaneous coronary intervention (PCI) and coronary artery bypass grafting (CABG), which respectively seek to open up or bypass blocked arteries to restore blood flow to heart muscle. Rates at which these procedures are carried out have changed in recent years, as have clinical indications for referral. PCI is delivered by interventional cardiologists, while CABG is carried out by cardiothoracic surgeons, necessitating multi-disciplinary decision making. There is both within- and cross-speciality debate as to the optimal treatment strategy in some case types. Evaluation of the care provided is of clinical and political importance, and requires information about how post-procedure event rates per operator and hospital compare with those expected given the composition of patient populations.

Methods: Two UK-wide audit databases of PCI and CABG procedures were used to explore a range of clinical outcome questions. The patient populations contained within each database were compared to see how they differed, and also how each had changed in recent years. In CABG patients, the comparative effectiveness of two different surgical techniques (single vs bilateral mammary artery grafting) was assessed with respect to both short-term and long-term mortality outcomes. In PCI patients, a risk model to predict 30-day mortality was developed for use in clinical appraisal.

Results: In both patient populations there had been changes to the relative frequencies of many characteristics over time. In the CABG population, multivariable analysis showed that patients undergoing single mammary artery grafting had lower odds of all-cause mortality within 30 days of the procedure than those receiving bilateral mammary artery grafting, but worse overall survival in the long term. In the PCI population, the developed risk model demonstrated good calibration and discrimination in predicting 30-day all-cause mortality.

Discussion: The studies described above demonstrate that large-scale routinely collected data can be used to gain insights into clinical care quality and delivery. These resources are under-utilised at present; correcting this requires an understanding of the limitations of the data and how the information contained therein relates to actual clinical care.
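[Editorial aside, not from the thesis: "calibration and discrimination" of a 30-day mortality risk model are commonly quantified with a comparison of observed versus predicted event rates and a c-statistic. A minimal Python sketch on simulated data, with all variable names assumed:]

```python
# Illustrative only: checking discrimination (c-statistic) and calibration of a
# 30-day mortality risk model. Data are simulated; the thesis model and data
# are not reproduced here.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
p = rng.uniform(0.0, 0.2, size=5_000)   # hypothetical predicted 30-day risks
y = rng.binomial(1, p)                  # outcomes drawn consistent with p

print("c-statistic:", round(roc_auc_score(y, p), 3))

# Calibration: observed event rate vs. mean predicted risk per risk decile.
obs, pred = calibration_curve(y, p, n_bins=10, strategy="quantile")
for o, q in zip(obs, pred):
    print(f"mean predicted {q:.3f} -> observed {o:.3f}")
```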
22. Access to Care and Surgery Outcomes Among People with Epilepsy on Medicaid. Schiltz, Nicholas Kenneth (23 August 2013)
No description available.
23. Comparative Effectiveness of Alendronate and Risedronate on the Risk of Non-Vertebral Fractures in Older Women: An Instrumental Variables Approach: A Dissertation. Chen, Yong (19 December 2011)
Osteoporosis is a significant public health problem in the U.S. It not only affects the physical well-being of older women but also creates a substantial financial burden for the health care system. The mainstay of osteoporosis medication is bisphosphonate treatment, of which alendronate and risedronate are the most commonly prescribed agents in clinical practice. However, there have been no head-to-head randomized controlled trials (RCTs) evaluating the effects of these two bisphosphonates on fracture outcomes.
In the absence of RCTs, observational studies are necessary to provide alternative evidence on the comparative effectiveness of alendronate and risedronate on fracture outcomes. However, existing observational studies have provided inconclusive results, partially due to residual confounding from unobserved variables such as patients' health status or behavior. Instrumental variable (IV) analysis may be one method to address unmeasured confounding bias in observational studies. While it has not been applied in bisphosphonate research, it has been used in research on a variety of other prescription medications.
In this dissertation, we applied the IV approach, using the date of generic alendronate availability as the instrument, to evaluate the comparative effectiveness of alendronate and risedronate in observational data. This dissertation improves on current research in several ways. First, we extended the IV approach to research on bisphosphonates. Second, compared with existing observational studies on bisphosphonates, this dissertation may more accurately estimate the relative effects of alendronate and risedronate, because IV analysis with a valid instrument is not subject to unmeasured confounding bias. Third, the study results extend the current evidence on the comparative effectiveness of the two most commonly prescribed bisphosphonates. Finally, we proposed and provided empirical evidence for a new IV that might be used in future prescription drug research.
The findings of this dissertation can be summarized in three points. First, we found that the evidence supported the validity of the date of generic availability as an IV in the study of bisphosphonates. Second, applying the IV approach to study the comparative effectiveness of alendronate and risedronate, we found that the two drugs were comparable in reducing the risk of 12-month non-vertebral fractures in older women. Since generic alendronate is available on the market while generic risedronate is not, promoting the use of alendronate may help reduce healthcare costs without sacrificing clinical effectiveness. Finally, by comparing the proposed IV with a popular alternative, physician preference, we found that both the calendar-time IV based on the date of generic availability and the physician-preference IV appeared to be valid; the calendar-time IV may be practically easier to use.
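[Editorial aside, not the dissertation's actual analysis: the core logic of a binary-instrument IV analysis can be shown with the Wald estimator, where the calendar-time instrument indicates initiation after generic alendronate became available. Everything below is simulated:]

```python
# Illustrative only: Wald IV estimator with binary instrument Z (1 = initiated
# after generic alendronate availability), binary treatment T (1 = alendronate,
# 0 = risedronate), and binary outcome Y (12-month non-vertebral fracture).
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
z = rng.binomial(1, 0.5, n)            # instrument: initiated in the generic era
t = rng.binomial(1, 0.3 + 0.4 * z)     # instrument shifts choice toward alendronate
y = rng.binomial(1, 0.05, n)           # fracture outcome; no treatment effect simulated

itt = y[z == 1].mean() - y[z == 0].mean()          # intention-to-treat contrast
first_stage = t[z == 1].mean() - t[z == 0].mean()  # effect of Z on treatment choice
print("Wald IV risk difference:", itt / first_stage)  # ~0 under this simulation
```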
24. Methodological challenges in the comparative assessment of effectiveness and safety of oral anticoagulants in individuals with atrial fibrillation using administrative healthcare data. Gubaidullina, Liliya (08 1900)
Atrial fibrillation (AF), the most common cardiac arrhythmia, is a major risk factor for the development of ischemic stroke. Direct oral anticoagulants (DOACs) have largely replaced warfarin in clinical use for stroke prevention in AF. This research investigated two important methodological challenges that may arise in observational studies of the comparative effectiveness and safety of DOACs and warfarin. First, an information bias resulting from misclassification of exposure to dose-varying warfarin therapy when the days supplied value recorded in pharmacy claims data is used. Second, a selection bias due to informative censoring, with differential censoring mechanisms in the DOAC and warfarin exposure groups.
Using the Québec administrative databases, I conducted three retrospective cohort studies that included patients initiating an oral anticoagulant between 2010 and 2016. The studies were restricted to Québec residents covered by the public drug insurance plan (about 40% of Québec’s population), including those aged 65 years and older, welfare recipients, those not covered by private medical insurance, and their dependents.
In the first study, we hypothesized that pharmacy claims data inadequately capture the duration of warfarin dispensations, and that gaps between subsequent dispensations (refill gaps) and their variation would be larger for warfarin than for DOACs. We found that the average refill gap for users of warfarin was 9.3 days (95% confidence interval [CI]: 8.97-9.59), compared with 3.08 days for apixaban (95% CI: 2.96-3.20), 3.70 days for dabigatran (95% CI: 3.56-3.84), and 3.15 days for rivaroxaban (95% CI: 3.03-3.27). The variance of refill gaps was also greater among warfarin users than among DOAC users. This variation may reflect changes in warfarin posology when the daily dose is adjusted by a physician or a pharmacist based on previously observed international normalized ratio (INR) results; the dose adjustment may prolong (or shorten) the period covered by the number of dispensed pills.
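[Editorial aside, not the study's code: a minimal sketch of how refill gaps of this kind can be computed from dispensing claims, with invented column names and toy data:]

```python
# Illustrative only: refill gap = days from the end of the previous supply to
# the next dispensation, per patient. Column names are invented.
import pandas as pd

claims = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "dispense_date": pd.to_datetime(
        ["2015-01-01", "2015-02-05", "2015-03-20", "2015-01-10", "2015-02-09"]
    ),
    "days_supplied": [30, 30, 30, 30, 30],
})

claims = claims.sort_values(["patient_id", "dispense_date"])
g = claims.groupby("patient_id")

prev_end = g["dispense_date"].shift() + pd.to_timedelta(
    g["days_supplied"].shift(), unit="D"
)
claims["refill_gap_days"] = (claims["dispense_date"] - prev_end).dt.days
print(claims)  # first dispensation per patient has no gap (NaN)
```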
In the second study, we hypothesized that defining the duration of dispensation from the recorded days supplied value and a fixed grace period would lead to differential misclassification of exposure to warfarin and DOACs, which may bias the estimate of comparative safety in favor of DOACs. We used two approaches to define the duration of dispensations: the recorded days supplied value, and a longitudinal coverage approximation (a data-driven approach) that can account for individual variation in drug usage patterns. Using the days supplied value, the mean (standard deviation) dispensation durations for dabigatran, rivaroxaban, and warfarin were 19 (15), 19 (14), and 13 (12) days, respectively; using the data-driven approach, they were 20 (16), 19 (15), and 15 (16) days. Thus, the data-driven approach closely approximated the recorded days supplied value for standard-dose therapies such as dabigatran and rivaroxaban. For warfarin, the data-driven approach captured more variability in the duration of dispensations than the days supplied value, and may better reflect the true drug-taking behavior of warfarin users. However, this did not affect the hazard ratio estimates of the comparative safety of DOACs versus warfarin.
In the third study, we hypothesized that when assessing the effect of continuous treatment with oral anticoagulants (the per-protocol effect), censoring removes sicker patients from the DOAC group and healthier patients from the warfarin group, which may bias the estimates of comparative effectiveness and safety in favor of DOACs. The study showed that the mechanisms of censoring in the DOAC and warfarin exposure groups were indeed different: prognostically meaningful covariates, such as chronic renal failure and congestive heart failure, had opposite directions of association with the probability of censoring in the two groups. To correct the selection bias introduced by censoring, we applied inverse probability of censoring weighting. Two strategies for specifying the censoring-weight model were explored: exposure-unstratified and exposure-stratified. Exposure-unstratified censoring weights did not account for the differential censoring mechanisms across treatment groups and failed to eliminate the selection bias: the hazard ratio for continuous treatment with warfarin versus DOACs adjusted with exposure-unstratified weights was biased by 15% in favor of DOACs relative to the hazard ratio adjusted with exposure-stratified weights (hazard ratio: 1.41; 95% CI: 1.34, 1.48 versus hazard ratio: 1.26; 95% CI: 1.20, 1.33).
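[Editorial aside, not the study's code: a hedged sketch of exposure-stratified inverse probability of censoring weighting on simulated data with invented variable names. A full per-protocol analysis would use time-varying covariates and cumulative weight products; this is a one-interval simplification:]

```python
# Illustrative only: fit a separate censoring model per exposure group so that
# differential censoring mechanisms are captured; weight = 1 / P(uncensored).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 10_000
df = pd.DataFrame({
    "exposure": rng.choice(["doac", "warfarin"], n),
    "renal_failure": rng.binomial(1, 0.15, n),
    "heart_failure": rng.binomial(1, 0.20, n),
})
# Simulate differential censoring: comorbidity raises censoring under DOACs
# and lowers it under warfarin, mirroring the third study's finding.
lin = np.where(df["exposure"] == "doac", 1.0, -1.0) * (
    0.8 * df["renal_failure"] + 0.6 * df["heart_failure"]
) - 1.5
df["censored"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

covariates = ["renal_failure", "heart_failure"]

def ipcw(group: pd.DataFrame) -> pd.Series:
    """Censoring model within one exposure group."""
    X = sm.add_constant(group[covariates])
    p_cens = sm.Logit(group["censored"], X).fit(disp=0).predict(X)
    return 1.0 / (1.0 - p_cens)

df["weight"] = df.groupby("exposure", group_keys=False).apply(ipcw)
print(df.groupby("exposure")["weight"].describe())
```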
Overall, the findings of this thesis have important methodological implications for future pharmacoepidemiologic studies. In light of them, the results of previous observational studies can be reappraised and some of their heterogeneity explained. The findings may also be extrapolated to other clinical questions.
25. Effectiveness of Using Automatically Advanced vs. Manually Advanced Infographics in Health Awareness. Asefeh Kardgar (02 May 2024)
Infographics are increasingly used as visual communication tools for conveying health information to diverse audiences. However, research is lacking on how specific infographic design factors influence learning outcomes. This study aimed to determine the comparative effectiveness of automatically advanced (Group A) versus manually advanced (Group B) infographics for promoting breast cancer awareness and knowledge. A mixed-methods quasi-experimental pretest-posttest design was utilized. The analysis sample comprised 42 participants; 41 self-reported as female and one indicated their gender as 'other.' Participant ages ranged from 25 to 55 years (M = 40.5, SD = 7.62). Most participants were well-educated, with graduate degrees or other advanced education beyond a bachelor's degree. Participants were randomly assigned to either the automatically advanced infographic group (Group A) or the manually advanced infographic group (Group B). Results indicated that Group B had significantly higher scores on the knowledge post-test than Group A, suggesting improved recall and comprehension of key information. There were no significant differences in cognitive load ratings or viewing duration between the groups. Qualitative feedback suggested that the manually advanced infographic facilitated better self-pacing and absorption of content. While the study's findings provide preliminary evidence supporting the efficacy of manually advanced infographics for learning complex health information, limitations are acknowledged. The research contributes to the design of patient education materials and underscores the need for further investigations across varied populations and health topics to understand more fully how infographic design affects learning and behavior.
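[Editorial aside, not the thesis's analysis: a between-group comparison of post-test knowledge scores of this kind could be run as a Welch t-test. The scores below are simulated, assuming 21 participants per group:]

```python
# Illustrative only: Welch's t-test comparing post-test scores between the
# automatically advanced (A) and manually advanced (B) groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
scores_a = rng.normal(70, 10, 21)   # simulated post-test scores, Group A
scores_b = rng.normal(78, 10, 21)   # simulated post-test scores, Group B

t_stat, p_value = stats.ttest_ind(scores_b, scores_a, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```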
26. Medicare managed care: market penetration and the resulting health outcomes. Howard, Steven W. (07 December 2011)
Managed care plans purport to improve the health of their members with chronic diseases. How has the growing adoption of Medicare Advantage (MA), the managed care program for Medicare beneficiaries, affected the progression of chronic disease? The literature is rich with articles focusing on managed care organizations' impacts on quality of care, access, patient satisfaction, and costs. However, few studies have analyzed these impacts with respect to market penetration of Medicare managed care.
The objective of this research was to analyze the relationships between the market penetration of MA plans and the progression of chronic diseases among Medicare beneficiaries. The Chronic Disease Severity Index (CDSI) scale was constructed to represent beneficiaries' overall chronic disease states from survey or claims-based data when more direct clinical measures of disease progression are not available. Applying the CDSI to the Medical Expenditure Panel Survey (MEPS) dataset from AHRQ, we sought to assess the impacts of MA market penetration and other covariates on the overall chronic disease state of Medicare beneficiaries from 2004 through 2008.
We expected the multilevel model to show that MA penetration explains a significant share of the variation in CDSI change. Although the model explained much of that variation overall, this hypothesis was not substantiated, and the findings suggest that unmeasured factors may be contributing to additional unexplained heterogeneity.
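[Editorial aside, not the thesis's specification: a hedged sketch of a multilevel (random-intercept) model of CDSI change on MA penetration, with simulated data and invented variable names:]

```python
# Illustrative only: two-level model with a random intercept per market area.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2_000
df = pd.DataFrame({
    "market": rng.integers(0, 50, n),   # hypothetical market areas
    "age": rng.integers(65, 95, n),
})
df["ma_penetration"] = df["market"] / 50 * 0.4          # share enrolled in MA
df["cdsi_change"] = (
    0.05 * (df["age"] - 65) + rng.normal(0, 1, n)       # no MA effect simulated
    + rng.normal(0, 0.5, 50)[df["market"]]              # market-level intercepts
)

model = smf.mixedlm("cdsi_change ~ ma_penetration + age", data=df,
                    groups=df["market"])
print(model.fit().summary())
```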
Policymakers should explore opportunities to refine the current MA program. The MA program costs the federal government more than the Traditional Fee-for-Service Medicare program, and there is no definitive evidence that outcomes differ. Within both programs, there is opportunity to experiment with different models of payment, healthcare service delivery and care coordination.
The Patient Protection and Affordable Care Act (ACA) contains provisions for innovative demonstration projects in delivery and payment. The effectiveness of these ACA initiatives must be monitored, both for impacts on health outcomes and for economic effects. This research can inform future approaches to outcomes assessment using the CDSI and multilevel modeling methodologies similar to those employed here.
Firms offering MA health plans would be prudent to proactively demonstrate their value to beneficiaries and taxpayers. They should explore means of better monitoring and reporting the longitudinal outcomes of their enrolled beneficiaries. Demonstrating that they can bring value in terms of improved health outcomes will help ensure their long-term survival, both in the marketplace and in the political arena. / Graduation date: 2012