  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Extending the clinical and economic evaluations of a randomised controlled trial: the IONA study

Henderson, Neil James Kerr. January 2008 (has links)
Thesis (Ph.D.) - University of Glasgow, 2008. / Ph.D. thesis submitted to the Department of Statistics, Faculty of Information and Mathematical Sciences, University of Glasgow, 2008. Includes bibliographical references. Print version also available.
32

Bivariate survival time and censoring

Tsai, Wei-Yann. January 1982 (has links)
Thesis (Ph. D.)--University of Wisconsin--Madison, 1982. / Typescript. Vita. Description based on print version record. Includes bibliographical references (leaves 126-131).
33

On statistical surveillance: issues of optimality and medical applications

Sonesson, Christian. January 2003 (has links)
Thesis (doctoral)--Göteborgs universitet, 2003. / Includes bibliographical references.
34

A Bayesian approach to detect the onset of activity limitation among adults in NHIS

Bai, Yan. January 2005 (has links)
Thesis (M.S.) -- Worcester Polytechnic Institute. / Keywords: Change point; Gibbs sampler; Hierarchical Bayesian model; Reversible jump. Includes bibliographical references (p.79-80).
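The keywords above (change point, Gibbs sampler, hierarchical Bayesian model) can be illustrated with a much simpler sketch: a single-change-point model for binary outcomes whose posterior is computed by direct enumeration rather than the thesis's Gibbs or reversible-jump samplers. The data, priors and rates below are illustrative assumptions, not the NHIS data.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(2)

# Simulated binary indicators with a rate change at t = 30
# (rates 0.1 and 0.5 are illustrative assumptions).
y = np.concatenate([rng.binomial(1, 0.1, 30), rng.binomial(1, 0.5, 30)])
n = len(y)

def log_beta(a, b):
    # log of the Beta function via log-gamma.
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marglik(seg):
    # Marginal likelihood of a segment under a Beta(1, 1) prior
    # on its success probability.
    s = int(seg.sum())
    return log_beta(1 + s, 1 + len(seg) - s) - log_beta(1, 1)

# Uniform prior over change points k = 1..n-1; posterior by enumeration.
logpost = np.array([log_marglik(y[:k]) + log_marglik(y[k:])
                    for k in range(1, n)])
post = np.exp(logpost - logpost.max())
post /= post.sum()
map_k = int(np.argmax(post)) + 1
print(map_k)  # MAP change point; should land near the true value of 30
```

With larger models (multiple segments, covariates, hierarchical priors) enumeration becomes infeasible, which is where the Gibbs sampler and reversible-jump moves used in the thesis come in.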
35

Estimation of Standardized Mortality Ratio in Geographic Epidemiology

Kettermann, Anna January 2004 (has links) (PDF)
No description available.
36

Bioequivalence tests based on individual estimates using non-compartmental or model-based analysis

Makulube, Mzamo January 2019 (has links)
A research report submitted in partial fulfilment of the Mathematical Statistics Masters by Coursework and Research Report to the Faculty of Science, University of the Witwatersrand, Johannesburg, 2019 / The growing demand for generic drugs has led to growth in the generic drug industry and, as a result, a growing demand for bioequivalence studies. A central challenge in bioequivalence studies is the method used to quantify bioavailability. Bioavailability is commonly estimated by the area under the concentration-time curve (AUC), which is traditionally estimated by Non-Compartmental Analysis (NCA), such as the trapezoidal rule with interpolation. However, when the number of samples per subject is insufficient, the NCA estimates may be biased, which can lead to incorrect conclusions about bioequivalence. Alternatively, AUC can be estimated by a Non-Linear Mixed Effect Model (NLMEM). The objective of this study is to compare bioequivalence conclusions based on lnAUC estimated by the NCA approach with those based on lnAUC estimated by the NLMEM approach. The NCA and NLMEM approaches are compared on the resulting bias when a linear mixed effect model is used to analyse the lnAUC data estimated by each method. The methods are evaluated on simulated and real data. 2×2 crossover designs of different sample sizes and sampling-time intensities are simulated under two null hypotheses. In each crossover design, concentration profiles are simulated with different levels of between-subject variability, within-subject variability and residual error variance. A higher bias is obtained with the lnAUC estimated by the NCA approach for trials with a limited number of samples per subject. The NCA estimates provide satisfactory global Type I error results. The NLMEM fails to distinguish existing formulation differences when the residual variability is high. / TL (2020)
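The non-compartmental AUC estimate described in the abstract can be sketched with the linear trapezoidal rule; the concentration-time profile below is an illustrative assumption, not data from the study.

```python
import numpy as np

# Hypothetical concentration-time profile for one subject
# (time in hours, concentration in ng/mL); values are illustrative only.
times = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
conc = np.array([0.0, 12.1, 18.4, 15.2, 9.8, 4.1, 1.6])

# Non-compartmental AUC estimate by the linear trapezoidal rule:
# sum over intervals of (width) * (mean of the two endpoint concentrations).
auc = float(np.sum(np.diff(times) * (conc[:-1] + conc[1:]) / 2.0))
print(auc)  # ≈ 91.65
```

When sampling times are sparse, the trapezoids approximate the true curve poorly, which is the source of the NCA bias the study investigates; the NLMEM alternative instead integrates a fitted concentration model.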
37

Bayesian networks for evidence based clinical decision support

Yet, Barbaros January 2013 (has links)
Evidence based medicine (EBM) is defined as the use of the best available evidence for decision making, and it has been the predominant paradigm in clinical decision making for the last 20 years. EBM requires evidence from multiple sources to be combined, as published results may not be directly applicable to individual patients. For example, randomised controlled trials (RCTs) often exclude patients with comorbidities, so a clinician has to combine the results of the RCT with evidence about comorbidities using his clinical knowledge of how disease, treatment and comorbidities interact with each other. Bayesian networks (BNs) are well suited for assisting clinicians in making evidence-based decisions as they can combine knowledge, data and other sources of evidence. The graphical structure of a BN is suitable for representing knowledge about the mechanisms linking diseases, treatments and comorbidities, and the strength of relations in this structure can be learned from data and published results. However, there is still a lack of techniques that systematically use knowledge, data and published results together to build BNs. This thesis advances techniques for using knowledge, data and published results to develop and refine BNs for assisting clinical decision-making. In particular, the thesis presents four novel contributions. First, it proposes a method of combining knowledge and data to build BNs that reason in a way that is consistent with knowledge and data by allowing the BN model to include variables that cannot be measured directly. Second, it proposes techniques to build BNs that provide decision support by combining the evidence from meta-analysis of published studies with clinical knowledge and data. Third, it presents an evidence framework that supplements clinical BNs by representing the description and source of medical evidence supporting each element of a BN.
Fourth, it proposes a knowledge engineering method for abstracting a BN structure by showing how each abstraction operation changes knowledge encoded in the structure. These novel techniques are illustrated by a clinical case-study in trauma-care. The aim of the case-study is to provide decision support in treatment of mangled extremities by using clinical expertise, data and published evidence about the subject. The case study is done in collaboration with the trauma unit of the Royal London Hospital.
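As a minimal illustration of the kind of probabilistic reasoning a clinical BN performs, the two-node sketch below (Disease → Test) computes the posterior probability of disease after a positive test. All probabilities are illustrative assumptions, not figures from the thesis or the trauma case study.

```python
# Two-node clinical Bayesian network: Disease -> Test.
# All probabilities below are illustrative assumptions.
p_disease = 0.01          # prior prevalence P(D)
sens = 0.95               # P(T+ | D), test sensitivity
false_pos = 0.05          # P(T+ | not D), false-positive rate

# Marginal probability of a positive test, then Bayes' rule for P(D | T+).
p_pos = sens * p_disease + false_pos * (1 - p_disease)
p_disease_given_pos = sens * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # 0.161
```

Even with 95% sensitivity, the low prior keeps the posterior at about 16%; combining many such pieces of evidence consistently, at scale and with learned parameters, is what the thesis's techniques automate.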
38

Statistical Learning Methods for Personalized Medical Decision Making

Liu, Ying January 2016 (has links)
The theme of my dissertation is merging statistical modeling with medical domain knowledge and machine learning algorithms to assist in making personalized medical decisions. In its simplest form, making personalized medical decisions for treatment choices and disease diagnosis modality choices can be transformed into classification or prediction problems in machine learning, where the optimal decision for an individual is a decision rule that yields the best future clinical outcome or maximizes diagnosis accuracy. However, challenges emerge when analyzing complex medical data. On one hand, statistical modeling is needed to deal with inherent practical complications such as missing data, patients' loss to follow-up, and ethical and resource constraints in randomized controlled clinical trials. On the other hand, new data types and larger scales of data call for innovations combining statistical modeling, domain knowledge and information technologies. This dissertation contains three parts addressing the estimation of an optimal personalized rule for choosing treatment, the estimation of an optimal individualized rule for choosing disease diagnosis modality, and methods for variable selection in the presence of missing data. In the first part of this dissertation, we propose a method to find optimal dynamic treatment regimens (DTRs) in Sequential Multiple Assignment Randomized Trial (SMART) data. DTRs are sequential decision rules tailored at each stage of treatment by potentially time-varying patient features and intermediate outcomes observed in previous stages. The complexity, patient heterogeneity, and chronicity of many diseases and disorders call for learning optimal DTRs that best dynamically tailor treatment to each individual's response over time. We propose a robust and efficient approach referred to as Augmented Multistage Outcome-Weighted Learning (AMOL) to identify optimal DTRs from sequential multiple assignment randomized trials.
We improve outcome-weighted learning (Zhao et al. 2012) to allow for negative outcomes; we propose methods to reduce variability of weights to achieve numeric stability and higher efficiency; and finally, for multiple-stage trials, we introduce robust augmentation to improve efficiency by drawing information from Q-function regression models at each stage. The proposed AMOL remains valid even if the regression model is misspecified. We formally justify that proper choice of augmentation guarantees smaller stochastic errors in value function estimation for AMOL; we then establish the convergence rates for AMOL. The comparative advantage of AMOL over existing methods is demonstrated in extensive simulation studies and applications to two SMART data sets: a two-stage trial for attention deficit hyperactivity disorder and the STAR*D trial for major depressive disorder. The second part of the dissertation introduces a machine learning algorithm to estimate personalized decision rules for medical diagnosis/screening that maximize a weighted combination of sensitivity and specificity. Using subject-specific risk factors and feature variables, such rules administer screening tests with balanced sensitivity and specificity, and thus protect low-risk subjects from unnecessary pain and stress caused by false positive tests, while achieving high sensitivity for subjects at high risk. We conducted a simulation study mimicking a real breast cancer study and found significant improvements in sensitivity and specificity when comparing our personalized screening strategy (assigning mammography+MRI to high-risk patients and mammography alone to low-risk subjects based on a composite score of their risk factors) to a one-size-fits-all strategy (assigning mammography+MRI or mammography alone to all subjects).
When applied to Parkinson's disease (PD) FDG-PET and fMRI data, we show that the method provides individualized modality selection that can improve AUC, and that it can provide interpretable decision rules for choosing a brain imaging modality for early detection of PD. To the best of our knowledge, this is the first proposal in the literature of automatic, data-driven learning algorithms for a personalized diagnosis/screening strategy. In the last part of the dissertation, we propose a method, Multiple Imputation Random Lasso (MIRL), to select important variables and to predict the outcome for an epidemiological study of Eating and Activity in Teens. In this study, 80% of individuals have at least one variable missing. Therefore, using variable selection methods developed for complete data after list-wise deletion substantially reduces prediction power. Recent work on prediction models in the presence of incomplete data cannot adequately account for large numbers of variables with arbitrary missing patterns. We propose MIRL to combine penalized regression techniques with multiple imputation and stability selection. Extensive simulation studies are conducted to compare MIRL with several alternatives. MIRL outperforms other methods in high-dimensional scenarios in terms of both reduced prediction error and improved variable selection performance, and it has a greater advantage when the correlation among variables is high and the missing proportion is high. MIRL shows improved performance compared with other applicable methods when applied to the study of Eating and Activity in Teens for the boys and girls separately, and to a subgroup of low socioeconomic status (SES) Asian boys who are at high risk of developing obesity.
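The weighted sensitivity/specificity objective described above can be illustrated with a toy threshold search on a simulated composite risk score. This is a sketch of the objective only, not the dissertation's learning algorithm; the data, score model and weight `w` are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated cohort: binary disease labels and a composite risk score
# that is higher on average for diseased subjects (values illustrative).
n = 1000
labels = rng.binomial(1, 0.2, size=n)             # 1 = diseased
scores = rng.normal(loc=labels * 1.5, scale=1.0)  # composite risk score

w = 0.6  # illustrative weight favouring sensitivity over specificity
best_t, best_val = None, -np.inf
for t in np.unique(scores):
    pred = scores >= t                            # screen subjects above t
    sens = np.mean(pred[labels == 1])             # true positive rate
    spec = np.mean(~pred[labels == 0])            # true negative rate
    val = w * sens + (1 - w) * spec               # weighted objective
    if val > best_val:
        best_t, best_val = t, val

print(best_t, best_val)
```

The dissertation's method learns such rules from subject-specific features rather than searching a single threshold, but the trade-off being optimized is the same weighted combination.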
39

Globalisation and commercialisation of healthcare services : with reference to the United States and United Kingdom

Drymoussis, Michael January 2014 (has links)
The thesis seeks to interrogate historically the relationship between multinational healthcare service companies and states in the pursuit of market-oriented reforms for healthcare. It constitutes a critical reading of the idea of globalisation as a concept with substantive explanatory value to analyse the causal role of multinational service firms in a commercial transformation in national healthcare service sectors. It analyses the development and expansion of commercial (for-profit) healthcare service provision and financing in the healthcare systems of OECD countries. The hospital and health insurance sectors in the US and UK are analysed as case studies towards developing this critical reading from a more specific national setting. The thesis contributes to developing a framework for analysing the emergence of an international market for trade in healthcare services, which is a recently emerging area of research in the social sciences. As such, it uses an interdisciplinary approach, utilising insights from health policy and international political economy. The research entails a longitudinal study of secondary and primary sources of qualitative data broadly covering the period 1975-2005. I have also made extensive use of quantitative data to illustrate key economic trends that are relevant to the changes in the particular healthcare services sectors analysed. The research finds a substantive shift in the mixed economy of healthcare in which commercial healthcare service provision and financing are increasing. However, while the internationalisation of healthcare service firms is a key element in helping to drive some of this change, the changes are ultimately highly dependent on state-level decision making and regulation. 
In this context, the thesis argues that globalisation presents an inadequate and potentially misleading conceptual framework for analysing these changes without a historical grounding in the particular developments of national and international markets for healthcare services.
40

Modelling longitudinal binary disease outcome data including the effect of covariates and extra variability.

Ngcobo, Siyabonga. January 2011 (has links)
The current work deals with modelling longitudinal or repeated non-Gaussian measurements for a respiratory disease. The analysis of longitudinal non-Gaussian binary disease outcome data can broadly be approached with three types of model: marginal, random effects and transition models. A marginal model is used if one is interested in estimating population-averaged effects, such as whether a treatment works on an average individual. On the other hand, random effects models are important if, apart from measuring population-averaged effects, a researcher is also interested in subject-specific effects; in this case, to obtain marginal effects from the subject-specific model, we integrate out the random effects. Transition models are, more generally, also called conditional models. Thus all three types of model are important in understanding the effects of covariates, disease progression and the distribution of outcomes in a population. In the current work the three models have been researched and fitted to data. The random effects or subject-specific model is further modified to relax the assumption that the random effects be strictly normal. This leads to the so-called hierarchical generalized linear model (HGLM), based on the h-likelihood formulation suggested by Lee and Nelder (1996). The marginal model was fitted using generalized estimating equations (GEE) with PROC GENMOD in SAS. The random effects model (a generalized linear mixed model) was fitted using PROC GLIMMIX and PROC NLMIXED in SAS; the latter approach was found to be more flexible, except for the need to specify initial parameter values. The transition model was used to capture the dependence between outcomes, in particular the dependence of the current response on the previous response, and was fitted using PROC GENMOD. The HGLM was fitted using the GENSTAT software.
Longitudinal disease outcome data can provide real and reliable data to model disease progression, in the sense that they can be used to estimate important disease parameters such as prevalence, incidence and others, such as the force of infection. Problems associated with longitudinal data include loss of information due to loss to follow-up, such as dropout, and missing data in general. In some cases cross-sectional data can be used to find the required estimates, but longitudinal data are more efficient, though they may require more time, effort and cost to collect. However, the successful estimation of a given parameter or function depends on the availability of the relevant data; it is sometimes impossible to estimate a parameter of interest if the data cannot support its estimation. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2011.
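The transition-model idea described above (the current binary response regressed on the previous response) can be sketched in a self-contained way. The simulation and Newton-Raphson logistic fit below are an illustrative stand-in for the PROC GENMOD analysis; all parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# First-order transition model for a binary disease outcome:
# logit P(Y_t = 1 | Y_{t-1}) = b0 + b1 * Y_{t-1}.  Values illustrative.
b0_true, b1_true = -1.0, 2.0
n_subjects, n_times = 500, 6

prev = rng.binomial(1, 0.3, size=n_subjects)  # baseline outcomes
X_list, y_list = [], []
for _ in range(n_times - 1):
    p = 1.0 / (1.0 + np.exp(-(b0_true + b1_true * prev)))
    cur = rng.binomial(1, p)                  # next outcome given previous
    X_list.append(prev.copy())
    y_list.append(cur)
    prev = cur
x = np.concatenate(X_list).astype(float)
y = np.concatenate(y_list).astype(float)

# Fit the logistic transition model by Newton-Raphson.
Z = np.column_stack([np.ones_like(x), x])     # intercept + previous outcome
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-Z @ beta))
    grad = Z.T @ (y - p)                      # score vector
    W = p * (1 - p)
    H = Z.T @ (Z * W[:, None])                # observed information
    beta = beta + np.linalg.solve(H, grad)

print(beta)  # estimates should be near the true values (-1.0, 2.0)
```

A positive fitted coefficient on the previous outcome is exactly the serial dependence the transition model is designed to capture.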
