221

Impact of pay for performance on inequalities in diabetes management in primary care

Millett, Christopher Joseph January 2008 (has links)
Background: A new contract for general practitioners in the United Kingdom represents the most radical shift towards pay for performance seen in any health care system. The contract provides an important opportunity to address inequalities in chronic disease management. This thesis examines the impact of this pay for performance incentive on inequalities in the management of diabetes between ethnic groups. Methods: (1) Population-based longitudinal survey using electronic general practice records, carried out in Wandsworth, south London, before and after the introduction of pay for performance; (2) Secondary analysis of data from the Health Survey for England (1998-2004). Results: The proportion of patients achieving treatment targets for diabetes increased significantly after the implementation of pay for performance. The extent of these increases was broadly uniform across ethnic groups, with the exception of the black Caribbean group, which had significantly lower improvements in HbA1c and blood pressure control relative to the white British group. Variations in prescribing and achievement of treatment targets between ethnic groups evident in 2003 were not attenuated in 2005. Processes of care for diabetes were generally equitable before the introduction of pay for performance. Conclusions: Pay for performance has not addressed inequalities in the management of diabetes between ethnic groups. Quality improvement initiatives must place greater emphasis on minority communities to avoid continued inequalities in morbidity and mortality from the major complications of diabetes.
222

Bayesian statistical methods for genetic association studies with case-control and cohort design

Tachmazidou, Ioanna January 2008 (has links)
Large-scale genetic association studies are carried out with the hope of discovering single nucleotide polymorphisms involved in the etiology of complex diseases. We propose a coalescent-based model for association mapping which potentially increases the power to detect disease-susceptibility variants in genetic association studies with case-control and cohort designs. The approach uses Bayesian partition modelling to cluster haplotypes with similar disease risks by exploiting evolutionary information. We focus on candidate gene regions, splitting the chromosomal region of interest into sub-regions or windows of high linkage disequilibrium (LD) within which a perfect phylogeny is assumed. The haplotype space is then partitioned into disjoint clusters, within which the phenotype-haplotype association is assumed to be the same. The novelty of our approach is that the distance used for clustering haplotypes has an evolutionary interpretation: haplotypes are clustered according to the time to their most recent common mutation. Our approach is fully Bayesian and we develop Markov Chain Monte Carlo algorithms to sample efficiently over the space of possible partitions. We have also developed a Bayesian survival regression model for high-dimensional, small-sample-size settings. We provide a Bayesian variable selection procedure and shrinkage tool by imposing shrinkage priors on the regression coefficients, and a computationally efficient optimization algorithm to explore the posterior surface and find the maximum a posteriori estimates of the regression coefficients. We compare the performance of the proposed methods, in simulation studies and on real datasets, with both single-marker analyses and recently proposed multi-marker methods, and show that our methods perform similarly in localizing the causal allele while yielding lower false positive rates. Moreover, our methods offer computational advantages over other multi-marker approaches.
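As a concrete illustration of the shrinkage idea described in the abstract, the sketch below finds maximum a posteriori coefficients for an exponential survival regression with a Laplace (double-exponential) shrinkage prior in a high-dimension, small-sample setting. It is a minimal sketch under assumed data and an assumed prior scale `lam`, not the thesis's actual model or algorithm.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n, p = 50, 200                                     # small sample, high dimension
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:3] = [1.0, -1.0, 0.5]                   # only three real effects
    t = rng.exponential(scale=np.exp(-X @ beta_true))  # survival times
    event = rng.random(n) < 0.8                        # ~20% of observations censored

    lam = 5.0   # prior scale (assumed): larger means stronger shrinkage

    def neg_log_posterior(beta):
        eta = X @ beta                                 # log hazard per subject
        # exponential model: events contribute the density, censored
        # observations contribute only the survival function
        loglik = np.sum(event * eta - t * np.exp(eta))
        logprior = -lam * np.sum(np.abs(beta))         # Laplace shrinkage prior
        return -(loglik + logprior)

    fit = minimize(neg_log_posterior, np.zeros(p), method="L-BFGS-B")
    print("coefficients kept:", np.where(np.abs(fit.x) > 0.1)[0])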
223

Bayesian methods for modelling non-random missing data mechanisms in longitudinal studies

Mason, Alexina Jane January 2009 (has links)
In longitudinal studies, data are collected on a group of individuals over a period of time, and inevitably this data will contain missing values. Assuming that this missingness follows convenient 'random-like' patterns may not be realistic, so there is much interest in methods for analysing incomplete longitudinal data which allow the incorporation of more realistic assumptions about the missing data mechanism. We explore the use of Bayesian full probability modelling in this context, which involves the specification of a joint model including a model for the question of interest and a model for the missing data mechanism. Using simulated data with missing outcomes generated by an informative missingness mechanism, we start by investigating the circumstances and the extent to which Bayesian methods can improve parameter estimates and model fit compared to complete-case analysis. This includes examining the impact of misspecifying different parts of the model. With real datasets, when the form of the missingness is unknown, a diagnostic that indicates the amount of information in the missing data given our model assumptions would be useful. pD is a measure of the dimensionality of a Bayesian model, and we explore its use and limitations for this purpose. Bayesian full probability modelling is then used in more complex settings, using real examples of longitudinal data taken from the British birth cohort studies and a clinical trial, some of which have missing covariates. We look at ways of incorporating information from additional sources into our models to help parameter estimation, including data from other studies and knowledge elicited from an expert. Additionally, we assess the sensitivity of the conclusions regarding the question of interest to varying the assumptions in different parts of the joint model, explore ways of presenting this information, and outline a strategy for Bayesian modelling of non-ignorable missing data.
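To see why the 'random-like' assumption matters, here is a minimal simulation (an invented illustration, not an example from the thesis) in which the probability of missingness depends on the unobserved outcome itself, i.e. the missingness model f(m | y, phi) in the selection-model factorisation f(y, m | theta, phi) = f(y | theta) f(m | y, phi) that underlies the joint-modelling approach. A complete-case analysis is visibly biased:

    import numpy as np

    rng = np.random.default_rng(0)
    y = rng.normal(loc=10.0, scale=2.0, size=100_000)   # true mean is 10

    # informative missingness: larger outcomes are more likely to be missing
    p_missing = 1 / (1 + np.exp(-(y - 10)))
    observed = rng.random(y.size) > p_missing

    print("true mean:         ", round(y.mean(), 3))
    print("complete-case mean:", round(y[observed].mean(), 3))  # biased low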
224

Stochastic analysis of nonlinear dynamics and feedback control for gene regulatory networks with applications to synthetic biology

Strelkowa, Natalja January 2011 (has links)
The focus of the thesis is the investigation of the generalized repressilator model (repressing genes ordered in a ring structure). Using nonlinear bifurcation analysis, stable and quasi-stable periodic orbits in this genetic network are characterized, and a design for a switchable and controllable genetic oscillator is proposed. The oscillator operates around a quasi-stable periodic orbit using the classical engineering idea of read-out based control. Previous genetic oscillators have been designed around stable periodic orbits; here, however, we explore the possibility of a quasi-stable periodic orbit, expecting better controllability. The ring topology of the generalized repressilator model has spatio-temporal symmetries that can be understood as propagating perturbations in discrete lattices. Network topology is a universal, cross-discipline transferable concept, and based on it analytical conditions for the emergence of stable and quasi-stable periodic orbits are derived, along with the length and distribution of the quasi-stable oscillations. The findings suggest that long-lived transient dynamics due to feedback loops can dominate gene network dynamics. Taking the stochastic nature of gene expression into account, a master equation for the generalized repressilator is derived. The stochasticity is shown to influence the onset of bifurcations and the quality of oscillations, and internal noise is shown to have an overall stabilizing effect on the oscillating transients emerging from the quasi-stable periodic orbits. The insights from the read-out based control scheme for the genetic oscillator lead us to the idea of implementing an algorithmic controller that would direct any genetic circuit to a desired state. The algorithm operates model-free, i.e. in principle it is applicable to any genetic network, and its input is a data matrix of measured time series from the network dynamics. The application areas for read-out based control in genetic networks range from classical tissue engineering to stem cell specification, whenever a quantitatively and temporally targeted intervention is required.
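A minimal deterministic sketch of a generalized repressilator — n genes in a ring, each repressed by its predecessor through a Hill function — is given below. The parameter values and the protein-only ODE form are illustrative assumptions; the master-equation and control analyses of the thesis are not reproduced here.

    import numpy as np
    from scipy.integrate import odeint

    n = 5                               # ring size; odd rings yield stable oscillations
    alpha, h, gamma = 50.0, 2.0, 1.0    # production, Hill coefficient, decay (assumed)

    def repressilator(x, t):
        # gene i is repressed by the protein of gene i-1 around the ring
        repressor = np.roll(x, 1)
        return alpha / (1.0 + repressor**h) - gamma * x

    t = np.linspace(0, 60, 3000)
    x0 = 1.0 + 0.1 * np.arange(n)       # small asymmetry to leave the symmetric fixed point
    traj = odeint(repressilator, x0, t)
    late = traj[len(t) // 2:, 0]        # protein 0 after transients have settled
    print("protein 0 oscillates between", round(late.min(), 2),
          "and", round(late.max(), 2))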
225

Development and use of methods to estimate chronic disease prevalence in small populations

Soljak, Michael January 2011 (has links)
Introduction: National data on the prevalence of chronic diseases on general practice registers are now available. The aim of this PhD was to develop and validate epidemiological models for the expected prevalence of chronic obstructive pulmonary disease (COPD), coronary heart disease (CHD), stroke, hypertension, overall cardiovascular disease (CVD) and high CVD risk at general practice and small area level, and to explore the extent of undiagnosed disease, factors associated with it, and its impact on population health. Methods: Multinomial logistic regression models were fitted to pooled Health Survey for England data to derive odds ratios for disease risk factors. These were applied to general practice and small area level population data, split by age, sex, ethnicity, deprivation, rurality and smoking status, to estimate expected disease prevalence at these levels. Validation was carried out using external data, including population-based epidemiological research and case-finding initiatives. Practice-level undiagnosed disease prevalence (i.e. expected minus registered prevalence) and hospital admission rates for these conditions were evaluated as outcome indicators of the quality and supply of primary health care services, using ordinary least squares (OLS) regression, geographically-weighted regression (GWR) and other spatial analytic methods. Results: Risk factors, odds of disease and expected prevalence were consistent with external data sources. Spatial analysis showed strong evidence of spatial non-stationarity of undiagnosed disease prevalence, with high levels of undiagnosed disease in London and other conurbations, and associations with low supply of primary health care services. Higher hospital admission rates were associated with population deprivation, poorer quality and supply of primary health care services, poorer access to them and, for COPD, higher levels of undiagnosed disease. Conclusion: The epidemiological prevalence models have been implemented in national data sources such as NHS Comparators, the Association of Public Health Observatories website, and a number of national reports. Early experience suggests that they are useful for guiding case-finding at practice level and for improving and regulating the quality of primary health care. Comparisons with external data, in particular the prevalence of disease detected by general practices, suggest that the model predictions are valid. Practice-level spatial analyses of undiagnosed disease prevalence and hospital admission rates failed to demonstrate the superiority of GWR over OLS methods. Disease modellers should be encouraged to collaborate more effectively, and to validate and compare modelling methods using an agreed framework. National leadership is needed to further develop and implement disease models. It is likely that prevalence models will prove most useful for identifying undiagnosed diseases with a slow and insidious onset, such as COPD, diabetes and hypertension.
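The core estimation step — fit a regression to individual-level survey data, then apply the fitted risks to local population counts stratified by the same covariates — can be sketched as below. The synthetic survey, the covariates and the use of a plain binary logistic model (rather than the multinomial models of the thesis) are all assumptions for illustration:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)

    # synthetic survey: age band (0-3), deprivation quintile (0-4), smoker flag
    n = 20_000
    survey = np.column_stack([rng.integers(0, 4, n),
                              rng.integers(0, 5, n),
                              rng.integers(0, 2, n)])
    logit = -4 + 0.8 * survey[:, 0] + 0.3 * survey[:, 1] + 0.7 * survey[:, 2]
    disease = rng.random(n) < 1 / (1 + np.exp(-logit))

    model = LogisticRegression().fit(survey, disease)

    # apply the fitted risks to one practice's population, stratified the same way
    strata = np.array([[3, 4, 1], [3, 4, 0], [1, 1, 0]])   # covariate patterns
    counts = np.array([120, 300, 900])                     # people in each stratum
    expected = (model.predict_proba(strata)[:, 1] * counts).sum()
    registered = 95                                        # from the practice register
    print("expected cases:", round(expected, 1),
          "| undiagnosed:", round(expected - registered, 1))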
226

Application of Bayesian networks to problems within obesity epidemiology

Harding, Nicholas John January 2011 (has links)
Obesity is a significant public health problem in the United Kingdom and many other parts of the world, including some low-income settings. Although obesity prevalence has been rising for several decades, governments have been slow to implement policies that may have an impact at a population level. Numerous socio-demographic factors have been linked with obesity, but they are highly intercorrelated, and identifying relevant factors or at-risk population groups is difficult. This thesis uses a graphical modelling approach, specifically Bayesian networks, to model the joint distribution of socio-demographic factors and obesity-related behaviour. The key advantages of graphical models in this context are their ability to model highly correlated data and to represent complex relationships efficiently as network structure. Three separate pieces of work comprise this thesis. The first uses a sampling technique to identify the networks that best explain the observed data, and employs the common structural features of these networks to infer conditional dependencies present between socio-demographic variables and obesity-related behaviour indicators. We find that the determinants of recreational physical activity differ between males and females, and that age and ethnicity have a significant influence on snacking behaviour. The second piece of work uses Bayesian networks to build a model of health behaviour given socio-demographic input, and then applies this to data from the 2001 census in order to provide an estimate of the health behaviour of a real population. The final analysis uses Bayesian network structure to explore potential determinants of body fat deposition patterns and compares the results to those derived from a Generalized Linear Model (GLM). Our approach successfully identifies the main determinants, age and Body Mass Index, although it is not a genuine alternative to the GLM owing to its lack of sensitivity to less important determinants. Beyond the application to obesity, the results of this thesis are of wider relevance to epidemiology as the field moves towards an increased use of Machine Learning techniques. The work conducted has also met and overcome several technical issues that are likely to be of relevance to others exploring similar approaches.
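One building block of this kind of structure learning — scoring how well a candidate parent set explains a discrete variable — can be illustrated with a BIC score, as in the sketch below. The variables and data are synthetic, and exhaustively scoring the parent sets of a single node is a deliberate simplification of sampling over whole network structures:

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(3)
    n = 5_000
    data = {"age": rng.integers(0, 3, n),
            "sex": rng.integers(0, 2, n),
            "eth": rng.integers(0, 4, n)}
    # in this synthetic example, snacking depends on age and ethnicity only
    p_snack = 0.2 + 0.15 * data["age"] / 2 + 0.1 * data["eth"] / 3
    data["snack"] = (rng.random(n) < p_snack).astype(int)

    def bic(child, parents):
        """Log-likelihood of P(child | parents) minus a complexity penalty."""
        # encode each joint parent configuration as a single integer
        config = np.zeros(n, dtype=int)
        for c in (data[p] for p in parents):
            config = config * (c.max() + 1) + c
        ll, n_params = 0.0, 0
        for cfg in np.unique(config):
            counts = np.bincount(data[child][config == cfg], minlength=2)
            probs = counts / counts.sum()
            ll += (counts[counts > 0] * np.log(probs[probs > 0])).sum()
            n_params += len(counts) - 1
        return ll - 0.5 * n_params * np.log(n)

    candidates = ["age", "sex", "eth"]
    scored = {ps: bic("snack", ps)
              for k in range(1, 4) for ps in combinations(candidates, k)}
    print(max(scored, key=scored.get))   # likely ('age', 'eth') here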
227

Data mining and decision support in pharmaceutical databases

Pasupa, Kitsuchart January 2007 (has links)
This thesis lies in an area of chemoinformatics known as virtual screening (VS). VS describes a set of computational methods that provide a fast and cheap alternative to biological screening, which involves the selection, synthesis and testing of molecules to ascertain their biological activity in a particular domain, e.g. pain relief or reduction of inflammation. This is important because reducing the cost and, crucially, the time spent in the early stages of compound development can have a disproportionate benefit on profitability in a cycle that has a short patent lifetime. Machine learning methods are becoming popular in this domain, but problems arise when 2D fingerprints are used as descriptors, since fingerprints are an extremely sparse, binary-valued representation of molecules. Furthermore, VS also suffers strongly from the so-called "small-sample-size" problem, where the number of covariates is comparable to or exceeds the number of samples. These problems can be addressed by developing machine learning algorithms that can handle very large sets of high-dimensional data. Such high-dimensional data contain an unprecedented level of complexity, so some form of complexity control is necessary; alternatively, a suitable dimensionality reduction method can be used. This thesis consists of four major pieces of work, all conducted with the MDL Drug Data Report (MDDR) database: (i) development of binary kernel discrimination (BKD); (ii) the introduction of a new algorithm in the kernel machine family, the so-called "parsimonious kernel Fisher discrimination", which is then applied to VS tasks; (iii) prediction by posterior estimation in VS; and (iv) a comparison of four variants of principal component analysis with potential in VS. The experiments show that BKD in conjunction with the Jaccard/Tanimoto coefficient is the best method, while the other approaches are less accurate than BKD but still comparable in a number of cases.
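The Jaccard/Tanimoto coefficient on binary fingerprints, and a kernel-sum score in the spirit of BKD, look roughly like the sketch below. The fingerprints are random stand-ins, and using the raw Tanimoto similarity as the kernel (rather than the binomial kernel of BKD proper) is a deliberate simplification:

    import numpy as np

    rng = np.random.default_rng(4)

    def tanimoto(a, b):
        """Jaccard/Tanimoto similarity between two binary fingerprints."""
        either = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / either if either else 0.0

    # random 1024-bit fingerprints standing in for known actives/inactives
    actives = rng.random((20, 1024)) < 0.1
    inactives = rng.random((200, 1024)) < 0.1
    query = rng.random(1024) < 0.1

    # BKD-style score: kernel evidence from actives relative to inactives;
    # candidate molecules would be ranked by this score, highest first
    num = sum(tanimoto(query, x) for x in actives) / len(actives)
    den = sum(tanimoto(query, x) for x in inactives) / len(inactives)
    print("score:", round(num / den, 3))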
228

Classification of medical images with small data sets

Trakas, Joannis January 2009 (has links)
No description available.
229

Efficient Computation of Value of Information

Brennan, Alan January 2007 (has links)
This thesis is concerned with the computation of expected value of information (EVI). The topic is important because EVI methodology is a rational, coherent framework for prioritising and planning the design of biomedical and clinical research studies, which represent an enormous expenditure world-wide. At the start of my research few studies existed. During the course of the PhD, my own work and that of other colleagues has been published and the uptake of the developing methods is increasing. The thesis contains a review of the early literature as well as of the emerging studies over the 5 years since my first work was done in 2002 (Chapter 2). Methods to compute partial expected value of perfect information are developed and tested in illustrative cost-utility decision models with non-linear net benefit functions and correlated parameters. Evaluation using nested Monte Carlo simulations is investigated and the number of inner and outer simulations required is explored (Chapter 3). The computation of expected value of sample information (EVSI) using nested Monte Carlo simulations, combined with Bayesian updating of model parameters with conjugate distributions given simulated data, is examined (Chapter 4). In Chapter 5, a novel Bayesian approximation for posterior expectations is developed, and this is applied and tested in the computation of EVSI for an illustrative model, again with normally distributed parameters. The application is further extended to a non-conjugate proportional hazards Weibull distribution, a common circumstance for clinical trials concerned with survival or time-to-event data (Chapter 6). The application of the Bayesian approximation in the Weibull model is then tested against four other methods for estimating the Bayesian-updated Weibull parameters, including the computationally intensive Markov Chain Monte Carlo (MCMC) approach, which could be considered the gold standard (Chapter 7). The result of the methodological developments in this thesis, and of their testing on case studies, is that some new approaches to computing EVI are now available. In many models this will improve the efficiency of computation, making EVI calculations possible in some previously infeasible circumstances. In Chapter 8, I summarise the achievements made in this work, how they relate to the work of other scholars, and the research agenda which still faces us. I conclude with the firm hope that EVI methods will begin to provide decision makers with clearer support when deciding on investments in further research.
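For readers unfamiliar with the two-level (nested) Monte Carlo scheme, the sketch below computes partial EVPI for one parameter of a toy two-treatment net-benefit model. The model, its distributions and the simulation sizes are invented for illustration and are not the case studies of the thesis:

    import numpy as np

    rng = np.random.default_rng(5)

    def net_benefit(theta1, theta2):
        """Toy net benefits of two treatments given two uncertain parameters."""
        return np.stack([1000 * theta1, 600 + 500 * theta2])

    N_outer, N_inner = 2000, 2000
    t1 = rng.normal(1.0, 0.3, (N_outer, 1))        # parameter of interest
    t2 = rng.normal(1.0, 0.3, (N_outer, N_inner))  # all remaining uncertainty
    nb = net_benefit(np.broadcast_to(t1, t2.shape), t2)  # shape (2, outer, inner)

    # baseline: pick the treatment with the best expected net benefit now
    baseline = nb.mean(axis=(1, 2)).max()

    # outer loop: theta1 'known'; inner expectation is over theta2 only
    nb_inner = nb.mean(axis=2)                     # shape (2, N_outer)
    evpi_partial = nb_inner.max(axis=0).mean() - baseline
    print("partial EVPI for theta1:", round(evpi_partial, 1))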
230

An investigation of the potential for decision-analytic modelling to inform policy on breast cancer screening programmes in general populations

Madan, Jason January 2009 (has links)
No description available.
