1

Feature extraction and information fusion in face and palmprint multimodal biometrics

Ahmad, Muhammad Imran January 2013 (has links)
Multimodal biometric systems that integrate biometric traits from several modalities are able to overcome the limitations of single-modal biometrics. Fusing the information at an earlier level, by consolidating the features given by different traits, can give a better result due to the richness of information available at this stage. In this thesis, three novel methods are derived and implemented on face and palmprint modalities, taking advantage of multimodal biometric fusion at the feature level. The benefits of the proposed methods are an enhanced capability to discriminate information in the fused features and to capture all of the information required to improve classification performance. The multimodal biometric system proposed here consists of several stages: feature extraction, fusion, recognition and classification. Feature extraction gathers all important information from the raw images. A new local feature extraction method has been designed to extract information from the face and palmprint images in the form of sub-block windows. Multiresolution analysis using the Gabor transform and the DCT is computed for each sub-block window to produce compact local features for the face and palmprint images. Multiresolution Gabor analysis captures important information in the texture of the images, while the DCT represents the information in different frequency components. Important features with high discrimination power are then preserved by selecting several low-frequency coefficients in order to estimate the model parameters. The local features extracted are fused using a new matrix-interleaved method. The new fused feature vector is higher in dimensionality than the original feature vectors from both modalities; it therefore carries high discriminating power and contains rich statistical information. The fused feature vector also provides more data points in the feature space, which is advantageous for the training process using statistical methods.
The underlying statistical information in the fused feature vectors is captured using a Gaussian mixture model (GMM), whose model parameters are estimated from the distribution of the fused feature vectors. The maximum likelihood score is used to measure the degree of certainty for recognition, while maximum likelihood score normalization is used for the classification process. Likelihood score normalization is found to suppress an imposter's likelihood score when the background model parameters are estimated from a pool of users that includes the imposter's statistical information. The present method achieved its highest recognition accuracies of 97% and 99.7% when tested on the FERET-PolyU and ORL-PolyU datasets respectively.
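The sub-block feature extraction and matrix-interleaved fusion described above can be sketched in a simplified form. This is an illustrative numpy sketch only: the block size, the number of retained coefficients, and the function names are assumptions, and the Gabor stage of the thesis is omitted.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[0] /= np.sqrt(2.0)
    return D

def block_dct_features(img, block=8, n_coeffs=6):
    """Split the image into sub-block windows and keep a few
    low-frequency 2-D DCT coefficients per block."""
    D = dct_matrix(block)
    feats = []
    h, w = img.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            coeffs = D @ img[r:r + block, c:c + block] @ D.T
            # first coefficients in row-major order (lowest frequencies first)
            feats.append(coeffs.flatten()[:n_coeffs])
    return np.array(feats)  # shape: (n_blocks, n_coeffs)

def interleave_fuse(face_feats, palm_feats):
    """Row-wise matrix interleaving of the two local-feature matrices,
    alternating face and palmprint feature rows."""
    assert face_feats.shape == palm_feats.shape
    fused = np.empty((2 * face_feats.shape[0], face_feats.shape[1]))
    fused[0::2] = face_feats
    fused[1::2] = palm_feats
    return fused
```

The interleaved matrix doubles the number of local feature vectors available per subject, which is the property the abstract credits for improving GMM parameter estimation.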
2

Georges Canguilhem : norms and knowledge in the life sciences

Brilman, Marina C. January 2009 (has links)
In the second half of the twentieth century, the interest of the social sciences in the life sciences has intensified. This intensification might be explained through the idea that, as Michel Foucault puts it, what defines modern rationality is the entry of 'life' into regimes of knowledge and power. I argue that this 'entry' can be traced back to the work of Immanuel Kant. He established the autonomy of reason by simultaneously including and excluding life from reason. Kant explained the emergence of reason by likening it to a biological process but then excluded such processes from reason through his notion of the 'lawfulness of the contingent'. I argue that this two-pronged approach leads to a recurring negotiation of the relation between life and knowledge in the contemporary life and social sciences. I argue that it was not Foucault who directly engaged with how the life sciences lie at the heart of modern rationality. Rather, it was the French philosopher and historian of science Georges Canguilhem. I argue that he questioned modern rationality by inquiring into some of its most fundamental epistemological or discursive forms. In order to illustrate this, I address his inquiry into the concepts of environment, individuality, knowledge or information, and normativity. The potential of these concepts to migrate across disciplinary boundaries is indicative of the fact that the productivity of Canguilhem's work extends far beyond the life sciences.
3

Developing methods for causal mediation analysis of parenting interventions to improve child antisocial behaviour

Zhang, Cheng January 2015 (has links)
Parenting programmes are the most effective intervention to change persistent child antisocial behaviour and are widely used, but little is known about the mechanisms through which they work and hence how to improve them. This PhD project aims to bridge this gap by performing formal mediation analyses that partition the total effects of parenting programmes on child outcomes into indirect effects (mediated through aspects of parenting) and direct effects (non-mediated effects). The thesis focuses on further developing methods for mediation analysis to cover complex scenarios and applies them in three trials (SPOKES, CPT and HCA) of parenting programmes. The project improves on traditional methods for trials, which assume no confounding of the putative mediator-outcome relationship, in three ways. Firstly, the mediator-outcome relationship is adjusted for observed confounding variables. The newly developed MI-BT method facilitates the application of multiple imputation to handle missing data and the use of linear mixed models to reflect the trial design, and generates non-parametric inferences via a bootstrap approach. Application of this method to the SPOKES trial showed statistically significant indirect effects for two mediators (parental warmth and criticism). Secondly, the MI-BT method is extended by combination with the instrumental variables (IV) method to give the IV-MI-BT method, which allows for unmeasured confounding of the mediator-outcome relationship in the presence of missing data. Application of this method to the SPOKES trial showed that while IV estimators of mediation effects were similar in value to the MI-BT estimates, their confidence intervals were inflated. Finally, methods were further developed to enable pooling of individual participant data from multiple trials and so provide potentially more precise and more generalizable mediation analyses. A framework for systematically conducting such an IV-MI-BT IPD meta-mediation analysis is described. Meta-analysis of the three contributing trials did not detect any evidence of between-trial heterogeneity in the mediation effects of interest. Pooling of the studies resulted in smaller and non-significant overall indirect effect estimates and provided a considerable precision gain compared to the SPOKES-only analysis.
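The core idea underlying the MI-BT approach, product-of-coefficients mediation estimates with bootstrap confidence intervals, can be sketched in a much simplified form. The sketch below is an assumption-laden illustration, not the thesis's method: it omits multiple imputation and mixed models, and the single continuous mediator and outcome are hypothetical.

```python
import numpy as np

def indirect_effect(treat, mediator, outcome):
    """Product-of-coefficients indirect effect: a (treatment -> mediator)
    times b (mediator -> outcome, adjusting for treatment)."""
    X_m = np.column_stack([np.ones_like(treat), treat])
    a = np.linalg.lstsq(X_m, mediator, rcond=None)[0][1]
    X_y = np.column_stack([np.ones_like(treat), treat, mediator])
    b = np.linalg.lstsq(X_y, outcome, rcond=None)[0][2]
    return a * b

def bootstrap_ci(treat, mediator, outcome, n_boot=1000, seed=0):
    """Non-parametric percentile bootstrap interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(treat)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample participants with replacement
        stats.append(indirect_effect(treat[idx], mediator[idx], outcome[idx]))
    return np.percentile(stats, [2.5, 97.5])
```

The bootstrap step is what makes the inference non-parametric: no normality is assumed for the sampling distribution of the product a * b.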
4

Stochastic control methods to individualise drug therapy by incorporating pharmacokinetic, pharmacodynamic and adverse event data

Francis, Ben January 2013 (has links)
There are a number of methods available to clinicians for determining an individualised dosage regimen for a patient. However, these methods are often non-adaptive to the patient's requirements and do not allow for changing clinical targets throughout the course of therapy. The drug dose algorithm constructed in this thesis, using stochastic control methods, harnesses information on the variability of the patient's response to the drug, ensuring that the algorithm adapts to the needs of the patient. Novel research is undertaken to include process noise in the pharmacokinetic/pharmacodynamic (PK/PD) response prediction, to better simulate the patient's response to the dose by allowing values sampled from the individual PK/PD parameter distributions to vary over time. The Kalman filter is then adapted to use these predictions alongside measurements, feeding information back into the algorithm in order to better ascertain the patient's current PK/PD response. From this, a dosage regimen that induces the desired future PK/PD response is estimated via an appropriately formulated cost function. Further novel work explores different formulations of this cost function by considering probabilities from a Markov model. In applied examples, previous methodology is adapted so that patients with missing covariate information can be appropriately dosed in warfarin therapy. Using the methodology introduced in the thesis, the drug dose algorithm is then shown to adapt to patient needs in imatinib and simvastatin therapy. The differences between standard dosing and the dosage regimens estimated using the developed methodologies are wide-ranging: some patients require no dose alterations, whereas others required a substantial change in dosing to meet the PK/PD targets.
The outdated 'one size fits all' dosing paradigm is subject to debate; the research in this thesis adds to the evidence and provides an algorithm for a better approach to the challenge of individualising drug therapy to treat the patient more effectively. The drug dose algorithm developed is applicable to many different drug therapy scenarios thanks to the enhancements made to the formulation of the cost functions. With this in mind, application of the drug dose algorithm to a wide range of clinical dosing decisions is possible.
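The feedback mechanism the abstract describes, predicting the PK/PD state forward and correcting it with each new measurement, is the standard Kalman predict/update cycle. A minimal scalar sketch follows; the state model and all noise parameters are illustrative assumptions, not values from the thesis.

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.25):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : current state estimate and its variance
    z    : new measurement (e.g. an observed drug concentration)
    F, Q : state transition coefficient and process-noise variance
    H, R : observation coefficient and measurement-noise variance
    """
    x_pred = F * x                            # predict the state forward
    P_pred = F * P * F + Q                    # process noise inflates uncertainty
    K = P_pred * H / (H * P_pred * H + R)     # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)     # correct with the measurement
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

The process-noise term Q is the hook for the thesis's novelty: allowing the sampled PK/PD parameters to vary over time corresponds to a non-zero Q, so the filter never becomes overconfident in a stale state estimate.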
5

Methodology and software for joint modelling of time-to-event data and longitudinal outcomes across multiple studies

Sudell, M. E. January 2018 (has links)
Introduction and Aims: Univariate joint models for longitudinal and time-to-event data simultaneously model one outcome that is repeatedly measured over time with another outcome that measures the time until the occurrence of an event. They have been increasingly used in the literature to account for dropout in longitudinal studies, to include time-varying covariates in time-to-event analyses, or to investigate links between longitudinal and time-to-event outcomes. Meta-analysis is the quantitative pooling of data from multiple studies. Such analyses provide increased sample size and so can detect small covariate effects. Modelling of multi-study data requires accounting for the clustering of individuals within studies and careful consideration of heterogeneity between studies. Methodology for modelling joint longitudinal and time-to-event data in a multi-study or meta-analytic setting does not currently exist. This thesis develops novel methodologies and software for the modelling of multi-study joint longitudinal and time-to-event data.
Methods: A review of current reporting standards in analyses applying joint modelling methodology to single-study datasets is undertaken, with a view to future Aggregate Data Meta-Analyses (AD-MA) of joint data. Methodology for one-stage and two-stage Individual Participant Data Meta-Analysis (IPD-MA) is developed. A software package in the R language containing functionality for various aspects of multi-study joint modelling analyses is built. The methodology and software are implemented on a real hypertension dataset and tested in extensive simulation studies.
Results: Reporting of model structure was among the areas identified for improvement in the reporting of joint models employed in single-study applied analyses. Sufficient information was reported in the majority of studies for them to contribute to future AD-MA. Guidelines developed to ensure good-quality two-stage IPD-MA of joint data were presented, designed to ensure that only parameters with comparable interpretations are pooled. A range of one-stage models, each accounting for between-study heterogeneity in a different way, were described and applied in real-data and simulation analyses. Models employing study-level random effects were found to be unreliable for the investigated association structure; fixed-effect approaches, and those that stratified the baseline hazard by study, were more reliable. The benefit of using joint models over separate time-to-event models in the presence of significant association between the longitudinal and time-to-event outcomes, in both one- and two-stage analyses, was established. Novel software capable of one- or two-stage analyses of large multi-study joint datasets was demonstrated in both the real-data and simulation analyses.
Conclusions: Reporting of joint modelling structure in single-study applied analyses should be maintained and improved. Two-stage meta-analyses of joint modelling results should take care to pool only parameters with comparable interpretations. In meta-analyses, investigators should employ a joint modelling approach when association is known or suspected between the longitudinal and time-to-event outcomes. Further work on meta-analytic joint models is required to expand the range of available multi-study joint modelling structures, to allow for multivariate joint data, and to employ multivariate meta-analytic techniques in two-stage meta-analysis.
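The second stage of a two-stage IPD-MA pools per-study parameter estimates, classically by inverse-variance weighting. A minimal numpy sketch of fixed-effect pooling follows; the function name and interface are illustrative and are not taken from the thesis's R package.

```python
import numpy as np

def fixed_effect_pool(estimates, ses):
    """Inverse-variance (fixed-effect) pooling of per-study estimates,
    e.g. a joint model's association parameter fitted in each study."""
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2  # weight = 1 / variance
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled_se
```

As the guidelines in the abstract stress, pooling of this kind is only meaningful when the per-study parameters share a comparable interpretation.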
6

Variable selection for classification in complex ophthalmic data : a multivariate statistical framework

Walsh, P. E. January 2017 (has links)
Variable selection is an essential part of the process of model-building for classification or prediction. Among its challenges are heterogeneous variance-covariance matrices, differing scales of variables, non-normally distributed data and missing data. Statistical methods exist for variable selection; however, these are often univariate, make restrictive assumptions about the distribution of the data, or are expensive in terms of the computational power required. In this thesis I focus on filter methods of variable selection that are computationally fast, and propose a metric of discrimination. The main objectives of this thesis are (1) to propose a novel Signal-to-Noise Ratio (SNR) discrimination metric accommodating heterogeneous variance-covariance matrices, (2) to develop a multiple forward selection (MFS) algorithm employing the novel SNR metric, (3) to assess the performance of the MFS-SNR algorithm compared to alternative methods of variable selection, (4) to investigate the ability of the MFS-SNR algorithm to carry out variable selection when data are not normally distributed and (5) to apply the MFS-SNR algorithm to variable selection from real datasets. The MFS-SNR algorithm was implemented in the R programming environment. It calculates the SNR for subsets of variables, identifying the optimal variable during each round of selection as whichever causes the largest increase in SNR. A dataset was simulated comprising 10 variables: 2 discriminating variables, 7 non-discriminating variables, and one non-discriminating variable which enhanced the discriminatory performance of other variables. In simulations, the frequency of each variable's selection was recorded, and the probability of correct classification (PCC) and area under the curve (AUC) were calculated for sets of selected variables. I assessed the ability of the MFS-SNR algorithm to select variables when data are not normally distributed using simulated data.
I compared the MFS-SNR algorithm to filter methods utilising information gain, chi-square statistics and the Relief-F algorithm, as well as to a support vector machine and an embedded method using random forests. A version of the MFS algorithm utilising Hotelling's T2 statistic (MFS-T2) was included in this comparison. The MFS-SNR algorithm selected all 3 variables relevant to discrimination with frequencies higher than or equivalent to competing methods in all scenarios. Following transformation of variables to non-normal distributions, the MFS-SNR algorithm still selected the variables known to be relevant to discrimination in the simulated scenarios. Finally, I studied the ability of both the MFS-SNR and MFS-T2 algorithms to carry out variable selection for disease classification using several clinical datasets from ophthalmology. These datasets presented a spectrum of quality issues such as missingness, imbalanced group sizes, heterogeneous variance-covariance matrices and differing variable scales. In 3 out of 4 datasets the MFS-SNR algorithm out-performed the MFS-T2 algorithm; in the fourth, both produced the same variable selection results. In conclusion, I have demonstrated that the novel SNR is an extension of Hotelling's T2 statistic that accommodates heterogeneity of variance-covariance matrices. The MFS-SNR algorithm is capable of selecting the relevant variables whether or not data are normally distributed. In the simulated scenarios it performs at least as well as competing methods, and it outperforms the MFS-T2 algorithm when selecting variables from real clinical datasets.
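The greedy forward-selection loop at the heart of an MFS-style algorithm can be sketched as follows. The thesis's implementation is in R, and its exact SNR definition is not reproduced in the abstract, so the criterion below is a stand-in: a squared mean difference scaled by the average of the two group covariances, i.e. a heterogeneous-covariance analogue of Hotelling's T2, and all names are illustrative.

```python
import numpy as np

def snr_criterion(X0, X1, idx):
    """Simplified signal-to-noise criterion for the variables in idx:
    d' S^-1 d, with S the average of the two group covariance matrices
    (so the groups need not share a variance-covariance matrix)."""
    d = X0[:, idx].mean(axis=0) - X1[:, idx].mean(axis=0)
    S = 0.5 * (np.cov(X0[:, idx], rowvar=False) + np.cov(X1[:, idx], rowvar=False))
    S = np.atleast_2d(S)
    return float(d @ np.linalg.solve(S, d))

def forward_select(X0, X1, n_select):
    """Greedy forward selection: at each round, add whichever remaining
    variable gives the largest criterion for the enlarged subset."""
    selected, remaining = [], list(range(X0.shape[1]))
    for _ in range(n_select):
        best = max(remaining, key=lambda v: snr_criterion(X0, X1, selected + [v]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Because the criterion is evaluated on the enlarged subset rather than per variable, the loop can pick up a variable that only discriminates in combination with those already selected, the behaviour the simulated "enhancer" variable is designed to test.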
7

Biomarker-guided clinical trial designs

Antoniou, Miranta January 2018 (has links)
Personalized medicine is a rapidly growing area of research that has attracted much attention in recent years in the field of medicine. The ultimate aim of this approach is to ensure that the most appropriate treatment, the one providing clinical benefit, is tailored to each patient according to their personal characteristics. However, testing the effectiveness of a biomarker-guided approach to treatment in improving patient health poses challenges in terms of both trial design and analysis. Although a variety of biomarker-guided designs have been proposed recently, their statistical validity, application and interpretation have not yet been fully explored. A comprehensive literature review based on an in-depth search strategy has been conducted with a view to providing researchers with clarity in the definition, methodology and terminology of the various reported biomarker-guided trial designs. Additionally, a user-friendly online tool (www.BiGTeD.org) informed by our review has been developed to help investigators embarking on such trials decide on the most appropriate design. Simulation studies have been performed to investigate key statistical aspects of such trial designs, such as the sample size requirements under different settings. Furthermore, a strategy has been applied to choose the optimal design in a setting where a previously proposed clinical trial proved inefficient due to the very large sample size required. Statistical techniques to calculate the corresponding sample size have been applied, and an adaptive version of the proposed design has been explored through simulations. Practical challenges of biomarker-guided trials in terms of funding, ethical and regulatory issues, recruitment, monitoring, the statistical analysis plan, biomarker assessment and data sharing are also addressed in this thesis.
The biomarker-guided designs proposed so far need to be better understood by the research community in terms of analysis, planning and practical application, as their proper use and choice can increase the probability of success of clinical trials, resulting in the development of personalised treatments in the future. This PhD thesis therefore contributes to researchers' knowledge of these studies by providing essential information and presenting the statistical issues arising in their implementation. We hope that this work will help scientists choose the right clinical trial design in the era of personalized medicine, which is of utmost importance for translating drug development into improvements in human health.
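Sample size investigation by simulation, one of the statistical aspects the abstract mentions, typically amounts to Monte Carlo power estimation. The sketch below is a generic illustration, not the thesis's simulation study: a two-arm z-test that could size, for example, the biomarker-positive subgroup of an enrichment design, with all parameters hypothetical.

```python
import numpy as np

def simulated_power(n_per_arm, effect, sd=1.0, n_sim=2000, seed=1):
    """Monte Carlo power of a two-arm comparison (z-test, two-sided
    alpha = 0.05): fraction of simulated trials that reject the null."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        ctrl = rng.normal(0.0, sd, n_per_arm)       # control arm outcomes
        trt = rng.normal(effect, sd, n_per_arm)     # treated arm outcomes
        se = np.sqrt(ctrl.var(ddof=1) / n_per_arm + trt.var(ddof=1) / n_per_arm)
        hits += abs(trt.mean() - ctrl.mean()) / se > 1.96
    return hits / n_sim
```

Running this over a grid of n_per_arm values gives the simulated sample size requirement for a target power under each assumed effect size.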
8

Data sharing and transparency : the impact on evidence synthesis

Nevitt, S. J. January 2017 (has links)
No description available.
9

Epistemologies and ontologies of genomics : two approaches to the problems of biology

Davies, Jonathan January 2009 (has links)
If we ask the question, "why does organism X develop/behave as it does?" or, "why does organism X give rise (through e.g. sexual reproduction) to an organism that is similar to itself?", there are two broad types of explanation available: the localised and the distributed. These, and other questions addressed in developmental and evolutionary biology, I characterise as problems of biology. Localised causal explanations (LCEs) focus on the identification of an entity (localised in space) or event (localised in time) responsible for the phenomenon to be explained. Distributed causal explanations (DCEs) have tended to focus on the identification of (often global) properties or features (distributed in space) or processes (distributed in time) responsible for the phenomenon to be explained.
10

Mathematical models for wound healing lymphangiogenesis and other biomedical phenomena

Bianchi, Arianna January 2016 (has links)
In this thesis we explore the mathematical modelling of wound healing lymphangiogenesis, tumour neoneurogenesis and Drosophila courtship behaviour. We begin by focussing on the mathematical modelling of lymphatic regeneration in wound healing. Several studies suggest that one possible cause of impaired wound healing is failed or insufficient lymphangiogenesis, that is, the formation of new lymphatic capillaries. Although many mathematical models have been developed to describe the formation of blood capillaries (angiogenesis), very few have been proposed for the regeneration of the lymphatic network. In Chapter 2 a model of five ordinary differential equations is presented to describe lymphangiogenesis in a skin wound. The variables represent different cell densities and growth factor concentrations, and where possible the parameters are estimated from experimental and clinical data. The system output is compared with the available biological literature and, based on parameter sensitivity analysis, new therapeutic approaches are suggested to enhance lymphangiogenesis in diabetic wounds. Chapter 3 extends the aforementioned work to two PDE systems aimed at describing two possible hypotheses for the lymphangiogenesis process: 1) lymphatic capillaries sprout from existing, interrupted capillaries at the edge of the wound, in analogy with blood angiogenesis; 2) lymphatic endothelial cells first collect together in the wound region by following the lymph flow and then begin to form a network. Furthermore, we include the effect of advection from both background interstitial flow and additional lymph flow from the open capillaries, and address the question of their relative importance in the lymphangiogenesis process. Malignant tumours induce not only the formation of lymphatic and blood vascular networks, but also innervation around themselves. However, the relationship between tumour progression and the nervous system is still poorly understood.
In Chapter 4 we study the interactions between the nervous system and tumour cells through an 8-dimensional ODE model. The model confirms experimental observations that a tumour promotes nerve formation around itself, and that high levels of nerve growth factor (NGF) and axon guidance molecules (AGMs) are recorded in the presence of a tumour. Our results also reflect the observation that high stress levels (represented by higher norepinephrine release by sympathetic nerves) contribute to tumour development and spread, indicating a mutually beneficial relationship between tumour cells and neurons. In Chapter 5 a preliminary model for courtship behavioural patterns of Drosophila melanogaster is suggested. Drosophila courtship behaviour is considered a good model for investigating neurodegenerative diseases (such as Parkinson's) in humans. The chapter illustrates the biological and health-care background to this topic, and then presents a possible modelling approach based on Pasemann's work on neural networks. We conclude with a brief discussion that summarises the main results and outlines directions for future work.
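The five-ODE model of Chapter 2 is not reproduced in the abstract. As a purely illustrative stand-in, a two-variable toy system with hypothetical parameters shows the general shape of such models: a growth factor concentration g, produced while the network is damaged, drives logistic regrowth of capillary density c. This is not the thesis's model.

```python
def simulate(days=30.0, dt=0.01, k_prod=1.0, k_decay=0.5,
             k_growth=0.4, cap=1.0):
    """Toy two-variable wound model, integrated by forward Euler:
    g = growth factor concentration, c = capillary density
    (all parameter values are hypothetical)."""
    n = int(days / dt)
    g, c = 0.0, 0.05                                # damaged initial state
    for _ in range(n):
        dg = k_prod * (1 - c / cap) - k_decay * g   # production falls as the network regrows
        dc = k_growth * g * c * (1 - c / cap)       # growth-factor-driven logistic regrowth
        g += dt * dg
        c += dt * dc
    return g, c
```

Parameter sensitivity analysis of the kind the abstract describes would vary constants such as k_prod or k_growth and record their effect on the regrowth trajectory, suggesting which biological quantities are the most promising therapeutic targets.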