11 | Biological information. Lean, Oliver Miles (January 2016)
This thesis addresses the active controversy regarding the nature and role of informational concepts as applied to the biological sciences - in particular, the relationship between statistical or correlational information on one hand and meaningful, semantic, intentional information on the other. It first develops a set of basic conceptual tools that can be applied to any, or at least most, putative cases of information processing in biological systems. This framework shows that, contrary to popular belief, we can make sense of biological information in the former, statistical sense, without it trivially applying to any and all physical processes that take place in living things. I then demonstrate the utility of this framework by applying its tools to specific information-related controversies: the concept of innateness, and information versus influence in animal communication. These chapters demonstrate that these issues can be clarified with the tools previously developed. I also discuss the notion of primitive content - the simplest form of biological phenomenon that can reasonably be said to be contentful. This issue serves as a biological basis for future research regarding the ongoing philosophical problem of relating the physical to the mental.
12 | Modelling departure from randomised treatment in randomised controlled trials with survival outcomes. Dodd, Susanna (January 2014)
Randomised controlled trials are considered the gold standard study design, as random treatment assignment provides balance in prognosis between treatment arms and protects against selection bias. When trials are subject to departures from randomised treatment, however, simple but naïve statistical methods that purport to estimate treatment efficacy, such as per protocol or as treated analyses, fail to respect this randomisation balance and typically introduce selection bias. This bias occurs because departure from randomised treatment is often clinically indicated, resulting in systematic differences between patients who do and do not adhere to their assigned intervention. More appropriate statistical methods exist to adjust for departure from randomised treatment but, as demonstrated by a review of published trials, these are rarely employed, primarily because of their complexity and unfamiliarity. The focus of this research has been to explore, explain, demonstrate and compare the use of causal methodologies in the analysis of trials, in order to make the available, but somewhat technical, statistical methods for adjusting for treatment deviations more accessible and comprehensible to non-specialist analysts. An overview of such methods is presented, intended as an aid to researchers new to the field of causal inference, with an emphasis on the practical considerations necessary to ensure appropriate implementation of techniques, and complemented by a number of guidance tools summarising the clinical and statistical considerations involved in carrying out such analyses. Practical demonstrations of causal analysis techniques are then presented, with existing methods extended and adapted to allow for complexities arising from the trial scenarios. A particular application from epilepsy demonstrates the impact of various statistical factors when adjusting for skewed time-varying confounders and different reasons for treatment changes on a complicated time-to-event outcome: choice of model (pooled logistic regression versus Cox models for inverse probability of censoring weighting, compared with a rank-preserving structural failure time model), time interval (for creating panel data for time-varying confounders and outcome), confidence interval estimation method (standard versus bootstrapped) and the use of spline variables to estimate underlying risk in pooled logistic regression. In this example, the structural failure time model is severely limited by its restriction on the types of treatment changes that can be adjusted for; as a result, the majority of treatment changes must be censored, introducing bias similar to that in a per protocol analysis. With inverse probability weighting adjustment, as more treatment changes and confounders are accounted for, treatment effects move further from the null. Generally, Cox models appeared more susceptible to changes in modelling factors (confidence interval estimation, time interval and confounder adjustment) and displayed greater fluctuations in treatment effect than corresponding pooled logistic regression models. This apparent greater stability of logistic regression, even when subject to severe overfitting, represents a major advantage over Cox modelling in this context, countering the inherent complications of fitting spline variables.
This novel application of complex methods in a complicated trial scenario provides a useful example for discussion of typical analysis issues and limitations, as it addresses challenges that are likely to be common in trials featuring problems with nonadherence. Recommendations are provided for analysts when considering which of these analysis methods should be applied in a given trial setting.
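As an editorial illustration of the inverse probability of censoring weighting approach discussed above, the following is a minimal Python sketch of a generic IPCW analysis with a pooled logistic outcome model. It is a sketch only, under assumed data: the long-format column names (id, interval, confounder, arm, censored, event) are hypothetical, the weights are unstabilised, and this is not the thesis's actual analysis code.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def ipcw_pooled_logistic(panel: pd.DataFrame):
    """Generic IPCW sketch on long-format panel data (one row per subject-interval)."""
    # 1. Model the probability of being censored (i.e. deviating from randomised
    #    treatment) in each interval, given the time-varying confounder.
    censor_model = smf.logit("censored ~ interval + confounder + arm",
                             data=panel).fit(disp=0)
    p_uncensored = 1.0 - censor_model.predict(panel)

    # 2. Cumulative inverse probability weights per subject (unstabilised, for brevity).
    panel = panel.assign(p_uncensored=p_uncensored)
    panel["weight"] = 1.0 / panel.groupby("id")["p_uncensored"].cumprod()

    # 3. Weighted pooled logistic regression for the outcome, using only the
    #    intervals in which the subject remained on randomised treatment.
    #    Standard errors here are naive; bootstrapping would be needed for
    #    valid confidence intervals, as discussed in the abstract above.
    on_treatment = panel[panel["censored"] == 0]
    outcome_model = smf.glm("event ~ interval + arm", data=on_treatment,
                            family=sm.families.Binomial(),
                            freq_weights=on_treatment["weight"]).fit()
    return outcome_model
```

With short intervals, the weighted odds ratio for the arm term approximates the hazard ratio that a weighted Cox model would estimate, which is the comparison at issue in the epilepsy application above.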
13 | Prognostic factors for epilepsy. Bonnett, Laura (January 2012)
Introduction and Aims: Epilepsy is a neurological disorder and a heterogeneous condition in terms of both cause and prognosis. Prognostic factors identify patients at varying degrees of risk for specific outcomes, which facilitates treatment choice and aids patient counselling. Few prognostic models based on prospective cohorts or randomised controlled trial data have been published in epilepsy. Patients with epilepsy can be loosely categorised as having had a first seizure, being newly diagnosed with epilepsy, having established epilepsy, or having frequent unremitting seizures despite optimum treatment. This thesis concerns modelling prognostic factors for these patient groups, for outcomes including seizure recurrence, seizure remission and treatment failure. Methods: Methods for modelling prognostic factors are discussed and applied to several examples, including eligibility to drive following a first seizure and following withdrawal of treatment after a period of remission from seizures. Internal and external model validation techniques are reviewed. The latter is investigated further in a simulation study, the results of which are demonstrated in a motivating example. Mixture modelling is introduced and assessed to better predict whether a patient will achieve remission from seizures immediately, at a later time point, or never. Results: Multivariable models identified a number of significant factors, and the future risk of a seizure was therefore obtained for various patient subgroups. The models indicated that the chance of a second seizure fell below the risk threshold for driving, set by the DVLA, after six months, and that the risk of a seizure following treatment withdrawal after a period of remission fell below the risk threshold after three months. Selected models were found to be internally valid, and the simulation study indicated that concordance and a variety of imputation methods for handling covariates missing from the validation dataset are useful approaches for external validation of prognostic models. Assessing these methods for a selected model indicated that the model was valid in independent datasets. Mixture modelling techniques begin to show an improved prognostic model for the frequently reported outcome, time to 12-month remission. Conclusions: The models described within this thesis can be used to predict outcome for patients with first seizures or epilepsy, aiding individual patient risk stratification and the design and analysis of future epilepsy trials. Prognostic models are not commonly externally validated. A method of external validation in the presence of a missing covariate has been proposed; it may facilitate validation of prognostic models, making the evidence base more transparent and reliable and instilling confidence in any significant findings.
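To make the prognostic modelling and validation workflow above concrete, here is a minimal sketch, under assumed data, of fitting a Cox proportional hazards model and checking its discrimination in an independent cohort via the concordance index, using the lifelines library; the column names (time_to_seizure, seizure) and the idea that the remaining columns hold candidate prognostic factors are hypothetical, not the thesis's actual variables.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

def fit_and_externally_validate(train: pd.DataFrame, external: pd.DataFrame) -> float:
    """Fit a Cox prognostic model on one cohort and report concordance on another."""
    cph = CoxPHFitter()
    cph.fit(train, duration_col="time_to_seizure", event_col="seizure")

    # External validation: concordance between predicted risk and observed outcomes
    # in an independent dataset (0.5 = no discrimination, 1.0 = perfect).
    risk = cph.predict_partial_hazard(external)
    return concordance_index(external["time_to_seizure"], -risk,
                             external["seizure"])
```

The negative sign converts higher predicted hazard into lower expected survival time, since concordance_index expects scores that increase with survival.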
14 | User-centric biometrics: authentication in a self-service environment. Riley, Chris W. (January 2012)
Biometric authentication is the process of establishing an individual's identity based on measurable characteristics of their anatomy, physiology or behavior. Biometrics do not share many of the limitations of traditional authentication mechanisms, as the characteristics used for biometric authentication cannot be lost, forgotten or easily replicated. Despite these advantages, there are unresolved problems with the use and acceptability of biometrics and the technology has not seen the strong uptake that many predicted. There is a significant literature discussing the implications of biometric technology use, though much of this work is theoretical in nature and there is comparatively little empirically grounded work with a focus on the biometric user experience. This thesis presents research investigating biometric authentication from a user-centric perspective. The principal aims of this research were to deepen our understanding of the usability and acceptability of biometric authentication and use this knowledge to improve design. A series of controlled evaluations are presented, where biometric systems and different aspects of system design were investigated. To understand wider implementation issues, a field trial of a biometric system in a real-world environment was also carried out. A second strand of research focused on how biometrics are perceived and both survey and interview approaches were used to explore this issue. In general the empirical work can be characterized by a trend of structured, quantitative methodologies leading into less-structured approaches as contextual and experiential aspects of system use were investigated. A framework for the biometric user experience is presented based on this work. The framework is used to structure the design guidelines and knowledge emerging from this work. A methodology for the user-centric evaluation of biometrics is also proposed. The results from this project further our understanding of usable system design, but biometrics have proven to be an emotive technology and implementation remains a complex issue.
15 | Novel active sweat pores based liveness detection techniques for fingerprint biometrics. Memon, Shahzad Ahmed (January 2012)
Liveness detection in automatic fingerprint identification systems (AFIS) is an issue which still prevents their use in many unsupervised security applications. In the last decade, various hardware and software solutions for detecting liveness from fingerprints have been proposed by academic research groups. However, the proposed methods have not yet been practically implemented with existing AFIS, and a large amount of research is needed before they can be deployed in commercial AFIS. In this research, novel active-pore-based liveness detection methods are proposed for AFIS. These methods are based on the detection of active pores on fingertip ridges, and the measurement of ionic activity in the sweat fluid that appears at the openings of active pores. The literature is critically reviewed in terms of liveness detection issues, and existing fingerprint technology and the hardware and software solutions proposed for liveness detection are also examined. A comparative study was completed on commercial and specifically collected fingerprint databases, and it was concluded that images in these datasets do not contain any visible evidence of liveness. They have been used to test various algorithms developed for liveness detection; however, to implement proper liveness detection in fingerprint systems, a new database with fine details of fingertips is needed. Therefore a new high-resolution Brunel Fingerprint Biometric Database (B-FBDB) was captured and collected for this novel liveness detection research. The first proposed liveness detection method is a High Pass Correlation Filtering Algorithm (HCFA). This image processing algorithm was developed in Matlab and tested on B-FBDB dataset images. The results of the HCFA algorithm proved the idea behind the research, as they successfully demonstrated the clear possibility of liveness detection through active pore detection in high-resolution images. The second proposed liveness detection method is based on experimental evidence: liveness is detected by measuring the ionic activity above a sample of ionic sweat fluid. A Micro Needle Electrode (MNE) based setup was used in this experiment to measure the ionic activity. Charges of 5.9 pC to 6.5 pC were detected at ten MNE positions (50 μm to 360 μm) above the surface of the ionic sweat fluid. These measurements are further proof of liveness from active fingertip pores, and this technique can be used in the future to implement liveness detection solutions. The interaction of the MNE and the ionic fluid was modelled in COMSOL Multiphysics, and the effect of electric field variations on the MNE was recorded at positions 5 μm to 360 μm above the ionic fluid.
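The image-based idea above can be illustrated with a generic high-pass filtering step; the sketch below is not the thesis's HCFA algorithm, only an illustration of the underlying principle, and the sigma and threshold values are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass_pore_map(image: np.ndarray, sigma: float = 3.0,
                      threshold: float = 0.1) -> np.ndarray:
    """Crude pore-candidate map from a high-resolution grayscale fingerprint image."""
    img = image.astype(float)
    lowpass = gaussian_filter(img, sigma=sigma)   # suppress ridge-scale structure
    highpass = img - lowpass                      # keep fine, pore-scale detail
    # Candidate active pores: strong high-frequency responses along the ridges.
    return highpass > threshold * highpass.max()
```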
16 | Graph-based approach for the approximate solution of the chemical master equation. Basile, Raffaele (January 2015)
The chemical master equation (CME) represents the accepted stochastic description of chemical reaction kinetics in mesoscopic systems. As its exact solution – which gives the corresponding probability density function – is possible only in very simple cases, there is a clear need for approximation techniques. Here, we propose a novel perturbative three-step approach which draws heavily on graph theory: (i) we expand the eigenvalues of the transition state matrix in the CME as a series in a non-dimensional parameter that depends on the reaction rates and the reaction volume; (ii) we derive an analogous series for the corresponding eigenvectors via a graph-based algorithm; (iii) we combine the resulting expansions into an approximate solution to the CME. We illustrate our approach by applying it to a reversible dimerization reaction; then, we formulate a set of conditions which ensure its applicability to more general reaction networks. We then attempt to apply the results to a more complicated system, namely the push-pull network, but the problem proves too complex for a complete solution. Finally, we discuss the limitations of the methodology.
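For reference, the CME referred to above can be written in its standard form for a well-mixed system with reactions indexed by j, stoichiometric change vectors ν_j and propensity functions a_j; the notation here is generic rather than taken from the thesis.

```latex
\frac{\partial P(\mathbf{n},t)}{\partial t}
  = \sum_{j} \left[ a_j(\mathbf{n}-\boldsymbol{\nu}_j)\,P(\mathbf{n}-\boldsymbol{\nu}_j,t)
                  - a_j(\mathbf{n})\,P(\mathbf{n},t) \right]
```

Here P(n, t) is the probability of copy-number state n at time t, and the transition matrix whose eigenvalues and eigenvectors are expanded in steps (i) and (ii) is the generator of this linear system.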
17 | Applications of statistics in criminal justice and associated health issues. Merrall, Elizabeth Lai Chui (January 2012)
No description available.
18 | Centers of complex networks. Wuchty, Stefan; Stadler, Peter F. (11 October 2018)
The central vertices in complex networks are of particular interest because they might play the role of organizational hubs. Here, we consider three different geometric centrality measures, excentricity, status, and centroid value, that were originally used in the context of resource placement problems. We show that these quantities lead to useful descriptions of the centers of biological networks which often, but not always, correlate with a purely local notion of centrality such as the vertex degree. We introduce the notion of local centers as local optima of a centrality value “landscape” on a network and discuss briefly their role.
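As a minimal sketch of how these geometric measures can be computed on a small connected graph, the following uses networkx; the centroid value here follows one common graph-theoretic definition (based on counts of vertices closer to one endpoint than the other), which is an assumption for illustration rather than necessarily the paper's exact formula.

```python
import networkx as nx

def geometric_centralities(G: nx.Graph) -> dict:
    """Excentricity, status and (assumed) centroid value for a connected graph."""
    dist = dict(nx.all_pairs_shortest_path_length(G))

    # Excentricity: greatest distance from v to any other vertex.
    excentricity = {v: max(dist[v].values()) for v in G}

    # Status: total distance from v to all other vertices.
    status = {v: sum(dist[v].values()) for v in G}

    # Assumed centroid value: worst-case balance of vertices closer to v than to
    # a competitor w versus those closer to w than to v.
    def closer(v, w):
        return sum(1 for u in G if dist[v][u] < dist[w][u])

    centroid = {v: min(closer(v, w) - closer(w, v) for w in G if w != v) for v in G}
    return {"excentricity": excentricity, "status": status, "centroid": centroid}
```

Vertices that minimise excentricity or status (or maximise the assumed centroid value) are then candidate centers, which can be compared with a purely local measure such as the vertex degree.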
19 | Statistical issues in Mendelian randomization: use of genetic instrumental variables for assessing causal associations. Burgess, Stephen (January 2012)
Mendelian randomization is an epidemiological method for using genetic variation to estimate the causal effect of the change in a modifiable phenotype on an outcome from observational data. A genetic variant satisfying the assumptions of an instrumental variable for the phenotype of interest can be used to divide a population into subgroups which differ systematically only in the phenotype. This gives a causal estimate which is asymptotically free of bias from confounding and reverse causation. However, the variance of the causal estimate is large compared to traditional regression methods, requiring large amounts of data and necessitating methods for efficient data synthesis. Additionally, if the association between the genetic variant and the phenotype is not strong, then the causal estimates will be biased due to the “weak instrument” in finite samples in the direction of the observational association. This bias may convince a researcher that an observed association is causal. If the causal parameter estimated is an odds ratio, then the parameter of association will differ depending on whether viewed as a population-averaged causal effect or a personal causal effect conditional on covariates. We introduce a Bayesian framework for instrumental variable analysis, which is less susceptible to weak instrument bias than traditional two-stage methods, has correct coverage with weak instruments, and is able to efficiently combine gene–phenotype–outcome data from multiple heterogeneous sources. Methods for imputing missing genetic data are developed, allowing multiple genetic variants to be used without reduction in sample size. We focus on the question of a binary outcome, illustrating how the collapsing of the odds ratio over heterogeneous strata in the population means that the two-stage and the Bayesian methods estimate a population-averaged marginal causal effect similar to that estimated by a randomized trial, but which typically differs from the conditional effect estimated by standard regression methods. We show how these methods can be adjusted to give an estimate closer to the conditional effect. We apply the methods and techniques discussed to data on the causal effect of C-reactive protein on fibrinogen and coronary heart disease, concluding with an overall estimate of causal association based on the totality of available data from 42 studies.
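As a point of comparison for the Bayesian framework described above, here is a minimal sketch of the traditional two-stage ("two-stage least squares") instrumental variable estimate with a single genetic instrument; the variable names are hypothetical and the outcome is treated as continuous for simplicity, whereas the thesis is largely concerned with binary outcomes and odds ratios.

```python
import numpy as np
import statsmodels.api as sm

def two_stage_iv_estimate(instrument: np.ndarray, phenotype: np.ndarray,
                          outcome: np.ndarray) -> float:
    """Naive two-stage IV estimate of the causal effect of phenotype on outcome."""
    # Stage 1: regress the phenotype on the genetic instrument.
    stage1 = sm.OLS(phenotype, sm.add_constant(instrument)).fit()
    genetically_predicted = stage1.fittedvalues

    # Stage 2: regress the outcome on the genetically predicted phenotype.
    stage2 = sm.OLS(outcome, sm.add_constant(genetically_predicted)).fit()

    # Note: this plug-in second stage gives the point estimate only; its standard
    # errors are not valid without correction, and with a weak instrument the
    # estimate is biased towards the observational association, as noted above.
    return float(stage2.params[1])
```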
20 | Application of software engineering methodologies to the development of mathematical biological models. Gill, Mandeep Singh (January 2013)
Mathematical models have been used to capture the behaviour of biological systems, from low-level biochemical reactions to multi-scale whole-organ models. Models are typically based on experimentally-derived data, attempting to reproduce the observed behaviour through mathematical constructs, e.g. using Ordinary Differential Equations (ODEs) for spatially-homogeneous systems. These models are developed and published as mathematical equations, yet are of such complexity that they necessitate computational simulation. This computational model development is often performed in an ad hoc fashion by modellers who lack extensive software engineering experience, resulting in brittle, inefficient model code that is hard to extend and reuse. Several Domain Specific Languages (DSLs) exist to aid capturing such biological models, including CellML and SBML; however these DSLs are designed to facilitate model curation rather than simplify model development. We present research into the application of techniques from software engineering to this domain; starting with the design, development and implementation of a DSL, termed Ode, to aid the creation of ODE-based biological models. This introduces features beneficial to model development, such as model verification and reproducible results. We compare and contrast model development to large-scale software development, focussing on extensibility and reuse. This work results in a module system that enables the independent construction and combination of model components. We further investigate the use of software engineering processes and patterns to develop complex modular cardiac models. Model simulation is increasingly computationally demanding, thus models are often created in complex low-level languages such as C/C++. We introduce a highly-efficient, optimising native-code compiler for Ode that generates custom, model-specific simulation code and allows use of our structured modelling features without degrading performance. Finally, in certain contexts the stochastic nature of biological systems becomes relevant. We introduce stochastic constructs to the Ode DSL that enable models to use Stochastic Differential Equations (SDEs), the Stochastic Simulation Algorithm (SSA), and hybrid methods. These use our native-code implementation and demonstrate highly-efficient stochastic simulation, beneficial as stochastic simulation is highly computationally intensive. We introduce a further DSL to model ion channels declaratively, demonstrating the benefits of DSLs in the biological domain. This thesis demonstrates the application of software engineering methodologies, and in particular DSLs, to facilitate the development of both deterministic and stochastic biological models. We demonstrate their benefits with several features that enable the construction of large-scale, reusable and extensible models. This is accomplished whilst providing efficient simulation, creating new opportunities for biological model development, investigation and experimentation.
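To give a flavour of the kind of ODE-based biochemical model such a DSL targets, here is a minimal hand-written Python sketch of a hypothetical reversible binding reaction A + B <-> C simulated with SciPy; the reaction, initial conditions and rate constants are illustrative assumptions, not a model from the thesis, and the Ode language itself is not shown.

```python
import numpy as np
from scipy.integrate import solve_ivp

def binding_rhs(t, y, kf=1.0, kr=0.1):
    """Right-hand side of the ODEs for A + B <-> C with mass-action kinetics."""
    a, b, c = y
    flux = kf * a * b - kr * c        # net forward flux of A + B -> C
    return [-flux, -flux, flux]

solution = solve_ivp(binding_rhs, t_span=(0.0, 50.0), y0=[1.0, 0.8, 0.0],
                     dense_output=True)
a_final, b_final, c_final = solution.y[:, -1]   # concentrations at t = 50
```

A DSL such as Ode would let a reaction like this be declared at the level of species and rates, with the simulation code generated by its compiler rather than written by hand.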