1. Network inference using independence criteria. Verbyla, Petras. January 2018.
Biological systems are driven by complex regulatory processes. Graphical models play a crucial role in the analysis and reconstruction of such processes. It is possible to derive regulatory models from high-throughput data, for example gene or protein expression data, using network inference algorithms. A wide variety of network inference algorithms have been designed and implemented. Our aim is to explore the possibilities of using statistical independence criteria for biological network inference. The contributions of our work fall into four parts. First, we provide a detailed overview of some of the most popular general independence criteria: distance covariance (dCov), kernel canonical correlation (KCC), kernel generalized variance (KGV) and the Hilbert-Schmidt Independence Criterion (HSIC). We provide easy-to-understand geometrical interpretations for these criteria, and we explicitly show the equivalence of dCov, KGV and HSIC. Second, we introduce a new criterion for measuring dependence based on the signal-to-noise ratio (SNRIC). SNRIC is significantly faster to compute than other popular independence criteria. SNRIC is an approximate criterion but becomes exact under many popular modelling assumptions, for example for data from an additive noise model. Third, we compare the performance of the independence criteria on biological experimental data within the framework of the PC algorithm. Since not all criteria are available in a version that allows for testing conditional independence, we propose and test an approach which relies on residuals and requires only an unconditional version of an independence criterion. Finally, we propose a novel method to infer networks with feedback loops. We use an MCMC sampler, which samples according to a loss function based on an independence criterion. This allows us to find networks under very general assumptions, such as non-linear relationships, non-Gaussian noise distributions and feedback loops.
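The HSIC mentioned in the abstract has a compact empirical form: with kernel matrices K and L computed on the two samples and the centering matrix H, the biased estimator is (1/n²)·trace(KHLH). The following is a minimal sketch (not code from the thesis) using numpy and Gaussian kernels; the function names and the fixed bandwidth are illustrative choices.

```python
import numpy as np

def gaussian_kernel(x, bandwidth=1.0):
    """Pairwise Gaussian (RBF) kernel matrix for a 1-D sample."""
    d = x[:, None] - x[None, :]
    return np.exp(-(d ** 2) / (2.0 * bandwidth ** 2))

def hsic(x, y, bandwidth=1.0):
    """Biased empirical HSIC estimator: (1/n^2) * trace(K H L H).

    Larger values indicate stronger dependence between x and y;
    the statistic is zero (in expectation, asymptotically) under
    independence for characteristic kernels such as the Gaussian.
    """
    n = len(x)
    K = gaussian_kernel(x, bandwidth)
    L = gaussian_kernel(y, bandwidth)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / n ** 2
```

In practice the statistic is compared against a permutation or gamma-approximation null distribution to obtain a p-value; the raw value alone is mainly useful for ranking dependence strength at a fixed sample size.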
2. Selection of Sufficient Adjustment Sets for Causal Inference: A Comparison of Algorithms and Evaluation Metrics for Structure Learning. Widenfalk, Agnes. January 2022.
Causal graphs are essential tools for finding sufficient adjustment sets in observational studies. Subject-matter experts can sometimes specify these graphs, but often the dependence structure of the variables, and thus the graph, is unknown even to them. In such cases, structure learning algorithms can be used to learn the graph. Early structure learning algorithms were implemented for either exclusively discrete or exclusively continuous variables. Recently, methods have been developed for structure learning on mixed data, including both continuous and discrete variables. In this thesis, three structure learning algorithms for mixed data are evaluated through a simulation study. The evaluation is based on graph recovery metrics and on the ability to find a sufficient adjustment set for the average treatment effect (ATE). Which evaluation metrics deserve the most attention depends on the intended purpose of the learned graph. It is also concluded that the pcalg+micd algorithm learns graphs such that a sufficient adjustment set for the ATE can be found in more than 99% of the cases. Moreover, at the largest sample size, the graphs learned by pcalg+micd are the most accurate relative to the true graph.
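Once a sufficient adjustment set Z has been identified from a causal graph, a standard way to estimate the ATE of a binary treatment is regression adjustment: fit an outcome model within each treatment arm, predict both potential outcomes for every unit, and average the difference. The sketch below is a generic illustration with ordinary least squares in numpy, not the estimator used in the thesis; the function name and the linear outcome model are assumptions for the example.

```python
import numpy as np

def ate_regression_adjustment(t, y, Z):
    """Estimate the ATE of binary treatment t on outcome y by
    regression adjustment on the covariate matrix Z.

    Fits a separate OLS outcome regression in each treatment arm,
    predicts both potential outcomes for all units, and averages
    their difference. Valid when Z is a sufficient adjustment set
    and the linear outcome model is adequate.
    """
    X = np.column_stack([np.ones(len(t)), Z])  # add intercept

    def fit_predict(mask):
        # Fit OLS on one treatment arm, predict for the full sample.
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        return X @ beta

    mu1 = fit_predict(t == 1)  # predicted outcomes under treatment
    mu0 = fit_predict(t == 0)  # predicted outcomes under control
    return float(np.mean(mu1 - mu0))
```

The same plug-in logic extends to flexible outcome models; the key input that the structure learning step supplies is the choice of Z, since adjusting for a non-sufficient set leaves confounding bias regardless of the outcome model.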