201

Robust inference of gene regulatory networks : System properties, variable selection, subnetworks, and design of experiments

Nordling, Torbjörn E. M. January 2013 (has links)
In this thesis, inference of biological networks from in vivo data generated by perturbation experiments is considered, i.e. deduction of the causal interactions that exist among the observed variables. Knowledge of such regulatory influences is essential in biology. A system property, interampatteness, is introduced that explains why the variation in existing gene expression data is concentrated in a few "characteristic modes" or "eigengenes", and why previously inferred models have a large number of false positive and false negative links. An interampatte system is characterized by strong INTERactions enabling simultaneous AMPlification and ATTEnuation of different signals, and we show that perturbation of individual state variables, e.g. genes, typically leads to ill-conditioned data with both characteristic and weak modes. The weak modes are typically dominated by measurement noise due to poor excitation, and their existence hampers network reconstruction. The excitation problem is solved by iterative design of correlated multi-gene perturbation experiments that counteract the intrinsic signal attenuation of the system: each new perturbation should be designed such that the expected response practically spans an additional dimension of the state space. The proposed design is numerically demonstrated for the Snf1 signalling pathway in S. cerevisiae. The impact of unperturbed and unobserved latent state variables, which exist in any real biological system, on the inferred network and on the required experimental set-up is analysed. Their existence implies that, in general, a subnetwork of pseudo-direct causal regulatory influences is inferred, accounting for all environmental effects. In principle, the number of latent states and of different paths between the nodes of the network can be estimated, but their identity cannot be determined unless they are observed or perturbed directly. Network inference is recognized as a variable/model selection problem and solved by considering all possible models of a specified class that can explain the data at a desired significance level, and by classifying only the links present in all of these models as existing. As shown, these links can be determined without any parameter estimation by reformulating the variable selection problem as a robust rank problem. Solution of the rank problem enables assignment of confidence to individual interactions, without resorting to any approximation or asymptotic results. This is demonstrated by reverse engineering of the synthetic IRMA gene regulatory network from published data. A previously unknown activation of transcription of SWI5 by CBF1 in the IRMA strain of S. cerevisiae is proven to exist, which serves to illustrate that even the accumulated knowledge of well-studied genes is incomplete.
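The design rule above, choosing the next perturbation so that the expected response spans an additional dimension of the state space, can be illustrated with a small sketch. The following Python snippet is a minimal illustration, not the thesis's algorithm, and the function name is hypothetical; it uses the singular value decomposition of the response matrix from previous experiments to find the most poorly excited direction:

```python
import numpy as np

def design_next_perturbation(Y):
    """Suggest a state-space direction for the next perturbation.

    Y is an (n_genes, n_experiments) matrix of responses to the
    perturbations performed so far.  Its left singular vectors span
    the excited part of the state space; the last one (smallest or
    zero singular value) is the most poorly excited direction.
    """
    U, s, _ = np.linalg.svd(Y, full_matrices=True)
    weak_direction = U[:, -1]   # direction barely excited so far
    return weak_direction, s

# Toy example: 5 genes, 3 experiments whose responses are strongly
# attenuated along some modes (hypothetical numbers).
rng = np.random.default_rng(0)
Y = rng.normal(size=(5, 3)) * np.array([10.0, 1.0, 0.1])
direction, singular_values = design_next_perturbation(Y)
print("singular values so far:", singular_values)
print("suggested response direction:", direction)
```

In practice the suggested direction is a desired response; mapping it back to an actual perturbation requires a model of the system, which the iterative design loop refines.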
202

Bayesian Methods in Gaussian Graphical Models

Mitsakakis, Nikolaos 31 August 2010 (has links)
This thesis contributes to the field of Gaussian Graphical Models by exploring, numerically and theoretically, various topics in Bayesian methods for Gaussian Graphical Models, and by providing a number of results whose further exploration points to promising future research directions. Gaussian Graphical Models are statistical methods for the investigation and representation of interdependencies between components of continuous random vectors. We adopt the popular $G$-Wishart conjugate prior $W_G(\delta,D)$ for the precision matrix. We propose an efficient sampling method for the $G$-Wishart distribution based on the Metropolis-Hastings algorithm and show its validity through a number of numerical experiments. We show that this method can easily be used to estimate the Deviance Information Criterion, providing a computationally inexpensive approach to model selection. In addition, we look at the marginal likelihood of a graphical model given a set of data, which is proportional to the ratio of the posterior to the prior normalizing constant. We explore methods for the estimation of this ratio, focusing primarily on the Monte Carlo simulation method of path sampling, and numerically examine the effect of the completion of the incomplete matrix $D^{\mathcal{V}}$, a hyperparameter of the $G$-Wishart distribution, on the estimation of the normalizing constant. We also derive a series of exact and approximate expressions for the Bayes factor between two graphs that differ by one edge. A new theoretical result regarding the limit of the normalizing constant multiplied by the hyperparameter $\delta$ is given, and its implications for the validity of an improper prior and of the subsequent Bayes factor are discussed.
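As a rough illustration of sampling a precision matrix under a G-Wishart prior, the sketch below implements a generic random-walk Metropolis-Hastings sampler over the free elements of the matrix. It assumes the standard unnormalised G-Wishart density and is not the efficient proposal developed in the thesis:

```python
import numpy as np

def log_gwishart_density(K, delta, D):
    """Unnormalised log density of W_G(delta, D):
    (delta-2)/2 * log|K| - tr(D K)/2, for K positive definite
    with zeros at the non-edges of the graph G."""
    sign, logdet = np.linalg.slogdet(K)
    if sign <= 0:
        return -np.inf          # not positive definite: zero density
    return 0.5 * (delta - 2) * logdet - 0.5 * np.trace(D @ K)

def mh_gwishart(edges, p, delta, D, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings over the free elements of K
    (the diagonal plus the edges of G); non-edges stay exactly zero."""
    rng = np.random.default_rng(seed)
    free = [(i, i) for i in range(p)] + list(edges)
    K = np.eye(p)
    log_p = log_gwishart_density(K, delta, D)
    samples = []
    for _ in range(n_iter):
        i, j = free[rng.integers(len(free))]
        K_new = K.copy()
        eps = rng.normal(scale=step)
        K_new[i, j] += eps
        if i != j:
            K_new[j, i] += eps          # keep K symmetric
        log_p_new = log_gwishart_density(K_new, delta, D)
        if np.log(rng.uniform()) < log_p_new - log_p:   # accept/reject
            K, log_p = K_new, log_p_new
        samples.append(K.copy())
    return samples

# Example: 3 nodes, graph with edges (0,1) and (1,2), D = identity.
samples = mh_gwishart(edges=[(0, 1), (1, 2)], p=3, delta=3.0,
                      D=np.eye(3), n_iter=2000)
print("posterior-mean precision matrix:\n", np.mean(samples, axis=0))
```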
204

Bayesian Model Selection for High-dimensional High-throughput Data

Joshi, Adarsh May 2010 (has links)
Bayesian methods are often criticized on the grounds of subjectivity. Furthermore, misspecified priors can have a deleterious effect on Bayesian inference. Noting that model selection is effectively a test of many hypotheses, Dr. Valen E. Johnson sought to eliminate the need for prior specification by computing Bayes factors from frequentist test statistics. In his pioneering work published in 2005, Dr. Johnson proposed using so-called local priors for computing Bayes factors from test statistics. Dr. Johnson and Dr. Jianhua Hu used Bayes factors for model selection in a linear model setting. In independent work, Dr. Johnson and another colleague, David Rossell, investigated two families of non-local priors for testing the regression parameter in a linear model setting. These non-local priors enable greater separation between the theories of the null and alternative hypotheses. In this dissertation, I extend model selection based on Bayes factors and use non-local priors to define Bayes factors based on test statistics. With these priors, I have been able to reduce the problem of prior specification to setting just one scaling parameter, which can easily be set, for example, on the basis of the frequentist operating characteristics of the corresponding Bayes factors. Furthermore, the loss of information from basing a Bayes factor on a test statistic is minimal. Along with Dr. Johnson and Dr. Hu, I used the Bayes factor based on the likelihood ratio statistic to develop a method for clustering gene expression data. This method has performed well on both simulated examples and real datasets; an outline of that work is also included in this dissertation. Further, I extend the clustering model to a subclass of the decomposable graphical model class, which is more appropriate for genotype data sets, such as single-nucleotide polymorphism (SNP) data. Efficient FORTRAN programming has enabled me to apply the methodology to hundreds of nodes. For problems that produce computationally harder probability landscapes, I propose a modification of the Markov chain Monte Carlo algorithm to extract information regarding the important network structures in the data. This modified algorithm performs well in inferring complex network structures. I use this method to develop a prediction model for disease based on SNP data; it performs well in cross-validation studies.
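The idea of deriving a Bayes factor from a frequentist test statistic can be illustrated with the standard Schwarz (BIC) approximation, sketched below. This is a generic illustration for intuition only, not Dr. Johnson's local- or non-local-prior construction:

```python
import math

def approx_bayes_factor_from_lrt(lrt_stat, k_extra, n):
    """Rough Bayes factor BF10 from a likelihood-ratio test statistic
    via the Schwarz (BIC) approximation:
        2 * log(BF10) ~ LRT - k_extra * log(n),
    where k_extra is the number of extra parameters under the
    alternative and n is the sample size."""
    return math.exp(0.5 * (lrt_stat - k_extra * math.log(n)))

# Example: LRT = 12.3 with one extra parameter and n = 200 observations.
bf10 = approx_bayes_factor_from_lrt(12.3, k_extra=1, n=200)
print(f"approximate BF10 = {bf10:.2f}")  # evidence for the alternative
```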
205

Model Selection and Uniqueness Analysis for Reservoir History Matching

Rafiee, Mohammad Mohsen 28 March 2011 (has links) (PDF)
“History matching” (model calibration, parameter identification) is an established method for determining representative reservoir properties such as permeability, porosity, relative permeability and fault transmissibility from a measured production history; however, the uniqueness of the selected model is always a challenge in successful history matching. Until now, the uniqueness of history matching results in practice could be assessed only through individual technical experience and/or by repeating the history matching with different reservoir models (different parameter sets as the starting guess). The present study uses the stochastic theory of Kullback & Leibler (K-L) and its further development by Akaike (AIC), for the first time in reservoir engineering, to address the uniqueness problem. In addition, based on the AIC principle and the principle of parsimony, a penalty term for the objective function (OF) has been formulated empirically from geoscientific and technical considerations. The resulting formulation (Penalized Objective Function, POF) has been developed for model selection in reservoir history matching and tested successfully on a North German gas field.
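The flavour of AIC-based model selection in history matching can be sketched as follows. The snippet compares two hypothetical calibrated models by a least-squares AIC; the thesis's POF adds further empirically formulated geoscientific penalty terms, which are not reproduced here:

```python
import numpy as np

def aic_least_squares(residuals, n_params):
    """AIC for a least-squares history match: n*log(RSS/n) + 2k.
    Lower is better; the 2k term penalises over-parameterised models,
    in the spirit of the principle of parsimony discussed above."""
    n = len(residuals)
    rss = float(np.sum(np.asarray(residuals) ** 2))
    return n * np.log(rss / n) + 2 * n_params

# Hypothetical example: two calibrated reservoir models with different
# numbers of adjustable parameters and their production-history misfits.
resid_simple  = np.array([0.9, -1.1, 0.8, -0.7, 1.0, -0.9])   # 3 params
resid_complex = np.array([0.5, -0.6, 0.4, -0.5, 0.6, -0.4])   # 12 params
print("AIC simple :", aic_least_squares(resid_simple, 3))
print("AIC complex:", aic_least_squares(resid_complex, 12))
```

On these invented numbers the simpler model wins despite its larger misfit, which is exactly the trade-off the penalty term encodes.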
206

Portfolio management using computational intelligence approaches : forecasting and optimising the stock returns and stock volatilities with fuzzy logic, neural network and evolutionary algorithms

Skolpadungket, Prisadarng January 2013 (has links)
Portfolio optimisation is subject to a number of constraints arising from practical matters and regulations. Closed-form mathematical solutions of portfolio optimisation problems usually cannot accommodate these constraints, and exhaustive search for the exact solution can take a prohibitive amount of computational time. Portfolio optimisation models are also usually impaired by estimation error, caused by our limited ability to predict the future accurately. A number of Multi-Objective Genetic Algorithms are proposed to solve the two-objective problem subject to cardinality constraints, floor constraints and round-lot constraints. Fuzzy logic is incorporated into the Vector Evaluated Genetic Algorithm (VEGA), but its solutions tend to cluster around a few points. The Strength Pareto Evolutionary Algorithm 2 (SPEA2) gives portfolio solutions that are evenly distributed along the efficient frontier, while MOGA is more time-efficient. An Evolutionary Artificial Neural Network (EANN) is proposed that automatically evolves the ANN's initial values and structures its hidden nodes and layers. The EANN gives better stock return forecasts than Ordinary Least Squares estimation, Back Propagation ANNs and Elman Recurrent ANNs. Adaptation algorithms for selecting a pair of forecasting models, based on fuzzy logic-like rules, are proposed to select the best models for a given economic scenario; their predictive performance is better than that of the comparison forecasting models. MOGA and SPEA2 are modified to include a third objective that handles model risk, and their performance is evaluated and tested. The results show that they perform better than the versions without the third objective.
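A minimal sketch of the ingredients such multi-objective portfolio GAs operate on, assuming the usual mean-variance objectives: a two-objective evaluation, a Pareto-dominance test, and a simple repair operator for the cardinality constraint. This is illustrative only, not the thesis's VEGA/SPEA2/MOGA implementations:

```python
import numpy as np

def portfolio_objectives(w, mu, cov):
    """Two objectives for weights w: expected return (to maximise)
    and variance (to minimise)."""
    return float(w @ mu), float(w @ cov @ w)

def dominates(a, b):
    """True if (return, risk) pair a Pareto-dominates b: no worse in
    both objectives and strictly better in at least one."""
    ret_a, risk_a = a
    ret_b, risk_b = b
    return (ret_a >= ret_b and risk_a <= risk_b
            and (ret_a > ret_b or risk_a < risk_b))

def enforce_cardinality(w, k):
    """Repair operator: keep only the k largest weights and
    renormalise, a simple way to satisfy a cardinality constraint
    inside a genetic algorithm."""
    w = np.where(np.argsort(np.argsort(-w)) < k, w, 0.0)
    s = w.sum()
    return w / s if s > 0 else w

# Hypothetical 4-asset universe.
rng = np.random.default_rng(1)
mu = np.array([0.08, 0.12, 0.10, 0.06])
cov = np.diag([0.04, 0.09, 0.06, 0.02])
w = enforce_cardinality(rng.dirichlet(np.ones(4)), k=3)
print("weights:", w, "objectives:", portfolio_objectives(w, mu, cov))
```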
207

Exploring the Boundaries of Gene Regulatory Network Inference

Tjärnberg, Andreas January 2015 (has links)
To understand how the components of a complex system like the biological cell interact and regulate each other, we need to collect data on how the components respond to system perturbations. Such data can then be used to solve the inverse problem of inferring a network that describes how the pieces influence each other. The work in this thesis deals with modelling the cell regulatory system, often represented as a network, with tools and concepts derived from systems biology. The first investigation focuses on network sparsity and the algorithmic biases introduced by penalised network inference procedures. Many contemporary network inference methods rely on a sparsity parameter, such as the L1 penalty term used in the LASSO, but a poor choice of this parameter can give highly incorrect network estimates. To avoid such poor choices, we devised a method to optimise the sparsity parameter so that it maximises the accuracy of the inferred network. We showed that the method is effective on in silico data sets with a reasonable level of informativeness, and demonstrated that accurate prediction of network sparsity is key to elucidating the correct network parameters. The second investigation focuses on how knowledge from association networks can be transferred to regulatory network inference procedures. The quality of expression data is often inadequate for reliable gene regulatory network inference, so we constructed an algorithm to incorporate prior knowledge and demonstrated that it increases the accuracy of network inference when data quality is low. The third investigation aimed to understand the influence of system and data properties on network inference accuracy. L1 regularisation methods commonly produce poor network estimates when the data used for inference are ill-conditioned, even when the signal-to-noise ratio is so high that all links in the network can be proven to exist at the given significance level. In this study we elucidated some general principles for the conditions under which strongly degraded accuracy is to be expected; this also allowed us to estimate expected accuracy from the conditions of simulated data, which was used to predict the performance of inference algorithms on biological data. Finally, we built the software package GeneSPIDER to solve problems encountered during the previous investigations. The package supports highly controllable network and data generation, as well as data analysis and exploration in the context of network inference. (At the time of the doctoral defence, Paper 4 was an unpublished manuscript.)
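The sparsity-parameter optimisation described above can be sketched with a toy in-silico benchmark: scan a grid of LASSO penalties, infer a network at each, and keep the penalty that maximises accuracy against the known gold standard. The snippet assumes a linearised steady-state model A Y = -P and uses F1 on link existence as the accuracy measure; it is a simplified illustration, not the thesis's method or the GeneSPIDER package:

```python
import numpy as np
from sklearn.linear_model import Lasso

def infer_network(Y, P, lam):
    """Infer a GRN A from steady-state perturbation data assuming the
    linearised model A @ Y = -P: one Lasso regression per gene."""
    n = Y.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
        model.fit(Y.T, -P[i])
        A[i] = model.coef_
    return A

def link_f1(A_est, A_true):
    """F1 score on link existence (nonzero entries)."""
    est, true = A_est != 0, A_true != 0
    tp = np.sum(est & true)
    prec = tp / max(est.sum(), 1)
    rec = tp / max(true.sum(), 1)
    return 2 * prec * rec / max(prec + rec, 1e-12)

# Toy in-silico benchmark with a known gold standard A_true.
rng = np.random.default_rng(2)
n = 10
A_true = np.diag(-np.ones(n)) + 0.5 * (rng.random((n, n)) < 0.1)
P = np.eye(n)                              # single-gene perturbations
Y = -np.linalg.solve(A_true, P)            # noise-free steady states
Y += 0.01 * rng.normal(size=Y.shape)       # measurement noise
best = max(((lam, link_f1(infer_network(Y, P, lam), A_true))
            for lam in np.logspace(-4, 0, 20)), key=lambda t: t[1])
print("best lambda, F1:", best)
```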
208

Analysis of price transmission and asymmetric adjustment using Bayesian econometric methodology

Acquah, Henry de-Graft 31 January 2008 (has links)
No description available.
209

Exact pointwise bootstrap distributions of ROC curves and cost curves

Gadoury, David January 2009 (has links)
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal
210

Biological effects of high energy radiation and ultra high dose rates

Zackrisson, Björn January 1991 (has links)
Recently a powerful electron accelerator, a 50 MeV race-track microtron, was taken into clinical use. This makes it possible to treat patients with higher x-ray and electron energies than before, and treatments can be performed where the entire fractional dose is delivered in a fraction of a second. The relative biological effectiveness (RBE) of high energy photons (up to 50 MV) was studied in vitro and in vivo, and the oxygen enhancement ratio (OER) of 50 MV photons and the RBE of 50 MeV electrons were investigated in vitro. Single-fraction experiments in vitro, using V-79 Chinese hamster fibroblasts, showed an RBE for 50 MV x-rays of approximately 1.1 at a surviving fraction of 0.01, with reference to the response to 4 MV x-rays. No significant difference in OER could be demonstrated. Fractionation experiments were carried out to establish the RBE at the clinically relevant dose level of 2 Gy; the RBE calculated for the 2 Gy/fraction experiments was 1.17. The RBEs for 20 MV x-rays and 50 MeV electrons were equal to one. To investigate the validity of these results, the jejunal crypt microcolony assay in mice was used to determine the RBE of 50 MV x-rays; in this case the RBE was estimated to be 1.06 at a crypt surviving fraction of 0.1. Photonuclear processes are proposed as one possible explanation for the higher RBE of 50 MV x-rays. Several studies of the biological response to ionizing radiation at high absorbed dose rates have been performed, often with conflicting results. To investigate whether a difference in effect between irradiation at high and at conventional dose rates could be verified, pulsed 50 MeV electrons from a clinical accelerator were used for experiments at ultra-high dose rates (mean dose rate: 3.8 x 10^ Gy/s) in comparison with conventional dose rates (mean dose rate: 9.6 x 10^ Gy/s). V-79 cells were irradiated in vitro under both oxic and anoxic conditions. No significant difference in relative biological effectiveness (RBE) or oxygen enhancement ratio (OER) was observed for ultra-high dose rates compared with conventional dose rates. A central issue in clinical radiobiological research is the prediction of responses to different radiation qualities. The choice of cell survival and dose-response model greatly influences the results; in this context the relationship between theory and model is emphasized, since the interpretation of experimental data generally depends on the model. Cell survival models are systematized with respect to their relation to radiobiological theories of cell kill. The growing knowledge of biological, physical and chemical mechanisms is reflected in the formulation of new models. This study shows that recent modelling has been oriented more towards the stochastic fluctuations connected with radiation energy deposition, which implies that the traditional cell survival models ought to be complemented by models of stochastic energy deposition processes at the intracellular level.
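The iso-effect RBE computations reported above can be illustrated with the linear-quadratic survival model, as in the sketch below. The LQ parameters used here are invented for illustration and are not the fitted values from these experiments:

```python
import math

def dose_for_survival(sf, alpha, beta):
    """Dose giving surviving fraction sf under the linear-quadratic
    model SF = exp(-(alpha*D + beta*D^2)): solve
    beta*D^2 + alpha*D + ln(sf) = 0 for the positive root."""
    return (-alpha + math.sqrt(alpha**2 - 4 * beta * math.log(sf))) / (2 * beta)

def rbe(sf, alpha_ref, beta_ref, alpha_test, beta_test):
    """RBE at iso-effect: dose of the reference radiation divided by
    dose of the test radiation for the same surviving fraction."""
    return (dose_for_survival(sf, alpha_ref, beta_ref)
            / dose_for_survival(sf, alpha_test, beta_test))

# Hypothetical LQ parameters (Gy^-1, Gy^-2), chosen so the test beam is
# slightly more effective than the reference, for illustration only.
print("RBE at SF=0.01:", round(rbe(0.01,
                                   alpha_ref=0.20, beta_ref=0.020,
                                   alpha_test=0.24, beta_test=0.022), 2))
```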
