1 
(Semi)Predictive Discretization During Model Selection / Steck, Harald; Jaakkola, Tommi S. / 25 February 2003
In this paper, we present an approach to discretizing multivariate continuous data while learning the structure of a graphical model. We derive the joint scoring function from the principle of predictive accuracy, which inherently ensures the optimal tradeoff between goodness of fit and model complexity (including the number of discretization levels). Using the so-called finest grid implied by the data, our scoring function depends only on the number of data points in the various discretization levels. Not only can it be computed efficiently, but it is also independent of the metric used in the continuous space. Our experiments with gene expression data show that discretization plays a crucial role regarding the resulting network structure.
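As an illustration of a score that depends only on the counts per discretization level, the sketch below bins a continuous variable and evaluates a Dirichlet-multinomial log marginal likelihood over those counts. The boundaries, the Dirichlet prior, and the score itself are illustrative stand-ins, not the paper's actual scoring function.

```python
import math
from collections import Counter

def level_counts(values, boundaries):
    """Assign each continuous value to a discretization level and count."""
    def level(v):
        return sum(b <= v for b in boundaries)  # index of the level containing v
    return Counter(level(v) for v in values)

def log_marginal_multinomial(counts, num_levels, alpha=1.0):
    """Dirichlet-multinomial log marginal likelihood of the level counts.

    Depends only on the number of points per level, mirroring the paper's
    observation that such scores are metric-independent; the paper's exact
    scoring function differs from this illustrative one."""
    n = sum(counts.values())
    score = math.lgamma(num_levels * alpha) - math.lgamma(num_levels * alpha + n)
    for k in range(num_levels):
        score += math.lgamma(alpha + counts.get(k, 0)) - math.lgamma(alpha)
    return score

data = [0.1, 0.4, 0.35, 0.8, 0.9, 0.05]
counts = level_counts(data, boundaries=[0.3, 0.7])   # 3 levels
score = log_marginal_multinomial(counts, num_levels=3)
```

Comparing `score` across candidate boundary sets (and numbers of levels) is the count-based model-selection step the abstract describes.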

2 
Learning Deep Generative Models / Salakhutdinov, Ruslan / 02 March 2010
Building intelligent systems that are capable of extracting high-level representations from high-dimensional sensory data lies at the core of solving many AI-related tasks, including object recognition, speech perception, and language understanding. Theoretical and biological arguments strongly suggest that building such systems requires models with deep architectures that involve many layers of nonlinear processing. The aim of the thesis is to demonstrate that deep generative models that contain many layers of latent variables and millions of parameters can be learned efficiently, and that the learned high-level feature representations can be successfully applied in a wide spectrum of application domains, including visual object recognition, information retrieval, and classification and regression tasks. In addition, similar methods can be used for nonlinear dimensionality reduction.
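A standard way to learn such stacks of latent variables is greedy layerwise pretraining of restricted Boltzmann machines. The following is a minimal sketch of one contrastive-divergence (CD-1) update for a binary RBM; the sizes, learning rate, and data are hypothetical, and this is not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM.

    v0: (batch, n_visible) binary data; W: (n_visible, n_hidden) weights;
    b, c: visible and hidden biases. Layerwise RBM pretraining is one way
    deep generative models are trained; this sketch is illustrative only."""
    ph0 = sigmoid(v0 @ W + c)                      # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                    # one-step reconstruction
    ph1 = sigmoid(pv1 @ W + c)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

n_vis, n_hid = 6, 4
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b, c = np.zeros(n_vis), np.zeros(n_hid)
v = rng.integers(0, 2, size=(8, n_vis)).astype(float)
W, b, c = cd1_update(v, W, b, c)
```

After training one RBM, its hidden activations become the visible data for the next layer, which is how the "many layers of latent variables" are built up one at a time.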

4 
Activity Recognition from Physiological Data using Conditional Random Fields / Chieu, Hai Leong; Lee, Wee Sun; Kaelbling, Leslie P.
We describe the application of conditional random fields (CRFs) to physiological data modeling for the application of activity recognition. We use the data provided by the Physiological Data Modeling Contest (PDMC), a workshop at ICML 2004. Data used in PDMC are sequential in nature: they consist of physiological sessions, and each session consists of minute-by-minute sensor readings. We show that linear-chain CRFs can effectively make use of the sequential information in the data and, with Expectation Maximization, can be trained on partially unlabeled sessions to improve performance. We also formulate a mixture CRF to make use of the identities of the human subjects to further improve performance. We propose that mixture CRFs can be used for transfer learning, where models can be trained on data from different domains. During testing, if the domain of the test data is known, it can be used to instantiate the mixture node; when it is unknown (or when it is a completely new domain), the marginal probabilities of the labels over all training domains can still be used effectively for prediction. / Singapore-MIT Alliance (SMA)
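The sequential information a linear-chain CRF exploits shows up directly in its inference: quantities like the partition function are computed by a forward recursion over time steps. A generic sketch follows (the emission and transition scores are assumed given; this is not the PDMC entry's feature set or code):

```python
import numpy as np

def crf_log_partition(emissions, transitions):
    """Log partition function of a linear-chain CRF via the forward recursion.

    emissions: (T, K) per-position label scores; transitions: (K, K)
    label-to-label scores. A generic sketch of linear-chain CRF inference."""
    alpha = emissions[0].copy()
    for t in range(1, len(emissions)):
        # log-sum-exp over the previous label, for each current label
        scores = alpha[:, None] + transitions + emissions[t][None, :]
        m = scores.max(axis=0)
        alpha = m + np.log(np.exp(scores - m).sum(axis=0))
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

logZ = crf_log_partition(np.zeros((3, 2)), np.zeros((2, 2)))
# for this all-zero (uniform) model, logZ = log(2**3), one term per labeling
```

The same recursion, run forwards and backwards, yields the per-position label marginals used both for prediction and for the E-step when training on partially unlabeled sessions.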

5 
Understanding brain functional connectivity using graphical models / January 2021
With the rapid development of precision medicine across almost all areas of medicine, the Research Domain Criteria (RDoC) project has been initiated to develop data-driven matrices toward precision medicine for mental disorders by integrating multi-level information including genomics, molecules, circuits, and behaviors. This thesis, under the guidance of the RDoC framework, aims to gain a more complete understanding of the role of oscillatory behavior and network connectivity in normal/abnormal brain functioning and cognitive development. Two specific topics are addressed: (1) understanding the complex mechanisms of mental disorders through multi-omics data; and (2) studying the development of functional connectivity (FC) from childhood to adulthood using multi-paradigm brain images. We intend to identify new and reliable biomarkers for precise diagnosis, which, through the comparison of normal and abnormal brains and the investigation of dynamic changes, can potentially provide an enormous impetus for drug discovery.
This thesis proposes several new analytic graphical models (directed and undirected) to assess brain functional connectivity (FC), each targeting a specific problem in biomedical applications: the psi-learning method to resolve the high dimensionality of voxel-level networks; the latent Gaussian copula model for mixed data distributions; the joint Bayesian estimation method to address heterogeneity in undirected graphical models; and psi-LiNGAM and BiLiNGAM for small sample sizes and heterogeneity in directed acyclic graphs, respectively. The proposed methods are validated through a series of simulation studies and large genomic and neuroimaging datasets, where they confirm results from previous studies and lead to new biological insights. In addition, we put extra effort into promoting reproducible research and make the proposed methods widely available to the scientific community by releasing free, open-source code. / Aiying Zhang
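As a baseline for what "undirected graphical model FC" means in practice, the sketch below estimates partial correlations from a ridge-regularized precision matrix; nonzero off-diagonal entries indicate conditional dependence between regions. The ridge value and data are hypothetical, and the thesis's psi-learning, copula, and Bayesian joint methods are considerably more sophisticated than this generic baseline.

```python
import numpy as np

def partial_correlations(X, ridge=0.1):
    """Estimate undirected functional connectivity as partial correlations
    derived from a ridge-regularized precision matrix (illustrative sketch)."""
    S = np.cov(X, rowvar=False)                         # sample covariance
    K = np.linalg.inv(S + ridge * np.eye(S.shape[0]))   # precision matrix
    d = np.sqrt(np.diag(K))
    P = -K / np.outer(d, d)                             # partial correlations
    np.fill_diagonal(P, 1.0)
    return P

# X: time series for 5 hypothetical brain regions (rows = time points)
X = np.random.default_rng(0).standard_normal((200, 5))
P = partial_correlations(X)
```

Thresholding or sparsifying `P` yields an undirected network; the thesis's methods replace this dense inverse with estimators that remain tractable at voxel scale and under heterogeneous, non-Gaussian data.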

6 
Inference of gene regulatory networks from non independently and identically distributed transcriptomic data / Charbonnier, Camille / 04 December 2012
This thesis investigates the inference of high-dimensional Gaussian graphical models from non identically and independently distributed transcriptomic data, with the objective of recovering gene regulatory networks. In the context of high-dimensional statistics, data heterogeneity paves the way to the definition of structured regularizers in order to improve the quality of the estimator. We first consider heterogeneity at the network level, building upon the assumption that biological networks are organized, which leads to the definition of a weighted l1 regularization. Modelling heterogeneity at the observation level, we provide a consistency analysis of a recent block-sparse regularizer called the cooperative-Lasso, designed to combine observations from distinct but close datasets. Finally, we address the crucial question of uncertainty, deriving homogeneity tests for high-dimensional linear regression.
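The weighted l1 idea can be made concrete through its proximal operator: entrywise soft-thresholding with entry-specific thresholds, so that edges assumed a priori more plausible (e.g. within an organized module) are penalized less. The weights and penalty level below are hypothetical; the full estimator in the thesis wraps a building block like this in a Gaussian graphical-model likelihood.

```python
import numpy as np

def weighted_soft_threshold(Theta, weights, lam):
    """Proximal operator of the weighted l1 penalty lam * sum_ij w_ij * |theta_ij|.

    Smaller weights w_ij shrink the corresponding entries less, encoding
    structural prior knowledge about the network (illustrative sketch)."""
    return np.sign(Theta) * np.maximum(np.abs(Theta) - lam * weights, 0.0)

Theta = np.array([[2.0, -0.5],
                  [1.0,  0.1]])
shrunk = weighted_soft_threshold(Theta, np.ones((2, 2)), lam=0.4)
```

With uniform weights this reduces to the ordinary lasso shrinkage; structured weights are what let heterogeneity improve, rather than degrade, the estimator.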

7 
Conditioning graphs: practical structures for inference in Bayesian networks / Grant, Kevin John / 16 January 2007
Probability is a useful tool for reasoning when faced with uncertainty. Bayesian networks offer a compact representation of a probabilistic problem, exploiting independence amongst variables that allows a factorization of the joint probability into much smaller local probability distributions.

The standard approach to probabilistic inference in Bayesian networks is to compile the graph into a jointree and perform computation over this secondary structure. While jointrees are among the most time-efficient methods of inference in Bayesian networks, they are not always appropriate for certain applications. The memory requirements of a jointree can be prohibitively large, and the algorithms for computing over jointrees are large and involved, making them difficult to port to other systems or to understand without Bayesian network expertise.

This thesis proposes a different method for probabilistic inference in Bayesian networks. We present a data structure called a conditioning graph, which is a runtime representation of Bayesian network inference. The structure mitigates many of the problems of jointree inference. For example, conditioning graphs require much less space to store and compute over. The algorithm for calculating probabilities from a conditioning graph is small and basic, making it portable to virtually any architecture. And the details of Bayesian network inference are compiled away during the construction of the conditioning graph, leaving an intuitive structure that is easy to understand and implement without any Bayesian network expertise.

In addition to the conditioning graph architecture, we present several improvements to the model that maintain its small and simple style while reducing the runtime required for computing over it. We present two heuristics for choosing variable orderings that result in shallower elimination trees, reducing the overall complexity of computing over conditioning graphs. We also demonstrate several compile-time and runtime extensions that can substantially speed up the algorithm while adding only a small space constant to the implementation. We further show how to cache intermediate values in conditioning graphs during probabilistic computation, which allows conditioning graphs to perform at the same speed as standard methods by avoiding duplicate computation, at the price of more memory; the methods presented conform to the basic style of the original algorithm, and we demonstrate a novel technique for reducing the amount of memory required for caching.

We demonstrate empirically the compactness, portability, and ease of use of conditioning graphs. We also show that these optimizations allow competitive behaviour with standard methods in many circumstances, while still preserving the small and simple style. Finally, we show that the memory required under caching can be quite modest, meaning that conditioning graphs can be competitive with standard methods in terms of time, using a fraction of the memory.
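The "small and basic" flavor of conditioning-based inference can be conveyed by a toy recursion: condition on each unobserved variable in turn, recurse, and sum out. The network, CPTs, and variable order below are hypothetical, and real conditioning graphs compile the network structure away and add caching; this sketch only illustrates the recursion style.

```python
def joint_prob(assignment, cpts, parents):
    """Probability of a complete assignment in a Bayesian network."""
    p = 1.0
    for var, cpt in cpts.items():
        key = tuple(assignment[u] for u in parents[var])
        p *= cpt[key][assignment[var]]
    return p

def prob_evidence(var_order, cpts, parents, evidence):
    """P(evidence) by recursive conditioning: fix one variable, recurse,
    and sum over its values when it is unobserved (illustrative sketch)."""
    def rec(i, assignment):
        if i == len(var_order):
            return joint_prob(assignment, cpts, parents)
        v = var_order[i]
        values = [evidence[v]] if v in evidence else [0, 1]
        return sum(rec(i + 1, {**assignment, v: val}) for val in values)
    return rec(0, {})

# Toy network A -> B with binary variables (hypothetical CPTs)
parents = {"A": (), "B": ("A",)}
cpts = {"A": {(): [0.4, 0.6]},
        "B": {(0,): [0.9, 0.1], (1,): [0.2, 0.8]}}
p_b1 = prob_evidence(["A", "B"], cpts, parents, {"B": 1})
```

Caching intermediate results of `rec`, as the thesis does for conditioning graphs, is what removes the duplicate computation this naive recursion performs.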

9 
Monte Carlo integration in discrete undirected probabilistic models / Hamze, Firas
This thesis contains the author's work in and contributions to the field of Monte Carlo sampling for undirected graphical models, a class of statistical models commonly used in machine learning, computer vision, and spatial statistics; the aim is to use the methodology and resultant samples to estimate integrals of functions of the variables in the model. Over the course of the study, three different but related methods were proposed and have appeared as research papers. The thesis consists of an introductory chapter discussing the models considered, the problems involved, and a general outline of Monte Carlo methods; the three subsequent chapters contain versions of the published work.

The second chapter, which appeared as (Hamze and de Freitas 2004), presents new MCMC algorithms for computing the posterior distributions and expectations of the unknown variables in undirected graphical models with regular structure. For demonstration purposes, we focus on Markov Random Fields (MRFs). By partitioning an MRF into non-overlapping trees, it is possible to compute the posterior distribution of a particular tree exactly by conditioning on the remaining trees. These exact solutions allow us to construct efficient blocked and Rao-Blackwellised MCMC algorithms. We show empirically that tree sampling is considerably more efficient than other partitioned sampling schemes and the naive Gibbs sampler, even in cases where loopy belief propagation fails to converge. We prove that tree sampling exhibits lower variance than the naive Gibbs sampler and other naive partitioning schemes using the theoretical measure of maximal correlation. We also construct new information-theoretic tools for comparing different MCMC schemes and show that, under these, tree sampling is more efficient.

Although the work discussed in Chapter 2 showed promise on the class of graphs to which it was suited, there are many cases where limiting the topology is quite a handicap. The work in Chapter 3 explores an alternative methodology for approximating functions of variables representable as undirected graphical models of arbitrary connectivity with pairwise potentials, as well as for estimating the notoriously difficult partition function of the graph. The algorithm, published in (Hamze and de Freitas 2005), fits into the framework of sequential Monte Carlo methods rather than the more widely used MCMC, and relies on constructing a sequence of intermediate distributions that get closer to the desired one. While the idea of using "tempered" proposals is known, we construct a novel sequence of target distributions where, rather than dropping a global temperature parameter, we sequentially couple individual pairs of variables that are, initially, sampled exactly from a spanning tree of the variables. We present experimental results on inference and estimation of the partition function for sparse and densely-connected graphs.

The final contribution of this thesis, presented in Chapter 4 and also in (Hamze and de Freitas 2007), emerged from empirical observations made while trying to optimize the sequence of edges to add to a graph so as to guide the population of samples to the high-probability regions of the model. Most important among these observations was that while several heuristic approaches, discussed in Chapter 1, certainly yielded improvements over edge sequences consisting of random choices, strategies based on forcing the particles to take large, biased random walks in the state-space resulted in more efficient exploration, particularly at low temperatures. This motivated a new Monte Carlo approach to treating complex discrete distributions. The algorithm is motivated by the N-Fold Way, an ingenious event-driven MCMC sampler that avoids rejection moves at any specific state; the N-Fold Way can, however, get "trapped" in cycles. We surmount this problem by modifying the sampling process to produce biased state-space paths of randomly chosen length. This alteration does introduce bias, but the bias is subsequently corrected with a carefully engineered importance sampler.
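The blocked move at the heart of tree sampling, drawing an entire tree exactly conditioned on the rest of the field, reduces on a chain to forward filtering and backward sampling. A self-contained sketch for a binary chain MRF follows; the potential form and parameters are illustrative, not the papers' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_chain(unary, coupling):
    """Draw one exact sample from a binary chain MRF with potentials
    exp(unary[t, x_t]) * exp(coupling * [x_t == x_{t+1}]) by forward
    filtering / backward sampling (illustrative sketch)."""
    T = len(unary)
    msg = np.zeros((T, 2))                    # forward log-messages
    for t in range(1, T):
        for x in (0, 1):
            prev = msg[t - 1] + unary[t - 1] + coupling * (np.arange(2) == x)
            m = prev.max()
            msg[t, x] = m + np.log(np.exp(prev - m).sum())
    x = np.zeros(T, dtype=int)
    logits = msg[-1] + unary[-1]              # marginal of the last variable
    p = np.exp(logits - logits.max()); p /= p.sum()
    x[-1] = rng.choice(2, p=p)
    for t in range(T - 2, -1, -1):            # sample backwards given x[t + 1]
        logits = msg[t] + unary[t] + coupling * (np.arange(2) == x[t + 1])
        p = np.exp(logits - logits.max()); p /= p.sum()
        x[t] = rng.choice(2, p=p)
    return x

sample = sample_chain(np.zeros((5, 2)), coupling=1.0)
```

Because every variable on the chain is resampled jointly and exactly, successive samples decorrelate far faster than single-site Gibbs updates, which is the source of the variance reduction the chapter proves.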

10 
Online Planning in Multiagent Expedition with Graphical Models / Hanshar, Franklin / 14 December 2011
This dissertation proposes a suite of novel approaches for solving multiagent decision and optimization problems based on the Collaborative Design Network (CDN), a framework for multiagent decision making.
The framework itself is distributed and decision-theoretic, and was originally proposed for multiagent component-centred design. This application is a novel use of the CDN and demonstrates the generality of the framework for general decision-theoretic planning.
First, the framework is applied to a multiagent decision problem outside of collaborative design called multiagent expedition (MAE), a testbed problem which abstracts many of the features of real-world multiagent decision-making problems. We formally introduce MAE and show it to be a subclass of decentralized partially observable Markov decision processes (Dec-POMDPs).
We apply the CDN to the online MAE planning problem. We demonstrate that the CDN can plan in MAE with conditional optimality given a set of basic assumptions on the structure and organization of the agent team. We introduce a set of knowledge representation aspects to achieve conditionally optimal planning. We experimentally verify our approach on a series of benchmark problems created for this dissertation to test the various aspects of our CDN solution.
We also investigate further methods for scalability and speedup in MAE. The concept of partial evaluation (PE) is introduced, based on the assumption that each agent action has a single intended effect, with all other effects considered unintended. This assumption is used to derive a bound for planning that partitions the set of joint plans into a set of fully evaluated and a set of partially evaluated plans. Plans which are partially evaluated can significantly speed up planning in the centralized case.
PE is also applied to the CDN, to both public decisions between agents and private decisions local to an agent. We demonstrate that applying PE to public decisions in the CDN results in either intractable communication or suboptimal planning. When applied to private decisions, we show PE can still be very effective in decreasing planning runtime.
