61.
Automated characterization of skin aging using in vivo confocal microscopy / Caractérisation automatique du vieillissement de la peau par microscopie confocale in-vivo. Robic, Julie, 20 June 2018.
In-vivo reflectance confocal microscopy (RCM) is a powerful tool for visualizing the skin layers at cellular resolution, and descriptors of skin aging have been identified in confocal images. Evaluating these descriptors, however, requires visual assessment of the images by experienced dermatologists. The objective of this thesis is the development of an innovative technology to automatically quantify skin aging from in-vivo RCM images. First, the quantification of the state of the epidermis is addressed. Then, the dermal-epidermal junction is segmented and its shape is characterized. The proposed measurements show significant differences between age groups and between levels of sun exposure. Finally, the proposed methods are validated through both clinical and cosmetic product efficacy studies.
62.
Approximate inference in graphical models. Hennig, Philipp, January 2011.
Probability theory provides a mathematically rigorous yet conceptually flexible calculus of uncertainty, allowing the construction of complex hierarchical models for real-world inference tasks. Unfortunately, exact inference in probabilistic models is often computationally expensive or even intractable. A close inspection in such situations often reveals that computational bottlenecks are confined to certain aspects of the model, which can be circumvented by approximations without having to sacrifice the model's interesting aspects. The conceptual framework of graphical models provides an elegant means of representing probabilistic models and deriving both exact and approximate inference algorithms in terms of local computations. This makes graphical models an ideal aid in the development of generalizable approximations. This thesis contains a brief introduction to approximate inference in graphical models (Chapter 2), followed by three extensive case studies in which approximate inference algorithms are developed for challenging applied inference problems. Chapter 3 derives the first probabilistic game tree search algorithm. Chapter 4 provides a novel expressive model for inference in psychometric questionnaires. Chapter 5 develops a model for the topics of large corpora of text documents, conditional on document metadata, with a focus on computational speed. In each case, graphical models help in two important ways: They first provide important structural insight into the problem; and then suggest practical approximations to the exact probabilistic solution.
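As a toy illustration of the "local computations" the abstract refers to, here is a hedged sketch (not from the thesis) of exact sum-product message passing on a three-variable chain, checked against brute-force enumeration. The chain structure, the number of states, and the random potentials are illustrative assumptions; approximate schemes such as loopy belief propagation reuse exactly these local message updates on graphs with cycles.

```python
import numpy as np

# A minimal sketch: sum-product message passing on a chain x1 - x2 - x3.
# The unary potentials phi and pairwise potentials psi12, psi23 are
# arbitrary illustrative values, not examples from the thesis.
rng = np.random.default_rng(0)
K = 3                                            # states per variable
phi = [rng.random(K) for _ in range(3)]          # unary potentials
psi12, psi23 = rng.random((K, K)), rng.random((K, K))  # pairwise potentials

# Forward messages (left to right) and backward messages (right to left).
m12 = psi12.T @ phi[0]                 # message from x1 into x2
m23 = psi23.T @ (phi[1] * m12)         # message from x2 into x3
m32 = psi23 @ phi[2]                   # message from x3 into x2
m21 = psi12 @ (phi[1] * m32)           # message from x2 into x1

def normalize(p):
    return p / p.sum()

marg1 = normalize(phi[0] * m21)
marg2 = normalize(phi[1] * m12 * m32)
marg3 = normalize(phi[2] * m23)

# Brute-force check: enumerate the joint and marginalize directly.
joint = np.einsum('i,j,k,ij,jk->ijk', phi[0], phi[1], phi[2], psi12, psi23)
assert np.allclose(marg2, normalize(joint.sum(axis=(0, 2))))
print(marg1, marg2, marg3)
```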
63.
Normal Factor Graphs. Al-Bashabsheh, Ali, January 2014.
This thesis introduces normal factor graphs under a new semantics, namely, the exterior function semantics. Initially, this work was motivated by two distinct lines of research. One line is ``holographic algorithms,'' a powerful approach introduced by Valiant for solving various counting problems in computer science; the other is ``normal graphs,'' an elegant framework proposed by Forney for representing codes defined on graphs. The nonrestrictive normality constraint enables the notion of holographic transformations for normal factor graphs. We establish a theorem, called the generalized Holant theorem, which relates a normal factor graph to its holographic transformation. We show that the generalized Holant theorem on one hand underlies the principle of holographic algorithms, and on the other reduces to a general duality theorem for normal factor graphs, a special case of which was first proved by Forney. As an application beyond Forney's duality, we show that the normal factor graph duality facilitates the approximation of the partition function for the two-dimensional nearest-neighbor Potts model. In the course of our development, we formalize a new semantics for normal factor graphs, which highlights various linear algebraic properties that enable the use of normal factor graphs as a linear algebraic tool. Indeed, we demonstrate the ability of normal factor graphs to encode several concepts from linear algebra and present normal factor graphs as a generalization of ``trace diagrams.'' We illustrate, with examples, the workings of this framework and how several identities from linear algebra may be obtained using a simple graphical manipulation procedure called ``vertex merging/splitting.'' We also discuss translation association schemes with the aid of normal factor graphs, which we believe provides a simple approach to understanding the subject. Further, under the new semantics, normal factor graphs provide a probabilistic model that unifies several graphical models such as factor graphs, convolutional factor graphs, and cumulative distribution networks.
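As a rough, hedged illustration of the exterior-function viewpoint (not code from the thesis): with local functions attached to vertices and variables attached to edges, the exterior function is obtained by summing the product of the local functions over the internal edge variables, which is an ordinary tensor contraction. The tiny Potts-style three-cycle, the coupling value, and the state count below are assumptions chosen only to make that contraction concrete.

```python
import numpy as np

# A small sketch of the exterior-function idea: each vertex carries a local
# function of the edge variables incident on it, and contracting the product
# of local functions over the internal edges yields the exterior function.
# Here the internal edges (a, b, c) form a 3-cycle of Potts-like pairwise
# factors; q and beta are illustrative assumptions.
q, beta = 3, 0.4                      # number of Potts states, coupling
W = np.exp(beta * np.eye(q))          # pairwise factor exp(beta * [x == y])

# Contraction over the three edge variables gives the partition function.
Z = np.einsum('ab,bc,ca->', W, W, W)

# Brute-force check by explicit enumeration of the edge variables.
Z_check = sum(W[a, b] * W[b, c] * W[c, a]
              for a in range(q) for b in range(q) for c in range(q))
assert np.isclose(Z, Z_check)
print(Z)
```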
64.
Bayesian inference in probabilistic graphical models. Rios, Felix Leopoldo, January 2017.
This thesis consists of four papers studying structure learning and Bayesian inference in probabilistic graphical models, for both undirected graphs and directed acyclic graphs (DAGs). Paper A presents a novel algorithm, called the Christmas tree algorithm (CTA), that incrementally constructs junction trees for decomposable graphs by adding one node at a time to the underlying graph. We prove that the CTA can generate, with positive probability, every junction tree on any given number of underlying nodes. Importantly for practical applications, we show that the transition probability of the CTA kernel has a computationally tractable expression. Applications of the CTA transition kernel are demonstrated in a sequential Monte Carlo (SMC) setting for counting the number of decomposable graphs. Paper B presents the SMC scheme in a more general setting specifically designed for approximating distributions over decomposable graphs; the CTA transition kernel from Paper A is incorporated as the proposal kernel. To improve on the traditional SMC algorithm, a particle Gibbs sampler with a systematic refreshment step is further proposed. A simulation study of approximate graph posterior inference within both log-linear and decomposable Gaussian graphical models shows the efficiency of the suggested methodology in both cases. Paper C explores the particle Gibbs sampling scheme of Paper B for approximate posterior computations in the Bayesian predictive classification framework. Specifically, Bayesian model averaging (BMA) based on posterior exploration of the class-specific models is incorporated into the predictive classifier to take full account of model uncertainty. For each class, the dependence structure underlying the observed features is represented by a distribution over the space of decomposable graphs. Because an explicit expression is intractable, averaging is performed over the approximate graph posterior. The proposed BMA classifier shows superior performance compared to the ordinary Bayesian predictive classifier that does not account for model uncertainty, as well as to a number of out-of-the-box classifiers. Paper D develops a novel prior distribution over DAGs with the ability to express prior knowledge in terms of graph layerings. In conjunction with the prior, a stochastic optimization algorithm based on the layering property of DAGs is developed for performing structure learning in Bayesian networks. A simulation study shows that the algorithm together with the prior has superior performance, compared with existing priors, when used for learning graphs with a clearly layered structure.
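To make the predictive classifier of Paper C concrete, here is a hedged sketch of Bayesian model averaging over a particle approximation of a class-specific graph posterior. The particle representation, the function names, and the toy predictive density are illustrative placeholders, not the thesis implementation.

```python
import numpy as np

# Minimal BMA sketch: for each class, the predictive density of a new feature
# vector is averaged over weighted graph "particles" approximating the
# class-specific posterior over decomposable graphs. predictive_density() is
# a user-supplied placeholder standing in for the graph-dependent predictive
# distribution.
def bma_class_posterior(x_new, particles_by_class, class_priors, predictive_density):
    """particles_by_class[c] is a list of (weight, graph) pairs; weights sum to 1."""
    scores = {}
    for c, particles in particles_by_class.items():
        # Model-averaged predictive density p(x | class c).
        p_x_given_c = sum(w * predictive_density(x_new, g) for w, g in particles)
        scores[c] = class_priors[c] * p_x_given_c
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

# Toy usage with a fake density in which each "graph" is just a mean vector.
if __name__ == "__main__":
    fake_density = lambda x, g: float(np.exp(-0.5 * np.sum((x - g) ** 2)))
    particles = {0: [(0.6, np.zeros(2)), (0.4, np.ones(2))],
                 1: [(1.0, 2 * np.ones(2))]}
    print(bma_class_posterior(np.array([0.2, 0.1]), particles,
                              {0: 0.5, 1: 0.5}, fake_density))
```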
65.
Probabilistic Models for Spatially Aggregated Data / 空間集約データのための確率モデル. Tanaka, Yusuke, 23 March 2020.
Kyoto University / 0048 / New-system doctoral program / Doctor of Informatics / 甲第22586号 / 情博第723号 / 新制||情||124 (University Library) / Department of Systems Science, Graduate School of Informatics, Kyoto University / Examination committee: Prof. 田中 利幸 (chair), Prof. 石井 信, Prof. 下平 英寿 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
66.
Biological network models for inferring mechanism of action, characterizing cellular phenotypes, and predicting drug response. Griffin, Paula Jean, 13 February 2016.
A primary challenge in the analysis of high-throughput biological data is the abundance of correlated variables. A small change to a gene's expression or a protein's binding availability can cause significant downstream effects. The existence of such chain reactions presents challenges in numerous areas of analysis. By leveraging knowledge of the network interactions that underlie this type of data, we can often enable better understanding of biological phenomena. This dissertation will examine network-based statistical approaches to the problems of mechanism-of-action inference, characterization of gene expression changes, and prediction of drug response.
First, we develop a method for multi-target perturbation detection in multi-omics biological data. We estimate a joint Gaussian graphical model across multiple data types using penalized regression, and filter for network effects. Next, we apply a set of likelihood ratio tests to identify the most likely site of the original perturbation. We also present a conditional testing procedure to allow for detection of secondary perturbations.
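For a concrete starting point, the following is a hedged sketch of penalized estimation of a sparse Gaussian graphical model using the graphical lasso in scikit-learn. The simulated data, the single data type, and the penalty value are illustrative assumptions; this does not reproduce the multi-omics, likelihood-ratio-test pipeline described above.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# A rough sketch of penalized Gaussian graphical model estimation (graphical
# lasso), standing in for the joint multi-omics estimator in the text.
rng = np.random.default_rng(0)
p, n = 10, 200

# Ground-truth sparse precision matrix with a simple chain structure.
Theta = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta), size=n)

# Fit the graphical lasso and read off the estimated edge set.
model = GraphicalLasso(alpha=0.1).fit(X)
est_edges = (np.abs(model.precision_) > 1e-3) & ~np.eye(p, dtype=bool)
print("estimated edges:", int(est_edges.sum()) // 2)
```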
Second, we address the problem of characterizing cellular phenotypes via Bayesian regression in the Gene Ontology (GO). In our model, we use the structure of the GO to assign changes in gene expression to functional groups and to model the covariance between these groups. In addition to describing changes in expression, we use these functional activity estimates to predict the expression of unobserved genes. We further determine when such predictions are likely to be inaccurate by identifying GO terms with poor agreement with gene-level estimates. In a case study, we identify GO terms relevant to changes in the growth rate of S. cerevisiae.
Lastly, we consider the prediction of drug sensitivity in cancer cell lines based on pathway-level activity estimates from ASSIGN, a Bayesian factor analysis model. We use penalized regression to predict response to various cancer treatments based on cancer subtype, pathway activity, and 2-way interactions thereof. We also present network representations of these interaction models and examine common patterns in their structure across treatments.
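A hedged sketch of the kind of penalized interaction model described in this last part: drug response is regressed on subtype indicators, pathway activity scores, and their two-way interactions with an elastic net. The simulated inputs and the use of scikit-learn are assumptions for illustration; the thesis builds on ASSIGN pathway activity estimates rather than random features.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import PolynomialFeatures

# Simulated stand-ins for subtype labels and pathway activity scores.
rng = np.random.default_rng(0)
n, n_subtypes, n_pathways = 300, 3, 5
subtype = rng.integers(0, n_subtypes, size=n)
subtype_dummies = np.eye(n_subtypes)[subtype]      # one-hot subtype encoding
pathway = rng.random((n, n_pathways))              # pathway activity in [0, 1]
X = np.hstack([subtype_dummies, pathway])

# Add all two-way interaction terms (including subtype x pathway).
X_int = PolynomialFeatures(degree=2, interaction_only=True,
                           include_bias=False).fit_transform(X)

# Simulated drug response with one main effect and one interaction effect.
y = (2.0 * pathway[:, 0]
     - 1.5 * subtype_dummies[:, 1] * pathway[:, 2]
     + 0.1 * rng.standard_normal(n))

model = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X_int, y)
print("nonzero coefficients:", int(np.sum(model.coef_ != 0)))
```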
67.
Implementing Bayesian Inference with Neural Networks. Sokoloski, Sacha, 26 July 2019.
Embodied agents, be they animals or robots, acquire information about the world through their senses. Embodied agents, however, do not simply lose this information once it passes by, but rather process and store it for future use. The most general theory of how an agent can combine stored knowledge with new observations is Bayesian inference. In this dissertation I present a theory of how embodied agents can learn to implement Bayesian inference with neural networks.
By neural network I mean both artificial and biological neural networks, and in my dissertation I address both kinds. On one hand, I develop theory for implementing Bayesian inference in deep generative models, and I show how to train multilayer perceptrons to compute approximate predictions for Bayesian filtering. On the other hand, I show that several models in computational neuroscience are special cases of the general theory that I develop in this dissertation, and I use this theory to model and explain several phenomena in neuroscience. The key contributions of this dissertation can be summarized as follows:
- I develop a class of graphical model called nth-order harmoniums. An nth-order harmonium is an n-tuple of random variables, where the conditional distribution of each variable given all the others is always an element of the same exponential family. I show that harmoniums have a recursive structure which allows them to be analyzed at coarser and finer levels of detail.
- I define a class of harmoniums called rectified harmoniums, which are constrained to have priors which are conjugate to their posteriors. As a consequence of this, rectified harmoniums afford efficient sampling and learning.
- I develop deep harmoniums: harmoniums that can be represented by hierarchical, undirected graphs. I develop the theory of rectification for deep harmoniums, and a novel algorithm for training deep generative models.
- I show how to implement a variety of optimal and near-optimal Bayes filters by combining the solution to Bayes' rule provided by rectified harmoniums, with predictions computed by a recurrent neural network. I then show how to train a neural network to implement Bayesian filtering when the transition and emission distributions are unknown.
- I show how some well-established models of neural activity are special cases of the theory I present in this dissertation, and how these models can be generalized with the theory of rectification.
- I show how the theory that I present can model several neural phenomena including proprioception and gain-field modulation of tuning curves.
- I introduce a library for the programming language Haskell, within which I have implemented all the simulations presented in this dissertation. This library uses concepts from Riemannian geometry to provide a rigorous and efficient environment for implementing complex numerical simulations.
I also use the results presented in this dissertation to argue for the fundamental role of neural computation in embodied cognition. I argue, in other words, that before we will be able to build truly intelligent robots, we will need to truly understand biological brains.
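As a concrete, hedged illustration of the simplest case described in the contributions above: a second-order harmonium whose conditionals are products of Bernoullis is the classic restricted Boltzmann machine, and its exponential-family structure is what makes block Gibbs sampling between the two layers straightforward. The sketch below uses NumPy with random, illustrative parameters; it is not the Haskell library developed in the dissertation.

```python
import numpy as np

# Block Gibbs sampling in a Bernoulli-Bernoulli harmonium (an RBM): each
# conditional p(z | x) and p(x | z) is a product of Bernoullis, so sampling
# alternates between the two layers. Weights and sizes are illustrative.
rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 4
W = 0.1 * rng.standard_normal((n_visible, n_hidden))
b, c = np.zeros(n_visible), np.zeros(n_hidden)

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def gibbs_chain(x0, n_steps=100):
    x = x0.copy()
    for _ in range(n_steps):
        p_h = sigmoid(c + x @ W)                     # p(z_j = 1 | x)
        h = (rng.random(n_hidden) < p_h).astype(float)
        p_x = sigmoid(b + h @ W.T)                   # p(x_i = 1 | z)
        x = (rng.random(n_visible) < p_x).astype(float)
    return x, h

x, h = gibbs_chain(rng.integers(0, 2, n_visible).astype(float))
print(x, h)
```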
68.
Bayesian structure learning in graphical models. Rios, Felix Leopoldo, January 2016.
This thesis consists of two papers studying structure learning in probabilistic graphical models, for both undirected graphs and directed acyclic graphs (DAGs). Paper A presents a novel family of graph-theoretical algorithms, called the junction tree expanders, that incrementally construct junction trees for decomposable graphs. Due to their Markovian property, the junction tree expanders are shown to be suitable as proposal kernels in a sequential Monte Carlo (SMC) sampling scheme for approximating a graph posterior distribution. A simulation study for the case of Gaussian decomposable graphical models shows the efficiency of the suggested unified approach for both structural and parametric Bayesian inference. Paper B develops a novel prior distribution over DAGs with the ability to express prior knowledge in terms of graph layerings. In conjunction with the prior, a search-and-score algorithm based on the layering property of DAGs is developed for performing structure learning in Bayesian networks. A simulation study shows that the search-and-score algorithm together with the prior has superior performance for learning graphs with a clearly layered structure, compared with other priors.
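The following is a generic, hedged skeleton of the sequential Monte Carlo scheme that Paper A's expander kernels plug into: particles are grown one step at a time by a proposal kernel, reweighted, and resampled when the effective sample size degenerates. The propose and target_ratio callables and the toy usage are placeholders; the actual junction tree expander kernel is defined in the thesis.

```python
import numpy as np

# Generic SMC skeleton. propose(particle) must return
# (new_particle, log_proposal_prob), with particle=None for initialization;
# target_ratio(new, old) must return log target_t(new) - log target_{t-1}(old).
def smc(n_particles, n_steps, propose, target_ratio, rng):
    particles, logw = [], np.zeros(n_particles)
    for _ in range(n_particles):
        p, _ = propose(None)                        # initialize each particle
        particles.append(p)
    for _ in range(n_steps - 1):
        for i in range(n_particles):
            new, log_q = propose(particles[i])
            logw[i] += target_ratio(new, particles[i]) - log_q
            particles[i] = new
        w = np.exp(logw - logw.max())
        w /= w.sum()
        if 1.0 / np.sum(w ** 2) < n_particles / 2:  # effective sample size check
            idx = rng.choice(n_particles, size=n_particles, p=w)
            particles = [particles[i] for i in idx]
            logw[:] = 0.0
    return particles, logw

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in: a "structure" is just a growing list of coin flips.
    def propose(particle):
        bit = int(rng.random() < 0.5)
        return ([bit] if particle is None else particle + [bit]), float(np.log(0.5))
    def target_ratio(new, old):
        return 0.2 * new[-1]                        # toy target favoring 1s
    parts, logw = smc(50, 6, propose, target_ratio, rng)
    print(len(parts), parts[0])
```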
69.
Dynamic Adaptive Robust Estimations for High-Dimensional Standardized Transelliptical Latent Networks. Wu, Tzu-Chun, 24 May 2022.
No description available.
70.
Sum-Product Network in the context of missing data / Sum-Product Nätverk i samband med saknade data. Clavier, Pierre, January 2020.
In recent years, interest in new deep learning methods has increased considerably due to their robustness and their applications in many fields. However, the lack of interpretability of these models and the lack of theoretical knowledge about them raise many issues. It is in this context that sum-product network models have emerged. From a mathematical point of view, SPNs can be described as directed acyclic graphs. In practice, they can be seen as deep mixture models and, as a consequence, can represent very rich collections of distributions. The objective of this master thesis was threefold. First, we formalized the concept of SPNs with proper mathematical notation, using directed acyclic graphs and Bayesian network theory. Then, we developed a new method for learning the structure of an SPN, based on K-means clustering and mutual information. Finally, we proposed a new method for estimating the parameters of a fixed SPN in the context of incomplete data. Our estimation method is based on maximum likelihood via the EM algorithm. / In recent years, interest in new deep learning methods has increased considerably because of their robustness and their applications in a wide range of fields. However, the lack of theoretical knowledge about these models and their hard-to-interpret nature raise many questions. It is in this context that the sum-product network emerged, occupying a somewhat ambivalent position between a linear neural network without activation functions and a probabilistic graph. In typical applications with real data we often encounter incomplete, censored, or truncated data, yet learning these graphs from such data is still lacking. The aim of this thesis is to study some fundamental properties of sum-product networks and to extend their learning and training to incomplete data. Likelihood estimation with EM algorithms is used to extend the learning of these graphs to incomplete data.
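To illustrate why SPNs pair naturally with missing data (a sketch under assumed toy parameters, not the structure or code produced in the thesis): in a valid SPN, marginalizing an unobserved variable amounts to setting its leaf values to 1, so evidence queries with missing entries cost no more than complete ones.

```python
import numpy as np

# A tiny two-variable SPN: a sum node with weights 0.3 / 0.7 over two product
# nodes of Bernoulli leaves. Parameters are illustrative assumptions.
def bernoulli_leaf(p, value):
    """Leaf likelihood; value=None means the variable is missing (marginalized)."""
    if value is None:
        return 1.0
    return p if value == 1 else 1.0 - p

def spn(x1, x2):
    comp1 = bernoulli_leaf(0.9, x1) * bernoulli_leaf(0.8, x2)
    comp2 = bernoulli_leaf(0.2, x1) * bernoulli_leaf(0.3, x2)
    return 0.3 * comp1 + 0.7 * comp2

# Full evidence, and a query with X2 missing.
print(spn(1, 1))                    # joint probability P(X1=1, X2=1)
print(spn(1, None))                 # marginal P(X1=1), with X2 summed out

# Check: the marginal matches summing the joint over X2.
assert np.isclose(spn(1, None), spn(1, 0) + spn(1, 1))
```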