About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Copula Modelling of High-Dimensional Longitudinal Binary Response Data / Copula-modellering av högdimensionell longitudinell binärresponsdata

Henningsson, Nils January 2022 (has links)
This thesis treats the modelling of a high-dimensional data set of longitudinal binary responses. The data consist of default indicators for nations around the world, together with explanatory variables such as exposure to underlying assets. The response used for the modelling is an aggregate that combines several of the default indicators into one. The modelling takes a portfolio perspective and seeks both to find underlying correlations between the nations in the data set and to examine the extreme values produced by a portfolio with assets in those nations. The approach is copula-based: Gaussian copulas are used to formulate several models mathematically, whose parameters are then optimized to best fit the data. Models A and B are optimized with standard stochastic gradient ascent on the likelihood function, while model C uses variational inference and stochastic gradient ascent on the evidence lower bound. Using the Gaussian copulas obtained from the optimization, a portfolio simulation is then carried out to examine the extreme values. The results show low correlations in models A and B, while model C, with its additional regional correlations, shows slightly higher correlations in three of the subgroups. The portfolio simulations show similar tail behaviour in all three models; however, model C produces more extreme risk-measure outcomes in the form of higher VaR and ES.
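As a concrete illustration of the portfolio simulation step described above, the sketch below draws correlated defaults from a Gaussian copula and reads empirical VaR and ES off the simulated loss distribution. The correlation matrix, default probabilities, and exposures are illustrative assumptions, not the fitted parameters from the thesis.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative inputs (not the fitted model): three "nations" with
# marginal default probabilities, exposures, and a common correlation.
p_default = np.array([0.02, 0.05, 0.03])       # marginal PD per nation
exposure = np.array([100.0, 60.0, 80.0])       # loss given default
corr = np.full((3, 3), 0.3) + 0.7 * np.eye(3)  # Gaussian copula correlation

# Sample latent Gaussian factors with the copula correlation.
n_sim = 100_000
L = np.linalg.cholesky(corr)
z = rng.standard_normal((n_sim, 3)) @ L.T

# A nation defaults when its latent variable falls below the PD quantile.
defaults = z < norm.ppf(p_default)
losses = defaults.astype(float) @ exposure

# Empirical 99% Value-at-Risk and Expected Shortfall of the portfolio loss.
var_99 = np.quantile(losses, 0.99)
es_99 = losses[losses >= var_99].mean()
print(f"VaR 99%: {var_99:.1f}, ES 99%: {es_99:.1f}")
```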
2

Variational inference for Gaussian-jump processes with application in gene regulation

Ocone, Andrea January 2013 (has links)
In the last decades, the explosion of data from quantitative techniques has revolutionised our understanding of biological processes. In this scenario, advanced statistical methods and algorithms are becoming fundamental to deciphering the dynamics of biochemical mechanisms such as those involved in the regulation of gene expression. Here we develop mechanistic models and approximate inference techniques to reverse engineer the dynamics of gene regulation from mRNA and/or protein time series data. We start from an existing variational framework for statistical inference in transcriptional networks. The framework is based on a continuous-time description of the mRNA dynamics in terms of stochastic differential equations, which are governed by latent switching variables representing the on/off activity of regulating transcription factors. The main contributions of this work are the following. We speed up the variational inference algorithm by developing a method to compute an approximate posterior distribution over the latent variables using a constrained optimisation algorithm. In addition to computational benefits, this method enables the extension to statistical inference in networks with a combinatorial model of regulation. A limitation of the original framework is that inference is possible only in transcriptional networks with a single-layer architecture (where single transcription factors, or pairs of them, directly regulate an arbitrary number of target genes). The second main contribution of this work is the extension of the inference framework to hierarchical structures, such as the feed-forward loop. In the final contribution we define a general structure for transcription-translation networks. This work is important because it provides a general statistical framework to model complex dynamics in gene regulatory networks. The framework is modular and scalable to realistically large systems with general architecture, thus representing a valuable alternative to traditional differential equation models. All models are embedded in a Bayesian framework; inference is performed using a variational approach and compared to exact inference where possible. We apply the models to the study of different biological systems, from metabolism in E. coli to the circadian clock in the picoalga O. tauri.
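A minimal forward simulation of the kind of model the abstract describes: an mRNA concentration following a stochastic differential equation whose production is gated by a latent on/off switching (telegraph) process. All rates, noise levels, and the time grid are illustrative assumptions; the thesis's inference machinery is not reproduced here, only the generative model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters for a single target gene (not from the thesis).
k_on, k_off = 0.5, 0.3      # switching rates of the latent TF activity mu(t)
A, b, lam = 2.0, 0.1, 0.8   # production when on, basal rate, degradation
sigma = 0.05                # diffusion coefficient of the mRNA SDE
dt, T = 0.01, 20.0
n = int(T / dt)

mu = np.zeros(n, dtype=int)  # latent on/off switching process
x = np.zeros(n)              # mRNA concentration
for t in range(1, n):
    # Telegraph process: flip state with probability rate * dt.
    flip_rate = k_on if mu[t - 1] == 0 else k_off
    mu[t] = 1 - mu[t - 1] if rng.random() < flip_rate * dt else mu[t - 1]
    # Euler-Maruyama step for dx = (A*mu + b - lam*x) dt + sigma dW.
    drift = A * mu[t] + b - lam * x[t - 1]
    x[t] = x[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

print(f"mean mRNA level: {x.mean():.2f}, fraction of time on: {mu.mean():.2f}")
```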
3

Scalable Gaussian process inference using variational methods

Matthews, Alexander Graeme de Garis January 2017 (has links)
Gaussian processes can be used as priors on functions. The need for a flexible, principled, probabilistic model of functional relations is common in practice. Consequently, such an approach is demonstrably useful in a large variety of applications. Two challenges of Gaussian process modelling are often encountered: the adverse scaling with the number of data points, and the lack of closed-form posteriors when the likelihood is non-Gaussian. In this thesis, we study variational inference as a framework for meeting these challenges. An introductory chapter motivates the use of stochastic processes as priors, with a particular focus on Gaussian process modelling. A section on variational inference reviews the general definition of Kullback-Leibler divergence. The concept of prior conditional matching that is used throughout the thesis is contrasted with classical approaches to obtaining tractable variational approximating families. Various theoretical issues arising from the application of variational inference to the infinite-dimensional Gaussian process setting are settled decisively. From this theory we are able to give a new argument for existing approaches to variational regression that settles the debate about their applicability. This view on these methods justifies the principled extensions found in the rest of the work. The case of scalable Gaussian process classification is studied, both for its own merits and as a case study for non-Gaussian likelihoods in general. Using the resulting algorithms we find credible results on datasets of a scale and complexity that were not tractable before this work. An extension to include Bayesian priors on model hyperparameters is studied alongside a new inference method that combines the benefits of variational sparsity and MCMC methods. The utility of such an approach is shown on a variety of example modelling tasks. We describe GPflow, a new Gaussian process software library that uses TensorFlow. Implementations of the variational algorithms discussed in the rest of the thesis are included as part of the software. We discuss the benefits of GPflow when compared to other similar software. Increased computational speed is demonstrated in relevant, timed, experimental comparisons.
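A brief sketch of the scalable variational GP classification workflow the thesis develops, written against a GPflow 2.x-style API. The exact class and argument names reflect the current library rather than the thesis itself, and the toy data and inducing-point locations are invented for illustration.

```python
import numpy as np
import gpflow

rng = np.random.default_rng(2)

# Toy 1-D binary classification data (illustrative, not from the thesis).
X = rng.uniform(-3.0, 3.0, size=(200, 1))
Y = (np.sin(2.0 * X) + 0.3 * rng.standard_normal(X.shape) > 0).astype(float)

# Sparse variational GP: 20 inducing points summarise the 200 observations.
Z = np.linspace(-3.0, 3.0, 20).reshape(-1, 1)
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Bernoulli(),
    inducing_variable=Z,
)

# Maximise the evidence lower bound over kernel, likelihood, and
# variational parameters (L-BFGS here; minibatch Adam scales further).
gpflow.optimizers.Scipy().minimize(
    model.training_loss_closure((X, Y)), model.trainable_variables
)

# Predictive class probabilities at test inputs.
Xtest = np.linspace(-3.0, 3.0, 5).reshape(-1, 1)
prob, _ = model.predict_y(Xtest)
print(prob.numpy().ravel())
```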
4

Infinite-word topic models for digital media

Waters, Austin Severn 02 July 2014 (has links)
Digital media collections hold an unprecedented source of knowledge and data about the world. Yet, even at current scales, the data exceed by many orders of magnitude the amount a single user could browse through in an entire lifetime. Making use of such data requires computational tools that can index, search over, and organize media documents in ways that are meaningful to human users, based on the meaning of their content. This dissertation develops an automated approach to analyzing digital media content based on topic models. Its primary contribution, the Infinite-Word Topic Model (IWTM), helps extend topic modeling to digital media domains by removing model assumptions that do not make sense for them, in particular the assumption that documents are composed of discrete, mutually exclusive words from a fixed-size vocabulary. While conventional topic models like Latent Dirichlet Allocation (LDA) require that media documents be converted into bags of words, IWTM incorporates clustering into its probabilistic model and treats the vocabulary size as a random quantity to be inferred from the data. Among its other benefits, IWTM achieves better performance than LDA while automating the selection of the vocabulary size. This dissertation contributes fast, scalable variational inference methods for IWTM that allow the model to be applied to large datasets. Furthermore, it introduces a new method, Incremental Variational Inference (IVI), for training IWTM and other Bayesian non-parametric models efficiently on growing datasets. IVI allows such models to grow in complexity as the dataset grows, as their priors state that they should. Finally, building on IVI, an active learning method for topic models is developed that intelligently samples new data, resulting in models that train faster, achieve higher performance, and use smaller amounts of labeled data.
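For contrast with IWTM, the bag-of-words baseline it is compared against (LDA) can be fitted in a few lines; scikit-learn's implementation uses variational inference of the kind discussed in the dissertation. The tiny corpus below is invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# A toy corpus standing in for "media documents" (illustrative only).
docs = [
    "guitar drums concert live music festival",
    "piano orchestra symphony concert music",
    "goal match football league score",
    "tennis match score tournament player",
]

# Conventional LDA: documents become bags of words over a fixed vocabulary,
# exactly the assumption that IWTM removes.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, learning_method="batch",
                                random_state=0)
doc_topics = lda.fit_transform(counts)  # variational inference under the hood
print(doc_topics.round(2))
```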
5

Decoding Neural Signals Associated to Cytokine Activity / Identifiering av Nervsignaler Associerade Till Cytokin Aktivitet

Andersson, Gabriel January 2021 (has links)
The Vagus nerve has been shown to play an important role in inflammatory diseases, regulating the production of proteins that mediate inflammation. Two important such proteins are the pro-inflammatory cytokines TNF and IL-1β. This thesis makes use of Vagus nerve recordings from mice that are subsequently injected with TNF and IL-1β, with the aim of determining whether cytokine-specific information can be extracted. To this end, a semi-supervised learning approach is applied, in which the observed waveform data are modelled using a conditional probability distribution. The conditioning is based on an estimate of how often each observed waveform occurs, and local maxima of the conditional distribution are interpreted as candidate waveforms that may encode cytokine information. The methodology yields varying but promising results: the occurrence of several candidate waveforms increases substantially after exposure to cytokine. Difficulties in obtaining consistent results are discussed, as well as directions for future work.
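A loose sketch of the general idea, not the thesis's actual pipeline: summarise each detected waveform with a low-dimensional feature vector, estimate how often similar waveforms occur with a kernel density estimate, and compare how frequently the highest-density candidates appear before and after injection. The features, bandwidth, thresholds, and stand-in data are all assumptions made for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(3)

# Stand-in data: 500 detected waveforms of 40 samples each (illustrative).
waveforms = rng.standard_normal((500, 40))
inject_idx = 250  # waveforms after this index occur post-injection

# Low-dimensional summary of waveform shape.
feats = PCA(n_components=2).fit_transform(waveforms)

# Density over waveform features, as a proxy for how often each occurs.
kde = KernelDensity(bandwidth=0.5).fit(feats)

# Crude candidate selection: keep the highest-density waveforms.
log_dens = kde.score_samples(feats)
candidates = np.argsort(log_dens)[-20:]

# Compare how often candidate-like waveforms occur before vs. after injection.
pre = np.sum(candidates < inject_idx)
post = np.sum(candidates >= inject_idx)
print(f"candidate waveforms pre-injection: {pre}, post-injection: {post}")
```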
6

Bayesian Learning with Dependency Structures via Latent Factors, Mixtures, and Copulas

Han, Shaobo January 2016 (has links)
Bayesian methods offer a flexible and convenient probabilistic learning framework to extract interpretable knowledge from complex and structured data. Such methods can characterize dependencies among multiple levels of hidden variables and share statistical strength across heterogeneous sources. In the first part of this dissertation, we develop two dependent variational inference methods for full posterior approximation in non-conjugate Bayesian models through hierarchical mixture- and copula-based variational proposals, respectively. The proposed methods move beyond the widely used factorized approximation to the posterior and provide generic applicability to a broad class of probabilistic models with minimal model-specific derivations. In the second part of this dissertation, we design probabilistic graphical models to accommodate multimodal data, describe dynamical behaviors and account for task heterogeneity. In particular, the sparse latent factor model is able to reveal common low-dimensional structures from high-dimensional data. We demonstrate the effectiveness of the proposed statistical learning methods on both synthetic and real-world data.
7

Adapting deep neural networks as models of human visual perception

McClure, Patrick January 2018 (has links)
Deep neural networks (DNNs) have recently been used to solve complex perceptual and decision tasks. In particular, convolutional neural networks (CNNs) have been extremely successful for visual perception. In addition to performing well on the trained object recognition task, these CNNs also model brain data throughout the visual hierarchy better than previous models. However, these DNNs are still far from completely explaining visual perception in the human brain. In this thesis, we investigated two methods with the goal of improving DNNs' capabilities to model human visual perception: (1) deep representational distance learning (RDL), a method for driving representational spaces in deep nets into alignment with other (e.g. brain) representational spaces, and (2) variational DNNs that use sampling to perform approximate Bayesian inference. In the first investigation, RDL successfully transferred information from a teacher model to a student DNN. This was achieved by driving the student DNN's representational distance matrix (RDM), which characterises the representational geometry, into alignment with that of the teacher. This led to a significant increase in test accuracy on machine learning benchmarks. In the future, we plan to use this method to simultaneously train DNNs to perform complex tasks and to predict neural data. In the second investigation, we showed that sampling during learning and inference using simple Bernoulli- and Gaussian-based noise improved a CNN's representation of its own uncertainty for object recognition. We also found that sampling during learning and inference with Gaussian noise improved how well CNNs predict human behavioural data for image classification. While these methods alone do not fully explain human vision, they allow for training CNNs that better model several features of human visual perception.
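The Bernoulli-based sampling mentioned above is closely related to Monte Carlo dropout; the sketch below (network size, dropout rate, and input are illustrative assumptions) shows how keeping the noise active at test time turns a point prediction into an approximate predictive distribution with an uncertainty summary.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small classifier with dropout (architecture is illustrative).
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 3),
)

x = torch.randn(1, 10)  # one stand-in input

# Keep dropout active at test time and draw T stochastic forward passes.
model.train()  # leaves dropout in sampling mode
with torch.no_grad():
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(100)]
    ).squeeze(1)

mean_prob = probs.mean(dim=0)                   # approximate predictive distribution
entropy = -(mean_prob * mean_prob.log()).sum()  # uncertainty summary
print(mean_prob, float(entropy))
```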
8

Statistical Methods for Characterizing Genomic Heterogeneity in Mixed Samples

Zhang, Fan 12 December 2016 (has links)
"Recently, sequencing technologies have generated massive and heterogeneous data sets. However, interpretation of these data sets is a major barrier to understand genomic heterogeneity in complex diseases. In this dissertation, we develop a Bayesian statistical method for single nucleotide level analysis and a global optimization method for gene expression level analysis to characterize genomic heterogeneity in mixed samples. The detection of rare single nucleotide variants (SNVs) is important for understanding genetic heterogeneity using next-generation sequencing (NGS) data. Various computational algorithms have been proposed to detect variants at the single nucleotide level in mixed samples. Yet, the noise inherent in the biological processes involved in NGS technology necessitates the development of statistically accurate methods to identify true rare variants. At the single nucleotide level, we propose a Bayesian probabilistic model and a variational expectation maximization (EM) algorithm to estimate non-reference allele frequency (NRAF) and identify SNVs in heterogeneous cell populations. We demonstrate that our variational EM algorithm has comparable sensitivity and specificity compared with a Markov Chain Monte Carlo (MCMC) sampling inference algorithm, and is more computationally efficient on tests of relatively low coverage (27x and 298x) data. Furthermore, we show that our model with a variational EM inference algorithm has higher specificity than many state-of-the-art algorithms. In an analysis of a directed evolution longitudinal yeast data set, we are able to identify a time-series trend in non-reference allele frequency and detect novel variants that have not yet been reported. Our model also detects the emergence of a beneficial variant earlier than was previously shown, and a pair of concomitant variants. Characterization of heterogeneity in gene expression data is a critical challenge for personalized treatment and drug resistance due to intra-tumor heterogeneity. Mixed membership factorization has become popular for analyzing data sets that have within-sample heterogeneity. In recent years, several algorithms have been developed for mixed membership matrix factorization, but they only guarantee estimates from a local optimum. At the gene expression level, we derive a global optimization (GOP) algorithm that provides a guaranteed epsilon-global optimum for a sparse mixed membership matrix factorization problem for molecular subtype classification. We test the algorithm on simulated data and find the algorithm always bounds the global optimum across random initializations and explores multiple modes efficiently. The GOP algorithm is well-suited for parallel computations in the key optimization steps. "
9

Sparse Gaussian process approximations and applications

van der Wilk, Mark January 2019 (has links)
Many tasks in machine learning require learning some kind of input-output relation (function), for example, recognising handwritten digits (from image to number) or learning the motion behaviour of a dynamical system like a pendulum (from positions and velocities now to future positions and velocities). We consider this problem using the Bayesian framework, where we use probability distributions to represent the state of uncertainty that a learning agent is in. In particular, we will investigate methods which use Gaussian processes to represent distributions over functions. Gaussian process models require approximations in order to be practically useful. This thesis focuses on understanding existing approximations and investigating new ones tailored to specific applications. We advance the understanding of existing techniques first through a thorough review. We propose desiderata for non-parametric basis function model approximations, which we use to assess the existing approximations. Following this, we perform an in-depth empirical investigation of two popular approximations (VFE and FITC). Based on the insights gained, we propose a new inter-domain Gaussian process approximation, which can be used to increase the sparsity of the approximation, in comparison to regular inducing point approximations. This allows GP models to be stored and communicated more compactly. Next, we show that inter-domain approximations can also allow the use of models which would otherwise be impractical, as opposed to improving existing approximations. We introduce an inter-domain approximation for the Convolutional Gaussian process - a model that makes Gaussian processes suitable to image inputs, and which has strong relations to convolutional neural networks. This same technique is valuable for approximating Gaussian processes with more general invariance properties. Finally, we revisit the derivation of the Gaussian process State Space Model, and discuss some subtleties relating to their approximation. We hope that this thesis illustrates some benefits of non-parametric models and their approximation in a non-parametric fashion, and that it provides models and approximations that prove to be useful for the development of more complex and performant models in the future.
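The variational free energy (VFE) approximation mentioned above replaces the exact log marginal likelihood with the bound log N(y | 0, Qnn + sigma^2 I) - tr(Knn - Qnn) / (2 sigma^2), where Qnn = Knm Kmm^{-1} Kmn. The sketch below evaluates that bound on toy data; the kernel, hyperparameters, and inducing locations are illustrative assumptions rather than anything taken from the thesis.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between 1-D input vectors a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

# Toy 1-D regression data and inducing inputs (illustrative choices).
X = rng.uniform(-3.0, 3.0, size=100)
y = np.sin(X) + 0.1 * rng.standard_normal(100)
Z = np.linspace(-3.0, 3.0, 10)
noise = 0.1**2

Knn_diag = np.full(100, 1.0)         # k(x, x) for the unit-variance RBF
Kmm = rbf(Z, Z) + 1e-8 * np.eye(10)  # jitter for numerical stability
Knm = rbf(X, Z)
Qnn = Knm @ np.linalg.solve(Kmm, Knm.T)

# Titsias-style variational lower bound on the log marginal likelihood:
# log N(y | 0, Qnn + noise*I) - trace(Knn - Qnn) / (2 * noise)
fit_term = multivariate_normal.logpdf(y, mean=np.zeros(100),
                                      cov=Qnn + noise * np.eye(100))
trace_term = (Knn_diag.sum() - np.trace(Qnn)) / (2.0 * noise)
elbo = fit_term - trace_term
print(f"VFE bound on log marginal likelihood: {elbo:.2f}")
```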
10

Suivi multi-locuteurs avec information audio-visuel pour la perception du robot / audio-visual multiple-speaker tracking for robot perception

Ban, Yutong 10 May 2019 (has links)
Robot perception plays a crucial role in human-robot interaction (HRI). The perception system provides the robot with information about its surroundings and enables it to give feedback. In a conversational scenario, a group of people may chat in front of the robot and move freely. In such situations, robots are expected to understand where the people are, who is speaking, and what they are talking about. This thesis concentrates on answering the first two questions, namely speaker tracking and diarization. We use different modalities of the robot's perception system to achieve this goal. Like seeing and hearing for a human being, audio and visual information are the critical cues for a robot in a conversational scenario. The advances in computer vision and audio processing of the last decade have revolutionized robot perception abilities. This thesis makes the following contributions. We first develop a variational Bayesian framework for tracking multiple objects. The variational Bayesian framework gives closed-form, tractable solutions, which makes the tracking process efficient. The framework is first applied to visual multiple-person tracking, with birth and death processes built jointly with the framework to deal with the varying number of people in the scene. Furthermore, we exploit the complementarity of vision and robot motor information: on the one hand, the robot's active motion can be integrated into the visual tracking system to stabilize the tracking; on the other hand, visual information can be used to perform motor servoing. Audio and visual information are then combined in the variational framework to estimate the smooth trajectories of speaking people and to infer the acoustic status of a person, speaking or silent. In addition, we apply the model to acoustic-only speaker localization and tracking, where online dereverberation techniques are applied first and followed by the tracking system. Finally, a variant of the acoustic speaker-tracking model based on the von Mises distribution is proposed, which is specifically adapted to directional data. All the proposed methods are validated on datasets appropriate to each application.
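A small sketch of why the von Mises distribution suits the directional data mentioned above: direction-of-arrival observations are soft-assigned to two speakers with von Mises likelihoods, and the speaker directions are re-estimated via circular means, in the spirit of a single variational E/M-style update. The angles, concentrations, and two-speaker setup are illustrative assumptions, not the thesis's model.

```python
import numpy as np
from scipy.stats import vonmises

rng = np.random.default_rng(6)

# Stand-in direction-of-arrival observations (radians) from two speakers.
doa = np.concatenate([
    vonmises.rvs(8.0, loc=0.5, size=60),
    vonmises.rvs(8.0, loc=2.0, size=40),
])

# Current estimates of each speaker's direction, concentration, and weight.
locs = np.array([0.3, 2.2])
kappa = 5.0
weights = np.array([0.5, 0.5])

# Responsibilities of each speaker for each observation (E-step-like update),
# using von Mises likelihoods instead of Gaussians for circular data.
lik = np.stack([weights[j] * vonmises.pdf(doa, kappa, loc=locs[j])
                for j in range(2)])
resp = lik / lik.sum(axis=0)

# Re-estimate speaker directions via the circular (resultant-vector) mean.
for j in range(2):
    locs[j] = np.arctan2((resp[j] * np.sin(doa)).sum(),
                         (resp[j] * np.cos(doa)).sum())
print("estimated speaker directions (rad):", locs.round(2))
```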
