  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Combined decision making with multiple agents

Simpson, Edwin Daniel January 2014 (has links)
In a wide range of applications, decisions must be made by combining information from multiple agents with varying levels of trust and expertise. For example, citizen science involves large numbers of human volunteers with differing skills, while disaster management requires aggregating information from multiple people and devices to make timely decisions. This thesis introduces efficient and scalable Bayesian inference for decision combination, allowing us to fuse the responses of multiple agents in large, real-world problems and account for the agents’ unreliability in a principled manner. As the behaviour of individual agents can change significantly, for example if agents move in a physical space or learn to perform an analysis task, this work proposes a novel combination method that accounts for these time variations in a fully Bayesian manner using a dynamic generalised linear model. This approach can also be used to augment agents’ responses with continuous feature data, thus permitting decision-making when agents’ responses are in limited supply. Working with information inferred using the proposed Bayesian techniques, an information-theoretic approach is developed for choosing optimal pairs of tasks and agents. This approach is demonstrated by an algorithm that maintains a trustworthy pool of workers and enables efficient learning by selecting informative tasks. The novel methods developed here are compared theoretically and empirically to a range of existing decision combination methods, using both simulated and real data. The results show that the methodology proposed in this thesis improves accuracy and computational efficiency over alternative approaches, and yields insights into the behavioural groupings of agents.
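The flavour of decision combination described above can be illustrated with a toy Bayesian fusion rule. This is a minimal sketch under illustrative assumptions (a single accuracy parameter per agent with a Beta prior), not the thesis's actual model, which uses richer confusion matrices and time-varying dynamics:

```python
import numpy as np

def fuse_labels(responses, alpha=2.0, beta=1.0, prior=0.5):
    """Fuse binary labels from unreliable agents via a naive Bayes rule.

    Each agent's accuracy gets a Beta(alpha, beta) prior, updated with the
    agent's past record; its posterior-mean accuracy then weights the label.
    `responses` is a list of (n_correct, n_total, label) tuples.
    """
    log_odds = np.log(prior / (1.0 - prior))
    for n_correct, n_total, label in responses:
        # Posterior-mean accuracy under the Beta prior (Laplace-style smoothing).
        acc = (alpha + n_correct) / (alpha + beta + n_total)
        # Log-likelihood ratio contributed by this agent's vote.
        llr = np.log(acc / (1.0 - acc))
        log_odds += llr if label == 1 else -llr
    return 1.0 / (1.0 + np.exp(-log_odds))  # P(true label = 1 | responses)
```

For example, two agents with strong track records voting 1 outweigh one mediocre agent voting 0, yielding a fused probability well above 0.5.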
22

Generative Modelling and Probabilistic Inference of Growth Patterns of Individual Microbes

Nagarajan, Shashi January 2022 (has links)
The fundamental question of how cells maintain their characteristic size remains open. Cell size measurements made through microscopic time-lapse imaging of microfluidic single-cell cultivations have posed serious challenges to classical cell growth models and are supporting the development of newer, nuanced models that explain empirical findings better. Yet current models are limited, either to specific types of cells or to cell growth under specific microenvironmental conditions. This limitation, together with the fact that tools for robust analysis of such time-lapse images are not yet widely available, presents an opportunity to advance the discourse on cell growth and size homeostasis through generative probabilistic modelling, and through analysis of how well different statistical estimation and inference techniques recover the parameters of such models. In this thesis, I present a novel model framework for simulating microfluidic single-cell cultivations with 36 different simulation modalities, each integrating dominant cell growth theories and generative modelling techniques. I also present a comparative analysis of how different frequentist and Bayesian inference techniques, such as nuisance variable elimination and variational inference, perform in a case study: estimating a single model describing a microfluidic cell cultivation.
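One of the "dominant cell growth theories" such a framework can integrate is the adder rule, under which a cell divides after adding a roughly fixed size increment. The lineage simulation below is a hypothetical minimal sketch, not one of the thesis's 36 modalities; all parameter values are illustrative:

```python
import random

def simulate_adder(n_divisions, birth_size=1.0, added_mean=1.0, noise=0.1, seed=0):
    """Simulate birth sizes along one lineage under the 'adder' rule:
    grow until a (noisy) fixed increment has been added, then split in half.
    Birth size converges toward the mean added increment, giving size homeostasis."""
    rng = random.Random(seed)
    sizes = [birth_size]
    s = birth_size
    for _ in range(n_divisions):
        delta = max(1e-6, rng.gauss(added_mean, noise))  # noisy added size
        s = (s + delta) / 2.0                            # symmetric division
        sizes.append(s)
    return sizes
```

With zero noise the recursion contracts geometrically toward the mean increment, which is exactly the homeostatic behaviour the adder model was proposed to explain.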
23

Scalable Inference in Latent Gaussian Process Models

Wenzel, Florian 05 February 2020 (has links)
Latent Gaussian process (GP) models help scientists to uncover hidden structure in data, express domain knowledge and form predictions about the future. These models have been successfully applied in many domains including robotics, geology, genetics and medicine. A GP defines a distribution over functions and can be used as a flexible building block to develop expressive probabilistic models. The main computational challenge of these models is to make inference about the unobserved latent random variables, that is, to compute the posterior distribution given the data. Currently, most interesting Gaussian process models have limited applicability to big data. This thesis develops a new efficient inference approach for latent GP models. Our new inference framework, which we call augmented variational inference, is based on the idea of considering an augmented version of the intractable GP model that renders the model conditionally conjugate. We show that inference in the augmented model is more efficient and, unlike in previous approaches, all updates can be computed in closed form. The ideas around our inference framework facilitate novel latent GP models that lead to new results in language modeling, genetic association studies and uncertainty quantification in classification tasks.
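For context, the conjugate Gaussian-likelihood case already admits a closed-form posterior; the augmentation strategy above aims to recover similarly closed-form updates for non-conjugate likelihoods. The following is a generic GP regression sketch (standard textbook equations, not the author's code; kernel and noise values are illustrative):

```python
import numpy as np

def rbf(a, b, ell=1.0, var=1.0):
    """Squared-exponential (RBF) kernel matrix between 1-D inputs."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Closed-form GP regression posterior mean and pointwise variance."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)          # K^{-1} y
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)    # posterior covariance
    return mean, np.diag(cov)
```

With near-zero observation noise, the posterior mean interpolates the training targets and the predictive variance collapses at the observed inputs, which is the conjugate behaviour that augmented variational inference seeks to emulate for harder likelihoods.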
24

Modeling, Evaluation and Analysis of Dynamic Networks for Social Network Analysis

Junuthula, Ruthwik Reddy January 2018 (has links)
No description available.
25

Bayesian Identification of Nonlinear Structural Systems: Innovations to Address Practical Uncertainty

Alana K Lund (10702392) 26 April 2021 (has links)
The ability to rapidly assess the condition of a structure in a manner which enables the accurate prediction of its remaining capacity has long been viewed as a crucial step in allowing communities to make safe and efficient use of their public infrastructure. This objective has become even more relevant in recent years as both the interdependency and state of deterioration in infrastructure systems throughout the world have increased. Current practice for structural condition assessment emphasizes visual inspection, in which trained professionals routinely survey a structure to estimate its remaining capacity. Though these methods can monitor gross structural changes, their ability to rapidly and cost-effectively assess the detailed condition of the structure with respect to its future behavior is limited.

Vibration-based monitoring techniques offer a promising alternative to this approach. As opposed to visually observing the surface of the structure, these methods judge its condition and infer its future performance by generating and updating models calibrated to its dynamic behavior. Bayesian inference approaches are particularly well suited to this model updating problem as they are able to identify the structure using sparse observations while simultaneously assessing the uncertainty in the identified parameters. However, a lack of consensus on efficient methods for their implementation on full-scale structural systems has led to a diverse set of Bayesian approaches, from which no clear method can be selected for full-scale implementation. The objective of this work is therefore to assess and enhance those techniques currently used for structural identification and make strides toward developing unified strategies for robustly implementing them on full-scale structures. This is accomplished by addressing several key research questions regarding the ability of these methods to overcome issues in identifiability, sensitivity to uncertain experimental conditions, and scalability. These questions are investigated by applying novel adaptations of several prominent Bayesian identification strategies to small-scale experimental systems equipped with nonlinear devices. Through these illustrative examples I explore the robustness and practicality of these algorithms, while also considering their extensibility to higher-dimensional systems. Addressing these core concerns underlying full-scale structural identification will enable the practical application of Bayesian inference techniques and thereby enhance the ability of communities to detect and respond to the condition of their infrastructure.
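The workhorse behind many Bayesian identification strategies of this kind is an MCMC sampler over model parameters. The sketch below runs random-walk Metropolis on a made-up linear "stiffness" problem; the data, prior, and noise level are hypothetical, and the thesis's systems are nonlinear and higher-dimensional:

```python
import math, random

def metropolis(logpost, x0, n_samples, step=0.1, seed=0):
    """Random-walk Metropolis sampler: propose a Gaussian step, accept with
    probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = logpost(prop)
        if math.log(rng.random() + 1e-300) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Hypothetical example: infer a stiffness-like parameter k from noisy
# observations y_i = k * x_i + noise, with a flat prior on k.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

def logpost(k):
    return -sum((y - k * x) ** 2 for x, y in data) / (2 * 0.1 ** 2)

samples = metropolis(logpost, x0=0.0, n_samples=5000, step=0.05)
k_hat = sum(samples[1000:]) / len(samples[1000:])  # posterior mean after burn-in
```

The posterior mean lands near the least-squares value (about 2.04 for these numbers), and the spread of the retained samples quantifies parameter uncertainty, which is the key payoff of the Bayesian formulation.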
26

PROBABLY APPROXIMATELY CORRECT BOUNDS FOR ESTIMATING MARKOV TRANSITION KERNELS

Imon Banerjee (17555685) 06 December 2023 (has links)
This thesis presents probably approximately correct (PAC) bounds on estimates of the transition kernels of controlled Markov chains (CMCs). CMCs are a natural choice for modelling various industrial and medical processes, and are also relevant to reinforcement learning (RL). Learning the transition dynamics of CMCs in a sample-efficient manner is an important open question; this thesis aims to close that gap in the literature.

In Chapter 2, we lay the groundwork for later chapters by introducing the relevant concepts and defining the requisite terms. The two subsequent chapters focus on non-parametric estimation.

In Chapter 3, we restrict ourselves to a finitely supported CMC with d states and k controls and produce a general theory for the minimax sample complexity of estimating the transition matrices.

In Chapter 4, we demonstrate the applicability of this theory by using it to recover the sample complexities of various controlled Markov chains, as well as RL problems.

In Chapter 5, we move to continuous state and action spaces with compact supports. We produce a robust, non-parametric test to find the best histogram-based estimator of the transition density, effectively reducing the problem to one of model selection based on empirical processes.

Finally, in Chapter 6, we move to a parametric and Bayesian regime and restrict ourselves to Markov chains. In this setting we provide a PAC-Bayes bound for estimating model parameters under tempered posteriors.
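The estimand in the finitely supported setting, a d-by-d transition matrix per control, has a simple count-based estimator whose error such PAC bounds control. This generic sketch (the smoothing option and the example chain are illustrative, not from the thesis) shows the estimator for a plain Markov chain:

```python
import numpy as np

def estimate_transition_matrix(trajectory, d, smoothing=0.0):
    """Empirical (count-based) estimator of a d-state Markov transition
    matrix from one observed trajectory of states 0..d-1."""
    counts = np.full((d, d), smoothing, dtype=float)
    for s, s_next in zip(trajectory[:-1], trajectory[1:]):
        counts[s, s_next] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0  # guard rows for states never visited
    return counts / rows
```

The estimator converges to the true kernel as the chain visits each state often enough, and sample-complexity results of the kind the thesis proves quantify exactly how fast.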
27

Branching Out with Mixtures: Phylogenetic Inference That’s Not Afraid of a Little Uncertainty / Förgreningar med mixturer: Fylogenetisk inferens som inte räds lite osäkerhet

Molén, Ricky January 2023 (has links)
Phylogeny, the study of evolutionary relationships among species and other taxa, plays a crucial role in understanding the history of life. Bayesian analysis using Markov chain Monte Carlo (MCMC) is a widely used approach for inferring phylogenetic trees, but it converges slowly in higher dimensions. This thesis focuses on exploring variational inference (VI), a methodology believed to improve the speed and accuracy of phylogenetic models. However, VI models are known to concentrate the density of the learned approximation in high-likelihood areas. This thesis evaluates the current state of variational Bayesian phylogenetic inference (VBPI) and proposes a solution using a mixture of components to improve the VBPI method's performance on complex datasets and multimodal latent spaces. Additionally, we cover the basics of phylogenetics to provide a comprehensive understanding of the field.
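Why a mixture of components helps on multimodal latent spaces can be seen in a toy one-dimensional calculation. This is purely illustrative (VBPI's latent space is over tree topologies and branch lengths, not a line): a moment-matched single Gaussian cannot cover a bimodal target, while a two-component mixture can drive the KL divergence to zero.

```python
import numpy as np

x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# A bimodal stand-in "posterior" over a 1-D latent quantity.
p = 0.5 * gauss(x, -2.0, 0.5) + 0.5 * gauss(x, 2.0, 0.5)

def kl(p, q):
    """KL(p || q) on the grid; small values mean q covers p well."""
    mask = p > 1e-12
    return np.sum(p[mask] * np.log(p[mask] / q[mask]) * dx)

# Moment-matched single Gaussian vs a two-component mixture family.
q_single = gauss(x, 0.0, np.sqrt(np.sum(x ** 2 * p) * dx))
q_mix = 0.5 * gauss(x, -2.0, 0.5) + 0.5 * gauss(x, 2.0, 0.5)
```

The single Gaussian is forced to spread mass over the low-density valley between the modes, so its KL to the target stays bounded away from zero no matter how it is fit.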
28

OLLDA: Dynamic and Scalable Topic Modelling for Twitter : AN ONLINE SUPERVISED LATENT DIRICHLET ALLOCATION ALGORITHM

Jaradat, Shatha January 2015 (has links)
Providing high-quality topic inference in today's large and dynamic corpora, such as Twitter, is a challenging task. It is especially challenging given that content in this environment consists of short texts and many abbreviations. This project proposes an improvement of a popular online topic modelling algorithm for Latent Dirichlet Allocation (LDA), incorporating supervision to make it suitable for the Twitter context. The improvement is motivated by the need for a single algorithm that achieves both objectives: analyzing huge amounts of documents, including new documents arriving in a stream, while at the same time achieving high-quality topic detection in special environments such as Twitter. The proposed algorithm combines an online algorithm for LDA with a supervised variant of LDA, Labeled LDA. The performance and quality of the proposed algorithm are compared with those of the two base algorithms. The results demonstrate that the proposed algorithm outperforms the supervised variant of LDA in both performance and quality, and achieves better quality than the online algorithm. These improvements make our algorithm an attractive option for dynamic environments like Twitter. An environment for analyzing and labelling data was designed to prepare the dataset before executing the experiments. Possible application areas for the proposed algorithm are tweet recommendation and trend detection.
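For orientation, the common batch baseline that both online LDA and Labeled LDA build on can be sketched with a tiny collapsed Gibbs sampler. This is vanilla, unsupervised, batch LDA, not the thesis's online supervised algorithm; corpus and hyperparameters below are toy assumptions:

```python
import random

def lda_gibbs(docs, V, K, iters=300, alpha=0.1, beta=0.01, seed=0):
    """Tiny collapsed Gibbs sampler for vanilla LDA. `docs` is a list of
    token-id lists over vocabulary 0..V-1; returns topic-word counts."""
    rng = random.Random(seed)
    z = [[rng.randrange(K) for _ in doc] for doc in docs]
    ndk = [[0] * K for _ in docs]       # doc-topic counts
    nkw = [[0] * V for _ in range(K)]   # topic-word counts
    nk = [0] * K                        # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]; ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]  # remove token, resample its topic, re-add
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[d][j] + alpha) * (nkw[j][w] + beta) / (nk[j] + V * beta)
                           for j in range(K)]
                k = rng.choices(range(K), weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return nkw
```

On a corpus with two disjoint vocabularies, the two inferred topics usually specialize cleanly, one per vocabulary, which is the behaviour the online and supervised variants preserve at scale.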
29

The applicability and scalability of probabilistic inference in deep-learning-assisted geophysical inversion applications

Izzatullah, Muhammad 04 1900 (has links)
Probabilistic inference, especially in the Bayesian framework, is a foundation for quantifying uncertainties in geophysical inversion applications. However, due to the presence of high-dimensional datasets and the large-scale nature of geophysical inverse problems, the applicability and scalability of probabilistic inference face significant challenges for such applications. This thesis is dedicated to improving the scalability of probabilistic inference algorithms and demonstrating their applicability for large-scale geophysical inversion applications. In this thesis, I delve into three leading applied approaches to computing the Bayesian posterior distribution in geophysical inversion applications: Laplace's approximation, Markov chain Monte Carlo (MCMC), and variational Bayesian inference. The first approach, Laplace's approximation, is the simplest form of approximation for intractable Bayesian posteriors. However, its accuracy relies on the estimation of the posterior covariance matrix. I study the visualization of the misfit landscape in low-dimensional subspace and the low-rank approximations of the covariance for full waveform inversion (FWI). I demonstrate that a non-optimal truncation of the Hessian's eigenvalues in the low-rank approximation affects the accuracy of the estimated standard deviation, leading to a biased statistical conclusion. Furthermore, I also demonstrate the propagation of uncertainties within Bayesian physics-informed neural networks for hypocenter localization applications through this approach. For the MCMC approach, I develop approximate Langevin MCMC algorithms that provide fast sampling at efficient computational costs for large-scale Bayesian FWI; however, this inflates the variance due to asymptotic bias. To account for this asymptotic bias and assess sample quality, I introduce the kernelized Stein discrepancy (KSD) as a diagnostic tool. When larger computational resources are available, exact MCMC algorithms (i.e., with a Metropolis-Hastings criterion) should be favored for accurate statistical analysis of the posterior distribution. For variational Bayesian inference, I propose a regularized variational inference framework that performs posterior inference by implicitly regularizing the Kullback-Leibler divergence loss with a deep denoiser through a Plug-and-Play method. I also develop Plug-and-Play Stein Variational Gradient Descent (PnP-SVGD), a novel algorithm to sample the regularized posterior distribution. PnP-SVGD demonstrates its ability to produce high-resolution, trustworthy samples representative of subsurface structures for a post-stack seismic inversion application.
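The vanilla SVGD update that PnP-SVGD builds on transports a set of particles toward the posterior using a kernelized gradient. The sketch below targets a standard normal in 1-D purely for illustration; the thesis applies the idea to seismic posteriors with a learned denoiser in the loop, which is not reproduced here:

```python
import numpy as np

def svgd(grad_logp, x0, steps=800, eps=0.05, h=0.5):
    """Vanilla Stein variational gradient descent in 1-D.

    Each particle moves along a kernel-weighted average of the other
    particles' score (attraction to high density) plus a kernel-gradient
    term (repulsion that keeps particles spread out)."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        diff = x[:, None] - x[None, :]            # diff[j, i] = x_j - x_i
        k = np.exp(-diff ** 2 / (2 * h ** 2))     # RBF kernel matrix
        grad_k = -diff / h ** 2 * k               # d k(x_j, x_i) / d x_j
        phi = (k * grad_logp(x)[:, None] + grad_k).mean(axis=0)
        x = x + eps * phi
    return x
```

Starting from particles clustered far from the mode, the ensemble drifts to the target and spreads to match its width, a deterministic alternative to drawing MCMC samples.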
30

Bayesian Structural Time Series in Marketing Mix Modelling / Bayesianska Strukturella Tidsseriemodeller inom Marketing Mix Modellering

Karlsson, Jessika January 2022 (has links)
Marketing Mix Modelling has been used since the 1950s, leveraging statistical inference to attribute media investments to sales. Typically, regression models have been used to model the relationship between the two. However, the media landscape evolves at an increasingly rapid pace, driving the need for more refined models that can accurately capture these changes. One class of such models is Bayesian structural time series, the focal point of this thesis. This class of models retains the relationship between media investments and sales while also allowing model parameters to vary over time. The effectiveness of these models is evaluated with respect to prediction accuracy and certainty, both in-sample and out-of-sample. A total of four models of varying degrees of complexity were investigated. It was concluded that in-sample performance was similar across models, yet out-of-sample the models with time-varying parameters outperformed their static counterparts with respect to uncertainty. Furthermore, the functional form of the intercept influenced the uncertainty of the forecasts on extended time horizons.
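The simplest structural time-series building block, the local level model, replaces a static intercept with a random walk, and its exact posterior filter is the Kalman recursion. This is a generic sketch of that component (variance values are illustrative; the thesis's models add media-investment regressors on top):

```python
import numpy as np

def local_level_filter(y, q=0.1, r=1.0, m0=0.0, p0=10.0):
    """Kalman filter for the local-level model:
        level_t = level_{t-1} + w_t,   w_t ~ N(0, q)
        y_t     = level_t + v_t,       v_t ~ N(0, r)
    Returns the filtered mean of the level at each step."""
    m, p = m0, p0
    means = []
    for obs in y:
        p = p + q                  # predict: the level random-walks
        k = p / (p + r)            # Kalman gain
        m = m + k * (obs - m)      # update with the new observation
        p = (1.0 - k) * p
        means.append(m)
    return np.array(means)
```

Because the level is allowed to drift, the filtered estimate tracks gradual changes in the baseline, which is precisely what lets time-varying parameters adapt where a static regression intercept cannot.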
