21

Individual differences in structure learning

Newlin, Philip 13 May 2022
Humans have a tendency to impute structure spontaneously, even in simple learning tasks; however, the way they approach structure learning can vary drastically. The present study sought to determine why individuals learn structure differently. One hypothesized explanation for differences in structure learning is individual differences in cognitive control. Cognitive control allows individuals to maintain representations of a task and may interact with reinforcement learning systems. It was expected that individual differences in the propensity to apply cognitive control, which shares component processes with hierarchical reinforcement learning, would explain how individuals learn structure differently in a simple structure learning task. Results showed that proactive control and model-based control explained differences in the rate at which individuals applied structure learning.
22

Causal Inference on Tactical Simulations using Bayesian Structure Learning

Lagerkvist Blomqvist, Karl January 2022
This thesis explores the possibility of using Bayesian structure learning and do-calculus to perform causal inference on data from tactical combat simulations provided by Saab. A four-step approach is considered. The first step is to find a Bayesian network from the data using Bayesian structure learning and probability distribution fitting. Such a Bayesian network describes a set of conditional independencies ambiguously, and this ambiguity gives rise to a set of feasible structural causal models describing feasible causal relationships in the data. In the second step, at least one of these structural causal models is selected; it is then used to perform causal inference with do-calculus and probabilistic inference in the third and fourth steps, respectively. The thesis concludes that several difficulties with the approach, together with the lack of a methodology for error estimation, reduce the method's reliability. The recommendation is thus to consider the possibility of performing randomized controlled experiments with the tactical simulator before continuing development of this approach.
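For context on the do-calculus step: once a selected structural causal model identifies an admissible back-door adjustment set Z for the effect of X on Y (standard causal-inference material, not a claim specific to this thesis), the interventional query reduces to purely observational quantities,

$$P(Y \mid \mathrm{do}(X = x)) = \sum_{z} P(Y \mid X = x, Z = z)\, P(Z = z),$$

which is exactly the kind of expression the fourth step can then evaluate by ordinary probabilistic inference over the fitted Bayesian network.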
23

Probabilistic Graphical Model Structure Learning: Application to Multi-Label Classification

Gasse, Maxime 13 January 2017
In this thesis, we address the specific problem of probabilistic graphical model structure learning, that is, finding the most efficient structure to represent a probability distribution given only a sample set D ∼ p(v). In the first part, we review the main families of probabilistic graphical models from the literature, from the most common (directed, undirected) to the most advanced (chained, mixed, etc.). We then study in particular the problem of learning the structure of directed graphs (Bayesian networks), and we propose a new hybrid structure learning method, H2PC (Hybrid Hybrid Parents and Children), which combines a constraint-based approach (statistical independence tests) with a score-based approach (posterior probability of the structure).

In the second part, we address the multi-label classification problem, which aims at assigning a set of categories (a binary vector y ∈ {0, 1}^m) to a given object (a vector x ∈ R^d). In this context, probabilistic graphical models provide convenient means of encoding p(y|x), particularly for the purpose of minimizing general loss functions. We review the main approaches based on PGMs for multi-label classification (Probabilistic Classifier Chain, Conditional Dependency Network, Bayesian Network Classifier, Conditional Random Field, Sum-Product Network), and propose a generic approach, inspired by constraint-based structure learning methods, to identify the unique partition of the label set into irreducible label factors (ILFs), that is, the irreducible factorization of p(y|x) into disjoint marginal distributions. We establish several theoretical results characterizing the ILFs based on the compositional graphoid axioms, and obtain three generic procedures under various assumptions about the conditional-independence properties of the joint distribution p(x, y). Our conclusions are supported by carefully designed multi-label classification experiments under the F-loss and the zero-one loss functions.
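As a rough illustration of the factorization targeted here (notation assumed for this note, not taken from the thesis): if the label set partitions into irreducible label factors L_1, …, L_K that are conditionally independent given the features, then

$$p(y \mid x) = \prod_{k=1}^{K} p\big(y_{L_k} \mid x\big), \qquad y_{L_i} \perp\!\!\!\perp y_{L_j} \mid x \quad (i \neq j),$$

so the joint mode of p(y|x) is the product of the per-factor modes, and a zero-one-loss-optimal prediction can be computed factor by factor instead of over all 2^m label combinations.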
24

Identification of causality in genetics and neuroscience

Ribeiro, Adèle Helena 28 November 2018
Causal inference may help us to understand the underlying mechanisms and the risk factors of diseases. In genetics, it is crucial to understand how the connectivity among variables is influenced by genetic and environmental factors. Family data have proven useful in elucidating genetic and environmental influences; however, few existing approaches are able to address structure learning of probabilistic graphical models (PGMs) and family data analysis jointly. We propose methodologies for learning, from observational Gaussian family data, the most likely PGM and its decomposition into genetic and environmental components. They were evaluated in a simulation study and applied to the Genetic Analysis Workshop 13 simulated data, which mimic the real Framingham Heart Study data, and to the metabolic syndrome phenotypes from the Baependi Heart Study. In neuroscience, one challenge consists in identifying interactions between functional brain networks (FBNs), i.e., graphs. We propose a method to identify Granger causality among FBNs. We show the statistical power of the proposed method by simulations and its usefulness through two applications: the identification of Granger causality between the FBNs of two musicians playing a violin duo, and the identification of differential connectivity from the right to the left brain hemisphere in autistic subjects.
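One common way of writing a genetic/environmental decomposition for Gaussian family data (given here only as background; the exact model used in the thesis may differ) relies on the kinship matrix Φ of the pedigree: stacking the phenotype vectors of the family members, the covariance is modeled as

$$\Sigma = \Sigma_g \otimes 2\Phi + \Sigma_e \otimes I,$$

where Σ_g and Σ_e are the between-trait polygenic and environmental covariance components; a PGM structure can then be learned from each component separately.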
25

Bayesian networks for static and temporal data fusion

Rahier, Thibaud 11 December 2018
Prediction and inference on temporal data are very frequently performed using time-series data alone. We believe that these tasks could benefit from leveraging the contextual metadata associated with time series, such as location, type, etc. Conversely, tasks involving prediction and inference on metadata could benefit from information held within time series. However, there exists no standard way of jointly modeling both time-series data and descriptive metadata. Moreover, metadata frequently contain highly correlated or redundant information, and may contain errors and missing values.

We first consider the problem of learning the inherent probabilistic graphical structure of metadata as a Bayesian network. This has two main benefits: (i) once structured as a graphical model, metadata are easier to use for improving tasks on temporal data, and (ii) the learned model enables inference tasks on metadata alone, such as missing-data imputation. However, Bayesian network structure learning is a tremendous mathematical challenge that involves an NP-hard optimization problem. We present a tailor-made structure learning algorithm, inspired by novel theoretical results, that exploits the (quasi-)deterministic dependencies typically present in descriptive metadata. This algorithm is tested on numerous benchmark datasets and on several industrial metadata sets containing deterministic relationships. In both cases it proved to be significantly faster than the state of the art, and it even found better-performing structures on industrial data. Moreover, the learned Bayesian networks are consistently sparser and therefore more readable.

We then focus on designing a model that includes both static (meta)data and dynamic data. Taking inspiration from state-of-the-art probabilistic graphical models for temporal data (dynamic Bayesian networks) and from our previously described approach for metadata modeling, we present a general methodology to jointly model metadata and temporal data as a hybrid static-dynamic Bayesian network. We propose two main algorithms associated with this representation: (i) a learning algorithm which, while being optimized for industrial data, remains generalizable to any task of static and dynamic data fusion, and (ii) an inference algorithm enabling both the usual queries on temporal or static data alone and queries that use both types of data. We then provide results on diverse cross-field applications such as forecasting, metadata replenishment from time series, and alarm dependency analysis, using data from some of Schneider Electric's challenging use cases. Finally, we discuss some of the notions introduced during the thesis, including ways to measure the generalization performance of a Bayesian network through a score inspired by the cross-validation procedure from supervised machine learning. We also propose various extensions to the algorithms and theoretical results presented in the previous chapters, and formulate some research perspectives.
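The (quasi-)determinism exploited by the tailored structure learning algorithm can be screened for quite cheaply. The sketch below is illustrative only, with hypothetical column names, and is not the algorithm from the thesis: it simply flags ordered pairs of metadata columns whose empirical conditional entropy is close to zero.

```python
# Illustrative sketch (hypothetical column names; not the thesis algorithm):
# flag ordered pairs of metadata columns whose empirical conditional entropy
# H(Y | X) is close to zero, i.e. (quasi-)deterministic dependencies X -> Y.
import numpy as np
import pandas as pd

def conditional_entropy(df, x, y):
    """Empirical H(Y | X) in nats, computed from the joint frequency table."""
    joint = df.groupby([x, y]).size() / len(df)     # estimates p(x, y)
    marg = df.groupby(x).size() / len(df)           # estimates p(x)
    return -sum(p_xy * np.log(p_xy / marg[xv])      # -sum p(x,y) log p(y|x)
                for (xv, yv), p_xy in joint.items())

def quasi_deterministic_pairs(df, eps=0.05):
    """Ordered column pairs (x, y) with H(Y | X) <= eps."""
    cols = list(df.columns)
    return [(x, y) for x in cols for y in cols
            if x != y and conditional_entropy(df, x, y) <= eps]

# Hypothetical metadata table: 'site' determines 'country' exactly
meta = pd.DataFrame({
    "device":  ["d1", "d2", "d3", "d4", "d5", "d6"],
    "site":    ["s1", "s1", "s2", "s2", "s3", "s3"],
    "country": ["FR", "FR", "FR", "FR", "DE", "DE"],
})
print(quasi_deterministic_pairs(meta))
```

In a typical metadata table a pair such as site → country comes out (quasi-)deterministic; these are precisely the dependencies that allow a structure search to prune candidate parent sets aggressively.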
26

Dirty statistical models

Jalali, Ali, 1982- 11 July 2012
In fields across science and engineering, we are increasingly faced with problems where the number of variables or features we need to estimate is much larger than the number of observations. Under such high-dimensional scaling, for any hope of statistically consistent estimation, it becomes vital to leverage any potential structure in the problem, such as sparsity, low-rank structure, or block sparsity. However, data may deviate significantly from any one such statistical model. The motivating question of this thesis is: can we simultaneously leverage more than one such structural model, to obtain consistency in a larger number of problems, and with fewer samples, than can be obtained with single models? Our approach combines structures via simple linear superposition, a technique we term dirty models. The idea is very simple: while any one structure might not capture the data, a superposition of structural classes might. Dirty models thus search for a parameter that can be decomposed into a number of simpler structures, such as (a) sparse plus block-sparse, (b) sparse plus low-rank, and (c) low-rank plus block-sparse. In this thesis, we propose dirty-model-based algorithms for different problems such as multi-task learning, graph clustering, and time-series analysis with latent factors. We analyze these algorithms in terms of the number of observations needed to estimate the variables. These algorithms are based on convex optimization and are sometimes relatively slow. We provide a class of low-complexity greedy algorithms that not only solve these optimization problems faster but also come with guarantees on the solution. Beyond these theoretical results, in each case we provide experimental results to illustrate the power of dirty models.
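As one concrete instance of the superposition idea (a generic formulation, not the exact estimators analyzed in the thesis), the sparse-plus-low-rank variant solves a convex program of the form

$$\min_{S,\,L}\; \tfrac{1}{2}\,\lVert Y - (S + L) \rVert_F^2 + \lambda_S \lVert S \rVert_1 + \lambda_L \lVert L \rVert_{*},$$

where the ℓ1 norm pushes S toward entrywise sparsity, the nuclear norm pushes L toward low rank, and the two regularization weights control how much of the signal each structure absorbs.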
27

Learning with Markov logic networks: transfer learning, structure learning, and an application to Web query disambiguation

Mihalkova, Lilyana Simeonova 18 March 2011
Traditionally, machine learning algorithms assume that training data is provided as a set of independent instances, each of which can be described as a feature vector. In contrast, many domains of interest are inherently multi-relational, consisting of entities connected by a rich set of relations. For example, the participants in a social network are linked by friendships, collaborations, and shared interests. Likewise, the users of a search engine are related by searches for similar items and clicks to shared sites. The ability to model and reason about such relations is essential not only because better predictive accuracy is achieved by exploiting this additional information, but also because frequently the goal is to predict whether a set of entities are related in a particular way. This thesis falls within the area of Statistical Relational Learning (SRL), which combines ideas from two traditions within artificial intelligence, first-order logic and probabilistic graphical models, to address the challenge of learning from multi-relational data. We build on one particular SRL model, Markov logic networks (MLNs), which consist of a set of weighted first-order-logic formulae and provide a principled way of defining a probability distribution over possible worlds. We develop algorithms for learning MLN structure both from scratch and by transferring a previously learned model, as well as an application of MLNs to the problem of Web query disambiguation. The ideas we present are unified by two main themes: the need to deal with limited training data and the use of bottom-up learning techniques. Structure learning, the task of automatically acquiring a set of dependencies among the relations in the domain, is a central problem in SRL. We introduce BUSL, an algorithm for learning MLN structure from scratch that proceeds in a more bottom-up fashion, breaking away from the tradition of top-down learning typical in SRL. Our approach first constructs a novel data structure called a Markov network template that is used to restrict the search space for clauses. Our experiments in three relational domains demonstrate that BUSL dramatically reduces the search space for clauses and attains significantly higher accuracy than a structure learner that follows a top-down approach. Accurate and efficient structure learning can also be achieved by transferring a model obtained in a source domain related to the current target domain of interest. We view transfer as a revision task and present an algorithm that diagnoses a source MLN to determine which of its parts transfer directly to the target domain and which need to be updated. This analysis focuses the search for revisions on the incorrect portions of the source structure, thus speeding up learning. Transfer learning is particularly important when target-domain data is limited, such as when data on only a few individuals is available from domains with hundreds of entities connected by a variety of relations. We also address this challenging case and develop a general transfer learning approach that makes effective use of such limited target data in several social network domains. Finally, we develop an application of MLNs to the problem of Web query disambiguation in a more privacy-aware setting where the only information available about a user is that captured in a short search session of 5-6 previous queries on average. This setting contrasts with previous work that typically assumes the availability of long user-specific search histories.
To compensate for the scarcity of user-specific information, our approach exploits the relations between users, search terms, and URLs. We demonstrate the effectiveness of our approach in the presence of noise and show that it outperforms several natural baselines on a large data set collected from the MSN search engine.
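For context, a Markov logic network with formulae F_i and weights w_i defines the standard log-linear distribution over possible worlds x,

$$P(X = x) = \frac{1}{Z} \exp\Big(\sum_i w_i\, n_i(x)\Big),$$

where n_i(x) is the number of true groundings of F_i in x and Z normalizes over all worlds. Structure learning, as discussed above, chooses the set of formulae; weight learning then fits the w_i.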
28

Nonparametric Learning in High Dimensions

Liu, Han 01 December 2010
This thesis develops flexible and principled nonparametric learning algorithms to explore, understand, and predict high-dimensional and complex datasets. Such data appear frequently in modern scientific domains and lead to numerous important applications. For example, exploring high-dimensional functional magnetic resonance imaging data helps us to better understand brain functionality; inferring large-scale gene regulatory networks is crucial for new drug design and development; and detecting anomalies in high-dimensional transaction databases is vital for corporate and government security. Our main results include a rigorous theoretical framework and efficient nonparametric learning algorithms that exploit hidden structures to overcome the curse of dimensionality when analyzing massive high-dimensional datasets. These algorithms have strong theoretical guarantees and provide high-dimensional nonparametric recipes for many important learning tasks, ranging from unsupervised exploratory data analysis to supervised predictive modeling. In this thesis, we address three aspects: (1) understanding the statistical theory of high-dimensional nonparametric inference, including risk, estimation, and model-selection consistency; (2) designing new methods for different data-analysis tasks, including regression, classification, density estimation, graphical model learning, multi-task learning, and spatial-temporal adaptive learning; and (3) demonstrating the usefulness of these methods in scientific applications, including functional genomics, cognitive neuroscience, and meteorology. In the last part of this thesis, we also present a vision for the future of high-dimensional and large-scale nonparametric inference.
29

Multidimensional Probability Distributions: Structure and Learning

Bína, Vladislav January 2010
The thesis considers the representation of a discrete multidimensional probability distribution using the apparatus of compositional models. It focuses on the theoretical background and on the structure of the search space for structure learning algorithms in the framework of such models, with particular attention to the subclass of decomposable models. Based on the theoretical results, basic learning techniques are proposed and compared.
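For orientation, compositional models assemble a multidimensional distribution from low-dimensional ones with the composition operator (written here in a common notation that may differ from the thesis'): for a distribution π over variables K and κ over variables L,

$$(\pi \triangleright \kappa)(x_{K \cup L}) = \frac{\pi(x_K)\, \kappa(x_L)}{\kappa(x_{K \cap L})}$$

wherever the marginal κ(x_{K∩L}) is positive. Iterating this operator over a sequence of low-dimensional distributions yields the multidimensional model, and structure learning amounts to searching over the choice and ordering of that sequence.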
30

Fully Bayesian structure learning of Bayesian networks and their hypergraph extensions

Datta, Sagnik 07 July 2016
In this thesis, I address the important problem of determining the structure of complex networks, with the widely used class of Bayesian network models as a concrete vehicle for my ideas. The structure of a Bayesian network represents a set of conditional independence relations that hold in the domain. Learning the structure of the Bayesian network model that represents a domain can reveal insights into its underlying causal structure. Moreover, it can also be used for prediction of quantities that are difficult, expensive, or unethical to measure, such as the probability of cancer based on other quantities that are easier to obtain. The contributions of this thesis include (A) software developed in the C language for structure learning of Bayesian networks; (B) the introduction of a new jumping kernel in the Metropolis-Hastings algorithm for faster sampling of networks; (C) an extension of the notion of Bayesian networks to structures involving loops; and (D) software developed specifically to learn cyclic structures. Our primary objective is structure learning, and thus the graph structure is our parameter of interest; all other parameters appearing in the mathematical models are treated as nuisance parameters and are not estimated.
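To make the sampling step concrete, the sketch below is a generic illustration under simplifying assumptions (binary data, a BIC-style score, and a naive single-edge proposal treated as symmetric); it is not the C software or the new jumping kernel described above.

```python
# Generic sketch (assumptions: binary data, BIC-style score, naive proposal);
# not the thesis software. Metropolis-Hastings over DAG adjacency matrices
# with a single-edge add / delete / reverse "jumping kernel".
import numpy as np

def is_dag(adj):
    """True if the graph with adjacency matrix adj (adj[i, j] = 1 for an
    edge i -> j) has no directed cycle (repeatedly peel off root nodes)."""
    remaining = list(range(adj.shape[0]))
    while remaining:
        roots = [j for j in remaining
                 if adj[np.ix_(remaining, [j])].sum() == 0]
        if not roots:
            return False
        remaining.remove(roots[0])
    return True

def node_log_score(data, j, parents):
    """BIC-penalized multinomial log-likelihood of binary column j given its
    parent columns, using maximum-likelihood counts."""
    n = data.shape[0]
    keys = [tuple(r) for r in data[:, parents]] if parents else [()] * n
    counts = {}
    for k, v in zip(keys, data[:, j]):
        counts.setdefault(k, [0, 0])[int(v)] += 1
    ll = sum(c * np.log(c / (c0 + c1))
             for c0, c1 in counts.values() for c in (c0, c1) if c)
    return ll - 0.5 * np.log(n) * (2 ** len(parents))

def log_score(data, adj):
    """Decomposable score of a DAG: sum of per-node family scores."""
    return sum(node_log_score(data, j, list(np.where(adj[:, j])[0]))
               for j in range(adj.shape[1]))

def mh_structure_sampler(data, n_iter=2000, seed=0):
    """Sample DAG structures; the proposal is treated as symmetric here,
    so the acceptance ratio omits the Hastings correction."""
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    adj = np.zeros((d, d), dtype=int)            # start from the empty graph
    score = log_score(data, adj)
    samples = []
    for _ in range(n_iter):
        i, j = rng.choice(d, size=2, replace=False)
        prop = adj.copy()
        if prop[i, j]:                           # existing edge: delete or reverse
            prop[i, j] = 0
            if rng.random() < 0.5:
                prop[j, i] = 1
        else:                                    # no edge: add i -> j
            prop[i, j] = 1
        if not is_dag(prop):                     # stay inside DAG space
            continue
        new_score = log_score(data, prop)
        if np.log(rng.random()) < new_score - score:
            adj, score = prop, new_score         # accept the jump
        samples.append(adj.copy())
    return samples

# Tiny illustrative run on synthetic binary data with a 0 -> 1 dependency
rng = np.random.default_rng(1)
x0 = rng.integers(0, 2, size=500)
x1 = (x0 ^ (rng.random(500) < 0.1)).astype(int)
x2 = rng.integers(0, 2, size=500)
samples = mh_structure_sampler(np.column_stack([x0, x1, x2]))
print(sum(s[0, 1] or s[1, 0] for s in samples[-500:]))  # edge between 0 and 1
```

Contribution (B) targets exactly the proposal step of such a sampler: a better jumping kernel lets the chain move through network structures with fewer wasted or rejected proposals.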
