11. Learning with Markov logic networks: transfer learning, structure learning, and an application to Web query disambiguation

Mihalkova, Lilyana Simeonova, 18 March 2011
Traditionally, machine learning algorithms assume that training data is provided as a set of independent instances, each of which can be described as a feature vector. In contrast, many domains of interest are inherently multi-relational, consisting of entities connected by a rich set of relations. For example, the participants in a social network are linked by friendships, collaborations, and shared interests. Likewise, the users of a search engine are related by searches for similar items and clicks to shared sites. The ability to model and reason about such relations is essential, not only because better predictive accuracy is achieved by exploiting this additional information, but also because frequently the goal is to predict whether a set of entities is related in a particular way. This thesis falls within the area of Statistical Relational Learning (SRL), which combines ideas from two traditions within artificial intelligence, first-order logic and probabilistic graphical models, to address the challenge of learning from multi-relational data. We build on one particular SRL model, Markov logic networks (MLNs), which consist of a set of weighted first-order-logic formulae and provide a principled way of defining a probability distribution over possible worlds. We develop algorithms for learning MLN structure, both from scratch and by transferring a previously learned model, as well as an application of MLNs to the problem of Web query disambiguation. The ideas we present are unified by two main themes: the need to deal with limited training data and the use of bottom-up learning techniques.

Structure learning, the task of automatically acquiring a set of dependencies among the relations in the domain, is a central problem in SRL. We introduce BUSL, an algorithm for learning MLN structure from scratch that proceeds in a bottom-up fashion, breaking away from the top-down learning typical in SRL. Our approach first constructs a novel data structure called a Markov network template that is used to restrict the search space for clauses. Our experiments in three relational domains demonstrate that BUSL dramatically reduces the search space for clauses and attains significantly higher accuracy than a structure learner that follows a top-down approach.

Accurate and efficient structure learning can also be achieved by transferring a model obtained in a source domain related to the current target domain of interest. We view transfer as a revision task and present an algorithm that diagnoses a source MLN to determine which of its parts transfer directly to the target domain and which need to be updated. This analysis focuses the search for revisions on the incorrect portions of the source structure, thus speeding up learning. Transfer learning is particularly important when target-domain data is limited, such as when data on only a few individuals is available from domains with hundreds of entities connected by a variety of relations. We also address this challenging case and develop a general transfer learning approach that makes effective use of such limited target data in several social network domains.

Finally, we develop an application of MLNs to the problem of Web query disambiguation in a more privacy-aware setting, where the only information available about a user is that captured in a short search session of 5-6 previous queries on average. This setting contrasts with previous work, which typically assumes the availability of long user-specific search histories. To compensate for the scarcity of user-specific information, our approach exploits the relations between users, search terms, and URLs. We demonstrate the effectiveness of our approach in the presence of noise and show that it outperforms several natural baselines on a large data set collected from the MSN search engine.
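For concreteness, the distribution an MLN defines has the standard log-linear form P(x) = exp(Σᵢ wᵢ nᵢ(x)) / Z, where nᵢ(x) is the number of true groundings of formula i in world x. The sketch below computes this distribution by brute force for a hypothetical one-formula MLN over two constants; the domain, predicates, formula, and weight are all made up, and the exhaustive enumeration serves only to make the semantics concrete.

```python
import itertools
import math

# Brute-force illustration of the standard MLN log-linear semantics:
# P(world) = exp(sum_i w_i * n_i(world)) / Z, where n_i counts the true
# groundings of formula i. Domain, predicates, formula, and weight below
# are hypothetical; the exhaustive enumeration is purely illustrative.

PEOPLE = ["A", "B"]
GROUND_ATOMS = [("Smokes", p) for p in PEOPLE] + \
               [("Friends", p, q) for p in PEOPLE for q in PEOPLE]
WEIGHT = 1.5  # weight of: Friends(x, y) ^ Smokes(x) => Smokes(y)

def n_true_groundings(world):
    """Count true groundings of Friends(x,y) ^ Smokes(x) => Smokes(y)."""
    count = 0
    for x in PEOPLE:
        for y in PEOPLE:
            antecedent = world[("Friends", x, y)] and world[("Smokes", x)]
            if not antecedent or world[("Smokes", y)]:  # material implication
                count += 1
    return count

# A possible world is a truth assignment to all ground atoms (2^6 = 64 here).
worlds = [dict(zip(GROUND_ATOMS, values))
          for values in itertools.product([False, True], repeat=len(GROUND_ATOMS))]

scores = [math.exp(WEIGHT * n_true_groundings(w)) for w in worlds]
Z = sum(scores)  # the partition function
print("worlds:", len(worlds), "| most probable world mass:", max(scores) / Z)
```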
12. On Tractability and Consistency of Probabilistic Inference in Relational Domains

Malhotra, Sagar, 10 July 2023
Relational data is characterised by the rich structure it encodes in the dependencies between the individual entities of a given domain. Statistical Relational Learning (SRL) combines first-order logic and probability to learn and reason over relational domains by creating parametric probability distributions over relational structures. SRL models can succinctly represent the complex dependencies in relational data and admit learning and inference under uncertainty. However, these models are significantly limited when it comes to the tractability of learning and inference. This limitation emerges from the intractability of Weighted First-Order Model Counting (WFOMC), as both learning and inference in SRL models can be reduced to instances of WFOMC. Hence, fragments of first-order logic that admit tractable WFOMC, widely known as domain-liftable fragments, can significantly advance the practicality and efficiency of SRL models. Recent works have uncovered another limitation of SRL models: they can behave unintuitively when used across varying domain sizes, violating fundamental consistency conditions expected of sound probabilistic models. Such inconsistencies also mean that conventional machine learning techniques, like training with batched data, cannot be soundly applied to SRL models.

In this thesis, we contribute to both the tractability and the consistency of probabilistic inference in SRL models. We first expand the class of domain-liftable fragments with counting quantifiers and cardinality constraints. Unlike the algorithmic approaches proposed in the literature, we present a uniform combinatorial approach, admitting analytical combinatorial formulas for WFOMC. Our approach motivates a new family of weight functions, allowing us to express a larger class of probability distributions without losing domain-liftability. We further expand the class of domain-liftable fragments with constraints inexpressible in first-order logic, namely acyclicity and connectivity constraints. Finally, we present a complete characterization of statistically consistent (a.k.a. projective) models in the two-variable fragment of a widely used class of SRL models, namely Markov Logic Networks.
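For reference, a standard formulation of the counting problem these results concern is the following: given a sentence φ, a domain of size n, and weight functions w, w̄ assigning a weight to each predicate,

\[
\mathrm{WFOMC}(\phi, n, w, \bar{w}) \;=\; \sum_{\omega \,\models\, \phi} \;\prod_{\omega \,\models\, a} w\big(\mathrm{pred}(a)\big) \prod_{\omega \,\not\models\, a} \bar{w}\big(\mathrm{pred}(a)\big),
\]

where ω ranges over the structures on the n-element domain that satisfy φ, a ranges over ground atoms, and pred(a) is the predicate of a. A fragment is domain-liftable when this sum can be computed in time polynomial in n.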
13. Automated Acquisition of Ontology Concept Hierarchies Using Statistical Relational Learning

Drumond, Lucas Rêgo, 23 October 2009
Knowledge representation formalisms such as ontologies have proven to be a powerful tool for enhancing the effectiveness of natural language processing, information filtering and retrieval, and many other tasks. Ontologies are also crucial for the Semantic Web, a new generation of the Web that aims at structuring its content in such a way that it can be more effectively processed by machines. However, knowledge systems suffer from the so-called knowledge acquisition bottleneck, i.e., the difficulty of constructing knowledge bases. One approach to this problem is to provide automatic or semi-automatic support for ontology construction, a field of research known as ontology learning. This work discusses the state of the art of ontology learning techniques and proposes an approach for supporting the ontology construction process through the automation of concept hierarchy extraction from textual sources. The proposed process is composed of two techniques: PRECE (Probabilistic Relational Concept Extraction), which extracts ontology concepts from textual sources, and PREHE (Probabilistic Relational Hierarchy Extraction), which extracts taxonomic relationships between the concepts extracted by PRECE. Both techniques use Markov logic networks, an approach to statistical relational learning that combines first-order logic with Markov networks. The PRECE and PREHE techniques were evaluated in the tourism domain, and their results were compared with an ontology manually developed by a domain expert.
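As a rough illustration of the two-stage pipeline shape (concept extraction followed by hierarchy extraction), consider the sketch below. It substitutes a simple document-subsumption heuristic for the MLN-based scoring that PRECE and PREHE actually perform, and the toy corpus and thresholds are hypothetical; it shows only how the two stages compose.

```python
from collections import defaultdict

# A minimal sketch of a two-stage concept/hierarchy extraction pipeline in the
# spirit of PRECE/PREHE. NOTE: the thesis scores candidates with Markov logic
# networks; the co-occurrence subsumption heuristic below is a hypothetical
# stand-in used purely to illustrate the pipeline's shape.

docs = [  # hypothetical tourism-domain term sets, one per document
    {"accommodation", "hotel"},
    {"accommodation", "hostel"},
    {"accommodation", "hotel", "resort"},
    {"transport", "bus"},
]

# Stage 1 (concept extraction): keep terms above a document-frequency threshold.
df = defaultdict(int)
for d in docs:
    for term in d:
        df[term] += 1
concepts = {t for t, n in df.items() if n >= 2}

# Stage 2 (hierarchy extraction): propose c1 as a parent of c2 when c1 occurs
# in most documents containing c2 but not the other way around.
def subsumes(c1, c2, threshold=0.8):
    with_c2 = [d for d in docs if c2 in d]
    return sum(c1 in d for d in with_c2) / len(with_c2) >= threshold

edges = [(c1, c2) for c1 in concepts for c2 in df
         if c1 != c2 and subsumes(c1, c2) and not subsumes(c2, c1)]
print("concepts:", concepts)
print("taxonomic edges (parent, child):", edges)
```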
14. Neural-Symbolic Modeling for Natural Language Discourse

Pacheco Gonzales, Maria Leonor, 13 May 2022
Language "in the wild" is complex and ambiguous and relies on a shared understanding of the world for its interpretation. Most current natural language processing methods represent language by learning word co-occurrence patterns from massive amounts of linguistic data. This representation can be very powerful, but it is insufficient to capture the meaning behind written and spoken communication.

In this dissertation, I will motivate neural-symbolic representations for dealing with these challenges. On the one hand, symbols have inherent explanatory power, and they can help us express contextual knowledge and enforce consistency across different decisions. On the other hand, neural networks allow us to learn expressive distributed representations and make sense of large amounts of linguistic data. I will introduce a holistic framework that covers all stages of the neural-symbolic pipeline: modeling, learning, inference, and its application to diverse discourse scenarios, such as analyzing online discussions, mining argumentative structures, and understanding public discourse at scale. I will show the advantages of neural-symbolic representations with respect to end-to-end neural approaches and traditional statistical relational learning methods.

In addition, I will demonstrate the advantages of neural-symbolic representations for learning in low-supervision settings, as well as their capability to decompose and explain high-level decisions. Lastly, I will explore interactive protocols to help human experts make sense of large repositories of textual data, and leverage neural-symbolic representations as the interface for injecting expert human knowledge into the process of partitioning, classifying, and organizing large language resources.
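The interplay described above, neural models supplying local scores and symbolic rules constraining the joint decision, can be sketched in a few lines. Everything in the sketch (the posts, the scores, the "opposite stance under attack" rule, and the exhaustive search) is a hypothetical stand-in; real systems replace the enumeration with dedicated inference solvers.

```python
import itertools

# A toy neural-symbolic inference step: local "neural" scores per decision,
# plus a symbolic constraint enforced by picking the best consistent joint
# assignment. All names, scores, and the rule itself are hypothetical.

scores = {  # stand-in neural stance scores: scores[post][label]
    "p1": {"pro": 2.0, "con": 0.1},
    "p2": {"pro": 1.1, "con": 0.9},
    "p3": {"pro": 0.2, "con": 1.8},
}
attacks = [("p2", "p1"), ("p3", "p2")]  # p2 attacks p1, p3 attacks p2

def consistent(assignment):
    # Symbolic rule: a post attacking another should take the opposite stance.
    return all(assignment[x] != assignment[y] for x, y in attacks)

posts, labels = list(scores), ["pro", "con"]
assignments = (dict(zip(posts, combo))
               for combo in itertools.product(labels, repeat=len(posts)))
best = max((a for a in assignments if consistent(a)),
           key=lambda a: sum(scores[p][a[p]] for p in posts))
print("MAP assignment under the attack constraint:", best)
```

Note that the unconstrained per-post argmax here (p1=pro, p2=pro, p3=con) violates the attack rule; the constrained optimum flips p2, which is exactly the kind of consistency-enforcing behaviour the symbolic layer contributes.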
