  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Event-Based Recognition of Lived Experiences in User Reviews / Reconnaissance d'expériences vécues dans les avis d'utilisateurs : une méthode basée sur les événements

Hassan, Ehab 03 May 2017 (has links)
La quantité de contenu généré par l'utilisateur sur le Web croît à un rythme rapide. Une grande partie de ce contenu est constituée des opinions et avis sur des produits et services. Vu leur impact, ces avis sont un facteur important dans les décisions concernant l'achat de ces produits ou services. Les utilisateurs ont tendance à faire confiance aux autres utilisateurs, surtout s'ils peuvent se comparer à ceux qui ont écrit les avis ou, en d'autres termes, s'ils sont confiants de partager certaines caractéristiques. Par exemple, les familles préféreront voyager dans les endroits qui ont été recommandés par d'autres familles. Nous supposons que les avis qui contiennent des expériences vécues sont plus précieux, puisque les expériences donnent aux avis un aspect plus subjectif, permettant aux lecteurs de se projeter dans le contexte de l'écrivain. En prenant en compte cette hypothèse, dans cette thèse, nous visons à identifier, extraire et représenter les expériences vécues rapportées dans les avis des utilisateurs en hybridant les techniques d'extraction des connaissances et de traitement du langage naturel, afin d'accélérer le processus décisionnel. Pour cela, nous avons défini opérationnellement une expérience vécue d'un utilisateur comme un événement mentionné dans un avis, où l'auteur est présent parmi les participants. Cette définition considère que les événements mentionnés dans le texte sont les éléments les plus importants dans les expériences vécues : toutes les expériences vécues sont basées sur des événements, qui sont clairement définis dans le temps et l'espace.
Par conséquent, nous proposons une approche permettant d'extraire les événements à partir des avis des utilisateurs, qui constituent la base d'un système permettant d'identifier et d'extraire les expériences vécues. Pour l'approche d'extraction d'événements, nous avons transformé les avis des utilisateurs en leurs représentations sémantiques en utilisant des techniques de machine reading. Nous avons effectué une analyse sémantique profonde des avis et détecté les cadres linguistiques les plus appropriés capturant des relations complexes exprimées dans les avis. Le système d'extraction des expériences vécues repose sur trois étapes. La première étape opère un filtrage des avis, basé sur les événements, permettant d'identifier les avis qui peuvent contenir des expériences vécues. La deuxième étape consiste à extraire les événements pertinents avec leurs participants. La dernière étape consiste à représenter les expériences vécues extraites de chaque avis comme un sous-graphe d'événements contenant les événements pertinents et leurs participants. Afin de tester notre hypothèse, nous avons effectué quelques expériences pour vérifier si les expériences vécues peuvent être considérées comme des motivations pour les notes attribuées par les utilisateurs dans le système de notation. Par conséquent, nous avons utilisé les expériences vécues comme des caractéristiques dans un système de classification, en les comparant avec les notes associées aux avis dans un ensemble de données extraites et annotées manuellement de Tripadvisor. Les résultats montrent que les expériences vécues sont corrélées avec les notes. Cette thèse fournit des contributions intéressantes dans le domaine de l'analyse d'opinion. Tout d'abord, l'application avec succès du machine reading afin d'identifier les expériences vécues. Ensuite, la confirmation que les expériences vécues sont liées aux notations.
Enfin, l'ensemble de données produit pour tester notre hypothèse constitue également une contribution importante de la thèse. / The quantity of user-generated content on the Web is constantly growing at a fast pace. A great share of this content is made of opinions and reviews on products and services. This electronic word-of-mouth is also an important factor in decisions about purchasing these products or services. Users tend to trust other users, especially if they can compare themselves to those who wrote the reviews or, in other words, if they are confident that they share some characteristics. For instance, families will prefer to travel to places that have been recommended by other families. We assume that reviews that contain lived experiences are more valuable, since experiences give the reviews a more subjective slant, allowing readers to project themselves into the context of the writer. With this hypothesis in mind, in this thesis we aim to identify, extract, and represent lived experiences reported in customer reviews by hybridizing Knowledge Extraction and Natural Language Processing techniques in order to accelerate the decision process. For this, we define a lived user experience as an event mentioned in a review where the author is among the participants. This definition considers that the events mentioned in the text are the most important elements in lived experiences: all lived experiences are based on events, which in turn are clearly defined in time and space. Therefore, we propose an approach to extract events from user reviews, which constitutes the basis of an event-based system to identify and extract lived experiences. For the event extraction approach, we transform user reviews into their semantic representations using machine reading techniques. We perform a deep semantic parsing of reviews, detecting the linguistic frames that capture complex relations expressed in the reviews. The event-based lived experience system is carried out in three steps.
The first step performs an event-based review filtering, which identifies reviews that may contain lived experiences. The second step consists of extracting relevant events together with their participants. The last step focuses on representing the lived experiences extracted from each review as an event sub-graph. In order to test our hypothesis, we carried out experiments to verify whether lived experiences can be considered as triggers for the ratings expressed by users. To this end, we used lived experiences as features in a classification system, comparing them with the ratings of the reviews in a dataset extracted and manually annotated from Tripadvisor. The results show that lived experiences are indeed correlated with the ratings. In conclusion, this thesis provides several interesting contributions to the field of opinion mining. First of all, the successful application of machine reading to identify lived experiences. Second, the confirmation that lived experiences are correlated with ratings. Finally, the dataset produced to test our hypothesis also constitutes an important contribution of the thesis.
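The three-step pipeline described above can be sketched in miniature. The `Event` class, the sample predicates, and the author token `"I"` below are illustrative stand-ins for the frames produced by the deep semantic parse, not the thesis's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Event:
    predicate: str
    participants: list

def is_lived_experience(event, author="I"):
    # Operational definition from the abstract: an event is a lived
    # experience when the review's author is among its participants.
    return author in event.participants

def filter_reviews(parsed_reviews):
    # Steps 1 and 2: keep only reviews containing at least one
    # author-participating event, retaining just those events.
    return {
        rid: [e for e in events if is_lived_experience(e)]
        for rid, events in parsed_reviews.items()
        if any(is_lived_experience(e) for e in events)
    }

def event_subgraph(events):
    # Step 3: represent one review's lived experiences as an event
    # sub-graph, here a plain (predicate, participant) edge list.
    return [(e.predicate, p) for e in events for p in e.participants]
```

A review mentioning "we stayed at the hotel" would survive the filter, while one that only reports "the hotel was renovated" would not, since the author is not a participant of the renovation event.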
2

An Automatically Generated Lexical Knowledge Base with Soft Definitions

Scaiano, Martin January 2016 (has links)
There is a need for methods that understand and represent the meaning of text for use in Artificial Intelligence (AI). This thesis demonstrates a method to automatically extract a lexical knowledge base from dictionaries for the purpose of improving machine reading. Machine reading refers to a process by which a computer processes natural language text into a representation that supports inference or inter-connection with existing knowledge (Clark and Harrison, 2010; the term machine reading was coined by Etzioni et al., 2006). There are a number of linguistic ideas associated with representing and applying the meaning of words which are unaddressed in current knowledge representations. This work draws heavily from the linguistic theory of frame semantics (Fillmore, 1976). A word is not a strictly defined construct; instead, it evokes our knowledge and experiences, and this information is adapted to a given context by human intelligence. This can often be seen in dictionaries, as a word may have many senses, but some are only subtle variations of the same theme or core idea. A further unaddressed issue is that sentences may have multiple reasonable and valid interpretations (or readings). This thesis postulates that there must be algorithms that work with symbolic representations which can model how words evoke knowledge and then contextualize that knowledge. I attempt to answer this previously unaddressed question, “How can a symbolic representation support multiple interpretations, evoked knowledge, soft word senses, and adaptation of meaning?” Furthermore, I implement and evaluate the proposed solution. This thesis proposes the use of a knowledge representation called Multiple Interpretation Graphs (MIGs), and a lexical knowledge structure called auto-frames to support contextualization. MIG is used to store a single auto-frame, the representation of a sentence, or an entire text. MIGs and auto-frames are produced from dependency parse trees using an algorithm I call connection search. MIG supports representing multiple interpretations of a text, while auto-frames combine multiple word senses and information related to the word into one representation. Connection search contextualizes MIGs and auto-frames, and reduces the number of interpretations that are considered valid. In this thesis, as proof of concept and evaluation, I extracted auto-frames from the Longman Dictionary of Contemporary English (LDOCE). I take the point of view that a word’s meaning depends on what it is connected to in its definition. I do not use a predetermined set of semantic roles; instead, auto-frames focus on the connections or mappings between a word’s context and its definitions. Once I have extracted the auto-frames, I demonstrate how they may be contextualized. I then apply the lexical knowledge base to reading comprehension. The results show that this approach can produce good precision on this task, although more research and refinement is needed. The knowledge base and source code are made available to the community at http://martin.scaiano.com/Auto-frames.html or by contacting martin@scaiano.com.
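The soft-definition idea can be sketched in miniature: a word evokes all of its senses at once, and a connection search keeps whichever senses have definition words connecting to the surrounding context. The toy dictionary and sense labels below are invented for illustration and are not LDOCE data or the thesis's actual auto-frame structure:

```python
# Invented toy dictionary: each sense maps to the content words of
# its definition, standing in for an auto-frame's connections.
TOY_DICT = {
    "bank": {
        "bank.1": {"money", "institution", "deposit"},
        "bank.2": {"river", "land", "edge"},
    },
}

def soft_senses(word):
    # Evoke every sense at once -- a "soft" word sense.
    return dict(TOY_DICT.get(word, {}))

def contextualize(word, context_words):
    # Keep senses whose definition words connect to the context.
    # If nothing connects, all interpretations remain valid, echoing
    # how MIGs retain multiple readings rather than guessing.
    senses = soft_senses(word)
    connected = {s: d for s, d in senses.items() if d & context_words}
    return connected or senses
```

In a sentence about a river walk, only `bank.2` survives; with no connecting context, both senses are carried forward instead of forcing an arbitrary choice.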
3

Knowledge integration in machine reading

Kim, Doo Soon 04 November 2011 (has links)
Machine reading is the artificial-intelligence task of automatically reading a corpus of texts and, from the contents, building a knowledge base that supports automated reasoning and question answering. Success at this task could fundamentally solve the knowledge acquisition bottleneck – the widely recognized problem that knowledge-based AI systems are difficult and expensive to build because of the difficulty of acquiring knowledge from authoritative sources and building useful knowledge bases. One challenge inherent in machine reading is knowledge integration – the task of correctly and coherently combining knowledge snippets extracted from texts. This dissertation shows that knowledge integration can be automated and that it can significantly improve the performance of machine reading. We specifically focus on two contributions of knowledge integration. The first contribution is for improving the coherence of learned knowledge bases to better support automated reasoning and question answering. Knowledge integration achieves this benefit by aligning knowledge snippets that contain overlapping content. The alignment is difficult because the snippets can use significantly different surface forms. In one common type of variation, two snippets might contain overlapping content that is expressed at different levels of granularity or detail. Our matcher can “see past” this difference to align knowledge snippets drawn from a single document, from multiple documents, or from a document and a background knowledge base. The second contribution is for improving text interpretation. Our approach is to delay ambiguity resolution to enable a machine-reading system to maintain multiple candidate interpretations. This is useful because typically, as the system reads through texts, evidence accumulates to help the knowledge integration system resolve ambiguities correctly. 
To avoid a combinatorial explosion in the number of candidate interpretations, we propose the packed representation to compactly encode all the candidates. Also, we present an algorithm that prunes interpretations from the packed representation as evidence accumulates. We evaluate our work by building and testing two prototype machine reading systems and measuring the quality of the knowledge bases they construct. The evaluation shows that our knowledge integration algorithms improve the cohesiveness of the knowledge bases, indicating their improved ability to support automated reasoning and question answering. The evaluation also shows that our approach to postponing ambiguity resolution improves the system’s accuracy at text interpretation. / text
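The packed-representation idea can be illustrated under a simplifying assumption that each ambiguity is local to one slot: candidates are stored per slot rather than as enumerated full readings, so pruning on accumulated evidence never requires expanding the cross-product. The class and slot names are illustrative, not the dissertation's actual data structures:

```python
from itertools import product

class PackedInterpretation:
    def __init__(self, slots):
        # slots: dict mapping each ambiguous item to its candidate readings.
        self.slots = {k: set(v) for k, v in slots.items()}

    def n_interpretations(self):
        # The number of full interpretations encoded, without
        # materializing any of them.
        n = 1
        for cands in self.slots.values():
            n *= len(cands)
        return n

    def prune(self, slot, still_viable):
        # Drop candidates ruled out by accumulated evidence.
        self.slots[slot] &= still_viable

    def unpack(self):
        # Enumerate surviving full interpretations (exponential --
        # only ever done at the end, if at all).
        keys = list(self.slots)
        return [dict(zip(keys, combo))
                for combo in product(*(sorted(self.slots[k]) for k in keys))]
```

Two two-way ambiguities encode four readings in four stored candidates rather than four full structures; pruning one slot halves the interpretation count in one set operation.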
4

Modeling Actions and State Changes for a Machine Reading Comprehension Dataset

January 2019 (has links)
Artificial general intelligence consists of many components, one of which is Natural Language Understanding (NLU). One application of NLU is Reading Comprehension, where a system is expected to understand all aspects of a text. Understanding natural procedure-describing text that deals with the existence of entities and the effects of actions on those entities, while reasoning and making inferences at the same time, is a particularly difficult task. ProPara, a recent natural language dataset by the Allen Institute for Artificial Intelligence, addresses the challenges of determining entity existence and tracking entities in natural text. As part of this work, an attempt is made to address the ProPara challenge. The Knowledge Representation and Reasoning (KRR) community has developed effective techniques for modeling and reasoning about actions, and similar techniques are used in this work. A system combining Inductive Logic Programming (ILP) and Answer Set Programming (ASP) is used to address the challenge; it achieves close to state-of-the-art results and provides an explainable model. An existing semantic role label parser is modified and used to parse the dataset. On analysis of the learnt model, it was found that some of the rules were not generic enough. To overcome this issue, the Proposition Bank dataset is used to add knowledge in an attempt to generalize the ILP-learnt rules and possibly improve the results. / Dissertation/Thesis / Masters Thesis Computer Science 2019
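The KRR-style modeling of actions can be illustrated outside ASP/ILP, in plain Python, as state-transition rules: each action has effects on an entity's existence and location, and replaying a procedure's steps answers the kind of existence-and-location question the ProPara task poses. The action names and effects below are invented stand-ins for the learnt model:

```python
# Invented toy action theory: each action's effect on entity existence.
EFFECTS = {
    "create":  {"exists": True},
    "move":    {"exists": True},
    "destroy": {"exists": False},
}

def final_state(steps):
    # Replay (action, location) steps for one entity and return its
    # final (exists, location) -- an entity-tracking query.
    exists, location = False, None
    for action, loc in steps:
        exists = EFFECTS[action]["exists"]
        location = loc if exists else None
    return exists, location
```

For a toy photosynthesis-style procedure where sugar is created in a leaf, moved to a root, and then consumed, the tracker reports that the entity no longer exists at the end but was at the root just before.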
5

Addressing the brittleness of knowledge-based question-answering

Chaw, Shaw Yi 02 April 2012 (has links)
Knowledge base systems are brittle when the users of the knowledge base are unfamiliar with its content and structure. Querying a knowledge base requires users to state their questions in precise and complete formal representations that relate the facts in the question with relevant terms and relations in the underlying knowledge base. This requirement places a heavy burden on the users to become deeply familiar with the contents of the knowledge base and prevents novice users from effectively using the knowledge base for problem solving. As a result, the utility of knowledge base systems is often restricted to the developers themselves. The goal of this work is to help users, who may possess little domain expertise, to use unfamiliar knowledge bases for problem solving. Our thesis is that the difficulty in using unfamiliar knowledge bases can be addressed by an approach that funnels natural questions, expressed in English, into formal representations appropriate for automated reasoning. The approach uses a simplified English controlled language, a domain-neutral ontology, a set of mechanisms to handle a handful of well-known question types, and a software component, called the Question Mediator, to identify relevant information in the knowledge base for problem solving. With our approach, a knowledge base user can use a variety of unfamiliar knowledge bases by posing their questions in simplified English to retrieve relevant information for problem solving. We studied the thesis in the context of a system called ASKME. We evaluated ASKME on the task of answering exam questions for college-level biology, chemistry, and physics. The evaluation consists of successive experiments to test if ASKME can help novice users employ unfamiliar knowledge bases for problem solving. The initial experiment measures ASKME's level of performance under ideal conditions, where the knowledge base is built and used by the same knowledge engineers.
Subsequent experiments measure ASKME's level of performance under increasingly realistic conditions. In the final experiment, we measure ASKME's level of performance under conditions where the knowledge base is independently built by subject matter experts and the users of the knowledge base are a group of novices who are unfamiliar with the knowledge base. Results from the evaluation show that ASKME works well on different knowledge bases and answers a broad range of questions that were posed by novice users in a variety of domains. / text
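The funneling idea can be sketched as a small pattern table that maps controlled-English question forms onto formal query tuples. The patterns and query shapes below are invented stand-ins, not ASKME's actual controlled language or ontology:

```python
import re

# Invented controlled-language patterns: each maps one question form
# to a builder producing a formal query tuple.
PATTERNS = [
    (re.compile(r"^What is the (\w+) of (?:a |an |the )?(\w+)\?$", re.I),
     lambda m: ("lookup", m.group(2).lower(), m.group(1).lower())),
    (re.compile(r"^Is (?:a |an )?(\w+) (?:a |an )?(\w+)\?$", re.I),
     lambda m: ("isa", m.group(1).lower(), m.group(2).lower())),
]

def to_query(question):
    # Funnel a controlled-English question into a formal query tuple,
    # or report that it falls outside the controlled language.
    for pattern, build in PATTERNS:
        m = pattern.match(question.strip())
        if m:
            return build(m)
    return ("unparsed", question)
```

Restricting users to a handful of question forms is what makes the mapping to formal representations tractable; anything outside the forms is rejected rather than misread, which suits novice users who can rephrase.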
6

Bayesian Logic Programs for plan recognition and machine reading

Vijaya Raghavan, Sindhu 22 February 2013 (has links)
Several real world tasks involve data that is uncertain and relational in nature. Traditional approaches like first-order logic and probabilistic models either deal with structured data or uncertainty, but not both. To address these limitations, statistical relational learning (SRL), a new area in machine learning integrating both first-order logic and probabilistic graphical models, has emerged in the recent past. The advantage of SRL models is that they can handle both uncertainty and structured/relational data. As a result, they are widely used in domains like social network analysis, biological data analysis, and natural language processing. Bayesian Logic Programs (BLPs), which integrate both first-order logic and Bayesian networks, are a powerful SRL formalism developed recently. In this dissertation, we develop approaches using BLPs to solve two real world tasks – plan recognition and machine reading. Plan recognition is the task of predicting an agent’s top-level plans based on its observed actions. It is an abductive reasoning task that involves inferring cause from effect. In the first part of the dissertation, we develop an approach to abductive plan recognition using BLPs. Since BLPs employ logical deduction to construct the networks, they cannot be used effectively for abductive plan recognition as is. Therefore, we extend BLPs to use logical abduction to construct Bayesian networks and call the resulting model Bayesian Abductive Logic Programs (BALPs). In the second part of the dissertation, we apply BLPs to the task of machine reading, which involves automatic extraction of knowledge from natural language text. Most information extraction (IE) systems identify facts that are explicitly stated in text. However, much of the information conveyed in text must be inferred from what is explicitly stated since easily inferable facts are rarely mentioned.
Human readers naturally use common sense knowledge and “read between the lines” to infer such implicit information from the explicitly stated facts. Since IE systems do not have access to common sense knowledge, they cannot perform deeper reasoning to infer implicitly stated facts. Here, we first develop an approach using BLPs to infer implicitly stated facts from natural language text. It involves learning uncertain common sense knowledge in the form of probabilistic first-order rules by mining a large corpus of automatically extracted facts using an existing rule learner. These rules are then used to derive additional facts from extracted information using BLP inference. We then develop an online rule learner that handles the concise, incomplete nature of natural-language text and learns first-order rules from noisy IE extractions. Finally, we develop a novel approach to calculate the weights of the rules using a curated lexical ontology like WordNet. Both tasks described above involve inference and learning from partially observed or incomplete data. In plan recognition, the underlying cause or the top-level plan that resulted in the observed actions is not known or observed. Further, only a subset of the executed actions can be observed by the plan recognition system, resulting in partially observed data. Similarly, in machine reading, since some information is only implicitly stated, it is rarely observed directly in the data. In this dissertation, we demonstrate the efficacy of BLPs for inference and learning from incomplete data. Experimental comparisons on various benchmark data sets for both tasks demonstrate the superior performance of BLPs over state-of-the-art methods. / text
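The derivation of implicit facts from weighted rules can be sketched with ground atoms and noisy-or combination. Real BLP inference constructs a Bayesian network from first-order rules, so the simple forward chainer below, with its invented facts and weights, is only a minimal illustration of the idea:

```python
def infer(facts, rules):
    # facts: dict mapping a ground atom to its probability.
    # rules: list of (body_atoms, head_atom, weight); a rule fires with
    # probability weight * product of body probabilities, and evidence
    # for the same head from different rules combines by noisy-or.
    derived = dict(facts)
    fired = set()          # fire each rule at most once
    changed = True
    while changed:
        changed = False
        for i, (body, head, w) in enumerate(rules):
            if i not in fired and all(b in derived for b in body):
                p = w
                for b in body:
                    p *= derived[b]
                prior = derived.get(head, 0.0)
                derived[head] = 1 - (1 - prior) * (1 - p)
                fired.add(i)
                changed = True
    return derived
```

Given an extracted fact like bornIn(kafka, prague) and a rule that people usually live where they are born, the chainer adds livedIn(kafka, prague) with reduced confidence, and can chain further, which is the "read between the lines" behavior described above in toy form.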
