1

Modeling Actions and State Changes for a Machine Reading Comprehension Dataset

January 2019
abstract: Artificial general intelligence comprises many components, one of which is Natural Language Understanding (NLU). One application of NLU is Reading Comprehension, in which a system is expected to understand all aspects of a text. Understanding natural procedure-describing text, which concerns the existence of entities and the effects of actions on those entities while requiring reasoning and inference, is a particularly difficult task. ProPara, a recent natural language dataset from the Allen Institute for Artificial Intelligence, addresses the challenges of determining entity existence and tracking entities in natural text. As part of this work, an attempt is made to address the ProPara challenge. The Knowledge Representation and Reasoning (KRR) community has developed effective techniques for modeling and reasoning about actions, and similar techniques are used in this work. A system combining Inductive Logic Programming (ILP) and Answer Set Programming (ASP) is used to address the challenge; it achieves close to state-of-the-art results and provides an explainable model. An existing semantic role labeling parser is modified and used to parse the dataset. On analysis of the learnt model, some of the rules were found to be insufficiently generic. To overcome this issue, the Proposition Bank dataset is used to add knowledge, in an attempt to generalize the ILP-learnt rules and possibly improve the results. / Dissertation/Thesis / Masters Thesis Computer Science 2019
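The entity-tracking task the abstract describes can be illustrated with a minimal sketch in plain Python. This is not the thesis's ILP/ASP system; the action names and state representation here are hypothetical, chosen only to show what "entity existence and entity tracking" over a procedure means:

```python
# Toy ProPara-style entity tracker: for each step of a procedure,
# record whether an entity still exists and where it is located.
# Illustrative only; the thesis itself learns such rules via ILP/ASP.

def track_entity(steps, entity):
    """Return the entity's (exists, location) state after each step.

    Each step is a (action, subject, destination) triple.
    """
    exists, location = True, "unknown"
    states = []
    for action, subject, destination in steps:
        if subject == entity:
            if action == "move":
                location = destination
            elif action == "destroy":
                exists, location = False, None
            elif action == "create":
                exists, location = True, destination
        states.append((exists, location))
    return states

# Toy procedure: water moves to the leaf, then is consumed.
steps = [("move", "water", "leaf"), ("destroy", "water", None)]
print(track_entity(steps, "water"))  # [(True, 'leaf'), (False, None)]
```

The point of the task is that a system must maintain such per-entity state across steps while reading, rather than answering each question from a single sentence in isolation.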
2

Using Bidirectional Encoder Representations from Transformers for Conversational Machine Comprehension

Gogoulou, Evangelina, January 2019
Bidirectional Encoder Representations from Transformers (BERT) is a recently proposed language representation model, designed to pre-train deep bidirectional representations with the goal of extracting context-sensitive features from an input text [1]. One of the challenging problems in the field of Natural Language Processing is Conversational Machine Comprehension (CMC): given a context passage, a conversational question, and the conversational history, the system should predict the span of the context passage that answers the question. The main challenge in this task is how to effectively encode the conversational history into the prediction of the next answer. In this thesis work, we investigate the use of the BERT language model for the CMC task. We propose a new architecture, named BERT-CMC, using the BERT model as a base. This architecture includes a new module for encoding the conversational history, inspired by the Transformer-XL model [2]; this module serves as a memory throughout the conversation. The proposed model is trained and evaluated on the Conversational Question Answering (CoQA) dataset [3]. Our hypothesis is that the BERT-CMC model will effectively learn the underlying context of the conversation, leading to better performance than the baseline model proposed for CoQA. Our results of evaluating BERT-CMC on the CoQA dataset show that the model performs poorly (44.7% F1 score) compared to the CoQA baseline model (66.2% F1 score). In the interest of model explainability, we also perform a qualitative analysis of the model's behavior on questions exhibiting various linguistic phenomena, e.g. coreference and pragmatic reasoning. Additionally, we motivate the critical design choices made by performing an ablation study of their effect on model performance. The results suggest that fine-tuning the BERT layers boosts model performance. Moreover, it is shown that increasing the number of extra layers on top of BERT leads to a larger capacity of the conversational memory.
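The F1 scores quoted above (44.7% vs. 66.2%) come from CoQA's token-overlap metric for predicted answer spans. A simplified sketch of that metric follows; it omits CoQA's answer normalization and multi-reference averaging, so treat it as an approximation of the official scorer rather than a reimplementation of it:

```python
# Simplified token-level F1 between a predicted answer span and a gold
# answer: precision/recall over the multiset of overlapping tokens.
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    pred_toks = prediction.lower().split()
    gold_toks = gold.lower().split()
    # Multiset intersection: each shared token counts at most as often
    # as it appears in both strings.
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(token_f1("in the park", "the park"))  # 0.8
```

Because the metric rewards partial overlap, a model that finds roughly the right region of the passage still earns credit, which is why span-level F1 rather than exact match is the headline CoQA number.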
