131 |
Deep learning for reading and understanding language. Kočiský, Tomáš. January 2017.
This thesis presents novel tasks and deep learning methods for machine reading comprehension and question answering, with the goal of achieving natural language understanding. First, we consider a semantic parsing task in which the model understands sentences and translates them into a logical form or instructions. We present a novel semi-supervised sequential autoencoder that treats language as a discrete sequential latent variable and semantic parses as the observations. This model allows us to leverage synthetically generated unpaired logical forms, thereby alleviating the lack of supervised training data, and we show that the semi-supervised model outperforms a purely supervised model when trained with the additional generated data. Second, reading comprehension requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess reading comprehension ability, in both artificial agents and children learning to read. We propose a new, challenging, supervised reading comprehension task: we gather a large-scale dataset of news stories from the CNN and Daily Mail websites, with Cloze-style questions created from the story highlights. This dataset makes it possible, for the first time, to train deep learning models for reading comprehension. We also introduce novel attention-based models for this task and present a qualitative analysis of the attention mechanism. Finally, following recent advances in reading comprehension in both models and task design, we propose a new task for understanding complex narratives, NarrativeQA, consisting of the full texts of books and movie scripts. We collect human-written questions and answers based on high-level plot summaries. The task is designed to encourage the development of models for language understanding: successfully answering its questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve these tasks easily, standard reading comprehension models struggle on them.
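To make the Cloze construction concrete, here is a minimal sketch (an illustration only, not the thesis pipeline; the `make_cloze` helper and the pre-marked entity list are assumptions): one entity in a highlight is replaced by a placeholder, and the masked entity becomes the answer.

```python
# Toy Cloze-question construction in the spirit of the CNN/Daily Mail setup.
# Entities are assumed pre-marked; the real pipeline tokenizes the stories
# and anonymizes entities before masking.

def make_cloze(highlight, entities):
    """Return (question, answer) pairs, one per entity in the highlight."""
    pairs = []
    for entity in entities:
        if entity in highlight:
            question = highlight.replace(entity, "@placeholder")
            pairs.append((question, entity))
    return pairs

highlight = "@entity1 beat @entity2 in the cup final"
print(make_cloze(highlight, ["@entity1", "@entity2"]))
# [('@placeholder beat @entity2 in the cup final', '@entity1'),
#  ('@entity1 beat @placeholder in the cup final', '@entity2')]
```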
|
132 |
Cross-Lingual Transfer of Natural Language Processing Systems. Rasooli, Mohammad Sadegh. January 2019.
Accurate natural language processing systems rely heavily on annotated datasets. In the absence of such datasets, transfer methods can help to develop a model by transferring annotations from one or more rich-resource languages to the target language of interest. These methods are generally divided into two approaches: 1) annotation projection from translation data, also known as parallel data, using supervised models in rich-resource languages, and 2) direct model transfer from annotated datasets in rich-resource languages.
In this thesis, we demonstrate different methods for transfer of dependency parsers and sentiment analysis systems. We propose an annotation projection method that performs well in the scenarios for which a large amount of in-domain parallel data is available. We also propose a method which is a combination of annotation projection and direct transfer that can leverage a minimal amount of information from a small out-of-domain parallel dataset to develop highly accurate transfer models. Furthermore, we propose an unsupervised syntactic reordering model to improve the accuracy of dependency parser transfer for non-European languages. Finally, we conduct a diverse set of experiments for the transfer of sentiment analysis systems in different data settings.
A summary of our contributions is as follows:
* We develop accurate dependency parsers using parallel text in an annotation projection framework. We make use of the fact that the density of word alignments is a valuable indicator of reliability in annotation projection (see the sketch after this list).
* We develop accurate dependency parsers in the absence of a large amount of parallel data. We use the Bible data, which is orders of magnitude smaller than a conventional parallel dataset, to provide minimal cues for creating cross-lingual word representations. Our model is also capable of boosting the performance of annotation projection when a large amount of parallel data is available. It builds cross-lingual word representations that go beyond traditional delexicalized direct transfer methods. Moreover, we propose a simple but effective word translation approach that brings explicit lexical features from the target language into our direct transfer method.
* We develop different syntactic reordering models that transform the source treebanks of rich-resource languages, thus preventing the parser from learning word-order patterns that do not hold in an unrelated target language. Our experimental results show substantial improvements on non-European languages.
* We develop transfer methods for sentiment analysis in different data availability scenarios. We show that we can leverage cross-lingual word embeddings to create accurate sentiment analysis systems in the absence of annotated data in the target language of interest.
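As a concrete illustration of the projection step and the alignment-density cue from the first contribution, the sketch below (an assumed helper, not the thesis code, which handles many-to-many alignments and richer filtering) projects dependency heads through one-to-one word alignments and rejects sentences whose alignments are too sparse:

```python
# Minimal dependency annotation projection: source heads are mapped onto
# target tokens through one-to-one word alignments, and alignment density
# filters out unreliable projections.

def project_heads(src_heads, alignment, tgt_len, min_density=0.8):
    """src_heads[i] is the head index of source token i (-1 for root).
    alignment maps source index -> target index (one-to-one).
    Returns projected target heads, or None if the alignment is too sparse."""
    if len(alignment) / tgt_len < min_density:
        return None  # too few aligned tokens to trust the projection
    tgt_heads = [None] * tgt_len
    for s, t in alignment.items():
        h = src_heads[s]
        if h == -1:
            tgt_heads[t] = -1              # root projects to root
        elif h in alignment:
            tgt_heads[t] = alignment[h]    # head projects through alignment
    return tgt_heads

# Source "the dog barks" with heads [1, 2, -1], aligned 0-0, 1-1, 2-2.
print(project_heads([1, 2, -1], {0: 0, 1: 1, 2: 2}, 3))  # [1, 2, -1]
```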
We believe that the novelties introduced in this thesis demonstrate the usefulness of transfer methods. This is appealing in practice, especially because they remove the requirement of annotating new datasets for low-resource languages, annotation that is expensive, if not impossible, to obtain.
|
133 |
A robust unification-based parser for Chinese natural language processing. January 2001.
Chan Shuen-ti Roy. Thesis (M.Phil.), Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 168-175). Abstracts in English and Chinese.

Contents:
1. Introduction: the nature of natural language processing; applications of natural language processing; purpose of study; organization of this thesis.
2. Organization and methods in natural language processing: organization of a natural language processing system; methods employed; unification-based grammar processing, covering Generalized Phrase Structure Grammar (GPSG), Head-driven Phrase Structure Grammar (HPSG), and the common drawbacks of UBGs; corpus-based processing and its drawbacks.
3. Difficulties in Chinese language processing and related works: a glance at the history; difficulties in syntactic analysis of Chinese (the writing system of Chinese causes a segmentation problem; words serving multiple grammatical functions without inflection; word order of Chinese; the Chinese grammatical word); related works in unification grammar processing and corpus-based processing; restatement of goal.
4. SERUP: Statistical-Enhanced Robust Unification Parser.
5. Step one, automatic preprocessing: segmentation of lexical tokens; conversion of date, time and numerals; identification of new words (proper nouns such as Chinese names; other proper nouns and multi-syllabic words); defining the smallest parsing unit (the Chinese sentence; breaking down the paragraphs; implementation).
6. Step two, grammar construction: criteria in choosing a UBG model; the grammar in detail (the PHON, SYN and SEM features; grammar rules and feature principles; verb phrases; noun phrases; prepositional phrases; "ba2" and "bei4" constructions; the terminal node S; summary of phrasal rules; morphological rules).
7. Step three, resolving structural ambiguities: sources of ambiguities; the traditional practices and their deficiencies; a new point of view following Wu (1999) and improvements over it; conclusion on semantic features.
8. Implementation, performance and evaluation: implementation; performance and evaluation (the test set; segmentation of lexical tokens; new word identification; parsing unit segmentation; the grammar); overall performance of SERUP.
9. Conclusion: summary of this thesis; contribution of this thesis; future work.
References; Appendices I-III.
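Since the parser's core operation is unification, a minimal sketch may help; it assumes plain dict-based feature structures, whereas SERUP works with typed structures under GPSG/HPSG-style principles:

```python
# Minimal feature-structure unification: unification fails on a value clash
# and otherwise merges the information carried by both structures.

def unify(a, b):
    """Unify two feature structures; return None on failure."""
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for feat, value in b.items():
            if feat in result:
                merged = unify(result[feat], value)
                if merged is None:
                    return None  # a clash anywhere fails the whole unification
                result[feat] = merged
            else:
                result[feat] = value
        return result
    return a if a == b else None  # atomic values must match exactly

subj = {"cat": "NP", "num": "sg"}
verb_requires = {"num": "sg", "person": "3"}
print(unify(subj, verb_requires))           # {'cat': 'NP', 'num': 'sg', 'person': '3'}
print(unify({"num": "sg"}, {"num": "pl"}))  # None: number clash
```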
|
134 |
[en] Transition-Based Dependency Parsing Applied on Universal Dependencies. Cesar de Souza Boucas. 11 February 2019.
Dependency parsing is the task of transforming a sentence into a syntactic structure, usually a dependency tree, that represents hierarchical relations between words. This computationally efficient representation helps address challenges that arise with the growing volume of textual information online; it can be used, for example, to help computers infer the meaning of words across many natural languages. This thesis presents dependency parsing with a focus on one of its most popular machine learning formulations: the transition-based method. We develop a greedy implementation of this model with a simple neural classifier to run experiments. Treebanks from the Universal Dependencies initiative are used to train the system, which is then tested with the validation script released for the CoNLL-2017 shared task. The results show empirically that performance improves when the input layer of the network is initialized with pre-trained word representations. The parser reaches 84.51 LAS on the Brazilian Portuguese test set and 75.19 LAS on the English test set, about 4 points behind the best published results for transition-based dependency parsers.
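As an illustration of the greedy transition-based model described above, here is a minimal arc-standard sketch; the `classify` argument is a stand-in for the thesis's neural classifier, which scores SHIFT, LEFT-ARC and RIGHT-ARC from stack and buffer features:

```python
# Greedy arc-standard transition-based parsing skeleton.

def parse(words, classify):
    """classify(stack, buffer) -> 'SHIFT', 'LEFT', or 'RIGHT'."""
    stack, buffer = [], list(range(len(words)))
    heads = [-1] * len(words)
    while buffer or len(stack) > 1:
        action = classify(stack, buffer)
        if action == "SHIFT" and buffer:
            stack.append(buffer.pop(0))
        elif action == "LEFT" and len(stack) >= 2:
            dep = stack.pop(-2)          # second-from-top takes top as head
            heads[dep] = stack[-1]
        elif action == "RIGHT" and len(stack) >= 2:
            dep = stack.pop()            # top takes second-from-top as head
            heads[dep] = stack[-1]
        else:
            break  # classifier chose an inapplicable action
    return heads

# Toy classifier for demonstration: shift everything, then attach leftward,
# so every word ends up headed by the last word.
demo = lambda stack, buffer: "SHIFT" if buffer else "LEFT"
print(parse(["the", "dog", "barks"], demo))  # [2, 2, -1]
```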
|
135 |
Lexical approaches to backoff in statistical parsing. Lakeland, Corrin. January 2006.
This thesis develops a new method for predicting probabilities in a statistical parser so that more sophisticated probabilistic grammars can be used. A statistical parser uses a probabilistic grammar derived from a training corpus of hand-parsed sentences. The grammar is represented as a set of constructions - in a simple case these might be context-free rules. The probability of each construction in the grammar is then estimated by counting its relative frequency in the corpus.
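As a concrete example of this estimation step, a relative-frequency (maximum likelihood) sketch might look as follows, with a toy set of context-free rules standing in for the hand-parsed corpus:

```python
# Relative-frequency estimation: each rule's probability is its count
# divided by the count of all rules with the same left-hand side.

from collections import Counter

rules = [("S", ("NP", "VP")), ("NP", ("DT", "NN")), ("NP", ("NNP",)),
         ("NP", ("DT", "NN")), ("VP", ("VB", "NP"))]  # extracted from parses

rule_counts = Counter(rules)
lhs_counts = Counter(lhs for lhs, _ in rules)
probs = {r: c / lhs_counts[r[0]] for r, c in rule_counts.items()}
print(probs[("NP", ("DT", "NN"))])  # 0.666...: two of the three NP expansions
```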
A crucial problem when building a probabilistic grammar is to select an appropriate level of granularity for describing the constructions being learned. The more constructions we include in our grammar, the more sophisticated a model of the language we produce. However, if too many different constructions are included, then our corpus is unlikely to contain reliable information about the relative frequency of many constructions.
In existing statistical parsers two main approaches have been taken to choosing an appropriate granularity. In a non-lexicalised parser constructions are specified as structures involving particular parts-of-speech, thereby abstracting over individual words. Thus, in the training corpus two syntactic structures involving the same parts-of-speech but different words would be treated as two instances of the same event. In a lexicalised grammar the assumption is that the individual words in a sentence carry information about its syntactic analysis over and above what is carried by its part-of-speech tags. Lexicalised grammars have the potential to provide extremely detailed syntactic analyses; however, Zipf's law makes it hard for such grammars to be learned.
In this thesis, we propose a method for optimising the trade-off between informative and learnable constructions in statistical parsing. We implement a grammar which works at a level of granularity in between single words and parts-of-speech, by grouping words together using unsupervised clustering based on bigram statistics. We begin by implementing a statistical parser to serve as the basis for our experiments. The parser, based on that of Michael Collins (1999), contains a number of new features of general interest. We then implement a model of word clustering, which we believe is the first to deliver vector-based word representations for an arbitrarily large lexicon. Finally, we describe a series of experiments in which the statistical parser is trained using categories based on these word representations.
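To illustrate the intermediate granularity, the sketch below (a toy stand-in for the thesis's clustering model, which scales to an arbitrarily large lexicon) builds bigram-context vectors and compares words by cosine similarity; words with similar context vectors would be grouped into the same category:

```python
# Bigram-based word representations: each word is described by the counts
# of its left neighbours, and similar vectors suggest similar categories.

from collections import Counter, defaultdict
from math import sqrt

def context_vectors(sentences):
    vecs = defaultdict(Counter)
    for sent in sentences:
        for left, word in zip(sent, sent[1:]):
            vecs[word][left] += 1  # count left-neighbour bigrams
    return vecs

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

sents = [["the", "dog", "ran"], ["the", "cat", "ran"], ["a", "dog", "sat"]]
vecs = context_vectors(sents)
print(cosine(vecs["dog"], vecs["cat"]))  # fairly high: both follow determiners
```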
|
136 |
UNITRAN: An Interlingual Machine Translation System. Dorr, Bonnie Jean. 01 December 1987.
This report describes the UNITRAN (UNIversal TRANslator) system, an implementation of a principle-based approach to natural language translation. The system is "interlingual", i.e., the model is based on universal principles that hold across all languages; the distinctions among languages are handled by settings of parameters associated with the universal principles. Interaction effects of linguistic principles are handled by the system so that the programmer does not need to specifically spell out the details of rule applications. Only a small set of principles covers all languages; thus, the unmanageable grammar size of alternative approaches is no longer a problem.
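As a toy illustration of the parameter-setting idea (the dictionary and rule below are assumptions for exposition, not UNITRAN's actual principles), a single universal phrase-building rule plus a per-language head-direction parameter yields different surface orders:

```python
# One universal rule ("a phrase is a head plus its complements") with a
# per-language parameter value, in the spirit of principles-and-parameters.

PARAMS = {"english": {"head_initial": True},    # verb precedes its object
          "japanese": {"head_initial": False}}  # verb follows its object

def order_phrase(head, complements, language):
    if PARAMS[language]["head_initial"]:
        return [head] + complements
    return complements + [head]

print(order_phrase("read", ["the book"], "english"))   # ['read', 'the book']
print(order_phrase("read", ["the book"], "japanese"))  # ['the book', 'read']
```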
|
137 |
Automated Program Recognition. Wills, Linda M. 01 February 1987.
The key to understanding a program is recognizing familiar algorithmic fragments and data structures in it. Automating this recognition process will make it easier to perform many tasks which require program understanding, e.g., maintenance, modification, and debugging. This report describes a recognition system, called the Recognizer, which automatically identifies occurrences of stereotyped computational fragments and data structures in programs. The Recognizer is able to identify these familiar fragments and structures, even though they may be expressed in a wide range of syntactic forms. It does so systematically and efficiently by using a parsing technique. Two important advances have made this possible. The first is a language-independent graphical representation for programs and programming structures which canonicalizes many syntactic features of programs. The second is an efficient graph parsing algorithm.
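As a much simpler analogue of this idea (a plain AST walk rather than the Recognizer's graph parsing over a canonical, language-independent representation), the sketch below recognizes a summation idiom across syntactic variants because the match is structural, not textual:

```python
# Toy structural recognition of a stereotyped fragment: any for-loop that
# accumulates into a variable with '+=' is flagged, however it is written.

import ast

def is_summation(source):
    """Detect a for-loop accumulating into a variable via '+='."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.For):
            for stmt in node.body:
                if isinstance(stmt, ast.AugAssign) and isinstance(stmt.op, ast.Add):
                    return True
    return False

print(is_summation("total = 0\nfor x in xs:\n    total += x"))      # True
print(is_summation("s = 0\nfor item in data:\n    s += item * 2"))  # True
```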
|
138 |
Robust Parsing and Disambiguation with Weighted Transducers (Robustes Parsing und Disambiguierung mit gewichteten Transduktoren). Didakowski, Jörg. January 2005.
This thesis develops a method for robust parsing of unrestricted natural language text with weighted transducers. Two linguistic theories, chunking and syntactic tagging, are presented that are particularly well suited to practical use with finite-state machines. After the formal foundations that make it possible to model finite-state machines, existing approaches that realize these linguistic theories with finite-state machines are reviewed. These approaches, however, are problematic in many respects. It is shown that the problems can be solved by realizing disambiguation strategies as constraints expressed as weights over a semiring; disambiguation is then possible by computing the best path. The method operates between low-level and high-level parsing and handles flat dependency structures. For the analysis, a rudimentary grammar of German is developed. Finally, the approach is tested in an implementation.
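As a concrete illustration of best-path disambiguation (a simplified stand-in for the weighted transducers used in the thesis; a tropical semiring is assumed, where path costs add and the minimum wins), ambiguous analyses become paths through a weighted acyclic automaton and constraint violations add weight:

```python
# Best-path search over a weighted acyclic automaton: the cheapest path
# through the lattice of analyses is the disambiguated result.

def best_path(arcs, n_states, start=0, final=None):
    """arcs: list of (src, dst, label, weight) with src < dst (acyclic)."""
    final = n_states - 1 if final is None else final
    INF = float("inf")
    cost = [INF] * n_states
    back = [None] * n_states
    cost[start] = 0.0
    for src, dst, label, w in sorted(arcs):  # src < dst gives topological order
        if cost[src] + w < cost[dst]:
            cost[dst] = cost[src] + w
            back[dst] = (src, label)
    labels, state = [], final
    while back[state]:
        state, label = back[state]
        labels.append(label)
    return list(reversed(labels)), cost[final]

# Two analyses of one token; the constraint-violating reading costs more.
arcs = [(0, 1, "NOUN", 0.0), (0, 1, "VERB", 2.0), (1, 2, "VERB", 0.0)]
print(best_path(arcs, 3))  # (['NOUN', 'VERB'], 0.0)
```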
|
139 |
Optimal Parsing for Dictionary-Based Text Compression (Parsing optimal pour la compression du texte par dictionnaire). Langiu, Alessio. 03 April 2012.
Dictionary-based data compression algorithms include a parsing strategy that transforms the input text into a sequence of dictionary phrases. Given a text, such a process is generally not unique, and, for compression, it makes sense to find, among the possible parsings, one that minimizes the final compression ratio. This is known as the parsing problem. An optimal parsing is a parsing strategy, or a parsing algorithm, that solves this problem while taking into account all the constraints of a compression algorithm or of a homogeneous class of compression algorithms. Such constraints are, for instance, the dictionary itself, i.e., the dynamic set of available phrases, and how much a phrase weighs in the compressed text, i.e., the length of the codeword that represents the phrase, also called the encoding cost of a dictionary pointer. In more than thirty years of dictionary-based text compression, a large number of algorithms, variants, and extensions have appeared. Yet, while this approach to text compression has become one of the most widely used in almost all storage and communication processes, only a few optimal parsing algorithms have been presented. Many compression algorithms still lack optimality in their parsing, or at least a proof of it. This is because there is no general model of the parsing problem that covers all dictionary algorithms, and because existing optimal parsings work under overly restrictive assumptions. This work focuses on the parsing problem and presents both a general model for dictionary-based text compression, called the Dictionary-Symbolwise theory, and a general parsing algorithm that is proved optimal under some realistic assumptions. This algorithm, called Dictionary-Symbolwise Flexible Parsing, covers practically all dictionary-based text compression algorithms, as well as the large class of their variants in which the text is decomposed into a sequence of symbols and dictionary phrases. We also consider the case of a free mixture of a dictionary compressor and a symbolwise compressor; Dictionary-Symbolwise Flexible Parsing covers this case as well. We obtain an optimal parsing algorithm for Dictionary-Symbolwise compression when the dictionary is prefix-closed and the encoding cost of dictionary pointers is variable. The symbolwise compressor is a classical one that runs in linear time, as many common variable-length encoders do. Our algorithm works under the assumption that a special graph, described later, is well defined. Even when this condition is not met, the same method can be used to obtain near-optimal parsings. In detail, when the dictionary is LZ78-like, we show how to implement our algorithm in linear time; when the dictionary is LZ77-like, it can be implemented in O(n log n) time, where n is the length of the text. In both cases the space complexity is O(n). Although the main aim of this work is theoretical, experimental results are presented to highlight some practical effects of parsing optimality on compression performance, with more detailed experimental results given in an appendix.
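As a concrete illustration of this graph view (a toy sketch with a static dictionary and constant costs, unlike the dynamic dictionaries and variable pointer costs treated in the thesis), optimal parsing reduces to a minimum-cost path over text positions:

```python
# Optimal parsing as a shortest path: vertices are text positions, each
# matching phrase adds an edge i -> i + len(phrase) weighted by its cost,
# and a single-symbol edge provides the symbolwise fallback.

def optimal_parsing(text, dictionary, phrase_cost, symbol_cost):
    n = len(text)
    INF = float("inf")
    cost = [INF] * (n + 1)
    choice = [None] * (n + 1)
    cost[0] = 0.0
    for i in range(n):
        if cost[i] == INF:
            continue
        if cost[i] + symbol_cost < cost[i + 1]:       # symbolwise edge
            cost[i + 1], choice[i + 1] = cost[i] + symbol_cost, text[i]
        for phrase in dictionary:                     # dictionary-phrase edges
            j = i + len(phrase)
            if text.startswith(phrase, i) and cost[i] + phrase_cost < cost[j]:
                cost[j], choice[j] = cost[i] + phrase_cost, phrase
    parse, i = [], n
    while i > 0:                                      # backtrace the path
        parse.append(choice[i])
        i -= len(choice[i])
    return list(reversed(parse)), cost[n]

print(optimal_parsing("abab", {"ab", "aba"}, 1.0, 1.0))  # (['ab', 'ab'], 2.0)
```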
|
140 |
Probabilistic Shape Parsing and Action Recognition Through Binary Spatio-Temporal Feature Description. Whiten, Christopher J. 09 April 2013.
In this thesis, contributions are presented in the areas of shape parsing for view-based object recognition and spatio-temporal feature description for action recognition. A probabilistic model is presented that parses shapes into several distinguishable parts for accurate shape recognition; the approach is based on robust geometric features that permit high recognition accuracy.
As the second contribution of this thesis, a binary spatio-temporal feature descriptor is presented. Recent work shows that binary spatial feature descriptors are effective for increasing the efficiency of object recognition while retaining performance comparable to state-of-the-art descriptors. An extension of these approaches to action recognition is presented, yielding large efficiency gains because the bag-of-words representation can be computed with the Hamming distance. A scene's motion and appearance are encoded with a short binary string, and exploiting the binary makeup of this descriptor greatly increases efficiency while retaining competitive recognition performance.
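As a sketch of where the efficiency comes from (toy 8-bit descriptors here; real descriptors are much longer bit strings), the Hamming distance between packed binary descriptors is one XOR plus a population count, so nearest-centroid assignment in a bag-of-words pipeline costs a few machine instructions per comparison:

```python
# Hamming distance between binary descriptors packed as integers.

def hamming(a, b):
    """Number of differing bits between two packed descriptors."""
    return (a ^ b).bit_count()  # Python >= 3.10; use bin(a ^ b).count('1') otherwise

desc = 0b10110100                      # a toy 8-bit descriptor
codebook = [0b10110111, 0b01001011]    # two visual-word centroids
word = min(range(len(codebook)), key=lambda i: hamming(desc, codebook[i]))
print(word, hamming(desc, codebook[word]))  # 0 2: assigned to word 0, two bits differ
```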
|