551
Solving Arabic Math Word Problems via Deep Learning. Alghamdi, Reem A. 14 November 2021
This thesis studies how to solve Arabic Math Word Problems (MWPs) automatically with deep learning models. An MWP is a text description of a mathematical problem, which is solved by deriving a math equation and computing the answer. Thanks to their strong learning capacity, deep learning based models can learn from the given problem description and generate the correct math equation for solving the problem. Effective models have been developed for solving MWPs in English and Chinese. However, Arabic MWPs are rarely studied. To initiate the study of Arabic MWPs, this thesis contributes the first large-scale dataset for Arabic MWPs, which contains 6,000 samples. Each sample consists of an Arabic MWP description and the corresponding equation that solves it. Arabic MWP solvers are then built with deep learning models and verified on this dataset for their effectiveness. In addition, a transfer learning model is built to let the high-resource Chinese MWP solver improve the performance of the low-resource Arabic MWP solver. This work is the first to use deep learning methods to solve Arabic MWPs and the first to use transfer learning to solve MWPs across different languages. The solver enhanced by transfer learning reaches an accuracy of 74.15%, which is 3% higher than the baseline that does not use transfer learning. In addition, the accuracy is more than 7% higher than the baseline for equation templates represented by only a few samples. Furthermore, the model can generate new equation sequences that were not seen during training with an accuracy of 27% (11% higher than the baseline).
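As a rough illustration of the text-to-equation framing described in this abstract, the sketch below feeds an Arabic word problem to a sequence-to-sequence model and computes a training loss against a target equation. It is a hypothetical setup: the multilingual checkpoint, the example problem, and the equation are placeholders, not the thesis's model or data.

```python
# Minimal sketch: math word problem solving as text-to-equation generation.
# The checkpoint, example problem, and target equation are placeholders,
# not the model or data used in the thesis.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "google/mt5-small"  # hypothetical multilingual starting point
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

problem = "اشترى سامي 3 أقلام بسعر 5 ريالات للقلم الواحد، فكم دفع؟"
target_equation = "x = 3 * 5"  # the supervision signal during fine-tuning

inputs = tokenizer(problem, return_tensors="pt")
labels = tokenizer(target_equation, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # loss for one training example
print(float(loss))
```

In this framing, the cross-lingual transfer described above would amount to starting from weights already fine-tuned on a high-resource language's word problems before continuing training on the Arabic data.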
552
Extracting Salient Named Entities from Financial News Articles / Extrahering av centrala entiteter från finansiella nyhetsartiklar. Grönberg, David. January 2021
This thesis explores approaches for extracting company mentions that carry a central role in financial news articles. The thesis introduces the task of salient named entity extraction (SNEE): extract all salient named entity mentions in a text document. Moreover, a neural sequence labeling approach is explored to address the SNEE task in an end-to-end fashion, using both a single-task and a multi-task learning setup. In order to train the models, a new procedure for automatically creating SNEE annotations for an existing news article corpus is explored. The neural sequence labeling approaches are compared against a two-stage approach utilizing NLP parsers, a knowledge base and a salience classifier. Textual features inspired by related work in salient entity detection are evaluated to determine which combination of features results in the highest performance on the SNEE task when used by a salience classifier. The experiments show that the difference in performance between the two-stage approach and the best performing sequence labeling approach is marginal, demonstrating the potential of the end-to-end sequence labeling approach on the SNEE task.
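To make the sequence labeling framing concrete, the toy snippet below encodes one sentence with BIO tags that mark a salient company mention. The sentence, the tag names, and the salience judgement are illustrative assumptions, not the thesis's data or annotation scheme.

```python
# Sketch: salient named entity extraction (SNEE) framed as per-token BIO tagging.
# The sentence, tag names, and salience judgement are illustrative only.
tokens = ["Shares", "in", "Acme", "Corp", "rose", "after", "Beta", "Ltd", "issued", "a", "comment", "."]
# "Acme Corp" is treated as the salient company here; "Beta Ltd" is mentioned but not salient.
tags   = ["O", "O", "B-SAL", "I-SAL", "O", "O", "O", "O", "O", "O", "O", "O"]

tag2id = {tag: i for i, tag in enumerate(sorted(set(tags)))}
encoded = [(token, tag2id[tag]) for token, tag in zip(tokens, tags)]
print(encoded)  # (token, label id) pairs a neural sequence labeler would be trained on
```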
553
Reconnaissance des procédés de traduction sous-phrastiques : des ressources aux validations / Recognition of sub-sentential translation techniques: from resources to validation. Zhai, Yuming. 19 December 2019
Les procédés de traduction constituent un sujet important pour les traductologues et les linguistes. Face à un certain mot ou segment difficile à traduire, les traducteurs humains doivent appliquer les solutions particulières au lieu de la traduction littérale, telles que l'équivalence idiomatique, la généralisation, la particularisation, la modulation syntaxique ou sémantique, etc. En revanche, ce sujet a reçu peu d'attention dans le domaine du Traitement Automatique des Langues (TAL). Notre problématique de recherche se décline en deux questions : est-il possible de reconnaître automatiquement les procédés de traduction ? Certaines tâches en TAL peuvent-elles bénéficier de la reconnaissance des procédés de traduction ? Notre hypothèse de travail est qu'il est possible de reconnaître automatiquement les différents procédés de traduction (par exemple littéral versus non littéral). Pour vérifier notre hypothèse, nous avons annoté un corpus parallèle anglais-français en procédés de traduction, tout en établissant un guide d'annotation. Notre typologie de procédés est proposée en nous appuyant sur des typologies précédentes, et est adaptée à notre corpus. L'accord inter-annotateur (0,67) est significatif mais dépasse peu le seuil d'un accord fort (0,61), ce qui reflète la difficulté de la tâche d'annotation. En nous fondant sur des exemples annotés, nous avons ensuite travaillé sur la classification automatique des procédés de traduction. Même si le jeu de données est limité, les résultats expérimentaux valident notre hypothèse de travail concernant la possibilité de reconnaître les différents procédés de traduction. Nous avons aussi montré que l'ajout des traits sensibles au contexte est pertinent pour améliorer la classification automatique. En vue de tester la généricité de notre typologie de procédés de traduction et du guide d'annotation, nos études sur l'annotation manuelle ont été étendues au couple de langues anglais-chinois. Ce couple de langues partage beaucoup moins de points communs par rapport au couple anglais-français au niveau linguistique et culturel. Le guide d'annotation a été adapté et enrichi. La typologie de procédés de traduction reste identique à celle utilisée pour le couple anglais-français, ce qui justifie d'étudier le transfert des expériences menées pour le couple anglais-français au couple anglais-chinois. Dans le but de valider l'intérêt de ces études, nous avons conçu un outil d'aide à la compréhension écrite pour les apprenants de français langue étrangère. Une expérience sur la compréhension écrite avec des étudiants chinois confirme notre hypothèse de travail et permet de modéliser l'outil. D'autres perspectives de recherche incluent l'aide à la construction de ressource de paraphrases, l'évaluation de l'alignement automatique de mots et l'évaluation de la qualité de la traduction automatique. / Translation techniques constitute an important subject in translation studies and in linguistics. When confronted with a certain word or segment that is difficult to translate, human translators must apply particular solutions instead of literal translation, such as idiomatic equivalence, generalization, particularization, syntactic or semantic modulation, etc. However, this subject has received little attention in the field of Natural Language Processing (NLP). Our research problem is twofold: is it possible to automatically recognize translation techniques?
Can some NLP tasks benefit from the recognition of translation techniques? Our working hypothesis is that it is possible to automatically recognize the different translation techniques (e.g. literal versus non-literal). To verify our hypothesis, we annotated a parallel English-French corpus with translation techniques, while establishing an annotation guide. Our typology of techniques is proposed based on previous typologies, and is adapted to our corpus. The inter-annotator agreement (0.67) is significant but only slightly exceeds the threshold of strong agreement (0.61), reflecting the difficulty of the annotation task. Based on annotated examples, we then worked on the automatic classification of translation techniques. Even if the dataset is limited, the experimental results validate our working hypothesis regarding the possibility of recognizing the different translation techniques. We have also shown that adding context-sensitive features is relevant to improve the automatic classification. In order to test the genericity of our typology of translation techniques and the annotation guide, our studies of manual annotation have been extended to the English-Chinese language pair. This pair shares far fewer linguistic and cultural similarities than the English-French pair. The annotation guide has been adapted and enriched. The typology of translation techniques remains the same as that used for the English-French pair, which justifies studying the transfer of the experiments conducted for the English-French pair to the English-Chinese pair. With the aim of validating the benefits of these studies, we have designed a tool to help learners of French as a foreign language with reading comprehension. An experiment on reading comprehension with Chinese students confirms our working hypothesis and allows us to model the tool. Other research perspectives include helping to build paraphrase resources, evaluating automatic word alignment, and evaluating the quality of machine translation.
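As a minimal sketch of the classification setup described in this abstract, the snippet below trains a toy classifier to separate literal from non-literal translation pairs. The two hand-crafted features and the handful of example pairs are simplistic placeholders, not the feature set or corpus used in the thesis.

```python
# Toy sketch: classifying aligned translation pairs as literal vs non-literal.
# The features and example pairs are simplistic placeholders.
from sklearn.linear_model import LogisticRegression

def pair_features(source, target):
    src, tgt = source.split(), target.split()
    length_ratio = len(tgt) / len(src)
    # crude cognate signal: shared 4-character prefixes across the pair
    shared_prefixes = {w[:4] for w in src if len(w) >= 4} & {w[:4] for w in tgt if len(w) >= 4}
    return [length_ratio, len(shared_prefixes)]

pairs = [
    ("the general situation", "la situation générale", "literal"),
    ("it is raining cats and dogs", "il pleut des cordes", "non-literal"),
    ("an important decision", "une décision importante", "literal"),
    ("he kicked the bucket", "il a cassé sa pipe", "non-literal"),
]
X = [pair_features(src, tgt) for src, tgt, _ in pairs]
y = [label for _, _, label in pairs]

classifier = LogisticRegression().fit(X, y)
print(classifier.predict([pair_features("the national economy", "l'économie nationale")]))
```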
554
Can Knowledge Rich Sentences Help Language Models To Solve Common Sense Reasoning Problems? January 2019
The significance of real-world knowledge for Natural Language Understanding (NLU) has been well known for decades. With advancements in technology, challenging tasks like question answering, text summarization, and machine translation have been made possible through continuous efforts in the field of Natural Language Processing (NLP). Yet, integrating knowledge to answer common sense questions is still a daunting task. Logical reasoning has been a recourse for many problems in NLP and has achieved considerable results, but it is difficult to resolve the ambiguities of natural language with it. Co-reference resolution is one problem where ambiguity arises from the semantics of the sentence. Another such problem is cause-and-effect statements, which require causal commonsense reasoning to resolve the ambiguity. Modeling these types of problems with rules or logic is not a simple task. State-of-the-art systems addressing these problems use a trained neural network model, which is claimed to capture broad knowledge from a huge training corpus. These systems answer questions using the knowledge embedded in their trained language model. Although language models embed knowledge from the data, they rely on word occurrences and the frequency of co-occurring words to resolve the prevailing ambiguity. This limits the performance of language models on common-sense reasoning tasks, as they generalize the concept rather than answering the problem in its specific context. For example, "The painting in Mark's living room shows an oak tree. It is to the right of a house" is a co-reference resolution problem which requires knowledge. Language models can resolve whether "it" refers to "painting" or "tree": since "house" and "tree" are common co-occurring words, the models can resolve "tree" to be the co-referent. On the other hand, in "The large ball crashed right through the table. Because it was made of Styrofoam.", resolving "it", which can be either "table" or "ball", is difficult for a language model, as it requires more information about the problem.
In this work, I have built an end-to-end framework that uses knowledge automatically extracted for the given problem. This knowledge is combined with the language models through an explicit reasoning module to resolve the ambiguity. The system is built to improve the accuracy of language-model-based approaches to commonsense reasoning, and it achieves state-of-the-art accuracy on the Winograd Schema Challenge. / Dissertation/Thesis / Masters Thesis Computer Science 2019
555
Longitudinal Comparison of Word Associations in Shallow Word Embeddings. Geetanjali Bihani. 08 May 2020
Word embeddings are utilized in various natural language processing tasks. Although effective in helping computers learn linguistic patterns employed in natural language, word embeddings also tend to learn unwanted word associations. This affects the performance of NLP tasks, as unwanted word associations propagate and amplify biases. Current word association evaluation methods for word embeddings do not account for changes in word embedding models and training corpora when creating the rubric for word association evaluation. Current literature also lacks a consistent training and evaluation protocol for comparing word associations across varying word embedding models and varying training corpora. In order to address this gap in prior literature, this research aims to evaluate different types of word associations, not limited to gender, racial or religious attributes, incorporating the diachronic and variable nature of words over text data collected over a period of 200 years. This thesis introduces a framework to track changes in word associations between neutral words (proper nouns) and attributive words (adjectives) across different word embedding models and over a temporal dimension, by evaluating clustering tendencies between the two word groups over five word embedding frameworks: Word2vec (CBOW), Word2vec (Skip-gram), GloVe, fastText (CBOW) and fastText (Skip-gram), and 20 decades of text data from the 1810s to the 2000s. Finally, various cluster-level and corpus-level measurements are compared across the aforementioned word embedding frameworks to find how word associations evolve with changes in the embedding model and the training corpus.
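The sketch below illustrates the core measurement on a single toy corpus slice: train a shallow embedding model, then compare a neutral word against a set of attribute words. The sentences, word lists, and hyperparameters are placeholders; in the setting described above, one such model would be trained per decade and per embedding framework, with cluster-level statistics computed on top of these similarities.

```python
# Sketch: association between a neutral word (proper noun) and attribute words (adjectives)
# in one shallow embedding model. Corpus, word lists, and hyperparameters are toy placeholders.
from gensim.models import Word2Vec

sentences = [
    ["mary", "was", "described", "as", "kind", "and", "gentle"],
    ["john", "was", "described", "as", "stern", "and", "cold"],
] * 100  # tiny repeated corpus so the toy model has something to fit

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=10, seed=0)

neutral_word = "mary"
attribute_words = ["kind", "gentle", "stern", "cold"]
for attribute in attribute_words:
    print(neutral_word, attribute, round(float(model.wv.similarity(neutral_word, attribute)), 3))
```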
556
Automatic Poetry Classification and Chronological Semantic Analysis. Rahgozar, Arya. 15 May 2020
The correction, authentication, validation and identification of the original texts in Hafez’s poetry among the 16 or so old versions of his Divan has been a challenge for scholars. The semantic analysis of poetry with modern Digital Humanities techniques is also challenging. Analyzing latent semantics is more challenging in poetry than in prose for evident reasons, such as conciseness and imaginative, metaphorical constructions. Hafez’s poetry is, on the one hand, cryptic and complex because of his era’s restrictive social conditions and censorship impediments, and on the other hand, sophisticated because of his encapsulation of high-calibre world-views and mystical and philosophical attributes, artistically knitted within majestic decorations.
Our research is strongly influenced by, and is a continuation of, Mahmoud Houman’s instrumental and essential chronological classification of ghazals by Hafez. Houman’s chronological classification method (Houman, 1938), which we have adopted here, provides guidance for choosing the correct version of a Hafez poem among multiple manuscripts. Houman’s semantic analysis of Hafez’s poetry is unique in that the central concept of his classification is based on intelligent scrutiny of meanings and careful observation of the evolutionary psychology of Hafez through his remarkable body of work. Houman’s analysis has provided the annotated data for the classification algorithms we develop to classify the poems. We seek to understand Hafez through Houman’s perspective. In addition, we asked a contemporary expert to annotate Hafez’s ghazals (Raad, 2019). The rationale behind our research is also to satisfy the need for more efficient means of scholarly research, and to bring literature and computer science together as much as possible. Our research will support semantic analysis, and help with the design and development of tools for poetry research.
We have developed a digital corpus of Hafez’s ghazals and applied proper word forms and punctuation. We digitized and extended chronological criteria to guide the correction and validation of Hafez’s poetry. To our knowledge, no automatic chronological classification has previously been conducted for Hafez’s poetry.
Other than the meticulous preparation of our bilingual Hafez corpus for computational use, the innovative aspect of our classification research is two-fold. The first objective of our work is to develop semantic features to better train automatic classifiers on the annotated poems and to apply those classifiers to the unannotated poems, that is, to classify the rest of the poems using machine learning (ML) methodology. The second task is to extract semantic information and properties to help design a visualization scheme that links the predictions’ rationale to Houman’s perception of the chronological properties of Hafez’s poetry.
We identified and used effective Natural Language Processing (NLP) techniques such as classification, word-embedding features, and visualization to facilitate and automate the semantic analysis of Hafez’s poetry. We defined and applied rigorous and repeatable procedures that can potentially be applied to other kinds of poetry. We showed that the chronological segments identified automatically were coherent. We presented and compared two independent chronological labellings of Hafez’s ghazals in digital form, produced their ontologies, and explained the inter-annotator agreement and distributional semantic properties using relevant NLP techniques to help guide future corrections, authentication, and interpretation of Hafez’s poetry. Chronological labelling of the whole corpus not only helps us better understand Hafez’s poetry, but is also a rigorous guide to better recognition of the correct versions of Hafez’s poems among multiple manuscripts. Such a small volume of complex poetic text required careful selection when choosing and developing appropriate ML techniques for the task. Through many classification and clustering experiments, we have achieved state-of-the-art prediction of the chronological classes of poems, trained and evaluated against our hand-made Hafez corpus. Our selected classification algorithm was a Support Vector Machine (SVM), trained with Latent Dirichlet Allocation (LDA)-based similarity features. We used clustering to produce an alternative perspective to classification.
For our visualization methodology, we used the LDA features but also passed the results to a Principal Component Analysis (PCA) module to reduce the number of dimensions to two, thereby enabling graphical presentations. We believe that applying this method to poetry classifications, and showing the topic relations between poems in the same classes, will help us better understand the interrelated topics within the poems. Many of our methods can potentially be used in similar cases in which the intention is to semantically classify poetry.
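As a rough illustration of the pipeline outlined in the two preceding paragraphs, the sketch below feeds LDA topic proportions to an SVM classifier and reduces the same features to two dimensions with PCA for plotting. The documents and chronological labels are toy placeholders, not the Hafez corpus or Houman's classes.

```python
# Sketch: LDA topic proportions as features for SVM classification,
# plus PCA to two dimensions for visualization.
# Documents and chronological labels are toy placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation, PCA
from sklearn.svm import SVC

docs = [
    "wine cup beloved tavern night",
    "mystic path truth soul light",
    "king court praise garden rose",
    "sorrow parting beloved tears night",
]
labels = ["youth", "middle", "middle", "old"]  # placeholder chronological classes

counts = CountVectorizer().fit_transform(docs)
topic_features = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(counts)

classifier = SVC(kernel="linear").fit(topic_features, labels)
print("predicted classes:", classifier.predict(topic_features))

coordinates = PCA(n_components=2).fit_transform(topic_features)  # 2-D points for plotting
print(coordinates)
```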
557
ASSESSING THE QUALITY OF SOFTWARE DEVELOPMENT TUTORIALS AVAILABLE ON THE WEB. Nishi, Manziba A. 01 January 2019
Both expert and novice software developers frequently access software development resources available on the Web in order to look up or learn new APIs, tools and techniques. Software quality is affected negatively when developers fail to find high-quality information relevant to their problem. While there is a substantial amount of freely available resources that can be accessed online, some of the available resources contain information that suffers from error proneness, copyright infringement, security concerns, and incompatible versions. Use of such toxic information can have a strong negative effect on developers’ efficacy. This dissertation focuses specifically on software tutorials, aiming to automatically evaluate the quality of such documents available on the Web. In order to achieve this goal, we present two contributions: 1) scalable detection of duplicated code snippets; 2) automatic identification of valid version ranges.
Software tutorials consist of a combination of source code snippets and natural language text. The code snippets in a tutorial can originate from different sources, perhaps carrying stringent licensing requirements or known security vulnerabilities. Developers, typically unaware of this, can reuse these code snippets in their projects. First, in this thesis, we present our work on a Web-scale code clone search technique that is able to detect duplicate code snippets between large-scale document and source-code corpora in order to trace toxic code snippets.
As software libraries and APIs evolve over time, existing software development tutorials can become outdated. It is difficult for software developers and especially novices to determine the expected version of the software implicit in a specific tutorial in order to decide whether the tutorial is applicable to their software development environment. To overcome this challenge, in this thesis we present a novel technique for automatic identification of the valid version range of software development tutorials on the Web.
558
Teaching natural language processing (NLP): a report from academic practice. Munson, Matthew. 25 January 2018
My experience teaching Natural Language Processing (NLP) methods with biblical sources is quite varied. I have taught both novice and advanced students in full-semester courses, week-long summer school sessions, and even shorter eight- or sixteen-hour block sessions. I have also taught students in both the humanities and computer science. I will thus organize the following article as a report of these experiences, focusing especially on the things that I have done that I believe have worked well and those which I think did not work so well. I should also preface these remarks by saying that the methods I use for teaching NLP are only one way to do it. I have had good results using them and I believe that they work, but I also believe that there are other pedagogical methods that could work equally well for a different instructor in a different context.
559
Information Retrieval using Markov Random Fields and Restricted Boltzmann Machines. Monika Kamma. 06 April 2021
When a user types a search query into an Information Retrieval system, a list of the top ‘n’ ranked documents relevant to the query is returned by the system. Relevant means not just returning documents that belong to the same category as the search query, but also returning documents that provide a concise answer to it. Determining the relevance of the documents is a significant challenge, as classic indexing techniques that use term/word frequencies do not consider term (word) dependencies, the impact of previous terms on the current words, or the meaning of the words in the document. There is a need to model the dependencies of the terms in the text data and learn the underlying statistical patterns to find the similarity between the user query and the documents and so determine relevancy.

This research proposes a solution based on Markov Random Fields (MRF) and Restricted Boltzmann Machines (RBM) to solve the problem of term dependencies and learn the underlying patterns to return documents that are very similar to the user query.
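As a loose illustration of the general idea, the sketch below projects documents and a query into the hidden-feature space of a Restricted Boltzmann Machine and ranks documents by cosine similarity there. It is a toy stand-in, not the thesis's actual MRF/RBM formulation: the documents, query, and hyperparameters are placeholders, and the Markov Random Field component is not modeled here.

```python
# Toy sketch: ranking documents by similarity to a query in an RBM hidden-feature space.
# Documents, query, and hyperparameters are placeholders; the MRF part is not modeled here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import BernoulliRBM
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "markov random fields model term dependencies in text",
    "restricted boltzmann machines learn hidden statistical patterns",
    "cooking recipes for pasta and tomato sauce",
]
query = ["term dependencies in documents"]

vectorizer = CountVectorizer(binary=True)
X = vectorizer.fit_transform(documents)
rbm = BernoulliRBM(n_components=8, learning_rate=0.05, n_iter=50, random_state=0).fit(X)

document_features = rbm.transform(X)                          # hidden-unit activations per document
query_features = rbm.transform(vectorizer.transform(query))   # same projection for the query
scores = cosine_similarity(query_features, document_features)[0]
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(round(float(score), 3), doc)
```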
560
Language modeling for personality prediction. Cutler, Andrew. 22 January 2021
This dissertation can be divided into two large questions. The first is a supervised learning problem: given text from an individual, how much can be said about their personality? The second is more fundamental: what personality structure is embedded in modern language models?
To address the first question, three language models are used to predict many traits from Facebook statuses. Traits include: gender, religion, politics, Big5 personality, sensational interests, impulsiveness, IQ, fair-mindedness, and self-disclosure. Linguistic Inquiry Word Count (Pennebaker et al., 2015), the dominant model used in psychology, explains close to zero variance on many labels. Bag of Words performs well, and the model weights provide valuable insight about why predictions are made. Neural nets perform the best by a wide margin on personality traits, especially when few training samples are available. A pretrained personality model is made available online that can explain 10% of the variance of a trait with as few as 400 samples, within the range of normal psychology studies. This is a good replacement for Linguistic Inquiry Word Count in predictive settings. In psychology, personality structure is defined by dimensionality reduction of word vectors (Goldberg, 1993). To address the second question, factor analysis is performed on embeddings of personality words produced by the language model RoBERTa (Liu et al., 2019). This recovers two factors that look like Digman’s α and β (Digman, 1997) and not the more popular Big Five. The structure is shown to be robust to the choice of context around an embedded word, language model, factorization method, word set, and English vs. Spanish. This is a flexible tool for exploring personality structure that can easily be applied to other languages.
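The snippet below sketches the second step on a toy scale: embed a handful of personality adjectives with RoBERTa in a fixed sentence context and factor-analyze the resulting vectors. The word list, context template, pooling choice, and number of factors are illustrative assumptions, not the dissertation's actual setup.

```python
# Sketch: contextual embeddings of personality adjectives, then factor analysis.
# Word list, context template, pooling, and number of factors are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.decomposition import FactorAnalysis

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

words = ["kind", "honest", "anxious", "curious", "organized", "talkative"]
embeddings = []
for word in words:
    text = f"She is a very {word} person."  # fixed context around the embedded word
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    # mean-pool over all tokens as a simple stand-in for locating the target word
    embeddings.append(hidden.mean(dim=0).numpy())

factor_scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(embeddings)
for word, (f1, f2) in zip(words, factor_scores):
    print(f"{word:10s} {f1:+.2f} {f2:+.2f}")
```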