251

Distributed representations for compositional semantics

Hermann, Karl Moritz January 2014
The mathematical representation of semantics is a key issue for Natural Language Processing (NLP). A lot of research has been devoted to finding ways of representing the semantics of individual words in vector spaces. Distributional approaches—meaning distributed representations that exploit co-occurrence statistics of large corpora—have proved popular and successful across a number of tasks. However, natural language usually comes in structures beyond the word level, with meaning arising not only from the individual words but also the structure they are contained in at the phrasal or sentential level. Modelling the compositional process by which the meaning of an utterance arises from the meaning of its parts is an equally fundamental task of NLP. This dissertation explores methods for learning distributed semantic representations and models for composing these into representations for larger linguistic units. Our underlying hypothesis is that neural models are a suitable vehicle for learning semantically rich representations and that such representations in turn are suitable vehicles for solving important tasks in natural language processing. The contribution of this thesis is a thorough evaluation of our hypothesis, as part of which we introduce several new approaches to representation learning and compositional semantics, as well as multiple state-of-the-art models which apply distributed semantic representations to various tasks in NLP. Part I focuses on distributed representations and their application. In particular, in Chapter 3 we explore the semantic usefulness of distributed representations by evaluating their use in the task of semantic frame identification. Part II describes the transition from semantic representations for words to compositional semantics. Chapter 4 covers the relevant literature in this field. Following this, Chapter 5 investigates the role of syntax in semantic composition. For this, we discuss a series of neural network-based models and learning mechanisms, and demonstrate how syntactic information can be incorporated into semantic composition. This study allows us to establish the effectiveness of syntactic information as a guiding parameter for semantic composition, and answer questions about the link between syntax and semantics. Following these discoveries regarding the role of syntax, Chapter 6 investigates whether it is possible to further reduce the impact of monolingual surface forms and syntax when attempting to capture semantics. Asking how machines can best approximate human signals of semantics, we propose multilingual information as one method for grounding semantics, and develop an extension to the distributional hypothesis for multilingual representations. Finally, Part III summarizes our findings and discusses future work.
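As a point of reference for the composition models discussed above, the following is a minimal sketch of two simple composition functions (vector addition and element-wise multiplication) over toy word vectors; it is illustrative only and does not reproduce the neural models developed in the thesis.

```python
import numpy as np

# Toy illustration: random vectors stand in for learned distributed
# representations; the thesis's actual models are neural and task-trained.
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=50) for w in ["red", "car", "fast"]}

def compose_additive(words):
    """Phrase vector as the sum of its word vectors."""
    return np.sum([embeddings[w] for w in words], axis=0)

def compose_multiplicative(words):
    """Phrase vector as the element-wise product of its word vectors."""
    return np.prod([embeddings[w] for w in words], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

phrase = compose_additive(["red", "car"])
print(cosine(phrase, embeddings["car"]))  # similarity of the phrase to a constituent
```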
252

Egyptian Arabic Plurals in Theory and Computation

Winchester, Lindley 01 January 2014
This paper examines the plural inflectional processes present in Egyptian Arabic, with specific focus on the complex broken plural system. The data used in this examination is a set of 114 lexemes from a dictionary of the Egyptian Arabic variety by Badawi and Hinds (1984) collected through comparison of singular to plural template correspondences proposed by Gadalla (2004). The theoretical side of this analysis tests the proposed realizational approach in Kihm (2006) named the “Root-and-Site Hypothesis” against a variety of broken plural constructions in Egyptian Arabic. Categorizing concatenative and non-concatenative morphological processes as approachable in the same manner, this framework discusses inflection as not only represented by segments but also by “sites” where inflectional operations may take place. In order to organize the data through a computational lens, I emulate features of this approach in a DATR theorem that generates the grammatical forms for a set of both broken and sound plural nominals. The hierarchically-structured inheritance of the program’s language allows for default templates to be defined as well as overridden, permitting a wide scope of variation to be represented with little code content.
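To make the role of default inheritance concrete, here is a minimal Python sketch that mimics how a DATR-style theory can state a default (sound) plural rule and let broken-plural nodes override it; the templates, suffix, and lexemes are placeholders, not Gadalla's (2004) actual correspondences or the thesis's DATR code.

```python
# Default-inheritance sketch in plain Python; illustrative placeholders only.

class Noun:
    """Default node: a sound (suffixing) plural."""
    suffix = "at"  # placeholder sound-plural suffix

    def __init__(self, stem):
        self.stem = stem

    def plural(self):
        return self.stem + self.suffix

class BrokenPluralCiCaC(Noun):
    """Overriding node: map a CVCC stem onto a CiCaC-style broken template."""
    def plural(self):
        c1, _, c2, c3 = self.stem  # crude consonant extraction for the sketch
        return f"{c1}i{c2}a{c3}"

print(Noun("mudarris").plural())           # default rule applies: mudarrisat
print(BrokenPluralCiCaC("kalb").plural())  # override applies: kilab
```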
253

High-Performance Knowledge-Based Entity Extraction

Middleton, Anthony M. 01 January 2009
Human language records most of the information and knowledge produced by organizations and individuals. The machine-based process of analyzing information in natural language form is called natural language processing (NLP). Information extraction (IE) is the process of analyzing machine-readable text and identifying and collecting information about specified types of entities, events, and relationships. Named entity extraction is an area of IE concerned specifically with recognizing and classifying proper names for persons, organizations, and locations from natural language. Extant approaches to the design and implementation of named entity extraction systems include: (a) knowledge-engineering approaches which utilize domain experts to hand-craft NLP rules to recognize and classify named entities; (b) supervised machine-learning approaches in which a previously tagged corpus of named entities is used to train algorithms which incorporate statistical and probabilistic methods for NLP; or (c) hybrid approaches which incorporate aspects of both methods described in (a) and (b). Performance for IE systems is evaluated using the metrics of precision and recall, which measure the accuracy and completeness of the IE task. Previous research has shown that utilizing a large knowledge base of known entities has the potential to improve overall entity extraction precision and recall performance. Although existing methods typically incorporate dictionary-based features, these dictionaries have been limited in size and scope. The problem addressed by this research was the design, implementation, and evaluation of a new high-performance knowledge-based hybrid processing approach and associated algorithms for named entity extraction, combining rule-based natural language parsing and memory-based machine learning classification facilitated by an extensive knowledge base of existing named entities. The hybrid approach implemented by this research resulted in improved precision and recall performance approaching human-level capability compared to existing methods, as measured on a standard test corpus. The system design incorporated a parallel processing architecture with capabilities for managing a large knowledge base and providing high throughput potential for processing large collections of natural language text documents.
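Since the evaluation rests on precision and recall, a short sketch of how these metrics are computed over extracted versus gold-standard entity mentions may help; the entity spans below are hypothetical examples, not data from the study.

```python
def precision_recall_f1(predicted, gold):
    """Precision, recall and F1 over sets of (mention, type) pairs."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # correctly extracted entities
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical extraction output for one document.
gold = {("John Smith", "PERSON"), ("Acme Corp", "ORG"), ("Boston", "LOC")}
pred = {("John Smith", "PERSON"), ("Acme", "ORG"), ("Boston", "LOC")}
print(precision_recall_f1(pred, gold))  # (0.67, 0.67, 0.67)
```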
254

An empirical study of semantic similarity in WordNet and Word2Vec

Handler, Abram 18 December 2014
This thesis performs an empirical analysis of Word2Vec by comparing its output to WordNet, a well-known, human-curated lexical database. It finds that Word2Vec tends to uncover more of certain types of semantic relations than others, returning more hypernyms, synonyms and hyponyms than holonyms. It also shows the probability that neighbors separated by a given cosine distance in Word2Vec are semantically related in WordNet. This result both adds to our understanding of the still poorly understood Word2Vec and helps to benchmark new semantic tools built from word vectors.
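A rough sketch of the kind of comparison described here, assuming NLTK's WordNet interface is available (nltk installed and the wordnet corpus downloaded); the word pair is hard-coded where a Word2Vec model would supply a nearest neighbour and its cosine distance.

```python
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def wordnet_relation(word_a, word_b):
    """Return the first WordNet relation found from word_a to word_b, if any."""
    for syn in wn.synsets(word_a):
        if word_b in syn.lemma_names():
            return "synonym"
        if word_b in {l for s in syn.hypernyms() for l in s.lemma_names()}:
            return "hypernym"
        if word_b in {l for s in syn.hyponyms() for l in s.lemma_names()}:
            return "hyponym"
        holonyms = syn.member_holonyms() + syn.part_holonyms()
        if word_b in {l for s in holonyms for l in s.lemma_names()}:
            return "holonym"
    return None

# A Word2Vec model would supply the neighbour and its cosine distance;
# the pair is fixed here purely for illustration.
print(wordnet_relation("dog", "canine"))  # "hypernym"
```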
255

A Study on Text Classification Methods and Text Features

Danielsson, Benjamin January 2019
When it comes to the task of classification, the data used for training is the most crucial part. It follows that how this data is processed and presented to the classifier plays an equally important role. This thesis investigates the performance of multiple classifiers depending on the features that are used, the type of classes to classify, and the optimization of said classifiers. The classifiers of interest are support-vector machines (SMO) and multilayer perceptrons (MLP); the features tested are word vector spaces and text complexity measures, along with principal component analysis (PCA) on the complexity measures. The features are created based on the Stockholm-Umeå Corpus (SUC) and DigInclude, a dataset containing standard and easy-to-read sentences. For the SUC dataset the classifiers attempted to classify texts into nine different text categories, while for the DigInclude dataset the sentences were classified as either standard or simplified. The classification tasks on the DigInclude dataset showed poor performance in all trials. The SUC dataset showed the best performance when using SMO in combination with word vector spaces. Comparing the SMO classifier on the text complexity measures with and without PCA showed that performance was largely unchanged between the two, although not using PCA gave slightly better performance.
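The thesis works with Weka-style classifiers (SMO, MLP); the sketch below shows an analogous setup in scikit-learn, with TF-IDF vectors standing in for the word vector spaces and PCA applied to dense features. The toy sentences and labels are invented for illustration.

```python
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented toy data; SUC texts would carry nine category labels and
# DigInclude sentences a binary standard/simplified label.
texts = ["en enkel mening", "en mycket mer komplicerad och lång mening",
         "kort text", "denna text är betydligt svårare att läsa"]
labels = ["easy", "standard", "easy", "standard"]

# SVM over a sparse text-vector representation.
svm = make_pipeline(TfidfVectorizer(), LinearSVC())
svm.fit(texts, labels)
print(svm.predict(["en lång och komplicerad mening"]))

# Dense features (e.g. complexity measures) reduced with PCA before an MLP;
# TF-IDF output is densified here only to keep the sketch self-contained.
dense = TfidfVectorizer().fit_transform(texts).toarray()
mlp = make_pipeline(PCA(n_components=2), MLPClassifier(max_iter=2000))
mlp.fit(dense, labels)
```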
256

Automatic, adaptive, and applicative sentiment analysis / Analyse de sentiments automatique, adaptative et applicative

Pak, Alexander 13 June 2012
L'analyse de sentiments est un des nouveaux défis apparus en traitement automatique des langues avec l'avènement des réseaux sociaux sur le WEB. Profitant de la quantité d'information maintenant disponible, la recherche et l'industrie se sont mises en quête de moyens pour analyser automatiquement les opinions exprimées dans les textes. Pour nos travaux, nous nous plaçons dans un contexte multilingue et multi-domaine afin d'explorer la classification automatique et adaptative de polarité. Nous proposons dans un premier temps de répondre au manque de ressources lexicales par une méthode de construction automatique de lexiques affectifs multilingues à partir de microblogs. Pour valider notre approche, nous avons collecté plus de 2 millions de messages de Twitter, la plus grande plate-forme de microblogging, et avons construit à partir de ces données des lexiques affectifs pour l'anglais, le français, l'espagnol et le chinois. Pour une meilleure analyse des textes, nous proposons aussi de remplacer le traditionnel modèle n-gramme par une représentation à base d'arbres de dépendances syntaxiques. Dans notre modèle, les n-grammes ne sont plus construits à partir des mots mais des triplets constitutifs des dépendances syntaxiques. Cette manière de procéder permet d'éviter la perte d'information que l'on obtient avec les approches classiques à base de sacs de mots qui supposent que les mots sont indépendants. Finalement, nous étudions l'impact que les traits spécifiques aux entités nommées ont sur la classification des opinions minoritaires et proposons une méthode de normalisation des décomptes d'observables, qui améliore la classification de ce type d'opinion en renforçant le poids des termes affectifs. Nos propositions ont fait l'objet d'évaluations quantitatives pour différents domaines d'applications (les films, les revues de produits commerciaux, les nouvelles et les blogs) et pour plusieurs langues (anglais, français, russe, espagnol et chinois), avec en particulier une participation officielle à plusieurs campagnes d'évaluation internationales (SemEval 2010, ROMIP 2011, I2B2 2011). / Sentiment analysis is a challenging task today for computational linguistics. Because of the rise of the social Web, both research and industry are interested in automatic processing of opinions in text. In this work, we assume a multilingual and multidomain environment and aim at automatic and adaptive polarity classification. We propose a method for automatic construction of multilingual affective lexicons from microblogging to cover the lack of lexical resources. To test our method, we have collected over 2 million messages from Twitter, the largest microblogging platform, and have constructed affective resources in English, French, Spanish, and Chinese. We propose a text representation model based on dependency parse trees to replace a traditional n-gram model. In our model, we use dependency triples to form n-gram-like features. We believe this representation avoids the loss of information incurred when assuming independence of words in the bag-of-words approach. Finally, we investigate the impact of entity-specific features on classification of minor opinions and propose normalization schemes for improving polarity classification. The proposed normalization schemes give more weight to terms expressing sentiments and lower the importance of noisy features. The effectiveness of our approach has been demonstrated in experimental evaluations performed across multiple domains (movies, product reviews, news, blog posts) and multiple languages (English, French, Russian, Spanish, Chinese), including official participation in several international evaluation campaigns (SemEval'10, ROMIP'11, I2B2'11).
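As an illustration of using dependency triples in place of surface n-grams, here is a small sketch with spaCy (the thesis does not use spaCy; the model name and example sentence are assumptions), extracting (head, relation, dependent) triples that can serve as n-gram-like features.

```python
import spacy

# Assumes the standard small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def dependency_triples(text):
    """(head lemma, relation, dependent lemma) triples as n-gram-like features."""
    doc = nlp(text)
    return [(tok.head.lemma_, tok.dep_, tok.lemma_)
            for tok in doc if tok.dep_ != "ROOT"]

print(dependency_triples("The plot was surprisingly good"))
# e.g. [('plot', 'det', 'the'), ('be', 'nsubj', 'plot'), ...]
```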
257

No bones about it (or are there?): evaluating markedness constraints on structural representations of the phonology skeleton

Unknown Date
Linguistic research suggests that speakers represent syllable structure by a CV-frame. CVC syllables are more frequent than VCC ones. Further, the presence of VCC syllables in a language asymmetrically implies the presence of CVC syllables. These typological facts may reflect grammatical constraints. Alternatively, people's preferences may be due solely to their sensitivity to the statistical properties of sound combinations in their language. I demonstrate that participants in an auditory lexical decision task reject VCC nonwords faster than CVC nonwords, suggesting that the marked VCC syllables are dispreferred relative to CVC syllables. In a second experiment, I show that people are also sensitive to the distribution of these frames in the experiment. Findings indicate that syllable structure is represented at the phonological level, that individuals have preferences for certain syllables, and that these preferences cannot be accounted for by the statistical properties of the stimuli. / by Kayla Causey. / Thesis (M.A.)--Florida Atlantic University, 2008.
258

Abordagem computacional para a questão do acento no português brasileiro / Computational approach for the matter of stress in Brazilian Portuguese

Guide, Bruno Ferrari 31 August 2016
O objetivo central do projeto foi investigar a questão do acento no português brasileiro por meio do uso de ferramentas computacionais, a fim de encontrar possíveis relações entre traços segmentais, prosódicos ou morfológicos com o acento. Tal análise foi realizada a partir do estudo crítico das principais soluções propostas para a questão advindas da Fonologia Teórica. Isso foi considerado o primeiro passo para desenvolver uma abordagem que traga inovação para a área. A discussão teórica foi concluída com a implementação de algoritmos que representam modelizações das propostas para o tratamento da questão do acento. Estas foram, posteriormente, testadas em corpora relevantes do português com o objetivo de analisar tanto os casos considerados como padrão pelas propostas, quanto aqueles que são considerados exceções ao comportamento do idioma. Simultaneamente, foi desenvolvido um corpus anotado de palavras acentuadas do português brasileiro, a partir do qual foram implementados os dois grupos de modelos de natureza probabilística que formam o quadro de abordagens desenhado pelo projeto. O primeiro grupo se baseia na noção de N-gramas, em que a atribuição de acento a uma palavra ocorre a partir da probabilidade das cadeias de tamanho 'n' que a compõem, configurando-se, assim, um modelo que enxerga padrões simples de coocorrência e que é computacionalmente eficiente. O segundo grupo de modelos foi chamado de classificador bayesiano ingênuo, que é uma abordagem probabilística mais sofisticada e exigente em termos de corpus e que leva em consideração um vetor de traços a serem definidos para, no caso, atribuir o acento de uma palavra. Esses traços englobaram tanto características morfológicas, quanto prosódicas e segmentais das palavras. / The main goal of this project was to provide insight into the behavior of stress patterns in Brazilian Portuguese using computational tools, in order to find possible relationships between segmental, prosodic or morphological features and word stress. The analysis was based on a critical reading of some of the main proposals from theoretical phonology regarding the matter. This was considered the first step towards an innovative approach for this field of research. The theoretical discussion was concluded by implementing algorithms that model the proposals for treating the behavior of stress. Afterward, those solutions were tested on relevant corpora of Portuguese, aiming to analyze both the words that fall within what the proposals consider standard and the words that must be treated as exceptions to the language's typical behavior. Simultaneously, an annotated corpus of stressed Brazilian Portuguese words was compiled, from which the two groups of probabilistic models that complete the project's set of approaches were implemented. The first group is composed of models based on the notion of N-grams, in which stress is assigned to a word based on the probability attributed to the 'n'-sized chains that compose it, resulting in a model that captures simple co-occurrence patterns and is computationally efficient. The second group of models is the naive Bayes classifier, a more sophisticated and more corpus-demanding probabilistic approach that takes into account a vector of features defined in order to assign stress to a word. Those features comprised morphological, prosodic and segmental characteristics of the words.
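To give a concrete sense of the naive Bayes setup, a minimal scikit-learn sketch follows; the features, words and stress labels are invented for illustration and are not the thesis's corpus or feature vector.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def features(word):
    """Illustrative segmental cues only; not the thesis's actual feature set."""
    return {"final_char": word[-1],
            "penult_char": word[-2],
            "ends_in_vowel": word[-1] in "aeiou",
            "length": len(word)}

# Toy training data: stress position counted from the end of the word
# (1 = final, 2 = penultimate, 3 = antepenultimate).
words = ["cantar", "mesa", "sábado", "café", "livro", "rapaz"]
stress = [1, 2, 3, 1, 2, 1]

model = make_pipeline(DictVectorizer(), MultinomialNB())
model.fit([features(w) for w in words], stress)
print(model.predict([features("janela")]))  # predicted stress position on toy data
```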
259

Word embeddings and Patient records : The identification of MRI risk patients

Kindberg, Erik January 2019
Identification of risks ahead of MRI examinations is identified as a cumbersome and time-consuming process at the Linköping University Hospital radiology clinic. The hospital staff often have to search through large amounts of unstructured patient data to find information about implants. Word embeddings have been identified as a possible tool to speed up this process. The purpose of this thesis is to evaluate this method, which is done by training a Word2Vec model on patient journal data and analyzing the nearest neighbours of key search words by calculating cosine similarity. The 50 closest neighbours of each search word are categorized and annotated as relevant or not to the task of identifying risk patients ahead of MRI examinations. Ten search words were explored, leading to a total of 500 terms being annotated. In total, 14 different categories were observed in the results, and 8 of these were considered relevant. Out of the 500 terms, 340 (68%) were considered relevant. In addition, 48 implant models could be observed, which are particularly interesting because if a patient has an implant, hospital staff need to determine its exact model and the MRI conditions for that model. Overall, these findings point towards a positive answer to the aim of the thesis, although further development is needed.
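A minimal sketch of the training and neighbour-lookup step with gensim's Word2Vec; the corpus, the Swedish search term, and the hyperparameters are placeholders, with topn=50 mirroring the 50 nearest neighbours annotated in the thesis.

```python
from gensim.models import Word2Vec

# Placeholder corpus: in the thesis this would be tokenized sentences from
# patient journal text; the sentences and the search term are invented.
sentences = [
    ["patient", "har", "pacemaker", "implantat"],
    ["ingen", "pacemaker", "eller", "annat", "implantat"],
    ["opererad", "med", "stent", "implantat"],
] * 50  # repeat the toy sentences so every token clears min_count

model = Word2Vec(sentences, vector_size=100, window=5, min_count=2, epochs=20)

# The 50 nearest neighbours by cosine similarity, as annotated in the thesis.
for term, score in model.wv.most_similar("pacemaker", topn=50):
    print(f"{term}\t{score:.3f}")
```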
