1. Multilingual Word Sense Disambiguation Using Wikipedia. Dandala, Bharath, 08 1900.
Ambiguity is inherent to human language. In particular, word sense ambiguity is prevalent in all natural languages, with a large number of the words in any given language carrying more than one meaning. Word sense disambiguation is the task of automatically assigning the most appropriate meaning to a polysemous word within a given context. Work on resolving ambiguity has generally revolved around the famous dictum "you shall know the meaning of the word by the company it keeps." In this thesis, we investigate the role of context in resolving ambiguity through three different approaches. Instead of using a predefined monolingual sense inventory such as WordNet, we use a language-independent framework in which word senses and sense-tagged data are derived automatically from Wikipedia. Using Wikipedia as a source of sense annotations provides a much-needed solution to the knowledge-acquisition bottleneck. In order to evaluate the viability of Wikipedia-based sense annotations, we cast the task of disambiguating polysemous nouns as a monolingual classification task and experimented on lexical samples from four different languages (English, German, Italian, and Spanish). The experiments confirm that the Wikipedia-based sense annotations are reliable and can be used to construct accurate monolingual sense classifiers. It has long been believed that exploiting multiple languages helps in building accurate word sense disambiguation systems. Accordingly, we developed two approaches that recast the task of disambiguating polysemous nouns as a multilingual classification task. The first approach to multilingual word sense disambiguation uses a machine translation system to leverage two relevant multilingual aspects of the semantics of text. First, the various senses of a target word may be translated into different words, which constitute unique yet highly salient signals that effectively expand the target word's feature space. Second, the translated context words themselves embed co-occurrence information that a translation engine gathers from very large parallel corpora. The second approach to multilingual word sense disambiguation reduces the reliance on the machine translation system during training by using the multilingual knowledge available in Wikipedia through its interlingual links. Finally, experiments on a lexical sample from four different languages confirm that the multilingual systems perform better than the monolingual system and significantly improve disambiguation accuracy.
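The monolingual setting described above amounts to ordinary supervised text classification once Wikipedia's piped links supply sense labels. A minimal sketch of that idea, using scikit-learn and invented toy examples (the contexts and sense labels here are hypothetical, not the thesis's data):

```python
# Minimal sketch: treat WSD as text classification over the context window,
# with sense labels taken from Wikipedia link targets (hypothetical toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training example is the context of an ambiguous word ("bass"),
# labeled with the Wikipedia article its piped link pointed to.
contexts = [
    "the bass line drove the whole song",
    "he played a five string bass on stage",
    "they caught a largemouth bass in the lake",
    "the bass was grilled with lemon and herbs",
]
senses = ["Bass_guitar", "Bass_guitar", "Bass_(fish)", "Bass_(fish)"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(contexts, senses)

print(clf.predict(["she played the bass on stage last night"]))
# expected: ['Bass_guitar']
```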
2. Dynamic topic adaptation for improved contextual modelling in statistical machine translation. Hasler, Eva Cornelia, January 2015.
In recent years there has been increased interest in domain adaptation techniques for statistical machine translation (SMT) to deal with the growing amount of data from different sources. Topic modelling techniques applied to SMT are closely related to the field of domain adaptation but are more flexible in dealing with unstructured text. Topic models can capture latent structure in texts and are therefore particularly suitable for modelling structure between and beyond corpus boundaries, which are often arbitrary. In this thesis, the main focus is on dynamic translation model adaptation to texts of unknown origin, which is a typical scenario for an online MT engine translating web documents. We introduce a new bilingual topic model for SMT that takes the entire document context into account and, for the first time, directly estimates topic-dependent phrase translation probabilities in a Bayesian fashion. We demonstrate our model's ability to improve over several domain adaptation baselines and further provide evidence for the advantages of bilingual topic modelling for SMT over the more common monolingual topic modelling. We also show improved performance when deriving further adapted translation features from the same model that measure different aspects of topical relatedness. We introduce another new topic model for SMT which exploits the distributional nature of phrase pair meaning by modelling topic distributions over phrase pairs using their distributional profiles. Using this model, we explore combinations of local and global contextual information and demonstrate the usefulness of different levels of contextual information, which had not previously been examined for SMT. We also show that combining this model with a topic model trained at the document level further improves performance. Our dynamic topic adaptation approach performs competitively in comparison with two supervised domain-adapted systems. Finally, we shed light on the relationship between domain adaptation and topic adaptation and propose to combine multi-domain adaptation and topic adaptation in a framework that entails automatic prediction of domain labels at the document level. We show that while each technique provides complementary benefits to the overall performance, there is some overlap between domain and topic adaptation, which can be exploited to build systems that require less adaptation effort at runtime.
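One way to read "topic-dependent phrase translation probabilities" estimated from the document context is as a mixture over K latent topics, where the document d provides a topic distribution and each topic k has its own phrase translation distribution. The notation below is an illustrative sketch, not the thesis's own formulation:

```latex
p(\bar{e} \mid \bar{f}, d) \;=\; \sum_{k=1}^{K} p(\bar{e} \mid \bar{f}, z = k)\, p(z = k \mid d)
```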
3. Semantic disambiguation using Distributional Semantics. Prodanovic, Srdjan, January 2012.
In statistical models of semantics, word meanings are derived purely from their distributional properties. The basic resource is a single model that can be used for various tasks, in which word meanings are represented as vectors in a vector space and word similarities as distances between their vector representations. Using these similarities, the suitability of terms in a particular context can be computed and used for a wide range of tasks, one of which is Word Sense Disambiguation. In this thesis, several different approaches to vector space models were investigated and implemented in order to cross-evaluate their performance on the Word Sense Disambiguation task over the Prague Dependency Treebank.
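As a rough illustration of the vector-space idea (the vectors below are invented toy numbers, not real distributional profiles), disambiguation reduces to picking the sense whose vector lies closest to the context vector under cosine similarity:

```python
# Illustrative sketch of vector-space disambiguation: pick the sense whose
# distributional vector is most similar (cosine) to the context vector.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

context_vec = np.array([0.9, 0.1, 0.4])          # vector built from context words
sense_vecs = {
    "sense_1": np.array([0.8, 0.2, 0.5]),         # e.g. financial sense of "bank"
    "sense_2": np.array([0.1, 0.9, 0.2]),         # e.g. river-bank sense
}

best = max(sense_vecs, key=lambda s: cosine(context_vec, sense_vecs[s]))
print(best)  # -> sense_1
```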
4. Improving Intent Classification By Automatic Data Augmentation Using Word Sense Disambiguation. January 2018.
Virtual digital assistants are automated software systems that assist humans by understanding natural languages such as English, in either voice or textual form. In recent times, many digital applications have shifted towards providing a user experience through a natural language interface. The change is driven by the ease with which virtual digital assistants such as Google Assistant and Amazon Alexa can be integrated into an application. These assistants make use of a Natural Language Understanding (NLU) system, which acts as an interface to translate unstructured natural language data into a structured form. Such an NLU system uses an intent-finding algorithm that captures the high-level meaning of a user query, a step termed intent classification. The intent classification step identifies the action(s) that the user wants the assistant to perform. It is followed by an entity recognition step, in which the entities on which the intended action is performed are identified in the utterance. This step can be viewed as a sequence labeling task that maps an input word sequence into a corresponding sequence of slot labels; it is also termed slot filling.
In this thesis, we improve intent classification and slot filling in virtual voice agents through automatic data augmentation. Spoken Language Understanding systems face the issue of data sparsity, because it is hard for human-created training samples to represent all the patterns in a language. Due to the lack of relevant data, deep learning methods are unable to build Spoken Language Understanding models that generalize well. This thesis expounds a way to overcome the issue of data sparsity in deep learning approaches to Spoken Language Understanding tasks. We describe the limitations of current intent classifiers and how the proposed algorithm uses existing knowledge bases to overcome those limitations. The method helps in creating a more robust intent classifier and slot filling system. / Dissertation/Thesis / Masters Thesis, Computer Science, 2018
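The thesis's augmentation algorithm relies on existing knowledge bases; as a rough, hedged sketch of the general idea, a training utterance can be expanded by swapping a word for its WordNet synonyms. NLTK's WordNet interface is assumed here, and in practice a WSD step would first select which synset's synonyms are appropriate, which this toy version skips:

```python
# Rough sketch of synonym-based utterance augmentation (not the thesis's exact
# algorithm): expand a training utterance by swapping in WordNet synonyms.
# Assumes NLTK with the WordNet corpus installed (nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

def augment(utterance, target_word):
    """Yield variants of `utterance` with `target_word` replaced by synonyms."""
    seen = {target_word}
    for synset in wn.synsets(target_word):
        for lemma in synset.lemma_names():
            lemma = lemma.replace("_", " ")
            if lemma not in seen:
                seen.add(lemma)
                yield utterance.replace(target_word, lemma)

for variant in augment("book a cab to the airport", "cab"):
    print(variant)   # e.g. "book a taxi to the airport", "book a taxicab to ..."
```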
5. Corpus-Based Techniques for Word Sense Disambiguation. Levow, Gina-Anne, 27 May 1998.
The need for robust and easily extensible systems for word sense disambiguation coupled with successes in training systems for a variety of tasks using large on-line corpora has led to extensive research into corpus-based statistical approaches to this problem. Promising results have been achieved by vector space representations of context, clustering combined with a semantic knowledge base, and decision lists based on collocational relations. We evaluate these techniques with respect to three important criteria: how their definition of context affects their ability to incorporate different types of disambiguating information, how they define similarity among senses, and how easily they can generalize to new senses. The strengths and weaknesses of these systems provide guidance for future systems which must capture and model a variety of disambiguating information, both syntactic and semantic.
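Among the techniques surveyed, decision lists based on collocational relations are easy to make concrete. A toy sketch with invented counts: collocation tests are ranked by a smoothed log-likelihood ratio, and the first test that fires decides the sense.

```python
# Toy Yarowsky-style decision list: rank collocation tests by log-likelihood
# ratio and disambiguate with the highest-ranked test that fires.
# Counts below are invented for illustration.
import math

# (collocation test, counts of sense A / sense B when the collocation is present)
counts = {
    "fish within +-3 words": (2, 98),
    "play within +-3 words": (90, 6),
    "guitar within +-3 words": (80, 1),
}
smoothing = 0.5

def llr(a, b):
    return abs(math.log((a + smoothing) / (b + smoothing)))

decision_list = sorted(counts, key=lambda c: llr(*counts[c]), reverse=True)

def disambiguate(active_collocations):
    for test in decision_list:
        if test in active_collocations:
            a, b = counts[test]
            return "sense_A" if a > b else "sense_B"
    return "most_frequent_sense"

print(disambiguate({"fish within +-3 words"}))  # -> sense_B
```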
6. Word Sense Disambiguation Using WordNet and Conceptual Expansion. Guo, Jian-Yi, 24 January 2006.
Just as a single English word can have several different meanings, a single meaning can be expressed by several different English words. The meaning of a word depends on the sense intended. Thus, selecting the most appropriate meaning for an ambiguous word within a context is a critical problem for applications that use natural language processing technologies. However, at present, most word sense disambiguation methods either handle only restricted parts of speech, such as nouns, or achieve unsatisfactory accuracy. This ambiguity often hinders users.
In this study, a new word sense disambiguation method using the WordNet lexical database, SemCor text files, and the Web is presented. In addition to nouns, the proposed method also attempts to disambiguate verbs, adjectives, and adverbs in sentences. The text files and sentences investigated in the experiments were randomly selected from SemCor. The semantic similarity between the senses of the individually ambiguous words in a word pair is measured to select the applicable candidate senses of the target word in that pair. Using a synonym weighting method, the possible sense diversity within synonym sets is taken into account, based on the synonym sets that WordNet provides, and the corresponding synonym sets of the candidate senses are thereby determined. The candidate senses, expanded with the senses in the corresponding synonym sets and enhanced with the context-window technique, form new queries. After the new queries are submitted to a search engine to find matching documents on the Web, the candidate senses are ranked by the number of matching documents found. The first sense in the ranked list of candidate senses is taken as the most appropriate sense of the target word.
The proposed method, as well as Stetina et al.'s and Mihalcea et al.'s methods, is evaluated on the SemCor text files. The experimental results show that, for the top sense selected, this method achieves an average disambiguation accuracy of 81.3% over nouns, verbs, adjectives, and adverbs, slightly better than Stetina et al.'s method at 80% and Mihalcea et al.'s method at 80.1%. Furthermore, the proposed method is the only one whose accuracy for verbs reaches 70% for the top sense selected. Moreover, for the top three senses selected, this method is superior to the other two methods, with an average accuracy over the four parts of speech exceeding 96%. It is expected that the proposed method can improve the performance of word sense disambiguation applications in machine translation, document classification, and information retrieval.
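A hedged sketch of the ranking step described above: each candidate sense is expanded with its synonyms plus the context window, and senses are ranked by how many documents match the expanded query. The tiny in-memory document list stands in for the Web, and the sense labels and synonym lists are invented.

```python
# Illustrative sketch: rank candidate senses of "bass" by the number of
# documents matching a query built from the sense's synonyms plus the
# context window. A tiny in-memory list stands in for the real Web.
toy_web = [
    "the bass guitar player tuned his instrument before the show",
    "a sea bass is a popular saltwater fish",
    "freshwater bass fishing tips for beginners",
]

def rank_senses(candidate_senses, context_window):
    # candidate_senses: {sense_id: [synonyms from that sense's synset]}
    scores = {}
    for sense, synonyms in candidate_senses.items():
        terms = set(synonyms) | set(context_window)
        # A document "matches" if it contains any of the query terms.
        scores[sense] = sum(any(t in doc for t in terms) for doc in toy_web)
    return sorted(scores, key=scores.get, reverse=True)

print(rank_senses(
    {"bass#fish": ["sea bass", "freshwater bass"],
     "bass#music": ["bass guitar", "bass line"]},
    ["caught", "lake"],
))
# -> ['bass#fish', 'bass#music']
```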
7. Closing the gap in WSD: supervised results with unsupervised methods. Brody, Samuel, January 2009.
Word Sense Disambiguation (WSD) holds promise for many NLP applications requiring broad-coverage language understanding, such as summarization (Barzilay and Elhadad, 1997) and question answering (Ramakrishnan et al., 2003). Recent studies have also shown that WSD can benefit machine translation (Vickrey et al., 2005) and information retrieval (Stokoe, 2005). Much work has focused on the computational treatment of sense ambiguity, primarily using data-driven methods. The most accurate WSD systems to date are supervised and rely on the availability of sense-labeled training data. This restriction poses a significant barrier to widespread use of WSD in practice, since such data is extremely expensive to acquire for new languages and domains. Unsupervised WSD holds the key to enabling such applications, as it does not require sense-labeled data. However, unsupervised methods fall far behind supervised ones in terms of accuracy and ease of use. In this thesis we explore the reasons for this, and present solutions to remedy this situation. We hypothesize that one of the main problems with unsupervised WSD is its lack of a standard formulation and of general-purpose tools common to supervised methods. As a first step, we examine existing approaches to unsupervised WSD, with the aim of detecting independent principles that can be utilized in a general framework. We investigate ways of leveraging the diversity of existing methods, using ensembles, a common tool in the supervised learning framework. This approach allows us to achieve accuracy beyond that of the individual methods, without need for extensive modification of the underlying systems. Our examination of existing unsupervised approaches highlights the importance of using the predominant sense in case of uncertainty, and the effectiveness of statistical similarity methods as a tool for WSD. However, it also serves to emphasize the need for a way to merge and combine learning elements, and the potential of a supervised-style approach to the problem. Relying on existing methods does not take full advantage of the insights gained from the supervised framework. We therefore present an unsupervised WSD system which circumvents the question of the actual disambiguation method, which is the main source of discrepancy in unsupervised WSD, and deals directly with the data. Our method uses statistical and semantic similarity measures to produce labeled training data in a completely unsupervised fashion. This allows the training and use of any standard supervised classifier for the actual disambiguation. Classifiers trained with our method significantly outperform those using other methods of data generation, and represent a big step in bridging the accuracy gap between supervised and unsupervised methods. Finally, we address a major drawback of classical unsupervised systems: their reliance on a fixed sense inventory and lexical resources. This dependence represents a substantial setback for unsupervised methods in cases where such resources are unavailable. Unfortunately, these are exactly the areas in which unsupervised methods are most needed. Unsupervised sense discrimination, which does not share those restrictions, presents a promising solution to the problem. We therefore develop an unsupervised sense discrimination system. We base our system on a well-studied probabilistic generative model, Latent Dirichlet Allocation (Blei et al., 2003), which has many of the advantages of supervised frameworks.
The model's probabilistic nature lends itself to easy combination and extension, and its generative aspect is well suited to linguistic tasks. Our model achieves state-of-the-art performance on the unsupervised sense induction task while remaining independent of any fixed sense inventory, and thus represents a fully unsupervised, general-purpose WSD tool.
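As an illustrative sketch of using a generative topic model for sense induction (scikit-learn's LDA implementation and a handful of made-up contexts, not the thesis's own setup), the contexts of one ambiguous word can be treated as documents and each latent topic as an induced sense:

```python
# Sketch of sense induction with LDA: contexts of one ambiguous word are the
# "documents", and each latent topic is treated as an induced sense.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

contexts = [
    "deposit money at the bank branch downtown",
    "the bank raised interest rates on loans",
    "we walked along the river bank at dusk",
    "the muddy bank of the stream was slippery",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(contexts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)

# The dominant topic per context is its induced sense label; on toy data this
# size, the grouping is illustrative only.
print(doc_topics.argmax(axis=1))
```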
8. AXEL: a framework to deal with ambiguity in three-noun compounds. Martinez, Jorge Matadamas, January 2010.
Cognitive Linguistics has been widely used to deal with the ambiguity generated by words in combination. Although this domain offers many solutions to address this challenge, not all of them can be implemented in a computational environment. The Dynamic Construal of Meaning framework is argued to have this ability because it describes an intrinsic degree of association of meanings, which in turn can be translated into computational programs. A limitation for a computational approach, however, has been the lack of syntactic parameters. This research argues that this limitation could be overcome with the aid of the Generative Lexicon Theory (GLT). Specifically, this dissertation formulated possible means to marry the GLT and Cognitive Linguistics in a novel rapprochement between the two. This bond between opposing theories provided the means to design a computational template (the AXEL system) by realising syntax and semantics at the software level. An instance of the AXEL system was created using a Design Research approach, with planned iterations in the development to improve the artefact's performance. These iterations improved the system's ability to account for the degree of association of meanings in three-noun compounds. This dissertation delivered three major contributions on the brink of a so-called turning point in Computational Linguistics (CL). First, the AXEL system was used to disclose hidden lexical patterns of ambiguity. These patterns are difficult, if not impossible, to identify without automatic techniques. This research claimed that such patterns can help linguists review lexical knowledge from a software-based viewpoint. Second, following this linguistic awareness, the research advocated the adoption of improved resources by decreasing the electronic storage space of Sense Enumerative Lexicons (SELs); the AXEL system generates "at the moment of use" interpretations, optimising the space needed for lexical storage. Finally, this research introduced a subsystem of metrics to characterise the degree of association of ambiguous three-noun compounds, enabling ranking methods. These weighting methods delivered mechanisms for classifying meanings towards Word Sense Disambiguation (WSD). Overall, these results attempted to tackle difficulties in understanding studies of Lexical Semantics via software tools.
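The AXEL system's own association metrics are richer than this, but as a hedged illustration of ranking readings of a three-noun compound by pairwise association (the scores below are invented, PMI-like numbers), one simple rule prefers the left bracketing when the first pair is more strongly associated than the second:

```python
# Hedged sketch, not the AXEL metrics: choose a bracketing for a three-noun
# compound by comparing pairwise association scores (invented numbers).
def bracket(n1, n2, n3, assoc):
    left, right = assoc[(n1, n2)], assoc[(n2, n3)]
    return f"[[{n1} {n2}] {n3}]" if left >= right else f"[{n1} [{n2} {n3}]]"

scores = {("computer", "science"): 7.2, ("science", "department"): 3.1}
print(bracket("computer", "science", "department", scores))
# -> [[computer science] department]
```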
9. Desambiguação lexical de sentidos para o português por meio de uma abordagem multilíngue mono e multidocumento / Word Sense Disambiguation for Portuguese through a multilingual mono- and multi-document approach. Nóbrega, Fernando Antônio Asevêdo, 28 May 2013.
Lexical ambiguity is considered one of the main barriers to improving Natural Language Processing (NLP) applications. In this context, the area of Word Sense Disambiguation (WSD) aims to develop and evaluate methods that determine the correct sense of a word in a given context from a finite set of possible meanings. WSD is used mainly to provide resources and tools that reduce ambiguity problems and thus contribute to improved results in other areas of NLP. For Brazilian Portuguese, little research has been done in this area, with only a few domain-specific studies. Another important factor is that many areas of NLP operate in a multi-document scenario, where computation is performed over a collection of texts; however, there are no reports of WSD work directed at this scenario, nor of disambiguation experiments in this setting. Therefore, this master's thesis aimed to develop general-domain WSD methods for Brazilian Portuguese and disambiguation algorithms that make use of multi-document information, as well as their experimentation and evaluation in the multi-document scenario. To support the experiments, development, and evaluation in this project, the CSTNews corpus, a multi-document corpus with 50 document collections, was manually annotated using the Princeton WordNet as the sense repository, which organizes meanings into sets of synonyms (synsets) and linguistic relations between them. Four WSD methods (and some variations) were developed: a heuristic method (to establish baseline values); variations of the Lesk (1986) algorithm; an adaptation of the Mihalcea and Moldovan (1999) algorithm; and a variation of the Lesk method for the multi-document scenario. Three experiments were conducted to evaluate the methods, whose objectives were: to determine the overall performance of the algorithms across the corpus; to evaluate the disambiguation quality for the most ambiguous words in the corpus; and to verify the gain in disambiguation quality obtained by employing multi-document information. After these experiments, it was observed that the heuristic method gives the best overall result. However, it is important to note that most of the annotated words in the corpus had only one synset, usually the most frequent one, which favours the heuristic method. Another important finding was that, in this scenario, the performance difference between the multi-document WSD method and the heuristic method is statistically insignificant. For the most ambiguous words, the heuristic method performed worse, showing that more sophisticated WSD methods are needed to disambiguate highly ambiguous words. Finally, it was found that the use of multi-document information helps the disambiguation process. The contributions of this work can be grouped into theoretical and technical ones. On the theoretical side, there is the investigation and analysis of WSD in the multi-document scenario. Among the technical contributions, WSD methods, an annotated corpus, and an annotation tool targeted at Brazilian Portuguese were developed, which can advance WSD research for the language.
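Since several of the methods above are Lesk variants, a hedged sketch of the basic simplified Lesk idea may help: choose the synset whose gloss overlaps most with the context. NLTK's WordNet interface is assumed, the example is a toy, and the thesis's actual variants (including the multi-document one) are more elaborate.

```python
# Simplified Lesk sketch (illustrative, not the thesis's exact variants).
# Assumes NLTK with the WordNet corpus installed (nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

STOPWORDS = {"a", "the", "in", "into", "of", "for", "with", "to", "and", "i", "my"}

def simplified_lesk(word, context_words):
    context = {w.lower() for w in context_words} - STOPWORDS
    best, best_overlap = None, -1
    for synset in wn.synsets(word):
        gloss = set(synset.definition().lower().split()) - STOPWORDS
        overlap = len(gloss & context)
        if overlap > best_overlap:
            best, best_overlap = synset, overlap
    return best

best = simplified_lesk("bank", "I deposited money into my savings account".split())
print(best, "->", best.definition() if best else None)
```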
10. Desambiguación léxica mediante marcas de especificidad. Montoyo, Andres, 21 June 2002.
No description available.