851

Automatická oprava pravopisu / Natural Language Correction

Náplava, Jakub January 2017 (has links)
The goal of this thesis is to explore the area of natural language correction and to design and implement neural network models for tasks ranging from general grammar correction to the specific task of diacritization. The thesis opens with a description of existing approaches to natural language correction. Existing datasets are reviewed and two new datasets are introduced: a manually annotated dataset for grammatical error correction based on CzeSL (Czech as a Second Language) and an automatically created spelling correction dataset. The main part of the thesis then presents the design and implementation of three models and evaluates them on several natural language correction datasets. In comparison to existing statistical systems, the proposed models learn all knowledge from training data; therefore, they do not require a manually specified error model or candidate generation mechanism, nor do they need any additional linguistic information such as part-of-speech tags. Our models significantly outperform existing systems on the diacritization task. Considering the spelling and basic grammar correction tasks for Czech, our models achieve the best results for two out of the three datasets. Finally, considering the general grammatical correction for English, our models achieve results which are...
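The abstract does not spell out the models' internals, but the data-driven framing it describes (learning corrections directly from parallel plain/diacritized text, with no hand-crafted error model) can be illustrated with a small sketch. The window-classifier formulation, toy corpus and scikit-learn setup below are assumptions made purely for illustration; the thesis itself uses neural network models.

```python
# Minimal sketch (not the thesis's neural architecture): diacritization framed as
# per-character classification learned purely from parallel data, with a small
# character window as context. Hypothetical toy corpus; a real system would use
# a large corpus and a neural sequence model.
import unicodedata
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def strip_diacritics(text):
    # "příliš" -> "prilis"
    return "".join(c for c in unicodedata.normalize("NFD", text)
                   if unicodedata.category(c) != "Mn")

def window_features(chars, i, size=3):
    feats = {}
    for offset in range(-size, size + 1):
        j = i + offset
        feats[f"c{offset}"] = chars[j] if 0 <= j < len(chars) else "#"
    return feats

def make_examples(diacritized_sentences):
    X, y = [], []
    for sent in diacritized_sentences:
        plain = strip_diacritics(sent)
        assert len(plain) == len(sent)
        for i in range(len(plain)):
            X.append(window_features(plain, i))
            y.append(sent[i])          # target: the correctly accented character
    return X, y

train = ["příliš žluťoučký kůň úpěl ďábelské ódy"]   # toy corpus
X, y = make_examples(train)
vec = DictVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X), y)

def diacritize(plain):
    feats = [window_features(plain, i) for i in range(len(plain))]
    return "".join(clf.predict(vec.transform(feats)))

print(diacritize("zlutoucky kun"))
```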
852

Využití adverzálních příkladů pro zpracování přirozeného jazyka / Using Adversarial Examples in Natural Language Processing

Bělohlávek, Petr January 2017 (has links)
Machine learning has received a lot of attention in recent years. One of the studied topics is the use of adversarial examples. These are artificially constructed examples that exhibit two main features: they resemble the real training data, and they deceive an already trained model. Adversarial examples have been comprehensively investigated in the context of deep convolutional neural networks that process images. Nevertheless, their properties have rarely been examined in connection with networks that process natural language. This thesis evaluates the effect of using adversarial examples during the training of recurrent neural networks. More specifically, the main focus is put on recurrent networks whose text input is a sequence of word/character embeddings that have not been pretrained in advance. The effects of adversarial training are studied by evaluation on multiple NLP datasets with various characteristics.
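As one concrete way to picture adversarial training on non-pretrained embeddings, here is a hedged sketch in the spirit of FGSM-style perturbations applied to the output of the embedding layer. The toy data, the mean-pooling encoder standing in for a recurrent network, and the epsilon value are assumptions, not the thesis's actual setup.

```python
# Hedged sketch of adversarial training on (non-pretrained) embeddings: perturb the
# embedding output in the direction of the loss gradient and train on both passes.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, EMB, CLASSES = 100, 16, 2
embed = nn.Embedding(VOCAB, EMB)
clf = nn.Sequential(nn.Linear(EMB, 32), nn.ReLU(), nn.Linear(32, CLASSES))
opt = torch.optim.Adam(list(embed.parameters()) + list(clf.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# toy batch: sequences of token ids and binary labels (placeholders for real data)
tokens = torch.randint(0, VOCAB, (8, 5))
labels = torch.randint(0, CLASSES, (8,))

for step in range(100):
    opt.zero_grad()
    emb = embed(tokens)                      # (batch, seq, emb)
    emb.retain_grad()
    logits = clf(emb.mean(dim=1))            # mean pooling stands in for a recurrent encoder
    loss = loss_fn(logits, labels)
    loss.backward(retain_graph=True)

    # FGSM-style perturbation of the embedding output = adversarial example
    eps = 0.1
    adv_emb = emb + eps * emb.grad.sign()
    adv_loss = loss_fn(clf(adv_emb.mean(dim=1)), labels)
    adv_loss.backward()                      # accumulate gradients from the adversarial pass
    opt.step()
```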
853

Concept Based Knowledge Discovery from Biomedical Literature

Radovanovic, Aleksandar. January 2009 (has links)
Philosophiae Doctor - PhD / This thesis introduces novel methods for knowledge discovery and presents a software system that is able to extract information from biomedical literature, review interesting connections between various biomedical concepts and, in so doing, generate new hypotheses. The experimental results obtained by using the methods described in this thesis are compared to currently published results obtained by other methods, and a number of case studies are described. This thesis shows how the technology presented can be integrated with the researcher's own knowledge, experimentation and observations for optimal progression of scientific research. / South Africa
854

Graph Models For Query Focused Text Summarization And Assessment Of Machine Translation Using Stopwords

Rama, B 06 1900 (has links) (PDF)
Text summarization is the task of generating a shortened version of the original text in which the core ideas of the original are retained. In this work, we focus on query-focused summarization: the task is to generate, from a set of documents, a summary that answers the query. Query-focused summarization is hard because the summary is expected to be biased towards the query while, at the same time, important concepts in the original documents must be preserved with a high degree of novelty. Graph-based ranking algorithms that use a biased random surfer model, such as Topic-sensitive LexRank, have been applied to query-focused summarization. In our work, we propose a look-ahead version of Topic-sensitive LexRank. We incorporate the option of look-ahead in the random walk model and show that it helps in generating better-quality summaries. Next, we consider assessment of machine translation. Assessment of machine translation output is important for establishing benchmarks for translation quality. An obvious way to assess the quality of machine translation is through the perception of human subjects. Though highly reliable, this approach is not scalable and is time consuming. Hence, mechanisms have been devised to automate the assessment process. All such assessment methods are essentially a study of correlations between human translations and machine translations. In this work, we present a scalable approach to assessing the quality of machine translation that borrows features from the study of writing styles, popularly known as Stylometry. Towards this, we quantify the characteristic styles of individual machine translators and compare them with that of human-generated text. The translator whose style is closest to the human style is deemed to generate a higher-quality translation. We show that our approach is scalable and does not require actual source text translations for evaluation.
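For readers unfamiliar with the underlying ranking scheme, the sketch below shows the biased ("topic-sensitive") random walk that Topic-sensitive LexRank builds on; the look-ahead extension proposed in the thesis is not reproduced, and the similarity values and bias weight are toy assumptions.

```python
# Hedged sketch of a biased random-walk sentence ranker (Topic-sensitive LexRank style).
import numpy as np

def biased_lexrank(sim, query_sim, bias_weight=0.2, tol=1e-6):
    """sim: (n, n) sentence-sentence similarities; query_sim: (n,) sentence-query similarities."""
    n = sim.shape[0]
    M = sim / sim.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
    bias = query_sim / query_sim.sum()           # random surfer jumps towards query-relevant sentences
    p = np.full(n, 1.0 / n)
    while True:
        p_new = bias_weight * bias + (1.0 - bias_weight) * M.T @ p
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

# toy example: 4 sentences, the last two are the most query-relevant
sim = np.array([[1.0, 0.3, 0.1, 0.1],
                [0.3, 1.0, 0.2, 0.1],
                [0.1, 0.2, 1.0, 0.6],
                [0.1, 0.1, 0.6, 1.0]])
query_sim = np.array([0.1, 0.2, 0.7, 0.8])
print(biased_lexrank(sim, query_sim))            # higher score -> better summary candidate
```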
855

Inferring Aspect-Specific Opinion Structure in Product Reviews

Carter, David January 2015 (has links)
Identifying differing opinions on a given topic as expressed by multiple people (as in a set of written reviews for a given product, for example) presents challenges. Opinions about a particular subject are often nuanced: a person may have both negative and positive opinions about different aspects of the subject of interest, and these aspect-specific opinions can be independent of the overall opinion on the subject. Being able to identify, collect, and count these nuanced opinions in a large set of data offers more insight into the strengths and weaknesses of competing products and services than does aggregating the overall ratings of such products and services. I make two useful and useable contributions in working with opinionated text. First, I present my implementation of a semi-supervised co-training machine classification method for identifying both product aspects (features of products) and sentiments expressed about such aspects. It offers better precision than fully-supervised methods while requiring much less text to be manually tagged (a time-consuming process). This algorithm can also be run in a fully supervised manner when more data is available. Second, I apply this co-training approach to reviews of restaurants and various electronic devices; such text contains both factual statements and opinions about features/aspects of products. The algorithm automatically identifies the product aspects and the words that indicate aspect-specific opinion polarity, while largely avoiding the problem of misclassifying the products themselves as inherently positive or negative. This method performs well compared to other approaches. When run on a set of reviews of five technology products collected from Amazon, the system performed with some demonstrated competence (with an average precision of 0.83) at the difficult task of simultaneously identifying aspects and sentiments, though comparison to contemporaries' simpler rules-based approaches was difficult. When run on a set of opinionated sentences about laptops and restaurants that formed the basis of a shared challenge in the SemEval-2014 Task 4 competition, it was able to classify the sentiments expressed about aspects of laptops better than any team that competed in the task (achieving 0.72 accuracy). It was above the mean in its ability to identify the aspects of restaurants about which people expressed opinions, even when co-training using only half of the labelled training data at the outset. While the SemEval-2014 aspect-based sentiment extraction task considered only separately the tasks of identifying product aspects and determining their polarities, I take an extra step and evaluate sentences as a whole, inferring aspects and the aspect-specific sentiments expressed simultaneously, a more difficult task that seems more applicable to real-world tasks. I present first results of this sentence-level task. The algorithm uses both lexical and syntactic information in a manner that is shown to be able to handle new words that it has never before seen. It offers some demonstrated ability to adapt to new subject domains for which it has no training data. The system is characterizable by very high precision and weak-to-average recall and it estimates its own confidence in its predictions; this characteristic should make the algorithm suitable for use on its own or for combination in a confidence-based voting ensemble. The software created for and described in the course of this dissertation is made available online.
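Purely to illustrate the generic co-training loop underlying the approach, the sketch below pairs two Naive Bayes classifiers over two hypothetical feature views (in the thesis's setting these would be, for example, lexical and syntactic views of aspect/sentiment candidates); the toy data, classifier choice and per-round quota are assumptions.

```python
# Hedged sketch of semi-supervised co-training over two feature views.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X1, X2, y, labeled_idx, unlabeled_idx, rounds=10, per_round=2):
    """X1, X2: the two feature views; y: labels (placeholders for unlabeled rows)."""
    labeled, unlabeled = list(labeled_idx), list(unlabeled_idx)
    c1, c2 = GaussianNB(), GaussianNB()
    for _ in range(rounds):
        c1.fit(X1[labeled], y[labeled])
        c2.fit(X2[labeled], y[labeled])
        # each view's classifier pseudo-labels the examples it is most confident about
        for clf, X in ((c1, X1), (c2, X2)):
            if not unlabeled:
                return c1, c2
            conf = clf.predict_proba(X[unlabeled]).max(axis=1)
            picks = np.argsort(conf)[-per_round:]
            for p in sorted(picks, reverse=True):
                idx = unlabeled[p]
                y[idx] = clf.predict(X[idx:idx + 1])[0]   # pseudo-label for the other view too
                labeled.append(idx)
                del unlabeled[p]
    return c1, c2

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(20, 5)), rng.normal(size=(20, 4))
y = np.array([0, 1] * 3 + [0] * 14)    # first 6 rows are labelled; the rest are placeholders
co_train(X1, X2, y, labeled_idx=range(6), unlabeled_idx=range(6, 20))
```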
856

Using Social Media Networks for Measuring Consumer Confidence: Problems, Issues and Prospects

Igboayaka, Jane-Vivian Chinelo Ezinne January 2015 (has links)
This research examines the confluence of consumers' use of social media to share information with the ever-present need for innovative research that yields insight into consumers' economic decisions. Social media networks have become ubiquitous in the new millennium. These networks, including, among others, Facebook, Twitter, blogs, and Reddit, are brimming with conversations on an expansive array of topics between people, private and public organizations, governments and global institutions. Preliminary findings from initial research confirm the existence of online conversations and posts related to matters of personal finance and consumers' economic outlook. Meanwhile, the Consumer Confidence Index (CCI) continues to make headline news. The issue of consumer confidence (or sentiment) in anticipating future economic activity generates significant interest from major players in the news media industry, who scrutinize its every detail and report its implications for key players in the economy. Though the CCI originated in the United States in 1946, variants of the survey are now used to track and measure consumer confidence in nations worldwide. In light of the fact that the CCI is a quantified representation of consumer sentiments, it is possible that the level of confidence consumers have in the economy could be deduced by tracking the sentiments or opinions they express in social media posts. Systematic study of these posts could then be transformed into insights that could improve the accuracy of an index like the CCI. Herein lies the focus of the current research: to analyze the attributes of data from social media posts in order to assess their capacity to generate insights that are novel and/or complementary to traditional CCI methods. The link between data gained from social media and the survey-based CCI is perhaps not an obvious one, but our research uses a data extraction tool called NetBase Insight Workbench to mine data from the social media networks and then applies natural language processing to analyze the social media content. Also, KH Coder software is used to perform a set of statistical analyses on samples of social media posts to examine the co-occurrence and clustering of words. The findings are used to expose the strengths and weaknesses of the data and to assess the validity and cohesion of the NetBase data extraction tool and its suitability for future research. In conclusion, our research findings support the analysis of opinions expressed in social media posts as a complement to traditional survey-based CCI approaches. Our findings also identified a key weakness with regard to the degree of 'noisiness' of the data. Although this could be attributed to the 'modeling' error of the data mining tool, there is room for improvement in the area of association, that is, in discerning the context and intention of posts in online conversations.
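As a small illustration of the word co-occurrence analysis mentioned above (not KH Coder or NetBase themselves, whose internals are not described here), the following sketch counts which word pairs recur across posts; the toy posts stand in for real social media data, and a real analysis would also remove stopwords.

```python
# Hedged sketch: word co-occurrence counting across posts to surface recurring themes.
from collections import Counter
from itertools import combinations

posts = ["prices are rising and my savings are shrinking",
         "feeling confident about the economy and my new job",
         "worried about rising prices and job cuts"]

cooc = Counter()
for post in posts:
    tokens = set(post.lower().split())           # a real pipeline would drop stopwords here
    for a, b in combinations(sorted(tokens), 2):
        cooc[(a, b)] += 1

# word pairs that co-occur in more than one post hint at recurring themes
print([pair for pair, n in cooc.items() if n > 1])
```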
857

Unsupervised Entity Classification with Wikipedia and WordNet / Klasifikace entit pomocí Wikipedie a WordNetu

Kliegr, Tomáš January 2007 (has links)
This dissertation addresses the problem of classification of entities in text represented by noun phrases. The goal of this thesis is to develop a method for automated classification of entities appearing in datasets consisting of short textual fragments. The emphasis is on unsupervised and semi-supervised methods that allow for a fine-grained character of the assigned classes and require no labeled instances for training. The set of target classes is either user-defined or determined automatically. Our initial attempt to address the entity classification problem is called the Semantic Concept Mapping (SCM) algorithm. SCM maps the noun phrases representing the entities as well as the target classes to WordNet. Graph-based WordNet similarity measures are used to assign the closest class to the noun phrase. If a noun phrase does not match any WordNet concept, a Targeted Hypernym Discovery (THD) algorithm is executed. The THD algorithm extracts a hypernym from a Wikipedia article defining the noun phrase using lexico-syntactic patterns. This hypernym is then used to map the noun phrase to a WordNet synset, but it can also be taken as the classification result itself, yielding an unsupervised classification system. The SCM and THD algorithms were designed for English. While adaptation of these algorithms for other languages is conceivable, we decided to develop the Bag of Articles (BOA) algorithm, which is language-agnostic as it is based on the statistical Rocchio classifier. Since this algorithm utilizes Wikipedia as a source of data for classification, it does not require any labeled training instances. WordNet is used in a novel way to compute term weights. It is also used as a positive term list and for lemmatization. A disambiguation algorithm utilizing global context is also proposed. We consider the BOA algorithm to be the main contribution of this dissertation. Experimental evaluation of the proposed algorithms is performed on the WordSim353 dataset, which is used for evaluation in the Word Similarity Computation (WSC) task, and on the Czech Traveler dataset, the latter being specifically designed for the purpose of our research. On WordSim353, BOA achieves a Spearman correlation of 0.72 with human judgment, which is close to the 0.75 correlation of the ESA algorithm, to the author's knowledge the best-performing algorithm on this gold-standard dataset among those that do not require training data. The advantage of BOA over ESA is that it places smaller requirements on preprocessing of the Wikipedia data. While SCM underperforms on the WordSim353 dataset, it overtakes BOA on the Czech Traveler dataset, which was designed specifically for our entity classification problem. This discrepancy requires further investigation. In a standalone evaluation of THD on the Czech Traveler dataset, the algorithm returned a correct hypernym for 62% of entities.
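To give a flavour of the lexico-syntactic patterns that THD applies to the defining sentence of a Wikipedia article, here is a hedged single-pattern sketch; the thesis's actual pattern set and preprocessing are richer, and the example sentences and the "last word of the noun phrase as head" heuristic are assumptions.

```python
# Hedged sketch of a Hearst-style definition pattern for hypernym discovery.
import re

# "<entity> is/was a/an/the <noun phrase>", cut off at a preposition or punctuation
DEFINITION = re.compile(
    r"\b(?:is|was)\s+(?:an?|the)\s+([\w\- ]+?)(?:\s+(?:of|that|which|who|in|from)\b|[,.]|$)",
    re.IGNORECASE)

def targeted_hypernym(first_sentence):
    m = DEFINITION.search(first_sentence)
    if not m:
        return None
    noun_phrase = m.group(1).split()
    return noun_phrase[-1].lower()   # crude head heuristic: last word of the phrase

print(targeted_hypernym("Prague is the capital and largest city of the Czech Republic."))  # -> city
print(targeted_hypernym("Albert Einstein was a German-born theoretical physicist."))       # -> physicist
```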
858

Natural Language Interfaces to Databases

Chandra, Yohan 12 1900 (has links)
Natural language interfaces to databases (NLIDB) are systems that aim to bridge the gap between the languages used by humans and computers, and automatically translate natural language sentences into database queries. This thesis proposes a novel approach to NLIDB based on graph-based models. The system starts by collecting as much information as possible from existing databases and sentences, and transforms this information into a knowledge base for the system. Given a new question, the system uses this knowledge to analyze and translate the sentence into its corresponding database query statement. The graph-based NLIDB system uses English as the natural language, a relational database model, and SQL as the formal query language. In experiments with natural language questions run against a large database containing information about U.S. geography, the system showed good performance compared to the state of the art in the field.
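The abstract does not detail the graph-based translation itself, so purely to make the NLIDB input/output contract concrete, here is a toy pattern-based sketch over a Geoquery-style schema. The table and column names and the single question pattern are assumptions and do not reflect the thesis's actual method.

```python
# Toy illustration of the NLIDB task: map a natural language question to SQL.
import re

SCHEMA = {"state": ["name", "capital", "population", "area"]}   # hypothetical schema

def question_to_sql(question):
    q = question.lower()
    m = re.search(r"what is the (\w+) of (\w+(?: \w+)*)\??", q)
    if m and m.group(1) in SCHEMA["state"]:
        column, entity = m.group(1), m.group(2)
        return f"SELECT {column} FROM state WHERE name = '{entity}';"
    return None

print(question_to_sql("What is the capital of Texas?"))
# SELECT capital FROM state WHERE name = 'texas';
```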
859

Anotação automática semissupervisionada de papéis semânticos para o português do Brasil / Automatic semi-supervised semantic role labeling for Brazilian Portuguese

Fernando Emilio Alva Manchego 22 January 2013 (has links)
A anotação de papéis semânticos (APS) é uma tarefa do processamento de língua natural (PLN) que permite analisar parte do significado das sentenças através da detecção dos participantes dos eventos (e dos eventos em si) que estão sendo descritos nelas, o que é essencial para que os computadores possam usar efetivamente a informação codificada no texto. A maior parte das pesquisas desenvolvidas em APS tem sido feita para textos em inglês, considerando as particularidades gramaticais e semânticas dessa língua, o que impede que essas ferramentas e resultados sejam diretamente transportáveis para outras línguas como o português. A maioria dos sistemas de APS atuais emprega métodos de aprendizado de máquina supervisionado e, portanto, precisa de um corpus grande de sentenças anotadas com papéis semânticos para aprender corretamente a tarefa. No caso do português do Brasil, um recurso lexical que provê este tipo de informação foi recentemente disponibilizado: o PropBank.Br. Contudo, em comparação com os corpora para outras línguas como o inglês, o corpus fornecido por este projeto é pequeno e, portanto, não permitiria que um classificador treinado supervisionadamente realizasse a tarefa de anotação com alto desempenho. Para tratar esta dificuldade, neste trabalho emprega-se uma abordagem semissupervisionada capaz de extrair informação relevante tanto dos dados anotados disponíveis como de dados não anotados, tornando-a menos dependente do corpus de treinamento. Implementa-se o algoritmo self-training com modelos de regressão logística (ou máxima entropia) como classificador base, para anotar o corpus Bosque (a seção correspondente ao CETENFolha) da Floresta Sintá(c)tica com as etiquetas do PropBank.Br. Ao algoritmo original se incorpora balanceamento e medidas de similaridade entre os argumentos de um verbo específico para melhorar o desempenho na tarefa de classificação de argumentos. Usando um benchmark de avaliação implementado neste trabalho, a abordagem semissupervisionada proposta obteve um desempenho estatisticamente comparável ao de um classificador treinado supervisionadamente com uma maior quantidade de dados anotados (80,5 vs. 82,3 de F1, p > 0,01) / Semantic role labeling (SRL) is a natural language processing (NLP) task that analyzes part of the meaning of sentences through the detection of the events they describe and the participants involved, which is essential for computers to effectively understand the information coded in text. Most of the research carried out in SRL has been done for texts in English, considering the grammatical and semantic particularities of that language, which prevents those tools and results from being directly transported to other languages such as Portuguese. Most current SRL systems use supervised machine learning methods and require a large corpus of sentences annotated with semantic roles in order to learn how to perform the task properly. For Brazilian Portuguese, a lexical resource that provides this type of information has recently become available: PropBank.Br. However, in comparison with corpora for other languages such as English, the corpus provided by that project is small and would not allow a supervised classifier to perform the labeling task with high performance. To deal with this problem, in this dissertation we use a semi-supervised approach capable of extracting relevant information both from the annotated and the non-annotated data available, making it less dependent on the training corpus.
We implemented the self-training algorithm with logistic regression (or maximum entropy) models as the base classifier to label the corpus Bosque (the CETENFolha section) of the Floresta Sintá(c)tica with the PropBank.Br semantic role tags. To the original algorithm, we incorporated balancing and similarity measures between verb-specific arguments so as to improve the performance of the system in the argument classification task. Using an evaluation benchmark implemented in this research project, the proposed semi-supervised approach achieved performance statistically comparable to that of a supervised classifier trained with a larger amount of annotated data (80.5 vs. 82.3 F1, p > 0.01).
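Since the abstract names its ingredients explicitly (self-training with a logistic regression, i.e. maximum entropy, base classifier), here is a hedged generic sketch of that loop using scikit-learn; the confidence threshold, toy feature vectors and stopping criterion are assumptions, and the thesis's balancing and verb-specific similarity measures are not reproduced.

```python
# Hedged sketch of self-training with a logistic regression (maximum entropy) base classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, threshold=0.9, max_rounds=10):
    X, y = X_labeled.copy(), y_labeled.copy()
    pool = X_unlabeled.copy()
    clf = LogisticRegression(max_iter=1000)
    for _ in range(max_rounds):
        clf.fit(X, y)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        # move confidently pseudo-labelled examples into the training set
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, clf.classes_[proba[confident].argmax(axis=1)]])
        pool = pool[~confident]
    return clf

rng = np.random.default_rng(1)
X_lab, y_lab = rng.normal(size=(12, 6)), np.array([0, 1] * 6)   # toy labelled data
X_unlab = rng.normal(size=(50, 6))                              # toy unlabelled data
model = self_train(X_lab, y_lab, X_unlab)
```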
860

Resolução de correferência em múltiplos documentos utilizando aprendizado não supervisionado / Co-reference resolution in multiples documents through unsupervised learning

Jefferson Fontinele da Silva 05 May 2011 (has links)
Um dos problemas encontrados em sistemas de Processamento de Línguas Naturais (PLN) é a dificuldade de se identificar que elementos textuais referem-se à mesma entidade. Esse fenômeno, no qual o conjunto de elementos textuais remete a uma mesma entidade, é denominado de correferência. Sistemas de resolução de correferência podem melhorar o desempenho de diversas aplicações do PLN, como: sumarização, extração de informação, sistemas de perguntas e respostas. Recentemente, pesquisas em PLN têm explorado a possibilidade de identificar os elementos correferentes em múltiplos documentos. Neste contexto, este trabalho tem como foco o desenvolvimento de um método de aprendizado não supervisionado para resolução de correferência em múltiplos documentos, utilizando como língua-alvo o português. Não se conhece, até o momento, nenhum sistema com essa finalidade para o português. Os resultados dos experimentos feitos com o sistema sugerem que o método desenvolvido é superior a métodos baseados em concordância de cadeias de caracteres. / One of the problems found in Natural Language Processing (NLP) systems is the difficulty of identifying textual elements that refer to the same entity. This phenomenon, in which a set of textual elements refers to a single entity, is called coreference. Coreference resolution systems can improve the performance of various NLP applications, such as automatic summarization, information extraction and question answering. Recently, research in NLP has explored the possibility of identifying coreferent elements in multiple documents. In this context, this work focuses on the development of an unsupervised method for coreference resolution in multiple documents, using Portuguese as the target language. To date, no system for this purpose is known for Portuguese. The results of the experiments with the system suggest that the developed method is superior to methods based on string matching.
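For contrast with the proposed method, here is a hedged sketch of the kind of string-matching baseline the abstract compares against: cross-document mentions are chained whenever one mention's normalised tokens are contained in another's. The mentions below are toy examples and the containment rule is an assumption about how such a baseline is typically built.

```python
# Hedged sketch of a string-matching coreference baseline over cross-document mentions.
from itertools import combinations

def normalise(mention):
    return set(mention.lower().replace(".", "").split())

def string_match_chains(mentions):
    # union-find over mentions: chain i and j if one's tokens are contained in the other's
    parent = list(range(len(mentions)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    norms = [normalise(m) for m in mentions]
    for i, j in combinations(range(len(mentions)), 2):
        if norms[i] <= norms[j] or norms[j] <= norms[i]:
            parent[find(i)] = find(j)
    chains = {}
    for i, m in enumerate(mentions):
        chains.setdefault(find(i), []).append(m)
    return list(chains.values())

mentions = ["Luiz Inácio Lula da Silva", "Lula", "o presidente Lula",
            "Dilma Rousseff", "Dilma", "a presidente Dilma"]
print(string_match_chains(mentions))
# [['Luiz Inácio Lula da Silva', 'Lula', 'o presidente Lula'],
#  ['Dilma Rousseff', 'Dilma', 'a presidente Dilma']]
```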
