141

Adaptive Semantic Annotation of Entity and Concept Mentions in Text

Mendes, Pablo N. 05 June 2014 (has links)
No description available.
142

Improving a Few-shot Named Entity Recognition Model Using Data Augmentation / Förbättring av en existerande försöksmodell för namnidentifiering med få exempel genom databerikande åtgärder

Mellin, David January 2022 (has links)
To label words of interest into a predefined set of named entities has traditionally required a large amount of labeled in-domain data. Recently, the availability of pre-trained transformer-based language models has enabled many natural language processing problems to be tackled with transfer learning, so that machine learning models can be built with less task-specific labeled data. This thesis explores the impact of data augmentation when training a pre-trained transformer-based model to adapt to a named entity recognition task with few labeled sentences. The experimental results indicate that data augmentation increases the performance of the trained models, but its impact shrinks as more labeled data becomes available. In conclusion, data augmentation improves the performance of pre-trained named entity recognition models when few labeled sentences are available for training.
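The abstract does not detail the augmentation operations used, but a common scheme in few-shot NER work is mention replacement: swapping entity mentions of the same type between labeled sentences to create new training examples. The sketch below illustrates that idea on invented BIO-tagged data; it is an assumption, not the thesis's actual method or data.

```python
import random

# Toy BIO-labeled sentences (invented examples, not thesis data).
sentences = [
    (["Anna", "visited", "Stockholm", "yesterday"], ["B-PER", "O", "B-LOC", "O"]),
    (["Lars", "lives", "in", "Uppsala"], ["B-PER", "O", "O", "B-LOC"]),
]

def collect_mentions(data):
    """Gather entity mentions per type from BIO-tagged sentences."""
    mentions = {}
    for tokens, tags in data:
        i = 0
        while i < len(tags):
            if tags[i].startswith("B-"):
                etype = tags[i][2:]
                j = i + 1
                while j < len(tags) and tags[j] == "I-" + etype:
                    j += 1
                mentions.setdefault(etype, []).append(tokens[i:j])
                i = j
            else:
                i += 1
    return mentions

def augment(tokens, tags, mentions, rng):
    """One augmented copy: replace each mention with a random mention
    of the same entity type drawn from the training set."""
    out_tokens, out_tags, i = [], [], 0
    while i < len(tags):
        if tags[i].startswith("B-"):
            etype = tags[i][2:]
            j = i + 1
            while j < len(tags) and tags[j] == "I-" + etype:
                j += 1
            repl = rng.choice(mentions[etype])
            out_tokens.extend(repl)
            out_tags.extend(["B-" + etype] + ["I-" + etype] * (len(repl) - 1))
            i = j
        else:
            out_tokens.append(tokens[i])
            out_tags.append(tags[i])
            i += 1
    return out_tokens, out_tags

rng = random.Random(0)
mentions = collect_mentions(sentences)
for toks, tags in sentences:
    print(augment(toks, tags, mentions, rng))
```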
143

L'identification des entités nommées en arabe en vue de leur extraction et classification automatiques : la construction d’un système à base de règles syntactico-sémantique / Identification of Arabic named entities with a view to their automatic extraction and classification: a syntactico-semantic rule-based system

Asbayou, Omar 01 December 2016 (has links)
This thesis explains and presents our approach to a rule-based system for Arabic named entity recognition and classification. The work involves two disciplines: linguistics and computer science. Computer tools and linguistic rules are merged to give birth to a new discipline, natural language processing, which operates at different levels (morphosyntactic, syntactic, semantic, syntactico-semantic, etc.). In our particular case, we have put the necessary linguistic information and rules at the service of the software, which must be able to apply them in order to recognise and classify, through syntactic and semantic annotations, the different named entity classes. This thesis work falls within the general domain of natural language processing, and more particularly within the continuity of work on morphosyntactic analysis carried out through the design and realisation of the lexical databases SAMIA and later DIINAR, together with the research results derived from them. The task aims at lexical enrichment with simple and complex named entities and at establishing the transition from morphological analysis to syntactic and syntactico-semantic analysis, with the ultimate objective of textual content analysis. To frame the problem, it was important to start with a definition of the named entity. To carry out the task, we distinguished between two main named entity types: pure proper names and descriptive named entities. We also established a referential classification based on the various classes and sub-classes that constitute the reference for our semantic annotations. We nevertheless faced two major difficulties: lexical ambiguity and the boundaries of complex named entities. Our system adopts a syntactico-semantic rule-based approach: after Level 0 of morphosyntactic analysis, it is made up of five levels of syntactic and syntactico-semantic patterns based on the necessary linguistic information (morphosyntactic, syntactic, semantic, and syntactico-semantic). Evaluated on two corpora, the system achieved very good results in precision, recall, and F-measure. Its output makes an interesting contribution to several natural language processing applications, notably information retrieval and information extraction, in both of which we have concretely exploited it. Building on this experience, we plan to extend the system to the extraction and classification of sentences in which the classified entities, mainly named entities and verbs, play the roles of arguments and predicates respectively. A second objective is the enrichment of various types of lexical resources, such as ontologies.
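To make the rule-based approach concrete, here is a minimal sketch of trigger-word patterns that extract and classify mentions. The rules, classes, and English examples are illustrative assumptions only; the actual system stacks five levels of syntactico-semantic patterns for Arabic on top of morphosyntactic analysis, which this toy does not reproduce.

```python
import re

# Toy rules: a trigger word followed by capitalized tokens marks an entity.
# These patterns are invented for illustration, not taken from the thesis.
RULES = [
    ("PERSON", re.compile(r"\b(?:Dr|Mr|President)\.? ([A-Z][a-z]+(?: [A-Z][a-z]+)*)")),
    ("ORG", re.compile(r"\b(?:University|Bank) of ([A-Z][a-z]+)")),
]

def tag(text):
    """Return (class, mention, span) triples found by the rule set."""
    found = []
    for label, pattern in RULES:
        for m in pattern.finditer(text):
            found.append((label, m.group(0), m.span()))
    return sorted(found, key=lambda t: t[2])

print(tag("President Obama met Dr. Smith at the University of Lyon."))
```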
144

Extracting and Aggregating Temporal Events from Texts

Döhling, Lars 11 October 2017 (has links)
Finding reliable information about given events from large and dynamic text collections, such as the web, is a topic of great interest. For instance, rescue teams and insurance companies are interested in concise facts about damage after disasters, which can be found today in web blogs, online newspaper articles, social media, etc. Knowing these facts helps to determine the required scale of relief operations and supports their coordination. However, finding, extracting, and condensing specific facts is a highly complex undertaking: it requires identifying appropriate textual sources and their temporal alignment, recognizing relevant facts within these texts, and aggregating extracted facts into a condensed answer despite inconsistencies, uncertainty, and changes over time. In this thesis, we present and evaluate techniques and solutions for each of these problems, embedded in a four-step framework. The applied methods come from pattern matching, natural language processing, and machine learning. We also report the results of two case studies applying the entire framework: gathering data on earthquakes and floods from web documents. Our results show that it is, under certain circumstances, possible to automatically obtain reliable and timely data from the web.
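As a toy illustration of the extract-then-aggregate idea, the sketch below assumes a single numeric slot (a death toll), regex extraction, and a latest-date median as the aggregation rule; the four-step framework in the thesis is substantially richer.

```python
import re
import statistics
from datetime import date

# Invented documents with publication dates (not real figures).
docs = [
    (date(2017, 10, 1), "At least 20 people died after the earthquake."),
    (date(2017, 10, 2), "Officials now report 27 dead."),
    (date(2017, 10, 2), "Local media speak of 25 dead."),
]

FACT = re.compile(r"(\d+)\s+(?:people\s+)?(?:died|dead)")

def extract(docs):
    """Pull (date, value) pairs for the death-toll slot from each text."""
    return [(d, int(m.group(1))) for d, text in docs for m in FACT.finditer(text)]

def aggregate(facts):
    """Condense conflicting reports: keep only the latest reporting date
    and take the median of its values to dampen outliers."""
    latest = max(d for d, _ in facts)
    return statistics.median(v for d, v in facts if d == latest)

facts = extract(docs)
print(facts)
print(aggregate(facts))  # 26.0: median of the day-two values 27 and 25
```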
145

Zpracování turkických jazyků / Processing of Turkic Languages

Ciddi, Sibel January 2014 (has links)
Title: Processing of Turkic Languages Author: Sibel Ciddi Department: Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics, Charles University in Prague Supervisor: RNDr. Daniel Zeman, Ph.D. Abstract: This thesis presents several methods for the morphological processing of Turkic languages, such as Turkish, which pose a specific set of challenges for natural language processing. In order to alleviate the problems with lack of large language resources, it makes the data sets used for morphological processing and expansion of lexicons publicly available for further use by researchers. Data sparsity, caused by highly productive and agglutinative morphology in Turkish, imposes difficulties in processing of Turkish text, especially for methods using purely statistical natural language processing. Therefore, we evaluated a publicly available rule-based morphological analyzer, TRmorph, based on finite state methods and technologies. In order to enhance the efficiency of this analyzer, we worked on expansion of lexicons, by employing heuristics-based methods for the extraction of named entities and multi-word expressions. Furthermore, as a preprocessing step, we introduced a dictionary-based recognition method for tokenization of multi-word expressions. This method complements...
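The dictionary-based MWE tokenization mentioned above can be pictured as a greedy longest-match lookup over the token stream. The sketch below uses invented English MWEs; the lexicons and matching details in the thesis differ.

```python
# Toy MWE lexicon; entries are invented, not from the thesis's data sets.
MWES = {("New", "York"), ("kick", "the", "bucket"), ("Los", "Angeles")}
MAX_LEN = max(len(m) for m in MWES)

def tokenize_mwes(tokens):
    """Greedy longest match: merge any dictionary MWE into a single token."""
    out, i = [], 0
    while i < len(tokens):
        for n in range(min(MAX_LEN, len(tokens) - i), 1, -1):
            if tuple(tokens[i:i + n]) in MWES:
                out.append("_".join(tokens[i:i + n]))
                i += n
                break
        else:  # no MWE starts here; keep the plain token
            out.append(tokens[i])
            i += 1
    return out

print(tokenize_mwes("He moved to New York last year".split()))
# ['He', 'moved', 'to', 'New_York', 'last', 'year']
```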
146

Predicting Linguistic Structure with Incomplete and Cross-Lingual Supervision

Täckström, Oscar January 2013 (has links)
Contemporary approaches to natural language processing are predominantly based on statistical machine learning from large amounts of text, which has been manually annotated with the linguistic structure of interest. However, such complete supervision is currently only available for the world's major languages, in a limited number of domains and for a limited range of tasks. As an alternative, this dissertation considers methods for linguistic structure prediction that can make use of incomplete and cross-lingual supervision, with the prospect of making linguistic processing tools more widely available at a lower cost. An overarching theme of this work is the use of structured discriminative latent variable models for learning with indirect and ambiguous supervision; as instantiated, these models admit rich model features while retaining efficient learning and inference properties. The first contribution to this end is a latent-variable model for fine-grained sentiment analysis with coarse-grained indirect supervision. The second is a model for cross-lingual word-cluster induction and the application thereof to cross-lingual model transfer. The third is a method for adapting multi-source discriminative cross-lingual transfer models to target languages, by means of typologically informed selective parameter sharing. The fourth is an ambiguity-aware self- and ensemble-training algorithm, which is applied to target language adaptation and relexicalization of delexicalized cross-lingual transfer parsers. The fifth is a set of sequence-labeling models that combine constraints at the level of tokens and types, and an instantiation of these models for part-of-speech tagging with incomplete cross-lingual and crowdsourced supervision. In addition to these contributions, comprehensive overviews are provided of structured prediction with no or incomplete supervision, as well as of learning in the multilingual and cross-lingual settings. Through careful empirical evaluation, it is established that the proposed methods can be used to create substantially more accurate tools for linguistic processing, compared to both unsupervised methods and to recently proposed cross-lingual methods. The empirical support for this claim is particularly strong in the latter case; our models for syntactic dependency parsing and part-of-speech tagging achieve the hitherto best published results for a wide range of target languages, in the setting where no annotated training data is available in the target language.
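As one concrete instance of combining token- and type-level information, a tag dictionary can serve as a type constraint that prunes each word's candidate tags before decoding. The sketch below is a deliberately simplified greedy stand-in with invented data and scores; the dissertation's models are structured discriminative models with proper learning and inference.

```python
# Toy type-level tag dictionary (in practice, e.g., projected or
# crowdsourced); unknown words keep the full tagset.
TAG_DICT = {"can": {"VERB", "NOUN"}, "run": {"VERB", "NOUN"}, "the": {"DET"}}
ALL_TAGS = {"NOUN", "VERB", "DET", "ADJ"}

def allowed_tags(token):
    """Type constraint: restrict candidates to dictionary tags when known."""
    return TAG_DICT.get(token.lower(), ALL_TAGS)

def greedy_decode(tokens, score):
    """Greedy stand-in for constrained decoding: best allowed tag per token."""
    return [max(allowed_tags(t), key=lambda tag: score(t, tag)) for t in tokens]

# A trivial context-free scorer, just to make the example run.
score = lambda tok, tag: {"DET": 1.0, "NOUN": 0.8, "VERB": 0.5, "ADJ": 0.2}[tag]
print(greedy_decode("the can".split(), score))  # ['DET', 'NOUN']
```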
147

Approche multi-niveaux pour l'analyse des données textuelles non-standardisées : corpus de textes en moyen français / Multi-level approach for the analysis of non-standardized textual data: corpus of texts in Middle French

Aouini, Mourad 19 March 2018 (has links)
This thesis presents an approach to the analysis of non-standardized texts that consists in modelling a processing chain for the automatic annotation of texts: grammatical annotation through morphosyntactic tagging, and semantic annotation through a named entity recognition system. In this context, we present an analysis system for Middle French, a language still very much in evolution, whose spelling, inflectional system, and syntax are not stable. Texts in Middle French are distinguished above all by the absence of a normalized orthography and by the geographical and chronological variability of medieval lexicons. The main objective is to present a system dedicated to the construction of linguistic resources, in particular electronic dictionaries, based on morphological rules. We then present the instructions we established to build a morphosyntactic tagger that automatically produces contextual analyses using disambiguation grammars. Finally, we retrace the path that led us to set up local grammars for finding named entities. To this end, we built the MEDITEXT corpus, which gathers Middle French texts from the end of the thirteenth century to the fifteenth century.
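Disambiguation grammars of this kind can be viewed as contextual elimination rules applied to each word's set of candidate analyses until a fixpoint is reached. The sketch below illustrates the mechanism with invented French-like words and rules; the thesis's grammars for Middle French are far more elaborate.

```python
# Each token starts with the analyses its dictionary entry allows.
# Words and rules here are invented for illustration.
sentence = [
    ("la", {"DET", "PRON"}),
    ("porte", {"NOUN", "VERB"}),
    ("est", {"VERB"}),
]

def apply_rules(readings):
    """Apply contextual elimination rules until nothing changes."""
    out = [set(r) for _, r in readings]
    changed = True
    while changed:
        changed = False
        for i in range(len(out)):
            # Rule 1: right after an unambiguous DET, drop the verbal reading.
            if i > 0 and out[i - 1] == {"DET"} and {"NOUN", "VERB"} <= out[i]:
                out[i] -= {"VERB"}; changed = True
            # Rule 2: read DET rather than PRON unless an unambiguous verb
            # follows immediately.
            if (i + 1 < len(out) and out[i] == {"DET", "PRON"}
                    and out[i + 1] != {"VERB"}):
                out[i] -= {"PRON"}; changed = True
    return out

print(list(zip([w for w, _ in sentence], apply_rules(sentence))))
# [('la', {'DET'}), ('porte', {'NOUN'}), ('est', {'VERB'})]
```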
148

Auxílio à leitura de textos em português facilitado: questões de acessibilidade / Reading assistance for texts in facilitated Portuguese: accessibility issues

Watanabe, Willian Massami 05 August 2010 (has links)
The Web's large capacity for providing information leads to multiple possibilities and opportunities for users. The development of high-performance networks and ubiquitous devices allows users to retrieve content from any location and in the different scenarios or situations they might face in their lives. Unfortunately, the possibilities offered by the Web are not currently available to all. Individuals who do not have fully compliant software or hardware able to deal with the latest technologies, or who have some kind of physical or cognitive disability, find it difficult to interact with web pages, depending on the page structure and the ways in which the content is made available. Considering cognitive disabilities specifically, users classified as functionally illiterate face severe difficulties accessing web content. The heavy use of text in interface design creates an accessibility barrier for those who cannot read fluently in their mother tongue, owing to both text length and linguistic complexity. In this context, this work aims at developing assistive technologies that assist functionally illiterate users in reading and understanding the textual content of websites. These assistive technologies make use of natural language processing (NLP) techniques that maximize reading comprehension: syntactic simplification, automatic summarization, lexical elaboration, and named entity recognition. The techniques are used to automatically adapt textual content available on the Web for users with low literacy levels. This work describes the accessibility characteristics incorporated into the two resulting applications (Facilita and Educational Facilita), which focus on the limitations low-literacy users face in computer usage and experience. The work contributed the identification of accessibility requirements for low-literacy users, an accessibility model for automating WCAG conformance, and accessible solutions in the user-agent layer of web applications.
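Of the NLP techniques listed, lexical elaboration is the easiest to picture: infrequent words are glossed inline with more common near-synonyms. The sketch below assumes toy English word lists; the Facilita tools target Brazilian Portuguese and combine this with the other techniques.

```python
# Invented frequency list and synonym dictionary, for illustration only.
FREQUENT = {"we", "use", "tools", "to", "make", "easy", "read", "reading"}
SYNONYMS = {"utilize": "use", "facilitate": "make easy", "comprehend": "understand"}

def elaborate(text):
    """Append a simple gloss after words that are rare but have a synonym."""
    out = []
    for word in text.split():
        key = word.lower().strip(".,")
        if key not in FREQUENT and key in SYNONYMS:
            out.append(f"{word} ({SYNONYMS[key]})")
        else:
            out.append(word)
    return " ".join(out)

print(elaborate("We utilize tools to facilitate reading."))
# We utilize (use) tools to facilitate (make easy) reading.
```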
149

Anotação e classificação automática de entidades nomeadas em notícias esportivas em Português Brasileiro / Automatic named entity recognition and classification for Brazilian Portuguese sports news

Zaccara, Rodrigo Constantin Ctenas 11 July 2012 (has links)
The main goal of this research is to develop a platform for automatic named entity annotation and classification for sports news written in Brazilian Portuguese. To restrict the scope, training and analysis used only sports news about the 2011 São Paulo Championship published by the UOL (Universo Online) portal. The first artefact developed for this platform was the WebCorpus tool, which aims to ease the process of adding meta-information to words through a rich web interface; with it, the named entities in the news are annotated and classified manually. The underlying database was fed by the content acquisition and extraction (crawler) tool, also developed for this platform. The second artefact was the UOLCP2011 (UOL Campeonato Paulista 2011) corpus, manually annotated and classified with the WebCorpus tool using seven entity types: person, place, organization, team, championship, stadium, and fans. To develop the automatic named entity annotation and classification engine, three different approaches were analysed: maximum entropy, inverted indices, and techniques merging the two. Each approach involved three steps: algorithm development, training using machine learning techniques, and analysis of the best results.
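For the maximum-entropy approach, a standard realization is multinomial logistic regression over sparse token features. The sketch below, with assumed features and invented toy data, shows the shape of such a classifier; it is not the thesis's feature set or training setup.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def features(tokens, i):
    """Simple per-token features: the word, its shape, and its neighbours."""
    return {
        "word": tokens[i].lower(),
        "is_title": tokens[i].istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "</s>",
    }

# Invented training examples with a subset of the seven entity types.
train = [
    (["Corinthians", "venceu", "o", "Palmeiras"], ["TEAM", "O", "O", "TEAM"]),
    (["Neymar", "jogou", "no", "Pacaembu"], ["PERSON", "O", "O", "STADIUM"]),
]

X = [features(toks, i) for toks, tags in train for i in range(len(toks))]
y = [tag for _, tags in train for tag in tags]

# LogisticRegression is the maximum-entropy classifier in scikit-learn.
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)

test = ["Palmeiras", "jogou", "no", "Pacaembu"]
print(list(zip(test, model.predict([features(test, i) for i in range(len(test))]))))
```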
150

Použití hlubokých kontextualizovaných slovních reprezentací založených na znacích pro neuronové sekvenční značkování / Deep contextualized word embeddings from character language models for neural sequence labeling

Lief, Eric January 2019 (has links)
Natural Language Processing (NLP) tasks such as part-of-speech (PoS) tagging, Named Entity Recognition (NER), and Multiword Expression (MWE) identification all involve assigning labels to sequences of words in text (sequence labeling). Most modern machine learning approaches to sequence labeling utilize word embeddings, learned representations of text, in which words with similar meanings have similar representations. Quite recently, contextualized word embeddings have garnered much attention because, unlike pretrained context-insensitive embeddings such as word2vec, they are able to capture word meaning in context. In this thesis, I evaluate the performance of different embedding setups (context-sensitive, context-insensitive word, as well as task-specific word, character, lemma, and PoS) on the three abovementioned sequence labeling tasks using a deep learning model (BiLSTM) and Portuguese datasets.
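A minimal PyTorch version of such a BiLSTM labeler might look as follows, trained here on random tensors for brevity. The real experiments feed pretrained context-sensitive or task-specific embeddings rather than the small learned table assumed below.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, num_tags)  # 2x: forward + backward

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)  # (batch, seq_len, num_tags) tag scores

vocab_size, num_tags = 100, 5
model = BiLSTMTagger(vocab_size, num_tags)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

tokens = torch.randint(0, vocab_size, (8, 12))  # batch of 8 toy sentences
tags = torch.randint(0, num_tags, (8, 12))      # random gold labels

for step in range(3):  # a few illustrative updates
    logits = model(tokens)
    loss = loss_fn(logits.reshape(-1, num_tags), tags.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
    print(step, loss.item())
```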
