61

Amélioration des systèmes de traduction par analyse linguistique et thématique : application à la traduction depuis l'arabe / Improvements for Machine Translation Systems Using Linguistic and Thematic Analysis : an Application to the Translation from Arabic

Gahbiche-Braham, Souhir 30 September 2013 (has links)
Machine translation is one of the most difficult tasks in natural language and speech processing, and the linguistic peculiarities of some languages, such as Arabic, make it harder still. In this thesis, we present a detailed study of machine translation systems from Arabic to French and to English. Our main research concerns building parallel corpora, preprocessing Arabic, and adapting translation and language models. First, a comparable journalistic corpus was explored to automatically extract a parallel corpus from it. Then, different approaches to translation-model adaptation are explored, using either the automatically extracted parallel corpus or an automatically constructed one. We demonstrate that adapting the data used to build the machine translation system improves translation. Because of the agglutinative character of Arabic, texts must be preprocessed before translation. We present SAPA (Segmentor and Part-of-speech tagger for Arabic), a preprocessing tool that is much faster than state-of-the-art tools and fully independent of any external resource. SAPA simultaneously predicts the morphosyntactic tag and the proclitics (conjunctions, prepositions, etc.) of each word, then separates the proclitics from the lemma (or base word). We also describe NERAr (Named Entity Recognition for Arabic), our named entity recognition tool, and examine the impact of integrating named entity recognition into the preprocessing step, using bilingual dictionaries to pre-translate the detected named entities. We then present several approaches for the thematic adaptation of translation and language models, evaluated on a real application whose corpus consists of multi-category sentences. These experiments open important research perspectives, such as combining several systems at translation time for thematic adaptation; a temporal adaptation of translation and language models would also be interesting. Finally, the improved Arabic-French and Arabic-English translation systems are integrated into a multimedia analysis platform and show improved performance compared to the baseline translation systems.
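As a rough illustration of the preprocessing step described above, the sketch below splits predicted proclitics off each token. The label scheme, the `tagged` input format, and the transliterated example are hypothetical; SAPA's actual model and output format are not described here in enough detail to reproduce.

```python
# Hypothetical sketch of a SAPA-style proclitic-splitting step.
# Input: (token, pos_tag, proclitic_string) triples, as if produced by a
# joint tagger; the label format is an assumption, not SAPA's real output.

def split_proclitics(tagged):
    """Separate predicted proclitics from the lemma of each token."""
    segmented = []
    for token, pos, proclitics in tagged:
        if proclitics and token.startswith(proclitics):
            segmented.append((proclitics, "PROCLITIC"))
            segmented.append((token[len(proclitics):], pos))
        else:
            segmented.append((token, pos))
    return segmented

# Example with a transliterated Arabic-like token: "wktb" = "w" (and) + "ktb".
print(split_proclitics([("wktb", "VERB", "w")]))
# -> [('w', 'PROCLITIC'), ('ktb', 'VERB')]
```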
62

Leis de Escala nos gastos com saneamento básico: dados do SIOP e DOU / Scaling Patterns in Basic Sanitation Expenditure: data from SIOP and DOU

Ribeiro, Ludmila Deute 14 March 2019 (has links)
Starting in the late 20th century, the Brazilian federal government created several programs to increase access to water and sanitation. Although these programs improved water access, sewage was generally overlooked: while water supply and waste collection are available in the majority of Brazilian municipalities, the sewage system is still spatially concentrated in the Southeast region and in the most urbanized areas. To explain this spatially concentrated pattern, it is frequently assumed that the size of cities really matters for sanitation services, especially for sewage collection. Indeed, as cities grow in size, one should expect economies of scale in the volume of sanitation infrastructure. Economies of scale in infrastructure imply a decrease in basic sanitation costs proportional to city size, leading to an expected power-law (scaling) relationship between expenditure on sanitation and city size. Using population N(t) as the measure of city size at time t, power-law scaling for infrastructure takes the form Y(t) = Y0 N(t)^β, where β ≈ 0.8 < 1, Y denotes infrastructure volume, and Y0 is a constant. Many diverse properties of cities, from patent production and personal income to electrical cable length, are power-law functions of population size with scaling exponents β that fall into distinct universality classes: quantities reflecting wealth creation and innovation have β ≈ 1.2 > 1 (increasing returns), whereas those accounting for infrastructure display β ≈ 0.8 < 1 (economies of scale). We verified this relationship using data from the federal government's Integrated Planning and Budgeting System (SIOP), covering grants: non-repayable transfers, provided for in the Annual Budget Law (LOA), given to municipalities to run basic sanitation programs within defined guidelines. Overall, the estimated values of β show that federal transfers for basic sanitation decrease proportionally to the size of the beneficiary municipality. For the initial budget allocation (amounts programmed in the LOA), β was found to be roughly 0.63 for municipalities above two thousand inhabitants, 0.92 for municipalities above twenty thousand inhabitants, and 1.18 for municipalities above fifty thousand inhabitants. The second data source is the Diário Oficial da União (DOU), the federal government journal for publishing official acts. The DOU provides information not only about grants but also about loans for basic sanitation funded by the FGTS (Fundo de Garantia por Tempo de Serviço). To extract data from the DOU we applied Natural Language Processing (NLP) techniques. These techniques often work better when the algorithms are provided with annotations, metadata that gives additional information about the text; we therefore built a database of annotated DOU texts and used it to train a bidirectional LSTM model for POS tagging and named entity recognition. Preliminary results obtained in this way are reported in the text.
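A minimal sketch of how a scaling exponent like β can be estimated: fit the log-log form log Y = log Y0 + β log N by ordinary least squares. The data below is invented for illustration; the thesis estimates β from SIOP transfer records.

```python
# Estimate beta in Y = Y0 * N**beta via OLS on the log-log form:
# log Y = log Y0 + beta * log N.
import numpy as np

# Invented (population, expenditure) pairs, for illustration only.
N = np.array([2_000, 10_000, 50_000, 250_000, 1_000_000], dtype=float)
Y = np.array([1.1e5, 4.0e5, 1.5e6, 5.2e6, 1.6e7], dtype=float)

beta, log_y0 = np.polyfit(np.log(N), np.log(Y), deg=1)
print(f"beta = {beta:.2f}, Y0 = {np.exp(log_y0):.0f}")
# beta < 1 would indicate economies of scale, beta > 1 increasing returns.
```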
63

Reconhecimento de entidades nomeadas na área da geologia: bacias sedimentares brasileiras / Named entity recognition in the geology domain: Brazilian sedimentary basins

Amaral, Daniela Oliveira Ferreira do 14 September 2017 (has links)
The treatment of textual information has become increasingly relevant in many domains. One of the first tasks in extracting information from texts is Named Entity Recognition (NER), which consists of identifying references to certain entities and classifying them. NER spans many domains, the most usual being medicine and biology. One challenging domain for recognizing named entities (NEs) is geology, an area lacking computational linguistic resources. This thesis proposes a method for recognizing relevant NEs in the field of geology, specifically the subarea of Brazilian sedimentary basins, in Portuguese texts. Generic and geological features were defined to train a machine learning model. Among the automatic approaches to NE classification, the most prominent is the Conditional Random Fields (CRF) probabilistic model, which has been used effectively for natural language text processing. To train our model, we created GeoCorpus, a reference corpus for geological NER annotated by specialists. Experimental evaluations were performed to compare the proposed method with other classifiers. The best results were achieved by CRF, which reached 76.78% precision and 54.33% F-measure.
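As a hedged sketch of the CRF approach described above (not the thesis's actual feature set, and GeoCorpus itself is not reproduced here), a sequence labeller such as the one in the `sklearn-crfsuite` package can be trained on token-level features; the two toy sentences and feature choices below are assumptions.

```python
# Toy CRF sequence labeller in the spirit of the approach above,
# using the sklearn-crfsuite package. Features and data are illustrative.
import sklearn_crfsuite

def token_features(sent, i):
    word = sent[i]
    return {
        "lower": word.lower(),          # generic lexical feature
        "is_title": word.istitle(),     # capitalization cue
        "suffix3": word[-3:],           # crude morphological cue
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
    }

# Two invented Portuguese sentences with BIO labels (hypothetical tag set).
sents = [["A", "Bacia", "do", "Paraná", "é", "extensa"],
         ["Estudamos", "a", "Bacia", "de", "Campos"]]
labels = [["O", "B-BACIA", "I-BACIA", "I-BACIA", "O", "O"],
          ["O", "O", "B-BACIA", "I-BACIA", "I-BACIA"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X)[0])
```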
64

Estudo comparativo de diferentes classificadores baseados em aprendizagem de máquina para o processo de Reconhecimento de Entidades Nomeadas / A comparative study of different machine-learning-based classifiers for Named Entity Recognition

Santos, Jadson da Silva 09 September 2016 (has links)
Named Entity Recognition (NER) is the task of identifying relevant terms in texts and assigning them a label. Such words can reference names of people, organizations, and places. A wide variety of techniques can be used for NER, falling into three distinct approaches: rule-based, machine learning, and hybrid. For machine learning approaches, several factors may influence accuracy, including the selected classifier, the set of features extracted from the terms, the characteristics of the text collections, and the number of entity labels. In this work, we compare machine learning classifiers applied to the NER task. The comparative study includes classifiers based on CRF (Conditional Random Fields), MEMM (Maximum Entropy Markov Model), and HMM (Hidden Markov Model), which are compared on two Portuguese corpora, derived from WikiNER and HAREM, and two English corpora, derived from CoNLL-03 and WikiNER. The comparison shows that CRF is superior to the other classifiers on both Portuguese and English texts. The study also compares the individual and joint contribution of features, including contextual features, and compares NER performance per entity label across classifiers and corpora.
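Comparisons like the one above are typically scored with entity-level precision, recall, and F1. The following is a small self-contained sketch of that metric over (start, end, label) spans; it is a generic illustration, not the evaluation code used in the dissertation.

```python
# Entity-level precision / recall / F1 over (start, end, label) spans.
# Generic illustration of how NER classifiers can be compared.

def span_prf(gold, predicted):
    gold_set, pred_set = set(gold), set(predicted)
    tp = len(gold_set & pred_set)           # exact-match true positives
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [(0, 2, "PER"), (5, 6, "LOC")]
pred = [(0, 2, "PER"), (5, 6, "ORG")]       # label error on the second span
print(span_prf(gold, pred))                 # (0.5, 0.5, 0.5)
```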
65

Extraction en langue chinoise d'actions spatiotemporalisées réalisées par des personnes ou des organismes / Extraction of spatiotemporally located actions performed by individuals or organizations from Chinese texts

Wang, Zhen 09 June 2016 (has links)
We have developed an automatic analyser and an extraction module for Chinese language processing. The analyser performs automatic Chinese word segmentation based on linguistic rules and dictionaries, part-of-speech tagging based on n-gram statistics, and dependency grammar parsing. The extraction module extracts information about named entities and activities of interest. To achieve these goals, we tackled the following main issues: segmentation and part-of-speech ambiguity; unknown word identification in Chinese text; attachment ambiguity in parsing; and named entity recognition and typing. Chinese texts are analysed sentence by sentence. Given a sentence, the analyser begins with typographic processing to identify sequences of Latin characters and numbers. Dictionaries are then used for a preliminary segmentation into words. Linguistic rules create proper-noun hypotheses and change the weights of some categories or words according to their left and/or right contexts. An n-gram language model built from a training corpus selects the best segmentation and part-of-speech result. Dependency parsing is used to annotate the relations between words. A first step of named entity recognition is performed after parsing; its goal is to identify single-word and noun-phrase-based named entities and to assign them a semantic type. These named entities are then used in knowledge extraction, whose rules validate named entities or change their types. Knowledge extraction consists of two steps: automatic content extraction and tagging from the analysed text, followed by control of the extracted contents and ontology-based co-reference resolution.
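To make the segmentation-plus-language-model idea concrete, here is a minimal dynamic-programming sketch: dictionary lookup proposes candidate words and a unigram log-probability score selects the best split. The tiny dictionary and scores are invented; the thesis uses richer linguistic rules and an n-gram model.

```python
# Minimal dictionary + unigram-LM word segmentation by dynamic programming.
# The dictionary and log-probabilities below are invented for illustration.
import math

LOGP = {"研究": -3.0, "生命": -4.0, "研究生": -4.5, "命": -6.0, "生": -5.5}
OOV_PENALTY = -12.0   # cost for single characters missing from the dictionary

def segment(text):
    # best[i] = (score, segmentation) for the prefix text[:i]
    best = [(0.0, [])] + [(-math.inf, None)] * len(text)
    for i in range(1, len(text) + 1):
        for j in range(max(0, i - 4), i):   # candidate words up to 4 chars
            word = text[j:i]
            lp = LOGP.get(word, OOV_PENALTY if len(word) == 1 else None)
            if lp is None:
                continue
            score = best[j][0] + lp
            if score > best[i][0]:
                best[i] = (score, best[j][1] + [word])
    return best[-1][1]

print(segment("研究生命"))   # -> ['研究', '生命'] beats ['研究生', '命']
```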
66

Reconhecimento de entidades mencionadas em português utilizando aprendizado de máquina / Portuguese named entity recognition using machine learning

Carvalho, Wesley Seidel 24 February 2012 (has links)
Named Entity Recognition (NER), a subtask of information extraction, aims to locate and classify textual elements into predefined categories such as names of people, organizations, places, dates, and other classes of interest. This knowledge enables the execution of more advanced tasks: NER can be considered one of the first steps towards semantic analysis of texts, and it is a crucial subtask for document management systems, text mining, information extraction, and other systems. In this work, we study several machine learning methods applied to the NER task that are related to the current state of the art, including two methods applied to NER for the Portuguese language. We present three different ways of evaluating such systems found in the literature. We also develop an NER system for Portuguese using machine learning, specifically the maximum entropy framework. Our system achieves results comparable to the best NER systems for Portuguese developed with other machine learning approaches.
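A maximum entropy classifier is, in this setting, a multinomial logistic regression over token features. The sketch below uses scikit-learn as a stand-in (the thesis's actual framework and feature set are not specified here); the training tokens and labels are invented.

```python
# Token-level maximum entropy (multinomial logistic regression) classifier.
# scikit-learn stands in for the maxent framework; features are illustrative.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_feats = [
    {"lower": "maria",  "is_title": True,  "prev": "<BOS>"},
    {"lower": "mora",   "is_title": False, "prev": "maria"},
    {"lower": "em",     "is_title": False, "prev": "mora"},
    {"lower": "lisboa", "is_title": True,  "prev": "em"},
]
train_labels = ["B-PESSOA", "O", "O", "B-LOCAL"]

maxent = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
maxent.fit(train_feats, train_labels)
# Classify an unseen title-cased token preceded by "em" (a location cue).
print(maxent.predict([{"lower": "porto", "is_title": True, "prev": "em"}]))
```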
67

Automatic Extraction and Assessment of Entities from the Web

Urbansky, David 23 October 2012 (has links) (PDF)
The search for information about entities, such as people or movies, plays an increasingly important role on the Web. This information is still scattered across many Web pages, making it time consuming for a user to find all relevant information about an entity. This thesis describes techniques to extract entities and information about those entities from the Web, such as facts, opinions, questions and answers, interactive multimedia objects, and events. The findings of this thesis show that it is possible to create a large knowledge base automatically using a manually crafted ontology. The precision of the extracted information was found to be between 75% and 90% (for facts and entities, respectively) after applying assessment algorithms. The algorithms from this thesis can be used to create such a knowledge base, which can serve various research fields, such as question answering, named entity recognition, and information retrieval.
68

Serviceorientiertes Text Mining am Beispiel von Entitätsextrahierenden Diensten / Service-oriented text mining, exemplified by entity extraction services

Pfeifer, Katja 08 September 2014 (has links) (PDF)
Most business-relevant knowledge today exists as unstructured information, in the form of text data on web pages, in office documents, or in forum posts. A multitude of text mining solutions have been developed to extract and exploit this unstructured information, and many of these systems have recently been made accessible as web services to simplify their use and integration. Combining several such text mining services to solve concrete extraction tasks is promising: existing strengths can be exploited, weaknesses of individual systems can be minimized, and the use of text mining solutions can be simplified. This thesis addresses the flexible combination of text mining services in a service-oriented system and extends the state of the art with targeted methods for selecting text mining services, aggregating their results, and mapping between the classification schemes they employ. First, the currently existing service landscape is analysed, and on this basis an ontology for the functional description of the services is provided, enabling function-driven selection and combination of text mining services. Furthermore, using entity extraction services as an example, algorithms for the quality-improving combination of extraction results are developed and extensively evaluated. The work is complemented by additional mapping and integration processes that guarantee applicability even in heterogeneous service landscapes where different classification schemes are used. Finally, possibilities for transferring the approach to other text mining methods are discussed.
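One simple way to combine the output of several entity extraction services, sketched below, is majority voting over normalized (span, type) annotations. This is an illustrative baseline under assumed input formats, not the aggregation algorithm developed in the thesis.

```python
# Majority-vote aggregation of entity annotations from several services.
# Input format and threshold are assumptions; real systems also need
# span alignment and mapping between classification schemes.
from collections import Counter

def aggregate(service_outputs, min_votes=2):
    """Keep (start, end, type) annotations proposed by >= min_votes services."""
    votes = Counter()
    for annotations in service_outputs:
        votes.update(set(annotations))      # one vote per service
    return sorted(ann for ann, n in votes.items() if n >= min_votes)

service_a = [(0, 5, "PERSON"), (10, 16, "ORG")]
service_b = [(0, 5, "PERSON"), (10, 16, "LOCATION")]
service_c = [(0, 5, "PERSON"), (10, 16, "ORG"), (20, 24, "DATE")]
print(aggregate([service_a, service_b, service_c]))
# -> [(0, 5, 'PERSON'), (10, 16, 'ORG')]
```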
69

La structuration dans les entités nommées / Structuration in named entities

Dupont, Yoann 23 November 2017 (has links)
Named entity recognition is a crucial NLP task. It is used to extract relations between named entities, which enables the construction of knowledge bases (Surdeanu and Ji, 2014), automatic summarization (Nobata et al., 2002), and so on. Our interest in this thesis revolves around the structuration phenomena that surround named entities. We distinguish two kinds of structural elements in named entities. The first is recurrent substrings, which we call the characteristic affixes of a named entity. The second is tokens with strong discriminative power, which we call trigger tokens. We describe the algorithm we designed to extract characteristic affixes and compare it to Morfessor (Creutz and Lagus, 2005b). We then apply the same algorithm to extract trigger tokens, which we use for French named entity recognition and postal address extraction. Another form of structuration in named entities is syntactic in nature, generally following a nested or tree structure. We propose a novel kind of cascade of linear taggers that had not previously been used for structured named entity recognition, generalizing earlier approaches that can only recognize entities of fixed depth or cannot model certain characteristics of structured named entities; ours can do both. Throughout this thesis, we compare two machine learning methods, CRFs and neural networks, and discuss the advantages and drawbacks of each.
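As a rough illustration of what "tokens with strong discriminative power" might mean, the sketch below scores each token by the fraction of its occurrences that fall inside entity mentions. The scoring rule and toy corpus are invented; the thesis defines its own extraction algorithm.

```python
# Score tokens by how often they occur inside entity mentions:
# a crude stand-in for "trigger token" selection. Data is invented.
from collections import Counter

def trigger_scores(tokens_with_flags, min_count=2):
    """tokens_with_flags: (token, inside_entity) pairs over a corpus."""
    inside, total = Counter(), Counter()
    for token, in_entity in tokens_with_flags:
        total[token] += 1
        if in_entity:
            inside[token] += 1
    return {t: inside[t] / n for t, n in total.items() if n >= min_count}

corpus = [("rue", True), ("de", True), ("rue", True), ("de", False),
          ("avenue", True), ("le", False), ("de", False), ("rue", True)]
print(trigger_scores(corpus))
# "rue" scores 1.0 (always inside an address), "de" only 1/3.
```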
70

Automatic Identification of Duplicates in Literature in Multiple Languages

Klasson Svensson, Emil January 2018 (has links)
As the number of books available online grows, these collections are becoming larger and increasingly multilingual. Many of these corpora contain duplicates in the form of different editions or translations of books. The task of finding these duplicates is usually done manually, but the growing collection sizes make it time consuming and demanding. This thesis set out to find a method, from the fields of text mining and natural language processing, that can automate the process of identifying these duplicates in a corpus provided by Storytel, consisting mainly of fiction in multiple languages. The problem was approached using three different methods to compute distance measures between books. The first approach compared the titles of the books using the Levenshtein distance. The second approach extracted entities from each book using Named Entity Recognition, represented them using tf-idf, and computed distances with cosine dissimilarity. The third approach used a Polylingual Topic Model to estimate each book's distribution over topics and compared the distributions using the Jensen-Shannon distance. To estimate the parameters of the Polylingual Topic Model, 8000 books were translated from Swedish to English using Apache Joshua, a statistical machine translation system. For each method, every pair of books written by the same author was tested using a hypothesis test whose null hypothesis was that the two books are not editions or translations of each other. Since there is no known distribution to assume as the null distribution for each book, a null distribution was estimated from distance measures against books not written by the author. The methods were evaluated on two sets of data manually labeled by the author of the thesis: one randomly sampled using one-stage cluster sampling, and one consisting of books from authors that the corpus provider, prior to the thesis, considered more difficult to label using automated techniques. Of the three methods, title matching performed best in terms of accuracy and precision on the sampled data. The entity matching approach had the lowest accuracy and precision, but an almost constant recall of around 50%. It was concluded that there seems to be a set of duplicates clearly distinguished from the estimated null distributions; with a higher significance level, better precision and accuracy could have been achieved with similar recall for this method. Topic matching performed worse than title matching, and on inspection the estimated model was unable to produce quality topics, owing to multiple factors; it was concluded that further research is needed for the topic matching approach. None of the three methods was deemed a complete solution for automating the detection of book duplicates.
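The Jensen-Shannon distance used in the third approach can be computed directly from two topic distributions; a minimal numpy version is sketched below (the example topic proportions are invented).

```python
# Jensen-Shannon distance between two topic distributions.
# JSD(P, Q) = sqrt(0.5 * KL(P || M) + 0.5 * KL(Q || M)), with M = (P + Q) / 2.
import numpy as np

def jensen_shannon_distance(p, q, eps=1e-12):
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()          # normalize to proper distributions
    m = 0.5 * (p + q)
    kl_pm = np.sum(p * np.log2((p + eps) / (m + eps)))
    kl_qm = np.sum(q * np.log2((q + eps) / (m + eps)))
    return np.sqrt(0.5 * kl_pm + 0.5 * kl_qm)

book_a = [0.70, 0.20, 0.10]                  # invented topic proportions
book_b = [0.65, 0.25, 0.10]                  # a plausible "same book" profile
book_c = [0.05, 0.15, 0.80]                  # a clearly different book
print(jensen_shannon_distance(book_a, book_b))  # small distance
print(jensen_shannon_distance(book_a, book_c))  # large distance
```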
