1

Interopérabilité Sémantique Multi-lingue des Ressources Lexicales en Données Liées Ouvertes / Semantic Interoperability of Multilingual Lexical Resources in Lexical Linked Data

Tchechmedjiev, Andon 14 October 2016 (has links)
When it comes to building multilingual lexico-semantic resources, the first requirement that comes to mind is that the resources to be aligned should share the same data model and format (representational interoperability). With the emergence of standards such as LMF and their adaptation to the semantic web for publishing lexico-semantic resources as lexical linked open data (Ontolex), representational interoperability is no longer a major obstacle. For the interoperability of sense-level multilingual alignments, however, the main challenge is the choice and construction of a suitable interlingual pivot. Many resources (e.g. BabelNet, EuroWordNet) use English senses, or those of another natural language, as the pivot. This choice leads to a loss of contrast whenever a pivot sense corresponds to different lexicalizations of the same acception in other languages. The use of an acception-based interlingual pivot, a solution proposed more than 20 years ago, could be viable. However, building such language-independent pivot representations by hand is too arduous, given the scarcity of experts fluent in enough languages, and algorithms for their automatic construction never materialized, mainly because no formal axiomatic characterization existed to guarantee the preservation of their correctness properties.
In this thesis, we address this issue by first formalizing acception-based interlingual pivot architectures through a set of axiomatic constraints and rules that guarantee their correctness. We then propose algorithms for the initial automatic construction and for the subsequent update (dynamic interoperability) of acception-based multilingual resources, exploiting the combinatorial properties of the graph of pairwise bilingual alignments. Finally, we study the practical constraints of applying the proposed algorithms to a tangible resource, DBNary, a lexical linked data resource periodically extracted from many language editions of Wiktionary.
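The construction step above rests on a concrete combinatorial idea: pairwise bilingual sense alignments form a graph, and groups of transitively linked senses are candidates for interlingual acceptions. Below is a minimal sketch of that idea (not code from the thesis; all sense identifiers and links are invented), using connected components computed with union-find. It also shows why the thesis needs stronger axiomatic constraints: bare connectivity can over-merge, as with the two French senses that end up in one acception through a shared German translation.

```python
# Hypothetical sketch: candidate interlingual acceptions as connected
# components of the bilingual sense-alignment graph. Connected
# components are the weakest possible criterion; the example data
# shows how they over-merge distinct senses. All ids are invented.
from collections import defaultdict

def candidate_acceptions(bilingual_links):
    """Group senses into candidate acceptions with union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for s, t in bilingual_links:
        parent[find(s)] = find(t)  # union the two components

    groups = defaultdict(set)
    for sense in parent:
        groups[find(sense)].add(sense)
    return list(groups.values())

links = [
    (("fr", "rivière_1"), ("en", "river_1")),
    (("en", "river_1"), ("de", "Fluss_1")),
    (("fr", "fleuve_1"), ("de", "Fluss_1")),  # merges two French senses
]
for acception in candidate_acceptions(links):
    print(sorted(acception))
```

An axiomatically sound construction would have to keep rivière_1 and fleuve_1 apart, so the actual algorithms must rely on stronger combinatorial patterns in the translation graph than simple connectivity.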
2

Une approche linguistique de l'évaluation des ressources extraites par analyse distributionnelle automatique / Evaluation of resources provided by automatic distributional analysis : a linguistic approach

Morlane-Hondère, François 10 July 2013 (has links)
In this thesis, we address the evaluation of distributional thesauri (lexical databases extracted by automatic distributional analysis) from a linguistic point of view. The most common ways of evaluating these resources, such as comparison with gold standards, task-based evaluation, or the TOEFL test, are quantitative and leave little room for interpreting the word pairings the method produces; the conditions under which some word pairs are extracted while others are not thus remain poorly understood. Our work aims at a better understanding of the corpus behaviors that drive distributional pairings. First, we take a quantitative approach, comparing several distributional thesauri computed from different corpora against two gold standards: the DES (the CRISCO electronic dictionary of synonyms) and JeuxDeMots (a crowdsourced lexical network). This step gave us an overview of the semantic relations captured by our resources and let us select samples of word pairs for qualitative study.
This second, qualitative step is the core of the thesis. We focus on four classical lexico-semantic relations (synonymy, antonymy, hypernymy and meronymy), which we study through four different protocols. Using the relations listed in the gold standards, we compare the distributional properties of the synonym, antonym, hypernym and meronym pairs that the distributional analysis extracted with those of the pairs it did not, bringing to light several phenomena that favor or block the substitutability of word pairs, and hence their extraction. These phenomena are examined against parameters such as the nature of the source corpus (encyclopedic, journalistic or literary) and the limits of the gold standards themselves. The purpose of this work is thus twofold: it questions the evaluation methods currently applied to distributional thesauri, and it shows the value of treating these resources, which result from a large-scale application of the substitutability principle, as objects of linguistic study in their own right, particularly for describing lexico-semantic relations.
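As a rough illustration of the quantitative step described above (this is not the thesis's code, and the reference data is invented), the comparison boils down to intersecting two sets of unordered word pairs: the neighbour pairs proposed by a distributional thesaurus and the pairs listed in a gold standard. The pairs the analysis missed are precisely the ones that feed the qualitative study.

```python
# Illustrative sketch: split gold-standard pairs into those the
# distributional analysis extracted and those it missed. Pairs are
# unordered, hence the frozensets. All data is invented toy data.

def split_gold_pairs(neighbour_pairs, gold_pairs):
    """Return (extracted, missed) gold pairs."""
    extracted = {frozenset(p) for p in neighbour_pairs}
    gold = {frozenset(p) for p in gold_pairs}
    return gold & extracted, gold - extracted

neighbours = [("big", "large"), ("big", "huge"), ("car", "wheel")]
synonyms = [("big", "large"), ("big", "tall")]  # stand-in gold standard

found, missed = split_gold_pairs(neighbours, synonyms)
print("extracted:", sorted(tuple(sorted(p)) for p in found))
print("missed:   ", sorted(tuple(sorted(p)) for p in missed))
```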
3

VerbNet.Br: construção semiautomática de um léxico verbal online e independente de domínio para o português do Brasil / VerbNet.BR: the semi-automatic construction of an on-line and domain-independent Verb Lexicon for Brazilian Portuguese

Scarton, Carolina Evaristo 28 January 2013 (has links)
Building base computational-linguistic resources, such as computational lexicons, is one of the goals of Natural Language Processing (NLP). However, most existing computational lexical resources are specific to English. One resource already developed for English is VerbNet, a domain-independent lexicon with semantic and syntactic information about English verbs, built on Levin's verb classes and mapped to Princeton's WordNet. Since few computational studies of Levin's classification, the basis of VerbNet, exist for languages other than English, and given the lack of a Portuguese lexicon along the lines of the English VerbNet, the goal of this research was to create such a resource for Brazilian Portuguese, called VerbNet.Br.
The fully manual construction of resources of this kind is usually unfeasible: it is time-consuming and prone to human error. Great efforts have therefore been made to support their construction with computational techniques. One widely used technique is machine learning over corpora to extract linguistic information; another is the reuse of resources that already exist for other languages, usually English, to build a new aligned resource, taking advantage of multilingual/cross-linguistic properties (such as those of Levin's verb classification). The method proposed in this master's research for building VerbNet.Br is generic: it can be used to build similar resources for languages other than Brazilian Portuguese, and it will allow the resource to be extended in the future through the creation of subclasses of concepts. The method comprises four steps, three automatic and one manual; experiments run without the manual step showed that it can be dropped without affecting precision or recall. The resource was evaluated intrinsically, both qualitatively and quantitatively. The qualitative evaluation consisted of: (a) the manual analysis of some VerbNet classes, producing a gold standard for Brazilian Portuguese; (b) the comparison of this gold standard with the VerbNet.Br results, with promising results of around 60% f-measure; and (c) the comparison of the VerbNet.Br results with verb clustering results, showing that the two methods produce similar output. The quantitative evaluation measured the acceptance rate of candidate members of the VerbNet.Br classes, with around 90% of members accepted per class. One contribution of this research is the first version of VerbNet.Br itself which, although it still requires linguistic validation, already provides information usable in NLP tasks, with precision and recall of 44% and 92.89%, respectively.
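The cross-linguistic reuse idea behind VerbNet.Br (projecting an English verb classification into another language through translation) can be made concrete with a small sketch. This is a deliberate simplification, not the actual VerbNet.Br pipeline; the class, the bilingual dictionary and the vote threshold are all invented for the example.

```python
# Hedged illustration of cross-linguistic projection: English VerbNet
# class members are mapped through a bilingual dictionary, and each
# Portuguese translation becomes a candidate member of the matching
# class. Counting how many English members translate to the same verb
# gives a crude confidence score. All data here is invented.
from collections import Counter

verbnet_classes = {"break-45.1": ["break", "crack", "shatter"]}
en_pt = {
    "break": ["quebrar", "romper"],
    "crack": ["rachar", "quebrar"],
    "shatter": ["estilhaçar"],
}

def project_classes(classes, dictionary, min_votes=1):
    """Candidate members per class, ranked by translation votes."""
    projected = {}
    for cls, members in classes.items():
        votes = Counter(t for v in members for t in dictionary.get(v, []))
        projected[cls] = [v for v, n in votes.most_common() if n >= min_votes]
    return projected

print(project_classes(verbnet_classes, en_pt, min_votes=2))
# -> {'break-45.1': ['quebrar']}
```

In a realistic pipeline the bare vote count would likely be complemented by corpus evidence, e.g. checking that candidate verbs exhibit the class's syntactic alternations; this sketch covers only the dictionary-projection step.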
4

Métrologie des graphes de terrain, application à la construction de ressources lexicales et à la recherche d'information / Metrology of terrain networks, application to lexical resources enrichment and to information retrieval

Navarro, Emmanuel 04 November 2013 (has links)
This thesis is organized in two parts: the first deals with similarity (or proximity) measures between the vertices of a graph, the second with clustering methods for bipartite graphs. A new similarity measure between vertices, based on short-time random walks, is introduced; its main advantage is that it is insensitive to the density of the graph. A broad state of the art of vertex similarity measures follows, together with an experimental comparison of these measures. The first part continues with a robust method for comparing graphs that share the same set of vertices, which is applied to comparing and merging synonymy networks, and closes with an application to the enrichment of lexical resources: proposing candidate synonymy relations on the basis of the already existing ones.
In the second part, a parallel between formal concept analysis and bipartite graph clustering is established. This parallel leads to the study of a particular case in which a partition of one of the two vertex groups of a bipartite graph can be determined even though no corresponding partition exists for the other group. A simple method addressing this problem is proposed and evaluated. Finally, Kodex, a system that automatically clusters the results of an information retrieval query, is presented; it applies the clustering methods studied earlier to information retrieval. An evaluation on a collection of two million web pages shows the benefits of the approach and also helps to explain some of the differences between the clustering methods.
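To make the short-time random-walk idea concrete, here is a minimal sketch in the spirit of (but not identical to) the measure studied in the thesis; the toy graph and function names are invented. Two vertices count as similar when short walks started from each of them reach the rest of the graph with similar probabilities.

```python
# Minimal sketch of a short-time random-walk similarity. `dist` maps
# each vertex to the probability of the walk being there; every step
# spreads that mass uniformly over the vertex's neighbours. Similarity
# is 1 minus half the L1 distance between distributions (1.0 = same).

def walk_distribution(adj, start, steps):
    """Position distribution after `steps` uniform random-walk steps."""
    dist = {start: 1.0}
    for _ in range(steps):
        nxt = {}
        for v, p in dist.items():
            for w in adj[v]:
                nxt[w] = nxt.get(w, 0.0) + p / len(adj[v])
        dist = nxt
    return dist

def rw_similarity(adj, u, v, steps=3):
    """Compare short-walk distributions started at u and at v."""
    du = walk_distribution(adj, u, steps)
    dv = walk_distribution(adj, v, steps)
    l1 = sum(abs(du.get(x, 0.0) - dv.get(x, 0.0)) for x in set(du) | set(dv))
    return 1.0 - l1 / 2.0

graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(rw_similarity(graph, "a", "b"))  # high: near-identical neighbourhoods
print(rw_similarity(graph, "a", "d"))  # lower: different positions
```

Keeping the number of steps small is, intuitively, what makes such a measure relatively insensitive to graph density: the walk only probes the local neighbourhood before it has had time to mix over the whole graph.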
