1 |
Techniques for creating ground-truthed sketch corpora. MacLean, Scott, January 2009.
The problem of recognizing handwritten mathematics notation has been studied for over forty years with little practical success. The poor performance of math recognition systems is due, at least in part, to a lack of realistic data for use in
training recognition systems and evaluating their accuracy. In fields for which such data is available, such as face and voice recognition, the data, along with objectively-evaluated recognition contests, has contributed to the rapid advancement of the state of the art.
This thesis proposes a method for constructing data corpora not only for handwritten math recognition, but for sketch recognition in general. The method consists of automatically generating template expressions, transcribing these expressions by hand, and automatically labelling them with ground-truth. This approach is motivated by practical considerations and is shown to be more extensible and objective than other potential methods.
We introduce a grammar-based approach for the template generation task. In this approach, random derivations in a context-free grammar are controlled so as to generate math expressions for transcription. The generation process may be controlled in terms of expression size and distribution over mathematical semantics.
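As a rough sketch of this kind of controlled generation (not the thesis implementation: the grammar, symbols and size budget below are invented for illustration), random derivations can be capped by a simple expansion budget:

```python
import random

# Toy single-nonterminal grammar; rules, symbols and the size budget are
# illustrative only, not the grammar used in the thesis.
GRAMMAR = {
    "EXPR": [
        ["EXPR", "+", "EXPR"],
        ["EXPR", "^", "EXPR"],
        ["(", "EXPR", ")", "/", "(", "EXPR", ")"],
        ["x"], ["y"], ["1"], ["2"],
    ],
}

def derive(symbol, budget):
    """Randomly expand `symbol`; once the expansion budget is spent,
    only rules without nonterminals may be chosen (crude size control)."""
    if symbol not in GRAMMAR:
        return [symbol], budget                       # already a terminal
    rules = GRAMMAR[symbol]
    if budget <= 0:
        rules = [r for r in rules if not any(s in GRAMMAR for s in r)] or rules
    tokens = []
    for s in random.choice(rules):
        sub, budget = derive(s, budget - 1)
        tokens.extend(sub)
    return tokens, budget

tokens, _ = derive("EXPR", budget=10)
print(" ".join(tokens))                               # e.g. ( x + 2 ) / ( y ^ 1 )
```

Weighting the choice among rules, rather than picking uniformly, would give the kind of control over the distribution of mathematical semantics that the abstract mentions.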
Finally, we present a novel ground-truthing method based on matching terminal symbols in grammar derivations to recognized symbols. The matching is produced by a best-first search through symbol recognition results. Experiments show that this method is highly accurate but rejects many of its inputs.
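A minimal sketch of the matching idea, with an invented recognizer output and scoring: partial assignments of recognized symbols to the expected terminals are kept on a priority queue, the most confident partial assignment is expanded first, and an input with no consistent assignment is rejected.

```python
import heapq

# Hypothetical recognizer output: for each handwritten symbol, a list of
# (label, confidence) candidates. Candidates and scores are invented.
recognized = [
    [("x", 0.9), ("y", 0.2)],
    [("+", 0.8), ("t", 0.3)],
    [("2", 0.7), ("z", 0.4)],
]
template_terminals = ["x", "+", "2"]      # terminals from the template derivation

def ground_truth_match(terminals, candidates):
    """Best-first search over partial assignments: expand the partial match
    with the highest accumulated confidence, keeping only label-consistent
    pairings; return the first complete assignment, or None to reject."""
    heap = [(0.0, 0, [])]                 # (negated score, next terminal index, pairs)
    while heap:
        neg_score, i, pairs = heapq.heappop(heap)
        if i == len(terminals):
            return pairs, -neg_score
        for label, conf in candidates[i]:
            if label == terminals[i]:     # inconsistent labels are pruned
                heapq.heappush(heap, (neg_score - conf, i + 1, pairs + [(i, label)]))
    return None, 0.0                      # reject: no consistent labelling found

print(ground_truth_match(template_terminals, recognized))
```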
|
2 |
Reconnaissance des procédés de traduction sous-phrastiques : des ressources aux validations / Recognition of sub-sentential translation techniques: from resources to validation. Zhai, Yuming, 19 December 2019.
Translation techniques constitute an important subject in translation studies and in linguistics. When confronted with a word or segment that is difficult to translate, human translators must apply particular solutions instead of literal translation, such as idiomatic equivalence, generalization, particularization, or syntactic or semantic modulation. However, this subject has received little attention in the field of Natural Language Processing (NLP). Our research problem is twofold: is it possible to automatically recognize translation techniques, and can some NLP tasks benefit from their recognition? Our working hypothesis is that it is possible to automatically recognize the different translation techniques (e.g. literal versus non-literal). To verify this hypothesis, we annotated a parallel English-French corpus with translation techniques, while establishing an annotation guide. Our typology of techniques builds on previous typologies and is adapted to our corpus. The inter-annotator agreement (0.67) is significant but only slightly exceeds the threshold for strong agreement (0.61), reflecting the difficulty of the annotation task. Based on the annotated examples, we then worked on the automatic classification of translation techniques. Even though the dataset is limited, the experimental results validate our working hypothesis regarding the possibility of recognizing the different translation techniques. We also showed that adding context-sensitive features improves the automatic classification. To test the genericity of our typology of translation techniques and of the annotation guide, the manual annotation studies were extended to the English-Chinese language pair, which shares far fewer linguistic and cultural similarities than the English-French pair. The annotation guide was adapted and enriched, while the typology of translation techniques remained the same as for English-French, which justifies studying the transfer of the English-French experiments to English-Chinese. To validate the benefits of these studies, we designed a tool that helps learners of French as a foreign language with reading comprehension. An experiment on reading comprehension with Chinese students confirms our working hypothesis and allows us to model the tool. Other research perspectives include helping to build paraphrase resources, evaluating automatic word alignment, and evaluating the quality of machine translation.
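The agreement scores quoted above (0.67 against a 0.61 threshold for strong agreement) are of the kind usually reported as Cohen's kappa; whether the thesis used kappa or another coefficient is not stated here, but as a reminder of how such a figure is obtained, a self-contained computation on invented labels:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    chance = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - chance) / (1 - chance)

# Invented toy annotations: literal (L) versus non-literal (N) per aligned segment.
annotator_1 = ["L", "L", "N", "L", "N", "N", "L", "L"]
annotator_2 = ["L", "N", "N", "L", "N", "L", "L", "L"]
print(round(cohens_kappa(annotator_1, annotator_2), 2))   # 0.47 on this toy data
```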
|
3 |
Coreference resolution with and for Wikipedia. Ghaddar, Abbas, 06 1900.
Wikipedia is a resource of choice exploited in many NLP applications, yet we are not aware of recent attempts to adapt coreference resolution to this resource, a preliminary step to understanding Wikipedia texts. The first part of this master's thesis is to build an English coreference corpus in which all documents come from the English version of Wikipedia. We annotated each markable with its coreference type, mention type and the equivalent Freebase topic. Our corpus has no restriction on the topics of the documents being annotated, and documents of various sizes have been considered for annotation. Our annotation scheme follows the one of OntoNotes with a few disparities. In part two, we propose a testbed for evaluating coreference systems on a simple task: identifying the mentions of the concept described in a Wikipedia page (e.g. the mentions of President Obama in the Wikipedia page dedicated to that person). We show that by exploiting the Wikipedia markup of a document (categories, redirects, infoboxes, etc.), as well as links to external knowledge bases such as Freebase (gender, number, type of relations with other entities, etc.), we can acquire useful information on entities that helps to classify mentions as coreferent or not.
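As a hedged illustration of how page-level attributes of this kind could feed a coreference decision (the field names, records and decision rule below are invented, not the system built in the thesis):

```python
# Hypothetical entity record assembled from Wikipedia markup and a knowledge
# base such as Freebase; the field names are invented for illustration.
page_entity = {
    "title": "Barack Obama",
    "aliases": {"obama", "barack obama", "president obama"},
    "gender": "male",
    "number": "singular",
    "type": "person",
}

def mention_features(mention, entity):
    """Features comparing one mention string to the page's main entity."""
    m = mention.lower()
    return {
        "exact_alias": m in entity["aliases"],
        "head_overlap": m.split()[-1] in {a.split()[-1] for a in entity["aliases"]},
        "compatible_pronoun": (entity["gender"], entity["number"]) == ("male", "singular")
                              and m in {"he", "him", "his"},
    }

def corefers(mention, entity):
    """Toy decision rule; a real system would feed these features to a classifier."""
    return any(mention_features(mention, entity).values())

for m in ["President Obama", "he", "the senator", "Michelle"]:
    print(m, "->", corefers(m, page_entity))
```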
|
4 |
Shades of Certainty: Annotation and Classification of Swedish Medical Records. Velupillai, Sumithra, January 2012.
Access to information is fundamental in health care. This thesis presents research on Swedish medical records with the overall goal of building intelligent information access tools that can aid health personnel, researchers and other professionals in their daily work, and, ultimately, improve health care in general. The issue of ethics and identifiable information is addressed by creating an annotated gold standard corpus and porting an existing de-identification system to Swedish from English. The aim is to move towards making textual resources available to researchers without risking exposure of patients’ confidential information. Results for the rule-based system are not encouraging, but results for the gold standard are fairly high. Affirmed, uncertain and negated information needs to be distinguished when building accurate information extraction tools. Annotation models are created, with the aim of building automated systems. One model distinguishes certain and uncertain sentences, and is applied to medical records from several clinical departments. In a second model, two polarities and three levels of certainty are applied to diagnostic statements from an emergency department. Overall results are promising. Differences are seen depending on clinical practice, annotation task and level of domain expertise among the annotators. Using annotated resources for automatic classification is studied. Encouraging overall results using local context information are obtained. The fine-grained certainty levels are used for building classifiers for real-world e-health scenarios. This thesis contributes two annotation models of certainty and one of identifiable information, applied to Swedish medical records. A deeper understanding of the language use linked to conveying certainty levels is gained. Three annotated resources that can be used for further research have been created, and implications for automated systems are presented.
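A minimal sketch of the kind of local-context classification the abstract alludes to; the cue lists, window size and labels are invented rather than taken from the thesis, and a real system would learn them from the annotated records:

```python
# Invented cue lists; a real system would learn these from annotated data.
UNCERTAIN_CUES = {"possible", "probable", "suspected", "cannot", "unclear", "?"}
NEGATION_CUES = {"no", "not", "without", "denies"}

def classify_statement(tokens, focus_index, window=2):
    """Assign a coarse certainty label to the token at focus_index using
    only cues found in a small window of local context."""
    lo, hi = max(0, focus_index - window), focus_index + window + 1
    context = {t.lower() for t in tokens[lo:hi]}
    if context & UNCERTAIN_CUES:
        return "uncertain"
    if context & NEGATION_CUES:
        return "negated"
    return "affirmed"

sentence = "Chest x-ray shows possible pneumonia , no effusion".split()
print(classify_statement(sentence, sentence.index("pneumonia")))  # uncertain
print(classify_statement(sentence, sentence.index("effusion")))   # negated
```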
|
5 |
Studies in Corpora and Idioms: Getting the cat out of the bag. Minugh, David, January 2014.
“Idiomatic” expressions, usually called “idioms”, such as a dime a dozen, a busman’s holiday, or to have bats in your belfry are a curious part of any language: they usually have a fixed lexical (why a busman?) and structural composition (only dime and dozen in direct conjunction mean ‘common, ordinary’), can be semantically obscure (why bats?), yet are widely recognized in the speech community, in spite of being so rare that only large corpora can provide us with access to sufficient empirical data on their use. In this compilation thesis, four published studies focusing on idioms in corpora are presented. Study 1 details the creation of and data in the author’s medium-sized corpus from 1999, the 3.7 million word Coll corpus of online university student newspapers, with comparisons to data from standard corpora of the time. Study 2 examines the extent to which recognized idioms are to be found in the Coll corpus and how they can be varied. Study 3 draws upon the British National Corpus and a series of British and American newspaper corpora to see how idioms may be “anchored” in their contexts, primarily by the device of premodification via an adjective appropriate to the context, not to the idiom. Study 4 examines idiom-usage patterns in the Time Magazine corpus, focusing on possible aspects of diachronic change over the near-century Time represents. The introductory compilation chapter places and discusses these studies in their contexts of contemporary idiom and corpus research; building on these studies, it provides two specific examples of potential ways forward in idiom research: an examination of the idioms used in a specific subgenre of newspapers (editorials), and a detailed suggestion for teachers about how to examine multiple facets of a specific modern idiom (the glass ceiling) in the classroom. Finally, a summing-up includes suggestions for further research, particularly at the level of the patterning of individual idioms, rather than treating them as a homogeneous phenomenon.
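As a concrete, hypothetical illustration of how such contextual anchoring can be searched for in a corpus, a regular expression that tolerates one premodifying word inside the glass ceiling discussed above (the pattern and example sentences are invented):

```python
import re

# Allow zero or one premodifying word between the article and the idiom's
# fixed parts, so "the corporate glass ceiling" matches but a literal use
# of "ceiling" without "glass" does not.  Pattern invented for illustration.
PATTERN = re.compile(r"\b(?:the|a)\s+(\w+\s+)?glass\s+ceiling\b", re.IGNORECASE)

sentences = [
    "She finally broke through the glass ceiling.",
    "Reporters wrote about the corporate glass ceiling in banking.",
    "The ceiling of the hall was made of glass.",
]

for s in sentences:
    m = PATTERN.search(s)
    if m:
        premod = (m.group(1) or "").strip()
        print(f"hit: {m.group(0)!r}  premodifier: {premod or '(none)'}")
```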
|
6 |
Bootstrapping Named Entity Annotation by Means of Active Machine Learning: A Method for Creating Corpora. Olsson, Fredrik, January 2008.
This thesis describes the development and in-depth empirical investigation of a method, called BootMark, for bootstrapping the marking up of named entities in textual documents. The reason for working with documents, as opposed to for instance sentences or phrases, is that the BootMark method is concerned with the creation of corpora. The claim made in the thesis is that BootMark requires a human annotator to manually annotate fewer documents in order to produce a named entity recognizer with a given performance than would be needed if the documents forming the basis for the recognizer were randomly drawn from the same corpus. The intention is then to use the created named entity recognizer as a pre-tagger and thus eventually turn the manual annotation process into one in which the annotator reviews system-suggested annotations rather than creating new ones from scratch. The BootMark method consists of three phases: (1) Manual annotation of a set of documents; (2) Bootstrapping – active machine learning for the purpose of selecting which document to annotate next; (3) The remaining unannotated documents of the original corpus are marked up using pre-tagging with revision. Five emerging issues are identified, described and empirically investigated in the thesis. Their common denominator is that they all depend on the realization of the named entity recognition task, and as such, require the context of a practical setting in order to be properly addressed. The emerging issues are related to: (1) the characteristics of the named entity recognition task and the base learners used in conjunction with it; (2) the constitution of the set of documents annotated by the human annotator in phase one in order to start the bootstrapping process; (3) the active selection of the documents to annotate in phase two; (4) the monitoring and termination of the active learning carried out in phase two, including a new intrinsic stopping criterion for committee-based active learning; and (5) the applicability of the named entity recognizer created during phase two as a pre-tagger in phase three. The outcomes of the empirical investigations concerning the emerging issues support the claim made in the thesis. The results also suggest that while the recognizer produced in phases one and two is as useful for pre-tagging as a recognizer created from randomly selected documents, the applicability of the recognizer as a pre-tagger is best investigated by conducting a user study involving real annotators working on a real named entity recognition task.
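A rough sketch of the selection step in phase two under a committee-based view: the document on which a committee of learners disagrees most (measured here by vote entropy) is proposed for annotation next. The committee, documents and labels are invented; this is not the BootMark implementation.

```python
import math
from collections import Counter

def vote_entropy(label_votes):
    """Disagreement of a committee on one token: entropy of its label votes."""
    total = sum(label_votes.values())
    return -sum((v / total) * math.log(v / total, 2) for v in label_votes.values())

def document_disagreement(committee_predictions):
    """Average vote entropy over the tokens of one document.
    `committee_predictions` is a list of per-member label sequences."""
    n_tokens = len(committee_predictions[0])
    per_token = []
    for i in range(n_tokens):
        votes = Counter(member[i] for member in committee_predictions)
        per_token.append(vote_entropy(votes))
    return sum(per_token) / n_tokens

# Invented predictions of a 3-member committee on two unlabelled documents.
doc_a = [["PER", "O", "ORG"], ["PER", "O", "ORG"], ["PER", "O", "ORG"]]   # full agreement
doc_b = [["PER", "O", "ORG"], ["LOC", "O", "ORG"], ["PER", "O", "PER"]]   # disagreement

pool = {"doc_a": doc_a, "doc_b": doc_b}
next_doc = max(pool, key=lambda d: document_disagreement(pool[d]))
print("annotate next:", next_doc)     # doc_b, where the committee disagrees most
```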
|
7 |
Créer un corpus annoté en entités nommées avec Wikipédia et WikiData : de mauvais résultats et du potentiel. Pagès, Lucas, 04 1900.
This master's thesis explores the joint use of WikiData and Wikipedia to build an annotated named entity (NER) corpus: DataNER. It follows work that used the knowledge bases DBpedia and Freebase and attempts to replace them with WikiData, a collaborative knowledge base whose continuous growth is guaranteed by an active community. Unfortunately, the results of the process described in this thesis did not live up to our initial expectations.
This document first describes the way in which we build DataNER. The use of Wikipedia anchors enables us to identify a large number of named entities in the resource, and the NECKAr toolkit labels them with the classes LOC, PER, ORG and MISC using WikiData. We describe the details of this corpus-building process, including the way in which we infer additional named entities from Wikipedia and WikiData, as well as how we calibrate the construction of DataNER with the information at our disposal.
Secondly, we compare DataNER with other similar corpora, both by training models on each of them and through manual comparisons. These comparisons let us identify several reasons why the quality of DataNER does not match that of the other corpora.
We conclude with ideas for improving the quality of DataNER, a more personal comment on the work accomplished, and a note on the potential of using Wikipedia and WikiData to automatically create a corpus.
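A hedged sketch of the anchor-labelling idea: linked anchor text marks an entity span, and a WikiData-derived lookup (a hard-coded stand-in here for what the NECKAr toolkit provides) supplies the LOC/PER/ORG/MISC class. The mapping, the miniature article and its link format are invented for illustration.

```python
import re

# Stand-in for a NECKAr-style mapping from WikiData items to NER classes.
WIKIDATA_CLASS = {
    "Q90": "LOC",        # Paris
    "Q7186": "PER",      # Marie Curie
    "Q7103": "ORG",      # (hypothetical organisation item)
}

# A miniature "article" where [[item|anchor]] marks Wikipedia links.
TEXT = "[[Q7186|Marie Curie]] moved to [[Q90|Paris]] to join [[Q7103|the Radium Institute]]."

ANCHOR = re.compile(r"\[\[(Q\d+)\|([^\]]+)\]\]")

def label_anchors(text):
    """Turn each linked anchor into a (surface form, NER class) pair,
    dropping anchors whose WikiData item has no known class."""
    entities = []
    for item, surface in ANCHOR.findall(text):
        ner_class = WIKIDATA_CLASS.get(item)
        if ner_class is not None:
            entities.append((surface, ner_class))
    return entities

print(label_anchors(TEXT))
# [('Marie Curie', 'PER'), ('Paris', 'LOC'), ('the Radium Institute', 'ORG')]
```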
|