  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

An Inter-annotator Agreement Measurement Methodology for the Turkish Discourse Bank (TDB)

Yalcinkaya, Ihsan Saban 01 September 2010
In annotation efforts for TDB[1]-like corpora, which are built from the intuitions of the annotators, the reliability of the corpus can only be determined via a correct inter-annotator agreement measurement methodology (Artstein &amp; Poesio, 2008). In this thesis, a methodology was defined to measure inter-annotator agreement among the TDB annotators. The statistical tests and agreement coefficients widely used in the scientific community, including Cochran's Q test (1950), Fleiss' Kappa (1971), and Krippendorff's Alpha (1995), were examined in detail. The inter-annotator agreement measurement approaches of various corpus annotation efforts were scrutinized in terms of their reported statistical results. None of the reported inter-annotator agreement approaches proved statistically appropriate for the TDB. Therefore, a comprehensive inter-annotator agreement measurement methodology was designed from scratch. A computer program, the Rater Agreement Tool (RAT), was developed to perform statistical measurements on the TDB with different corpus parameters and data handling approaches. It was concluded that Krippendorff's Alpha is the most appropriate statistical method for the TDB. The measurements were affected by data handling preferences as well as by the agreement statistic used, and there is not one single correct approach but several approaches valid for different research considerations. For the TDB, the major data handling suggestions that emerged are: (1) treating words as the building blocks of the annotations, and (2) using the interval approach when partial disagreements should be weighted, and the boundary approach when all disagreements should be evaluated in the same way.
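The statistic this abstract recommends, Krippendorff's Alpha, can be sketched for the nominal case as follows. This is a minimal illustration of the coefficient itself, not the thesis's Rater Agreement Tool, and it handles only nominal categories with possibly missing ratings:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    units: one list of ratings per annotated unit; None marks a
    missing rating. Units with fewer than two ratings are ignored.
    """
    # Build the coincidence matrix: every ordered pair of values
    # within a unit contributes weight 1 / (m_u - 1).
    coincidences = Counter()
    for unit in units:
        values = [v for v in unit if v is not None]
        m = len(values)
        if m < 2:
            continue
        for a, b in permutations(values, 2):
            coincidences[(a, b)] += 1.0 / (m - 1)

    n = sum(coincidences.values())   # total number of pairable values
    totals = Counter()               # marginal frequency per category
    for (a, _), w in coincidences.items():
        totals[a] += w

    # Nominal distance: 0 for matching categories, 1 otherwise.
    d_observed = sum(w for (a, b), w in coincidences.items() if a != b)
    d_expected = sum(totals[a] * totals[b]
                     for a in totals for b in totals if a != b) / (n - 1)
    if d_expected == 0:
        return 1.0
    return 1.0 - d_observed / d_expected
```

Perfect agreement yields 1.0, chance-level agreement 0.0, and systematic disagreement a negative value; the thesis's interval and boundary approaches would correspond to different choices of the distance function.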
2

Bootstrapping Annotated Job Ads using Named Entity Recognition and Swedish Language Models / Identification of Named Entities in Job Ads Using Semi-supervised Techniques and Swedish Language Models

Nyqvist, Anna January 2021
Named entity recognition (NER) is the task of detecting and categorising certain information in text. A promising approach for NER that has recently emerged is fine-tuning Transformer-based language models for this specific task. However, these models may require a relatively large quantity of labelled data to perform well, which can limit the applicability of NER models in real-world settings, as manual annotation is often costly and time-consuming. In this thesis, we investigate the learning curves of human annotators and of a NER model during a semi-supervised bootstrapping process. Special emphasis is given to how both depend on the number of classes and the amount of training data used in the process. We first annotate a set of collected job advertisements and then apply bootstrapping, using both annotated and unannotated data, while continuously fine-tuning a pre-trained Swedish BERT model. The initial class system is simplified during the bootstrapping process according to model performance and inter-annotator agreement. Model performance increased as the training set grew, reaching a final micro F1-score of 54%. This result provides a good baseline, and we point out several improvements that could further enhance performance. We also identify classes handled differently by the annotators and discuss potential reasons why. Suggestions for future work include further adjusting the current class system by removing the classes identified here as low-performing.
3

Broad-domain Quantifier Scoping with RoBERTa

Rasmussen, Nathan Ellis 10 August 2022
No description available.
4

Expressive Speech Synthesis Beyond the Sentence Level: The Case of Children's Tales. Design and Analysis of Tale Corpora for Expressive Speech Synthesis

Doukhan, David 20 September 2013
The aim of this thesis is to propose methods for improving the expressiveness of speech synthesis systems. One of the central proposals of this work is to define, use, and measure the impact of linguistic structures operating beyond the sentence level, as opposed to approaches that operate on sentences isolated from their context. The scope of the study is restricted to the reading of children's tales. Tales have the particularity of having been the subject of a number of studies aimed at identifying their narrative structure, and of involving a number of stereotypical characters (hero, villain, fairy) whose speech is often reported. These characteristics are exploited to model the prosodic properties of tales beyond the sentence level. The oral transmission of tales has often been associated with musical practice (songs, instruments), and their reading remains associated with very rich melodic properties whose reproduction is still a challenge for modern speech synthesizers. To address these issues, a first corpus of written tales was collected and annotated with information on the narrative structure of the tales, the identification and attribution of direct quotations, and references to character mentions as well as named entities and extended enumerations. The annotated corpus is described in terms of coverage and inter-annotator agreement. It is used to build systems for segmenting tales into episodes and for detecting direct quotations, dialogue acts, and communication modes. A second corpus of tales read by a professional speaker is then presented. The speech is aligned with the lexical and phonetic transcriptions, the annotations of the text corpus, and meta-information describing the characteristics of the characters appearing in the tale. The relationships between the linguistic annotations and the prosodic properties observed in the speech corpus are described and modeled. Finally, a prototype for controlling the expressive parameters of the Acapela unit-selection synthesizer was built. The prototype generates prosodic instructions operating beyond the sentence level, in particular using information related to the structure of the tale and the distinction between direct and reported speech. The control prototype was validated in a perceptual experiment, which showed a significant improvement in synthesis quality.
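One of the annotation tasks this abstract mentions, identifying direct quotations, can be illustrated with a minimal sketch. The thesis relies on manual annotation and learned detectors rather than pattern matching, so the regex below (covering French guillemets and straight double quotes) and the function name are purely hypothetical:

```python
import re

# Direct speech delimited by French guillemets « … » or straight quotes "…".
QUOTE = re.compile(r'«\s*([^»]+?)\s*»|"([^"]+)"')

def extract_direct_quotes(text):
    """Return the direct quotations found in a passage."""
    # findall yields one tuple per match; exactly one group is non-empty.
    return [a or b for a, b in QUOTE.findall(text)]
```

Attribution, i.e. deciding which character utters each quotation, is the harder half of the task and is what the corpus annotations and character meta-information support.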
