21 |
Improving Multilingual Models for the Swedish Language : Exploring Cross-Lingual Transferability and Stereotypical Biases. Katsarou, Styliani. January 2021 (has links)
The best-performing Transformer-based Language Models are monolingual and mainly focus on high-resource languages such as English. In an attempt to extend their usage to more languages, multilingual models have been introduced. Nevertheless, multilingual models still underperform on a specific language when compared to a similarly sized monolingual model that has been trained solely on that specific language. The main objective of this thesis project is to explore how a multilingual model can be improved for Swedish, which is a low-resource language. We study if a multilingual model can benefit from further pre-training on Swedish or on a mix of English and Swedish text before fine-tuning. Our results on the task of semantic text similarity show that further pre-training increases the Pearson Correlation Score by 5% for specific cross-lingual language settings. Taking into account the responsibilities that arise from the increased use of Language Models in real-world applications, we supplement our work with additional experiments that measure stereotypical biases associated with gender. We use a new dataset that we designed specifically for that purpose. Our systematic study compares Swedish to English as well as various model sizes. The insights from our exploration indicate that the Swedish language carries less bias associated with gender than English and that higher manifestation of gender bias is associated with the use of larger Language Models. / De bästa Transformerbaserade språkmodellerna är enspråkiga och fokuserar främst på resursrika språk som engelska. I ett försök att utöka deras användning till fler språk har flerspråkiga modeller introducerats. Flerspråkiga modeller underpresterar dock fortfarande på enskilda språk när man jämför med en enspråkig modell av samma storlek som enbart har tränats på det specifika språket. Huvudsyftet med detta examensarbete är att utforska hur en flerspråkig modell kan förbättras för svenska som är ett resurssnålt språk. Vi studerar om en flerspråkig modell kan dra nytta av ytterligare förträning på svenska eller av en blandning av engelsk och svensk text innan finjustering. Våra resultat på uppgiften om semantisk textlikhet visar att ytterligare förträning ökar Pearsons korrelationspoäng med 5% för specifika tvärspråkiga språkinställningar. Med hänsyn till det ansvar som uppstår från den ökade användningen av språkmodeller i verkliga tillämpningar, kompletterar vi vårt arbete med ytterligare experiment som mäter stereotypa fördomar kopplade till kön. Vi använder en ny datauppsättning som vi har utformat specifikt för det ändamålet. Vår systematiska studie jämför svenska med engelska samt olika modellstorlekar. Insikterna från vår forskning tyder på att det svenska språket har mindre partiskhet förknippat med kön än engelska, samt att högre manifestation av könsfördomar är förknippat med användningen av större språkmodeller.
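A minimal sketch of the evaluation described above, assuming scikit-learn-style tooling and toy scores rather than the thesis data: semantic textual similarity predictions are scored with the Pearson correlation between model scores and gold annotations.

```python
# Hedged sketch: scoring semantic textual similarity predictions with the
# Pearson correlation (the metric reported in the abstract). Scores are toy values.
from scipy.stats import pearsonr

def evaluate_sts(predicted_scores, gold_scores):
    """Return the Pearson correlation between predicted and gold similarity scores."""
    r, _ = pearsonr(predicted_scores, gold_scores)
    return r

gold = [4.5, 0.8, 3.2, 2.0, 5.0]   # human similarity judgements on a 0-5 scale
pred = [4.1, 1.0, 2.9, 2.4, 4.7]   # model similarity scores
print(f"Pearson r = {evaluate_sts(pred, gold):.3f}")
```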
|
22 |
Entity-based coherence in statistical machine translation : a modelling and evaluation perspective. Wetzel, Dominikus Emanuel. January 2018 (has links)
Natural language documents exhibit coherence and cohesion by means of interrelated structures both within and across sentences. Sentences do not stand in isolation from each other, and only a coherent structure makes them understandable and sound natural to humans. In Statistical Machine Translation (SMT), little research exists on translating a document from a source language into a coherent document in the target language. The dominant paradigm is still one that considers sentences independently from each other. There is a need both for a deeper understanding of how to handle specific discourse phenomena and for automatic evaluation of how well these phenomena are handled in SMT. In this thesis we explore an approach that treats sentences as dependent on each other, focussing on the problem of pronoun translation as an instance of a discourse-related non-local phenomenon. We direct our attention to pronoun translation in the form of cross-lingual pronoun prediction (CLPP) and develop a model to tackle this problem. We obtain state-of-the-art results exhibiting the benefit of having access to the antecedent of a pronoun for predicting the right translation of that pronoun. Experiments also showed that features from the target side are more informative than features from the source side, confirming linguistic knowledge that referential pronouns need to agree in gender and number with their target-side antecedent. We show our approach to be applicable across the two language pairs English-French and English-German. The experimental setting for CLPP is artificially restricted, both to enable automatic evaluation and to provide a controlled environment. This is a limitation which does not yet allow us to test the full potential of CLPP systems within a more realistic setting that is closer to a full SMT scenario. We provide an annotation scheme, a tool and a corpus that enable evaluation of pronoun prediction in a more realistic setting. The annotated corpus consists of parallel documents translated by a state-of-the-art neural machine translation (NMT) system, where the appropriate target-side pronouns have been chosen by annotators. With this corpus, we exhibit a weakness of our current CLPP systems in that they are outperformed by a state-of-the-art NMT system in this more realistic context. This corpus provides a basis for future CLPP shared tasks and allows the research community to further understand and test their methods. The lack of appropriate evaluation metrics that explicitly capture non-local phenomena is one of the main reasons why handling non-local phenomena has not yet been widely adopted in SMT. To overcome this obstacle and evaluate the coherence of translated documents, we define a bilingual model of entity-based coherence, inspired by work on monolingual coherence modelling, and frame it as a learning-to-rank problem. We first evaluate this model on a corpus where we artificially introduce coherence errors based on typical errors CLPP systems make. This allows us to assess the quality of the model in a controlled environment with automatically provided gold coherence rankings.
Results show that this model can distinguish with high accuracy between a human-authored translation and one with coherence errors, that it can also distinguish between document pairs from two corpora with different degrees of coherence errors, and that the learnt model can be successfully applied when the distribution of errors in the test set differs from that in the training data, showing its generalization potential. To test our bilingual model of coherence as a discourse-aware SMT evaluation metric, we apply it to more realistic data. We use it to evaluate a state-of-the-art NMT system against post-editing systems with pronouns corrected by our CLPP systems. To verify our metric, we reuse our annotated parallel corpus and consider the pronoun annotations as a proxy for human document-level coherence judgements. Experiments show far lower accuracy in ranking translations according to their entity-based coherence than on the artificial corpus, suggesting that the metric has difficulties generalizing to a more realistic setting. Analysis reveals that the system translations in our test corpus do not differ in their pronoun translations in almost half of the document pairs. To circumvent this data sparsity issue, and to remove the need for parameter learning, we define a score-based SMT evaluation metric which directly uses features from our bilingual coherence model.
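The learning-to-rank framing mentioned above can be illustrated with a small hedged sketch: pairs of documents (a coherent reference and a version with coherence errors) are reduced to pairwise classification on feature differences, so a linear model learns to score the more coherent document higher. The feature vectors here are random placeholders, not the entity-based features of the thesis.

```python
# Hedged sketch of pairwise learning-to-rank for coherence: feature differences
# between a coherent and a corrupted document become binary training examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pairwise_examples(better_feats, worse_feats):
    """Balanced pairwise dataset: label 1 when the first document is more coherent."""
    diffs = better_feats - worse_feats
    X = np.vstack([diffs, -diffs])
    y = np.concatenate([np.ones(len(diffs)), np.zeros(len(diffs))])
    return X, y

rng = np.random.default_rng(0)
better = rng.normal(0.5, 1.0, size=(100, 8))   # placeholder features of coherent documents
worse = rng.normal(0.0, 1.0, size=(100, 8))    # placeholder features of corrupted documents

X, y = pairwise_examples(better, worse)
ranker = LogisticRegression().fit(X, y)

def coherence_score(feats):
    """Linear score; a higher value means the document is ranked as more coherent."""
    return float(feats @ ranker.coef_.ravel())

print(coherence_score(better[0]) > coherence_score(worse[0]))
```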
|
23 |
Effective Techniques for Indonesian Text Retrieval. Asian, Jelita (jelitayang@gmail.com). January 2007 (has links)
The Web is a vast repository of data, and information on almost any subject can be found with the aid of search engines. Although the Web is international, the majority of research on finding information focuses on languages such as English and Chinese. In this thesis, we investigate information retrieval techniques for Indonesian. Although Indonesia is the fourth most populous country in the world, little attention has been given to searching Indonesian documents. Stemming is the process of reducing morphological variants of a word to a common stem form. Previous research has shown that stemming is language-dependent. Although several stemming algorithms have been proposed for Indonesian, there is no consensus on which gives the best performance. We empirically explore these algorithms, showing that even the best algorithm still has scope for improvement. We propose novel extensions to this algorithm and develop a new Indonesian stemmer, and show that these can improve stemming correctness by up to three percentage points; our approach makes less than one error in thirty-eight words. We propose a range of techniques to enhance the performance of Indonesian information retrieval. These techniques include stopping, sub-word tokenisation, identification of proper nouns, and modifications to existing similarity functions. Our experiments show that many of these techniques can increase retrieval performance, with the highest increase achieved when we use n-grams of size five to tokenise words. We also present an effective method for identifying the language of a document; this allows various information retrieval techniques to be applied selectively depending on the language of target documents. We also address the problem of automatic creation of parallel corpora --- collections of documents that are the direct translations of each other --- which are essential for cross-lingual information retrieval tasks. Well-curated parallel corpora are rare, and for many languages, such as Indonesian, do not exist at all. We describe algorithms that we have developed to automatically identify parallel documents for Indonesian and English. Unlike most current approaches, which consider only the context and structure of the documents, our approach is based on the document content itself. Our algorithms do not make any prior assumptions about the documents, and are based on the Needleman-Wunsch algorithm for global alignment of protein sequences. Our approach works well in identifying Indonesian-English parallel documents, especially when no translation is performed. It can increase the separation value, a measure to discriminate good matches of parallel documents from bad matches, by approximately ten percentage points. We also investigate the applicability of our identification algorithms to other languages that use the Latin alphabet. Our experiments show that, with minor modifications, our alignment methods are effective for English-French, English-German, and French-German corpora, especially when the documents are not translated. Our technique can increase the separation value for the European corpus by up to twenty-eight percentage points. Together, these results provide a substantial advance in understanding techniques that can be applied for effective Indonesian text retrieval.
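A compact sketch of the Needleman-Wunsch global alignment algorithm named above, aligning two token sequences; the match/mismatch/gap scores and the toy documents are illustrative assumptions, not the thesis parameterisation.

```python
# Minimal Needleman-Wunsch global alignment over token sequences.
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    # score[i][j]: best alignment score of a[:i] and b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

# Documents that share many tokens (names, numbers) score higher than unrelated
# ones, which is the kind of signal used to pair candidate translations.
doc_en = "the president visited jakarta on 12 march".split()
doc_id = "presiden mengunjungi jakarta pada 12 maret".split()
doc_other = "the weather is pleasant today".split()
print(needleman_wunsch(doc_en, doc_id), needleman_wunsch(doc_en, doc_other))
```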
|
24 |
Hledání struktury vět přirozeného jazyka pomocí částečně řízených metod / Discovering the structure of natural language sentences by semi-supervised methods. Rosa, Rudolf. January 2018 (has links)
Discovering the structure of natural language sentences by semi-supervised methods. Rudolf Rosa. In this thesis, we focus on the problem of automatically syntactically analyzing a language for which there is no syntactically annotated training data. We explore several methods for cross-lingual transfer of syntactic as well as morphological annotation, ultimately based on the utilization of bilingual or multilingual sentence-aligned corpora and machine translation approaches. We pay particular attention to automatic estimation of the appropriateness of a source language for the analysis of a given target language, devising a novel measure based on the similarity of part-of-speech sequences frequent in the languages. The effectiveness of the presented methods has been confirmed by experiments conducted both by us and independently by other researchers.
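As a rough illustration of ranking source languages by the similarity of frequent part-of-speech sequences, the sketch below compares POS-trigram frequency distributions with cosine similarity; the actual measure in the thesis differs, and the tag sequences are toy data.

```python
# Hedged sketch: compare languages by the distribution of their POS trigrams.
from collections import Counter
from math import sqrt

def pos_trigram_distribution(tag_sequences):
    counts = Counter()
    for tags in tag_sequences:
        counts.update(zip(tags, tags[1:], tags[2:]))
    total = sum(counts.values()) or 1
    return {tri: c / total for tri, c in counts.items()}

def cosine(p, q):
    keys = set(p) | set(q)
    dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in keys)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

target = [["DET", "NOUN", "VERB", "DET", "NOUN"], ["PRON", "VERB", "ADV"]]
source_a = [["DET", "NOUN", "VERB", "NOUN"], ["PRON", "VERB", "ADV", "ADJ"]]
source_b = [["VERB", "PRON", "NOUN", "ADP"], ["ADJ", "ADJ", "NOUN", "VERB"]]

dist_t = pos_trigram_distribution(target)
for name, src in [("source_a", source_a), ("source_b", source_b)]:
    print(name, round(cosine(dist_t, pos_trigram_distribution(src)), 3))
```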
|
25 |
Multilingual Zero-Shot and Few-Shot Causality Detection. Reimann, Sebastian Michael. January 2021 (has links)
Relations that hold between causes and their effects are fundamental for a wide range of different sectors. Automatically finding sentences that express such relations may, for example, be of great interest to the economy or political institutions. However, for many languages other than English, a lack of training resources for this task needs to be dealt with. In recent years, large, pretrained transformer-based model architectures have proven to be very effective for tasks involving cross-lingual transfer such as cross-lingual language inference, as well as multilingual named entity recognition, POS-tagging and dependency parsing, which may hint at similar potential for causality detection. In this thesis, we define causality detection as a binary labelling problem and use cross-lingual transfer to alleviate data scarcity for German and Swedish by using three different classifiers that make use of either multilingual sentence embeddings obtained from a pretrained encoder or pretrained multilingual language models. The source language in most of our experiments is English; for Swedish, however, we also use a small German training set and a combination of English and German training data. We try out zero-shot transfer as well as making use of limited amounts of target language data either as a development set or as additional training data in a few-shot setting. In the latter scenario, we explore the impact of varying sizes of training data. Moreover, the problem of data scarcity in our situation also makes it necessary to work with data from different annotation projects. We also explore how much this impacts our results. For German as a target language, our results in a zero-shot scenario, as expected, fall short in comparison with monolingual experiments, but F1-macro scores between 60 and 65 in cases where annotation did not differ drastically still signal that it was possible to transfer at least some knowledge. When introducing only small amounts of target language data, we already observed notable improvements, and with the full German training data of about 3,000 sentences combined with the most suitable English data set, the performance for German in some scenarios almost matches the state of the art for monolingual experiments on English. The best zero-shot performance on the Swedish data even outperformed the scores achieved for German. However, due to problems with the additional Swedish training data, we were not able to improve upon the zero-shot performance in a few-shot setting in a similar manner as was the case for German.
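A minimal zero-shot sketch of the classifier setup described above: multilingual sentence embeddings from a pretrained encoder feed a binary causality classifier trained on English sentences and applied directly to German. The encoder name and the tiny example sentences are assumptions, not the thesis configuration.

```python
# Hedged sketch of zero-shot cross-lingual causality detection with multilingual
# sentence embeddings and a simple binary classifier.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed encoder

train_sentences = [
    "The drought caused crop failures across the region.",   # causal
    "Rising prices led to a drop in consumer spending.",      # causal
    "The report was published on Monday.",                    # non-causal
    "The committee met twice last year.",                      # non-causal
]
train_labels = [1, 1, 0, 0]

clf = LogisticRegression().fit(encoder.encode(train_sentences), train_labels)

# Zero-shot transfer: no German training data is used.
german_test = [
    "Die Dürre führte zu Ernteausfällen.",
    "Der Bericht wurde am Montag veröffentlicht.",
]
print(clf.predict(encoder.encode(german_test)))
```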
|
26 |
Low Supervision, Low Corpus size, Low Similarity! Challenges in cross-lingual alignment of word embeddings : An exploration of the limitations of cross-lingual word embedding alignment in truly low resource scenarios. Dyer, Andrew. January 2019 (has links)
Cross-lingual word embeddings are an increasingly important resource in cross-lingual methods for NLP, particularly for their role in transfer learning and unsupervised machine translation, purportedly opening up the opportunity for NLP applications for low-resource languages. However, most research in this area implicitly expects the availability of vast monolingual corpora for training embeddings, a scenario which is not realistic for many of the world's languages. Moreover, much of the reporting of the performance of cross-lingual word embeddings is based on a fairly narrow set of mostly European language pairs. Our study examines the performance of cross-lingual alignment across a more diverse set of language pairs; controls for the effect of the corpus size on which the monolingual embedding spaces are trained; and studies the impact of spectral graph properties of the embedding space on alignment. Through our experiments on a more diverse set of language pairs, we find that performance in bilingual lexicon induction is generally poor in heterogeneous pairs, and that even using a gold or heuristically derived dictionary has little impact on performance for these language pairs. We also find that the performance for these languages increases only slowly with corpus size. Finally, we find a moderate correlation between the isospectral difference of the source and target embeddings and the performance of bilingual lexicon induction. We infer that methods other than cross-lingual alignment may be more appropriate in the case of both low-resource languages and heterogeneous language pairs.
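For readers unfamiliar with cross-lingual alignment, the following hedged sketch shows the standard supervised baseline, an orthogonal Procrustes mapping learned from a seed dictionary, followed by nearest-neighbour bilingual lexicon induction; the embeddings and dictionary are random placeholders rather than real data.

```python
# Hedged sketch: supervised cross-lingual embedding alignment via orthogonal
# Procrustes, then bilingual lexicon induction by cosine nearest neighbours.
import numpy as np

def procrustes(X_src, Y_tgt):
    """Orthogonal map W minimising ||X W - Y||_F over the seed dictionary."""
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt

rng = np.random.default_rng(0)
src_emb = rng.normal(size=(1000, 50))      # placeholder source-language word vectors
tgt_emb = rng.normal(size=(1000, 50))      # placeholder target-language word vectors
seed_pairs = [(i, i) for i in range(200)]  # indices of assumed translation pairs

X = src_emb[[s for s, _ in seed_pairs]]
Y = tgt_emb[[t for _, t in seed_pairs]]
W = procrustes(X, Y)

def translate(src_index, k=5):
    """Nearest target neighbours of a mapped source word (cosine similarity)."""
    query = src_emb[src_index] @ W
    sims = (tgt_emb @ query) / (np.linalg.norm(tgt_emb, axis=1) * np.linalg.norm(query))
    return np.argsort(-sims)[:k]

print(translate(3))
```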
|
27 |
Exploring Cross-Lingual Transfer Learning for Swedish Named Entity Recognition : Fine-tuning of English and Multilingual Pre-trained Models / Utforskning av tvärspråklig överföringsinlärning för igenkänning av namngivna enheter på svenska. Lai Wikström, Daniel; Sparr, Axel. January 2023 (has links)
Named Entity Recognition (NER) is a critical task in Natural Language Processing (NLP), and recent advancements in language model pre-training have significantly improved its performance. However, this improvement is not universally applicable due to a lack of large pre-training datasets or computational budget for smaller languages. This study explores the viability of fine-tuning an English and a multilingual model on a Swedish NER task, compared to a model trained solely on Swedish. Our methods involved training these models and measuring their performance using the F1-score metric. Despite fine-tuning, the Swedish model outperformed both the English and multilingual models by 3.0 and 9.0 percentage points, respectively. The performance gap between the English and Swedish models during fine-tuning decreased from 19.8 to 9.0 percentage points. This suggests that while the Swedish model achieved the best performance, fine-tuning can substantially enhance the performance of English and multilingual models for Swedish NER tasks. / Inom området för Natural Language Processing (NLP) är identifiering av namngivna entiteter (NER) en viktig problemtyp. Tack vare senaste tidens framsteg inom förtränade språkmodeller har modellernas prestanda på problemtypen ökat kraftigt. Denna förbättring kan dock inte tillämpas överallt på grund av en brist på omfattande dataset för förträning eller tillräcklig datorkraft för mindre språk. I denna studie undersöks potentialen av fine-tuning på både en engelsk, en svensk och en flerspråkig modell för en svensk NER-uppgift. Dessa modeller tränades och deras effektivitet bedömdes genom att använda F1-score som mått på prestanda. Även med fine-tuning var den svenska modellen bättre än både den engelska och flerspråkiga modellen, med en skillnad på 3,0 respektive 9,0 procentenheter i F1-score. Skillnaden i prestandan mellan den engelska och svenska modellen minskade från 19,8 till 9,0 procentenheter efter fine-tuning. Detta indikerar att även om den svenska modellen var mest framgångsrik, kan fine-tuning av engelska och flerspråkiga modeller betydligt förbättra prestandan för svenska NER-uppgifter.
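A small sketch of the evaluation step behind the comparisons above: entity-level F1 over BIO tag sequences. The use of the seqeval library and the toy sentences are assumptions about tooling, not the thesis code.

```python
# Hedged sketch: entity-level F1 for NER, computed over BIO-tagged sequences.
from seqeval.metrics import f1_score, classification_report

gold = [
    ["B-PER", "I-PER", "O", "O", "B-LOC"],
    ["O", "B-ORG", "O"],
]
predicted = [
    ["B-PER", "I-PER", "O", "O", "O"],   # missed the location entity
    ["O", "B-ORG", "O"],
]

print(f"entity-level F1: {f1_score(gold, predicted):.3f}")
print(classification_report(gold, predicted))
```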
|
28 |
Monolingual and Cross-Lingual Survey Response Annotation. Zhao, Yahui. January 2023 (has links)
Multilingual natural language processing (NLP) is increasingly recognized for its potential in processing diverse types of text data, including text from social media, reviews, and technical reports. Multilingual language models like mBERT and XLM-RoBERTa (XLM-R) play a pivotal role in multilingual NLP. Notwithstanding their capabilities, the performance of these models largely relies on the availability of annotated training data. This thesis employs the multilingual pre-trained model XLM-R to examine its efficacy in sequence labelling of open-ended responses to questions on democracy across multilingual surveys. Traditional annotation practices have been labour-intensive and time-consuming, with limited automation attempts. Previous studies often translated multilingual data into English, bypassing the challenges and nuances of native languages. Our study explores automatic multilingual annotation at the token level for democracy survey responses in five languages: Hungarian, Italian, Polish, Russian, and Spanish. The results reveal promising F1 scores, indicating the feasibility of using multilingual models for such tasks. However, the performance of these models is closely tied to the quality and nature of the training set. This research paves the way for future experiments and model adjustments, underscoring the importance of refining training data and optimizing model techniques for enhanced classification accuracy.
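One practical step in token-level annotation with XLM-R is aligning word-level labels to subword tokens, since the model predicts one label per subword. The hedged sketch below uses the Hugging Face tokenizer for illustration; the label scheme and example sentence are assumptions.

```python
# Hedged sketch: aligning word-level labels to XLM-R subword tokens with a
# Hugging Face fast tokenizer; special tokens receive -100 so the loss ignores them.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

words = ["Las", "elecciones", "libres", "son", "fundamentales"]
word_labels = ["O", "B-ARG", "I-ARG", "O", "B-ARG"]   # toy span labels, not the thesis scheme
label2id = {"O": 0, "B-ARG": 1, "I-ARG": 2}

encoding = tokenizer(words, is_split_into_words=True, truncation=True)
aligned_label_ids = []
for word_id in encoding.word_ids():
    if word_id is None:
        aligned_label_ids.append(-100)                         # special token
    else:
        aligned_label_ids.append(label2id[word_labels[word_id]])  # repeat word label on its subwords

print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
print(aligned_label_ids)
```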
|
29 |
Task-agnostic knowledge distillation of mBERT to Swedish / Uppgiftsagnostisk kunskapsdestillation av mBERT till svenska. Kina, Added. January 2022 (has links)
Large transformer models have shown great performance in multiple natural language processing tasks. However, slow inference, strong dependency on powerful hardware, and large energy consumption limit their availability. Furthermore, the best-performing models use high-resource languages such as English, which increases the difficulty of using these models for low-resource languages. Research into compressing large transformer models has been successful, using methods such as knowledge distillation. In this thesis, an existing task-agnostic knowledge distillation method is employed, using Swedish data for the distillation of mBERT models further pre-trained on different amounts of Swedish data, in order to obtain a smaller multilingual model whose performance in Swedish is competitive with a monolingual student model baseline. It is shown that none of the models distilled from a multilingual model outperform the distilled Swedish monolingual model on Swedish named entity recognition and Swedish translated natural language understanding benchmark tasks. It is also shown that further pre-training mBERT does not significantly affect the performance of the multilingual teacher or student models on downstream tasks. The results corroborate previously published results showing that no student model outperforms its teacher. / Stora transformator-modeller har uppvisat bra prestanda i flera olika uppgifter inom naturlig bearbetning av språk. Men långsam inferensförmåga, starkt beroende av kraftfull hårdvara och stor energiförbrukning begränsar deras tillgänglighet. Dessutom använder de bäst presterande modellerna högresursspråk som engelska, vilket ökar svårigheten att använda dessa modeller för lågresursspråk. Forskning om att komprimera dessa stora transformatormodeller har varit framgångsrik, med metoder som kunskapsdestillation. I denna avhandling används en existerande uppgiftsagnostisk kunskapsdestillationsmetod genom att använda svensk data för destillation av mBERT modeller vidare förtränade på olika mängder svensk data för att få fram en mindre flerspråkig modell med prestanda på svenska konkurrerande med en enspråkig elevmodell baslinje. Det visas att ingen av modellerna destillerade från en flerspråkig modell överträffar den destillerade svenska enspråkiga modellen på svensk namngiven enhetserkännande och svensk översatta naturlig språkförståelse benchmark uppgifter. Det visas också att ytterligare förträning av mBERT påverkar inte väsentligt prestandan av de flerspråkiga lärar- eller elevmodellerna för nedströmsuppgifter. Resultaten bekräftar tidigare publicerade resultat som visar att ingen elevmodell överträffar sin lärare.
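As a simplified stand-in for the distillation objective, the sketch below shows a generic soft-target (Hinton-style) distillation loss; the task-agnostic method used in the thesis distils internal representations rather than task logits, so this is only meant to convey the general idea of a student matching a teacher.

```python
# Hedged sketch: generic soft-target knowledge distillation loss in PyTorch.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

teacher_logits = torch.randn(4, 32000)                     # arbitrary vocabulary-sized outputs
student_logits = torch.randn(4, 32000, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(float(loss))
```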
|
30 |
Predicting Linguistic Structure with Incomplete and Cross-Lingual Supervision. Täckström, Oscar. January 2013 (has links)
Contemporary approaches to natural language processing are predominantly based on statistical machine learning from large amounts of text, which has been manually annotated with the linguistic structure of interest. However, such complete supervision is currently only available for the world's major languages, in a limited number of domains and for a limited range of tasks. As an alternative, this dissertation considers methods for linguistic structure prediction that can make use of incomplete and cross-lingual supervision, with the prospect of making linguistic processing tools more widely available at a lower cost. An overarching theme of this work is the use of structured discriminative latent variable models for learning with indirect and ambiguous supervision; as instantiated, these models admit rich model features while retaining efficient learning and inference properties. The first contribution to this end is a latent-variable model for fine-grained sentiment analysis with coarse-grained indirect supervision. The second is a model for cross-lingual word-cluster induction and the application thereof to cross-lingual model transfer. The third is a method for adapting multi-source discriminative cross-lingual transfer models to target languages, by means of typologically informed selective parameter sharing. The fourth is an ambiguity-aware self- and ensemble-training algorithm, which is applied to target language adaptation and relexicalization of delexicalized cross-lingual transfer parsers. The fifth is a set of sequence-labeling models that combine constraints at the level of tokens and types, and an instantiation of these models for part-of-speech tagging with incomplete cross-lingual and crowdsourced supervision. In addition to these contributions, comprehensive overviews are provided of structured prediction with no or incomplete supervision, as well as of learning in the multilingual and cross-lingual settings. Through careful empirical evaluation, it is established that the proposed methods can be used to create substantially more accurate tools for linguistic processing, compared to both unsupervised methods and to recently proposed cross-lingual methods. The empirical support for this claim is particularly strong in the latter case; our models for syntactic dependency parsing and part-of-speech tagging achieve the hitherto best published results for a wide number of target languages, in the setting where no annotated training data is available in the target language.
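The idea of learning from incomplete or ambiguous supervision can be illustrated with a minimal sketch: each token carries a set of allowed tags rather than a single gold tag, and the loss is the negative log of the probability mass the model assigns to that set. This is an illustrative reduction, not the structured latent-variable models of the dissertation.

```python
# Hedged sketch: a per-token loss that marginalises over a set of allowed tags.
import torch
import torch.nn.functional as F

def ambiguous_nll(logits, allowed_mask):
    """logits: (tokens, tags); allowed_mask: boolean (tokens, tags)."""
    log_probs = F.log_softmax(logits, dim=-1)
    masked = log_probs.masked_fill(~allowed_mask, float("-inf"))
    log_marginal = torch.logsumexp(masked, dim=-1)   # log of total probability of allowed tags
    return -log_marginal.mean()

num_tags = 5
logits = torch.randn(3, num_tags, requires_grad=True)
allowed = torch.tensor([
    [True, False, False, False, False],   # fully supervised token
    [True, True, True, False, False],     # ambiguous: three tags permitted
    [True, True, True, True, True],       # unconstrained token
])
loss = ambiguous_nll(logits, allowed)
loss.backward()
print(float(loss))
```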
|