1 |
Parafrasidentifiering med maskinklassificerad data : utvärdering av olika metoder / Paraphrase identification with computer classified paraphrases : An evaluation of different methods. Johansson, Oskar. January 2020 (has links)
This thesis investigates how the language model BERT and a MaLSTM architecture perform at identifying paraphrases from the 'Microsoft Paraphrase Research Corpus' (MPRC) when trained on automatically identified paraphrases from the 'Paraphrase Database' (PPDB). The two methods are compared against each other to determine which performs best, and the approach of training on machine-classified data for use on human-classified data is evaluated against other classification work on the same dataset. The sentence pairs used to train the models are drawn from the highest-ranked paraphrases in PPDB, together with non-paraphrases produced from the same dataset by a generation method. The results show that BERT is capable of identifying some paraphrases from MPRC, whereas the MaLSTM architecture failed to do so, despite being able to distinguish paraphrases from non-paraphrases during training. Both BERT and MaLSTM performed worse at identifying paraphrases from MPRC than models such as StructBERT, which has been trained and evaluated on the same dataset. Reasons why MaLSTM fails at the task are discussed; chief among them is that the sentences in the non-paraphrase training pairs differ too much from each other compared to the pairs in MPRC. Finally, the importance of further research on using machine-derived paraphrases in paraphrase-related research is discussed.
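The MaLSTM architecture mentioned above scores a sentence pair as the exponent of the negative Manhattan distance between the two sentence encodings. A minimal sketch of that similarity function, with toy vectors standing in for real LSTM hidden states:

```python
import math

def malstm_similarity(h1, h2):
    """Manhattan-LSTM similarity: exp(-L1 distance), in (0, 1]."""
    l1 = sum(abs(a - b) for a, b in zip(h1, h2))
    return math.exp(-l1)

# Toy hidden states standing in for real LSTM encodings.
paraphrase = malstm_similarity([0.2, 0.5, 0.1], [0.2, 0.4, 0.1])
unrelated = malstm_similarity([0.2, 0.5, 0.1], [0.9, -0.7, 0.8])
print(round(paraphrase, 3))    # close to 1 for similar encodings
print(unrelated < paraphrase)  # True
```

Identical encodings score exactly 1, and the score decays smoothly toward 0 as the encodings diverge, which is what lets the network be trained with a plain regression or ranking loss on paraphrase labels.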
|
2 |
Generative Language Models for Automated Programming Feedback. Hedberg Segeholm, Lea; Gustafsson, Erik. January 2023 (has links)
In recent years, Generative Language Models have exploded into the mainstream with household names like BERT and ChatGPT, showing that text generation has the potential to solve a variety of tasks. As the number of students enrolled in programming classes has increased significantly, providing adequate feedback for everyone has become a pressing logistical issue. In this work, we evaluate the ability of near state-of-the-art Generative Language Models to provide such feedback on an automated basis. Our results show that the latest publicly available model, GPT-3.5, has a significant aptitude for finding errors in code, while the older GPT-3 is noticeably more uneven in its analysis. It is our hope that future, potentially fine-tuned models could help fill the role of providing early feedback for beginners, thus significantly alleviating the pressure put upon instructors.
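Automated feedback of this kind largely hinges on how the student's submission and task are presented to the model. A hypothetical sketch of prompt assembly (the function name and wording are illustrative assumptions, not the thesis's actual prompts or model calls):

```python
def build_feedback_prompt(source_code: str, task_description: str) -> str:
    """Assemble a feedback prompt for a generative model.

    Illustrative only: real systems would add rubric details, few-shot
    examples, and output-format instructions.
    """
    return (
        "You are a programming tutor. A student submitted the code below "
        f"for this task: {task_description}\n\n"
        f"```\n{source_code}\n```\n\n"
        "List any errors and give brief, constructive feedback."
    )

prompt = build_feedback_prompt("print(1 +)", "print the sum of 1 and 2")
print("programming tutor" in prompt)  # True
```

The assembled string would then be sent to the model's completion endpoint; the grading of the returned feedback is the part the thesis evaluates.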
|
3 |
Context matters : Classifying Swedish texts using BERT's deep bidirectional word embeddings. Holmer, Daniel. January 2020 (has links)
When classifying texts using a linear classifier, the texts are commonly represented as feature vectors. Previous methods of representing features as vectors have been unable to capture the context of individual words in the texts, in theory leading to a poor representation of natural language. Bidirectional Encoder Representations from Transformers (BERT) uses a multi-headed self-attention mechanism to create deep bidirectional feature representations, able to model the whole context of all words in a sequence. A BERT model uses a transfer learning approach: it is pre-trained on a large amount of data and can be further fine-tuned for several downstream tasks. This thesis uses one multilingual and two dedicated Swedish BERT models for the task of classifying Swedish texts as either easy-to-read or of standard complexity in their respective domains. The performance of the different models on the text classification task is then compared both with feature representation methods used in earlier studies and across the BERT models themselves. The results show that all models performed better on the classification task than the previous methods of feature representation. Furthermore, the dedicated Swedish models outperform the multilingual model, with the Swedish model pre-trained on more diverse data outperforming the other.
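For contrast with BERT's contextual embeddings, the earlier feature-representation methods referred to above assign each word a fixed feature position regardless of context. A minimal bag-of-words sketch (the toy Swedish vocabulary is illustrative, not the thesis's feature set):

```python
from collections import Counter

def bow_vector(text, vocabulary):
    """Context-free bag-of-words features: one count per vocabulary word,
    so a word gets the same feature no matter what surrounds it."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocabulary]

vocab = ["boken", "är", "lätt", "att", "läsa"]
print(bow_vector("Boken är lätt att läsa", vocab))  # [1, 1, 1, 1, 1]
```

Two sentences using the same words in different senses map to identical vectors here, which is precisely the limitation BERT's self-attention addresses.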
|
4 |
Monolingual and Cross-Lingual Survey Response Annotation. Zhao, Yahui. January 2023 (has links)
Multilingual natural language processing (NLP) is increasingly recognized for its potential in processing diverse text-type data, including those from social media, reviews, and technical reports. Multilingual language models like mBERT and XLM-RoBERTa (XLM-R) play a pivotal role in multilingual NLP. Notwithstanding their capabilities, the performance of these models largely relies on the availability of annotated training data. This thesis employs the multilingual pre-trained model XLM-R to examine its efficacy at sequence labelling of open-ended survey responses on democracy across multilingual surveys. Traditional annotation practices have been labour-intensive and time-consuming, with limited attempts at automation, and previous studies often translated multilingual data into English, bypassing the challenges and nuances of the native languages. Our study explores automatic multilingual annotation at the token level for democracy survey responses in five languages: Hungarian, Italian, Polish, Russian, and Spanish. The results reveal promising F1 scores, indicating the feasibility of using multilingual models for such tasks. However, the performance of these models is closely tied to the quality and nature of the training set. This research paves the way for future experiments and model adjustments, underscoring the importance of refining training data and optimizing model techniques for better classification accuracy.
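Token-level annotation quality of the kind reported above is commonly summarized with F1. A small sketch of micro-averaged token-level F1 over two parallel label sequences (the BIO-style tags are illustrative, not the study's actual tag set):

```python
def token_f1(gold, pred):
    """Micro-averaged token-level F1, treating "O" as the null label."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p and g != "O")
    fp = sum(1 for g, p in zip(gold, pred) if p != "O" and g != p)
    fn = sum(1 for g, p in zip(gold, pred) if g != "O" and g != p)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

gold = ["O", "B-ARG", "I-ARG", "O"]
pred = ["O", "B-ARG", "O",     "O"]
print(round(token_f1(gold, pred), 2))  # one of two gold tokens recovered
```

Span-level (exact-match) F1 is stricter than this token-level variant; published sequence-labelling results usually state which of the two they report.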
|
5 |
Text Content Features for Hybrid Recommendations : Pre-trained Language Models for Better Recommendations. Lazarova, Mariya. January 2021 (has links)
Nowadays, with the ever-growing availability of options in many areas of our lives, it is crucial to have good ways to navigate your choices. This is why the role of recommendation engines is becoming more important. Recommenders are often based on user-item interaction. In many areas like news and podcasts, however, by the time there is enough interaction data for an item, the item has already become irrelevant. This is why incorporating content features is desirable, as the content does not depend on the popularity or novelty of an item. Very often there is text describing an item, so text features are good candidates for features within recommender systems. Within Natural Language Processing (NLP), pre-trained language models based on the Transformer architecture have brought a revolution in recent years, achieving state-of-the-art performance on many language tasks. It is therefore natural to explore how such models can play a role within recommendation systems. The scope of this work is the intersection between NLP and recommendation systems: we investigate the effects of adding BERT-based encodings of the titles and descriptions of movies and books to a recommender system. The results show that even off-the-shelf BERT models carry a considerable amount of information on movie and book similarity, and that BERT-based representations can be used in a recommender system for user recommendation to combine the best of collaborative and content representations. This thesis shows that adding deep pre-trained language model representations can improve a recommender system's ability to predict good items for users by up to 0.43 AUC-ROC for a shallow model and 0.017 AUC-ROC for a deeper model. It also shows that SBERT can be fine-tuned to encode item similarity with improvements of up to 0.03 nDCG and 0.05 nDCG@10.
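The nDCG and nDCG@10 improvements reported above are computed from discounted cumulative gain. A compact sketch of the metric (toy binary relevance lists, not the thesis's data):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: relevance discounted by log2 of rank."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg_at_k(relevances, k=10):
    """nDCG@k: DCG of the ranking divided by DCG of the ideal ordering."""
    ideal_dcg = dcg(sorted(relevances, reverse=True)[:k])
    return dcg(relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

# A ranking with a relevant item placed too low scores below 1.0.
print(ndcg_at_k([1, 0, 0, 1, 0]) < 1.0)   # True
print(ndcg_at_k([1, 1, 0, 0, 0]) == 1.0)  # True
```

Because the denominator is the ideal ordering of the same relevance labels, nDCG is comparable across users with different numbers of relevant items, which makes small absolute gains like 0.03 meaningful.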
|
6 |
Fine-Tuning Pre-Trained Language Models for CEFR-Level and Keyword Conditioned Text Generation : A comparison between Google’s T5 and OpenAI’s GPT-2 / Finjustering av förtränade språkmodeller för CEFR-nivå och nyckelordsbetingad textgenerering : En jämförelse mellan Googles T5 och OpenAIs GPT-2. Roos, Quintus. January 2022 (has links)
This thesis investigates the possibilities of conditionally generating English sentences based on keywords framing the content and on different vocabulary difficulty levels. It aims to contribute to the field of Conditional Text Generation (CTG), a type of Natural Language Generation (NLG) in which text is created subject to a set of conditions, such as words, topics, content, or perceived sentiment. Specifically, it compares the performance of two well-known model architectures, Sequence-to-Sequence (Seq2Seq) and Autoregressive (AR), applied to two tasks, individually and combined. The Common European Framework of Reference (CEFR) is used to assess the vocabulary level of the texts. In the absence of openly available CEFR-labelled datasets, the author developed a new methodology with the host company to generate suitable datasets. The generated texts are evaluated on the accuracy of their vocabulary levels and on readability using readily available formulas; the analysis combines four established readability metrics and assesses classification accuracy. Both models show a high degree of accuracy when classifying texts into CEFR levels, but the same models are weaker when generating sentences for a desired CEFR level. This study contributes empirical evidence suggesting that: (1) Seq2Seq models are more accurate than AR models at generating English sentences based on a desired CEFR level and keywords; (2) combining Multi-Task Learning (MTL) with instruction-tuning is an effective way to fine-tune models on text-classification tasks; and (3) it is difficult to assess the quality of computer-generated language using only readability metrics.
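The readability formulas referred to above combine simple surface counts. As one illustrative example (not necessarily among the four metrics the thesis combines), the Automated Readability Index estimates a grade level from characters, words, and sentences:

```python
def automated_readability_index(text):
    """ARI: 4.71 * (chars/words) + 0.5 * (words/sentences) - 21.43.
    A crude surface measure; it knows nothing about meaning or vocabulary."""
    sentences = max(1, sum(text.count(c) for c in ".!?"))
    words = text.split()
    chars = sum(len(w.strip(".!?,;:")) for w in words)
    n_words = max(1, len(words))
    return 4.71 * (chars / n_words) + 0.5 * (n_words / sentences) - 21.43

score_easy = automated_readability_index("The cat sat. The dog ran.")
score_hard = automated_readability_index(
    "Comprehensive multidimensional evaluations necessitate "
    "sophisticated instrumentation.")
print(score_easy < score_hard)  # True
```

Because such formulas respond only to word and sentence length, a grammatically broken but short generated sentence can still score as "readable", which is the limitation behind finding (3).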
|
7 |
Dynamic Network Modeling from Temporal Motifs and Attributed Node Activity. Giselle Zeno (16675878). 26 July 2023 (has links)
<p>The most important networks from different domains—such as Computing, Organization, Economic, Social, Academic, and Biology—are networks that change over time. For example, in an organization there are email and collaboration networks (e.g., different people or teams working on a document). Apart from the connectivity of the networks changing over time, they can contain attributes such as the topic of an email or message, contents of a document, or the interests of a person in an academic citation or a social network. Analyzing these dynamic networks can be critical in decision-making processes. For instance, in an organization, getting insight into how people from different teams collaborate, provides important information that can be used to optimize workflows.</p>
<p><br></p>
<p>Network generative models provide a way to study and analyze networks. For example, benchmarking model performance and generalization in tasks like node classification can be done by evaluating models on synthetic networks generated with varying structure and attribute correlation. In this work, we begin by presenting our systematic study of the impact that graph structure and attribute auto-correlation have on the task of node classification using collective inference. This is the first time such an extensive study has been done. We take advantage of a recently developed method that samples attributed networks—although static—with varying network structure jointly with correlated attributes. We find that the graph connectivity that contributes to the network auto-correlation (i.e., the local relationships of nodes) and density have the highest impact on the performance of collective inference methods.</p>
<p><br></p>
<p>Most of the literature to date has focused on static representations of networks, partially due to the difficulty of finding readily-available datasets of dynamic networks. Dynamic network generative models can bridge this gap by generating synthetic graphs similar to observed real-world networks. Given that motifs have been established as building blocks for the structure of real-world networks, modeling them can help to generate the graph structure seen and capture correlations in node connections and activity. Therefore, we continue with a study of motif evolution in <em>dynamic</em> temporal graphs. Our key insight is that motifs rarely change configurations in fast-changing dynamic networks (e.g. wedges into triangles, and vice-versa), but rather keep reappearing at different times while keeping the same configuration. This finding motivates the generative process of our proposed models, using temporal motifs as building blocks, that generates dynamic graphs with links that appear and disappear over time.</p>
<p><br></p>
<p>Our first proposed model generates dynamic networks based on motif-activity and the roles that nodes play in a motif. For example, a wedge is sampled based on the likelihood of one node having the role of hub with the two other nodes being the spokes. Our model learns all parameters from observed data, with the goal of producing synthetic graphs with similar graph structure and node behavior. We find that using motifs and node roles helps our model generate the more complex structures and the temporal node behavior seen in real-world dynamic networks.</p>
<p><br></p>
<p>After observing that using motif node-roles helps to capture the changing local structure and behavior of nodes, we extend our work to also consider the attributes generated by nodes’ activities. We propose a second generative model for attributed dynamic networks that (i) captures network structure dynamics through temporal motifs, and (ii) extends the structural roles of nodes in motifs to roles that generate content embeddings. Our new proposed model is the first to generate synthetic dynamic networks and sample content embeddings based on motif node roles. To the best of our knowledge, it is the only attributed dynamic network model that can generate <em>new</em> content embeddings—not observed in the input graph, but still similar to that of the input graph. Our results show that modeling the network attributes with higher-order structures (e.g., motifs) improves the quality of the networks generated.</p>
<p><br></p>
<p>The generative models proposed address the difficulty of finding readily-available datasets of dynamic networks—attributed or not. This work will also allow others to: (i) generate networks that they can share without divulging individual’s private data, (ii) benchmark model performance, and (iii) explore model generalization on a broader range of conditions, among other uses. Finally, the evaluation measures proposed will elucidate models, allowing fellow researchers to push forward in these domains.</p>
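<p>The wedge and triangle motifs discussed above can be counted on a single static snapshot with a few lines of code; the thesis's temporal-motif machinery is of course richer. A toy sketch:</p>

```python
from itertools import combinations

def count_motifs(edges):
    """Count wedges (open two-paths) and triangles in an undirected graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    wedges = triangles = 0
    for center, nbrs in adj.items():
        for a, b in combinations(sorted(nbrs), 2):
            if a in adj[b]:
                triangles += 1  # seen once per corner, so 3x per triangle
            else:
                wedges += 1
    return wedges, triangles // 3

# A triangle (1-2-3) with a pendant edge to node 4.
print(count_motifs([(1, 2), (2, 3), (1, 3), (3, 4)]))  # (2, 1)
```

<p>A temporal variant would additionally record when each motif's edges appear, which is the information the proposed models sample from.</p>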
|
8 |
Aligning language models to code : exploring efficient, temporal, and preference alignment for code generation. Weyssow, Martin. 09 1900 (has links)
Pre-trained and large language models (PLMs, LLMs) have had a transformative impact on the artificial intelligence (AI) for software engineering (SE) research field.
Through large-scale pre-training on terabytes of natural and programming language data, these models excel in generative coding tasks such as program repair and code generation.
Existing approaches to aligning a model's behaviour with specific tasks rely on methods such as prompting or fine-tuning to improve its effectiveness.
Nevertheless, it remains unclear how to align code PLMs and LLMs to more complex scenarios that extend beyond task effectiveness.
We focus on model alignment in three overlooked scenarios for code generation, each addressing a specific objective: optimizing fine-tuning costs, aligning models with new data while retaining previous knowledge, and aligning with user coding preferences or non-functional requirements.
We explore these scenarios in three articles, which constitute the main contributions of this thesis.
In the first article, we conduct an empirical study on parameter-efficient fine-tuning techniques (PEFTs) for code LLMs in resource-constrained settings.
Our study reveals the superiority of PEFTs over few-shot learning, showing that PEFTs like LoRA and QLoRA allow fine-tuning LLMs with up to 33 billion parameters on a single 24GB GPU without compromising task effectiveness.
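LoRA, mentioned above, freezes the pre-trained weight matrix and trains only a low-rank update, so the effective weight is W + (alpha / r) * B @ A. A pure-Python sketch with toy 2x2 matrices (dimensions and values are illustrative):

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_effective_weight(W, A, B, alpha, r):
    """LoRA: frozen weight W plus the scaled low-rank update (alpha/r) * B @ A.
    Only A (r x d_in) and B (d_out x r) are trained, so trainable parameters
    scale with r rather than with d_out * d_in."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 base weight
A = [[0.5, 0.5]]               # r=1, d_in=2
B = [[2.0], [0.0]]             # d_out=2, r=1
print(lora_effective_weight(W, A, B, alpha=1.0, r=1))
# [[2.0, 1.0], [0.0, 1.0]]
```

QLoRA applies the same idea on top of a quantized base model, which is what makes fine-tuning a 33B-parameter model fit in 24GB of GPU memory.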
In the second article, we examine the behaviour of code PLMs in a continual fine-tuning setting, where the model acquires new knowledge from sequential domain-specific datasets.
Each dataset introduces new data about third-party libraries not seen during pre-training or previous fine-tuning.
We demonstrate that sequential fine-tuning leads to catastrophic forgetting and implement replay- and regularization-based continual learning approaches, showcasing their superiority in balancing task effectiveness and knowledge retention.
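A replay-based approach of the kind evaluated here mixes stored examples from earlier datasets into each new fine-tuning set. A minimal sketch (the sampling scheme and replay fraction are illustrative assumptions, not the thesis's exact setup):

```python
import random

def replay_mixture(new_data, buffer, replay_fraction=0.2, seed=0):
    """Blend a sample of past examples into the current fine-tuning set,
    a simple replay strategy against catastrophic forgetting."""
    rng = random.Random(seed)
    k = min(len(buffer), int(replay_fraction * len(new_data)))
    return list(new_data) + rng.sample(buffer, k)

old_tasks = [f"lib_a_example_{i}" for i in range(50)]   # earlier library
new_tasks = [f"lib_b_example_{i}" for i in range(100)]  # new library
mixed = replay_mixture(new_tasks, old_tasks)
print(len(mixed))  # 120 = 100 new + 20 replayed
```

Regularization-based alternatives instead penalize drift of parameters that were important for earlier datasets, trading storage for extra loss terms.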
In our third article, we introduce CodeUltraFeedback and CODAL-Bench, a novel dataset and benchmark for aligning code LLMs to user coding preferences or non-functional requirements.
Our experiments reveal that tuning LLMs with reinforcement learning techniques like direct preference optimization (DPO) on CodeUltraFeedback yields LLMs that are better aligned with coding preferences and show substantial improvements in the functional correctness of the generated code.
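The DPO objective referred to above optimizes the policy directly on preference pairs: it rewards the policy for preferring the chosen response more strongly than a frozen reference model does. A per-pair sketch of the loss (toy log-probabilities):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair: -log sigmoid(beta * margin), where
    the margin compares policy vs. reference log-probabilities of the
    chosen and rejected responses."""
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the policy upweights the chosen response
# relative to the reference model.
weak = dpo_loss(-10.0, -10.0, -10.0, -10.0)   # no preference learned yet
strong = dpo_loss(-8.0, -12.0, -10.0, -10.0)  # chosen response upweighted
print(strong < weak)  # True
```

Unlike PPO-style RLHF, this needs no separate reward model or sampling loop, which is part of DPO's appeal for preference-tuning code LLMs.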
|
9 |
Representation Learning for Biomedical Text Mining. Sänger, Mario. 10 January 2025
With the rapid growth of biomedical literature, it is becoming increasingly difficult to obtain comprehensive information about particular biomedical entities and relations by reading alone. Text mining approaches seek to facilitate processing these vast amounts of text using machine learning, which makes the effective and efficient encoding of all relevant information about specific entities a central challenge. In this thesis, we contribute to this research by developing machine learning methods for learning entity and text representations based on large-scale publication repositories and diverse information from in-domain knowledge bases. First, we propose two novel relation extraction approaches that use representation learning techniques to create comprehensive models of entities or entity pairs. These models learn low-dimensional embeddings by considering all publications from PubMed that mention a specific entity or pair of entities. We use these embeddings as input to a neural network to classify relations globally, i.e., predictions are based on the entire corpus, not on single sentences or articles as in prior work. In our second contribution, we investigate the impact of multi-modal entity information on biomedical link prediction using knowledge graph embedding methods (KGEMs). Our study enhances existing KGEMs by augmenting biomedical knowledge graphs with multi-modal entity information from in-domain databases, and we propose a general framework for integrating this information into the KGEM entity representation learning process. In our third contribution, we augment pre-trained language models (PLMs) with additional context information to identify interactions described in scientific texts. We perform an extensive benchmark that assesses the performance of such models across a wide range of biomedical relation scenarios, providing a comprehensive, but so far missing, evaluation of knowledge-augmented PLM-based extraction models.
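Knowledge graph embedding methods of the kind extended in the second contribution score candidate links geometrically. A sketch of the classic TransE score (toy 3-d vectors stand in for learned biomedical entity and relation embeddings; the thesis's multi-modal models are more elaborate):

```python
def transe_score(head, relation, tail):
    """TransE plausibility: negative L2 norm of h + r - t; higher is better.
    A true triple should satisfy h + r ≈ t in embedding space."""
    diff = [h + r - t for h, r, t in zip(head, relation, tail)]
    return -sum(d * d for d in diff) ** 0.5

drug = [0.1, 0.2, 0.0]
treats = [0.3, 0.0, 0.1]
disease = [0.4, 0.2, 0.1]        # close to drug + treats
unrelated = [0.9, -0.5, 0.7]     # far from drug + treats

print(transe_score(drug, treats, disease)
      > transe_score(drug, treats, unrelated))  # True
```

Link prediction then ranks all candidate tails by this score; multi-modal extensions additionally feed entity descriptions or other attributes into how the embeddings are learned.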
|