11

Predicting the Unpredictable – Using Language Models to Assess Literary Quality

Wu, Yaru January 2023 (has links)
People read for many purposes: to learn specific skills, to acquire foreign languages, or simply for the pure enjoyment of reading. That enjoyment can be credited to many aspects, such as the aesthetics of the language, the beauty of rhyme, and the entertainment of being surprised by what happens next; the last of these is typically a feature of fictional narratives and is the main topic of this project. In other words, “good” fiction may entertain readers better by baffling and eluding their expectations, whereas “normal” narratives may contain more clichés and ready-made sentences that are easy to predict. This project therefore examines whether “good” fiction is less predictable than “normal” fiction, where the two categories are operationalized as canonized and non-canonized works. Predictability can be reflected statistically by the probability of the next word being correctly predicted given the preceding content, which is measured with the metric of perplexity. Thanks to recent advances in deep learning, language models based on neural networks with billions of parameters can now be trained on terabytes of text to improve their prediction of unseen text. Generative pre-trained modeling and text generation are therefore combined to estimate the perplexities of canonized and non-canonized literature. Because the terabytes of text on which the advanced models were trained may already contain the books in the corpus, two series of models are designed to yield unbiased perplexity results: self-trained models and generative pre-trained Transformer-2 (GPT-2) models. Comparing these two groups of results establishes the final architecture of five models used in the subsequent experiments. During perplexity estimation, the perplexity variance can be computed at the same time; it denotes how predictability varies across fixed-length sequences within each work, and is used to examine the homogeneity of the two groups of literature. The final results from the five models indicate distinctions in both perplexity values and variances between canonized and non-canonized literature. The canonized literature shows higher perplexity values and variances in both median and mean, meaning that it is less predictable and less homogeneous than the non-canonized literature. Perplexity values and variances obviously cannot define literary quality directly, but they signal that perplexity can be an insightful metric for literary quality analysis using natural language processing techniques.
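
As an illustration of the metric this abstract builds on, below is a minimal sketch of estimating a text's perplexity with an off-the-shelf GPT-2 through the Hugging Face transformers library; the checkpoint name and truncation length are assumptions, not the thesis's setup. The perplexity variance discussed above would follow by scoring fixed-length windows of a work and taking the variance of the window scores.

```python
# Minimal sketch: perplexity of a text under a pre-trained GPT-2.
# Assumes `pip install torch transformers`.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str, max_len: int = 512) -> float:
    ids = tokenizer(text, return_tensors="pt",
                    truncation=True, max_length=max_len).input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean next-token
        # cross-entropy; perplexity is its exponential.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

print(perplexity("It was a dark and stormy night."))
```
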
12

Smart Compose for Live Chat Agent / Kundtjänstens automatiska kompletteringssystem

Zhang, Tonghua January 2021 (has links)
In the digital business environment, customer service communication has grown into a labor-intensive task. Given high labor costs, automated customer service could be a good alternative for many companies. However, communication with customers cannot easily be automated: customer service staff always need task-specific knowledge and information, which automated systems are unable to supply. Industries with frequent consumer communication therefore need a semi-automatic completion system to cut labor costs. In this thesis project, I used the GPT-2 model pre-trained by OpenAI and fine-tuned it on the MultiWOZ dataset in an unsupervised way to train a full-fledged, task-oriented language model. On the basis of this autoregressive language model, I designed and deployed an auto-completion system that predicts, in real time, the words or sentences a user may input next and provides quick completion suggestions for the subsequent dialogue. I then evaluated the performance of the language model and the practicability of the auto-completion system, and furthermore proposed a possible optimization framework to balance the system's endogenous contradictions. / I den digitala affärsmiljön har kundservicekommunikation vuxit upp till att bli en arbetsintensiv uppgift. Med tanke på höga arbetskraftskostnader kan automatisk kundservice vara ett bra alternativ för många företag. Kundtjänstpersonal behöver alltid uppgiftspecifik kunskap och information, vilket inte är möjligt för automatiska system att leverera. Därför behöver industrier med frekvent kommunikation till konsumenterna ett semiautomatiskt kompletteringssystem, för att sänka arbetskraftskostnaderna. I detta avhandlingsprojekt använde jag GPT-2-modellen, som förtränats av OpenAI, och finjusterade den på MultiWOZ-datamängden på ett oövervakat sätt för att träna en fullfjädrad och uppgiftsorienterad språkmodell. På grundval av denna autoregressiva språkmodell designade och implementerade jag ett system för automatisk komplettering som i rätt tid förutsäger ord eller meningar som användarna kan mata in i nästa ögonblick och ger snabba kompletteringsförslag för efterföljande dialog. Därefter utvärderade jag prestandan för språkmodellen och genomförbarheten för det automatiska kompletteringssystemet och föreslog dessutom en möjlig optimeringsram för att balansera systemets endogena motsägelser.
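
For illustration, a minimal sketch of the suggestion step described above: the language model samples several candidate continuations of the agent's partial reply. The off-the-shelf "gpt2" checkpoint stands in for the thesis's MultiWOZ fine-tuned model, and the decoding settings are assumptions.

```python
# Minimal sketch: propose completions for a partially typed agent reply.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # placeholder checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def suggest(prefix: str, n: int = 3, new_tokens: int = 10) -> list:
    ids = tokenizer(prefix, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model.generate(
            ids,
            max_new_tokens=new_tokens,
            do_sample=True,                      # several distinct candidates
            top_p=0.9,
            num_return_sequences=n,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Drop the prompt tokens; return only the proposed continuations.
    return [tokenizer.decode(seq[ids.shape[1]:], skip_special_tokens=True)
            for seq in out]

dialogue = "Customer: My order hasn't arrived.\nAgent: I'm sorry to hear"
for s in suggest(dialogue):
    print(repr(s))
```
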
13

Transformer-based Source Code Description Generation : An ensemble learning-based approach / Transformatorbaserad Generering av Källkodsbeskrivning : En ensemblemodell tillvägagångssätt

Antonios, Mantzaris January 2022 (has links)
Code comprehension can benefit significantly from high-level source code summaries. For most developers, understanding another developer's code, or code they themselves wrote in the past, is a time-consuming and frustrating task; yet it is necessary in software maintenance and whenever several people work on the same project. A fast, reliable and informative source code description generator can automate this procedure, which developers often avoid. The rise of Transformers has drawn attention to the task, leading to various Transformer-based models that tackle source code summarization from different perspectives. Most of these models, however, are treated as competitors, when their complementarity could prove beneficial. To this end, an ensemble learning-based approach is followed to explore the feasibility and effectiveness of combining more than one powerful Transformer-based model. The base models are PLBart and GraphCodeBERT, two models with different focuses, and the ensemble technique is stacking. The results show that such a model can improve on the performance and informativeness of the individual models. However, it requires changes to the configuration of the respective models, which might harm them, and further fine-tuning at the aggregation phase to find the most suitable combination of base-model weights and next-token probabilities for the ensemble at hand. The results also revealed the need for human evaluation, since metrics like BiLingual Evaluation Understudy (BLEU) are not always representative of the quality of the produced summary. Even though the outcome is promising, further work driven by this approach, and addressing the limitations left unresolved here, should follow towards a potential State Of The Art (SOTA) model. / Mjukvaruunderhåll samt kodförståelse är två områden som märkbart kan gynnas av källkodssammanfattning på hög nivå. För majoriteten av utvecklarna är det en tidskrävande och frustrerande uppgift att förstå en annan utvecklares kod eller kod som skrivits tidigare av dem. Detta är nödvändigt vid underhåll av programvara eller när flera personer arbetar med samma projekt. En snabb, pålitlig och informativ källkodsbeskrivningsgenerator kan automatisera denna procedur, som ofta undviks av utvecklare. Framväxten av Transformers har riktat uppmärksamheten mot dem, vilket har lett till utvecklingen av olika Transformer-baserade modeller som tar sig an uppgiften att sammanfatta källkod ur olika perspektiv. De flesta av dessa modeller behandlar dock varandra på ett konkurrerande sätt, när deras komplementaritet kunde visa sig vara mer fördelaktig. För detta ändamål följs en ensembleinlärningsbaserad strategi för att utforska genomförbarheten och effektiviteten av samarbetet mellan mer än en kraftfull Transformer-baserad modell. De använda basmodellerna är PLBart och GraphCodeBERT, två modeller med olika fokus, och ensembletekniken är stacking. Resultaten visar att en sådan modell kan förbättra prestanda och informativitet hos enskilda modeller. Det kräver dock förändringar i konfigurationen av respektive modeller som kan skada dem, och även ytterligare finjustering i aggregeringsfasen för att hitta den lämpligaste kombinationen av basmodellernas vikter och nästa-token-sannolikheter för den aktuella ensemblen. Resultaten visade också behovet av mänsklig utvärdering eftersom mätvärden som BLEU inte alltid är representativa för kvaliteten på den producerade sammanfattningen. Även om resultaten är lovande bör ytterligare arbete följa, drivet av detta tillvägagångssätt och baserat på de begränsningar som inte är lösta i detta arbete, för utvecklingen av en potentiell SOTA-modell.
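
For illustration, a minimal sketch of the aggregation idea behind the stacking ensemble: two base models' next-token distributions are mixed with a weight tuned on validation data. It assumes the distributions are over a common (or aligned) vocabulary; the actual PLBart/GraphCodeBERT setup is more involved, and the toy numbers are invented.

```python
# Minimal sketch: weighted mixture of two next-token distributions.
import numpy as np

def combine(p_a: np.ndarray, p_b: np.ndarray, w: float) -> np.ndarray:
    """Mix two next-token distributions, w in [0, 1]."""
    p = w * p_a + (1.0 - w) * p_b
    return p / p.sum()  # renormalise against rounding error

# Toy vocabulary and distributions standing in for real model outputs.
vocab = ["return", "the", "sum", "of", "two", "numbers"]
p_plbart = np.array([0.40, 0.10, 0.25, 0.10, 0.10, 0.05])
p_gcbert = np.array([0.20, 0.05, 0.45, 0.15, 0.10, 0.05])

mixed = combine(p_plbart, p_gcbert, w=0.6)  # w would be tuned on validation data
print(vocab[int(mixed.argmax())])           # "sum" here; shifts with w
```
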
14

Generating and simplifying sentences / Génération et simplification des phrases

Narayan, Shashi 07 November 2014 (has links)
Selon la représentation d'entrée, cette thèse étudie deux types de génération : la génération de texte à partir de représentations du sens et la génération de texte à partir de texte. Dans la première partie (« Génération des phrases »), nous étudions comment effectuer la réalisation de surface symbolique de manière robuste et efficace. Cette approche s'appuie sur une grammaire FB-LTAG et prend en entrée des arbres de dépendance peu profonds. La structure d'entrée est utilisée pour filtrer l'espace de recherche initial à l'aide d'un concept de filtrage local par polarité, et pour paralléliser les processus. Ensuite, nous proposons deux algorithmes de fouille d'erreurs : le premier exploite les arbres de dépendance plutôt que des données séquentielles, et le second structure la sortie de la fouille d'erreurs en un arbre afin de représenter les erreurs de façon plus pertinente. Nous montrons que nos réalisateurs, combinés à ces algorithmes de fouille d'erreurs, améliorent significativement leur couverture. Dans la seconde partie (« Simplification des phrases »), nous proposons l'utilisation de représentations sémantiques profondes (par opposition aux approches basées sur la syntaxe ou la SMT) afin d'améliorer la tâche de simplification de phrases. Nous utilisons les structures de représentation du discours pour la représentation sémantique profonde. Nous proposons alors deux méthodes de simplification de phrases : une première approche supervisée hybride qui combine la sémantique profonde à la traduction automatique, et une seconde approche non supervisée qui s'appuie sur le corpus comparable de Wikipédia. / Depending on the input representation, this dissertation investigates issues from two classes: meaning representation (MR)-to-text generation and text-to-text generation. In the first class (MR-to-text generation, "Generating Sentences"), we investigate how to make symbolic grammar-based surface realisation robust and efficient. We propose an efficient approach to surface realisation using an FB-LTAG, taking shallow dependency trees as input. Our algorithm combines techniques and ideas from the head-driven and lexicalist approaches. In addition, the input structure is used to filter the initial search space, using a concept called local polarity filtering, and to parallelise processes. To further improve robustness, we propose two error mining algorithms: the first mines dependency trees rather than sequential data, and the second structures the output of error mining into a tree to represent errors more meaningfully. We show that our realisers, together with these error mining algorithms, improve both efficiency and coverage by a wide margin. In the second class (text-to-text generation, "Simplifying Sentences"), we argue for using deep semantic representations (as opposed to syntax- or SMT-based approaches) to improve the sentence simplification task. We use Discourse Representation Structures for the deep semantic representation of the input. We propose two methods: a supervised approach (with state-of-the-art results) to hybrid simplification using deep semantics and SMT, and an unsupervised approach (with results competitive with state-of-the-art systems) to simplification using the comparable Wikipedia corpus.
15

Extractive Multi-document Summarization of News Articles

Grant, Harald January 2019 (has links)
Publicly available data grows exponentially through web services and technological advancements. Multi-document summarization (MDS) can be used to comprehend such large data streams, and it is the area investigated in this research. Multiple systems for extractive multi-document summarization are implemented using modern techniques, in the form of the pre-trained BERT language model for word embeddings and sentence classification, combined with well-proven techniques: the TextRank ranking algorithm, the Waterfall architecture, and anti-redundancy filtering. The systems are evaluated on the DUC-2002, 2006 and 2007 datasets using the ROUGE metric. The results show that the BM25 sentence representation, implemented in the TextRank model with the Waterfall architecture and an anti-redundancy technique, outperforms the other implementations, providing results competitive with other state-of-the-art systems. A cohesive model is derived from the leading system and tried in a user study with a real-world application: a real-time news detection application with users from the news domain. The study shows a clear preference for cohesive summaries in the case of extractive multi-document summarization, with the cohesive summary preferred in the majority of cases.
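
For illustration, a minimal sketch of the TextRank component: sentences become nodes in a similarity graph, and PageRank scores select the summary. Plain token overlap stands in for the thesis's BM25 sentence representation, and the Waterfall architecture and anti-redundancy filtering are omitted.

```python
# Minimal TextRank sketch for extractive summarization.
# Assumes `pip install networkx`.
import itertools
import math

import networkx as nx

def overlap(a: set, b: set) -> float:
    # Normalised token overlap, as in the original TextRank formulation;
    # +1 in the logs guards against zero denominators for 1-token sentences.
    if not (a and b):
        return 0.0
    return len(a & b) / (math.log(len(a) + 1) + math.log(len(b) + 1))

def textrank_summary(sentences: list, k: int = 2) -> list:
    tokens = [set(s.lower().split()) for s in sentences]
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        w = overlap(tokens[i], tokens[j])
        if w > 0:
            g.add_edge(i, j, weight=w)
    scores = nx.pagerank(g, weight="weight")
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]  # keep document order

docs = [
    "The storm closed roads across the north on Monday.",
    "Officials said the storm closed schools and roads.",
    "A concert was held downtown over the weekend.",
    "Roads remained closed on Tuesday after the storm.",
]
print(textrank_summary(docs))
```
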
16

Reasoning with qualitative spatial and temporal textual cases / Raisonnement qualitatif spatio-temporel à partir de cas textuels

Dufour-Lussier, Valmi 07 October 2014 (has links)
Cette thèse propose un modèle permettant la mise en œuvre d'un système de raisonnement à partir de cas capable d'adapter des procédures représentées sous forme de texte en langue naturelle, en réponse à des requêtes d'utilisateurs. Bien que les cas et les solutions soient sous forme textuelle, l'adaptation elle-même est d'abord appliquée à un réseau de contraintes temporelles exprimées à l'aide d'une algèbre qualitative, grâce à l'utilisation d'un opérateur de révision des croyances. Des méthodes de traitement automatique des langues sont utilisées pour acquérir les représentations algébriques des cas ainsi que pour régénérer le texte à partir du résultat de l'adaptation. / This thesis proposes a practical model making it possible to implement a case-based reasoning system that adapts processes represented as natural language text in response to user queries. While the cases and the solutions are in textual form, the adaptation itself is performed on networks of temporal constraints expressed with a qualitative algebra, using a belief revision operator. Natural language processing methods are used to acquire case representations and to regenerate text based on the adaptation result.
17

Fine-Tuning Pre-Trained Language Models for CEFR-Level and Keyword Conditioned Text Generation : A comparison between Google’s T5 and OpenAI’s GPT-2 / Finjustering av förtränade språkmodeller för CEFR-nivå och nyckelordsbetingad textgenerering : En jämförelse mellan Googles T5 och OpenAIs GPT-2

Roos, Quintus January 2022 (has links)
This thesis investigates the possibilities of conditionally generating English sentences based on keywords framing the content and on different vocabulary difficulty levels. It aims to contribute to the field of Conditional Text Generation (CTG), a type of Natural Language Generation (NLG) where text is created subject to a set of conditions, such as words, topics, content or perceived sentiment. Specifically, it compares the performance of two well-known model architectures, Sequence-to-Sequence (Seq2Seq) and Autoregressive (AR), applied to two different tasks, individually and in combination. The Common European Framework of Reference (CEFR) is used to assess the vocabulary level of the texts. In the absence of openly available CEFR-labelled datasets, the author has developed a new methodology with the host company to generate suitable datasets. The generated texts are evaluated for accuracy of vocabulary level and for readability using readily available formulas; the analysis combines four established readability metrics and assesses classification accuracy. Both models show a high degree of accuracy when classifying texts into different CEFR levels. However, the same models are weaker when generating sentences based on a desired CEFR level. This study contributes empirical evidence suggesting that: (1) Seq2Seq models have a higher accuracy than AR models in generating English sentences based on a desired CEFR level and keywords; (2) combining Multi-Task Learning (MTL) with instruction-tuning is an effective way to fine-tune models on text-classification tasks; and (3) it is difficult to assess the quality of computer-generated language using only readability metrics. / I den här studien undersöks möjligheterna att villkorligt generera engelska meningar på så kallat ”naturligt” språk, som baseras på nyckelord, innehåll och vokabulärnivå. Syftet är att bidra till området betingad textgenerering, en underkategori av naturlig textgenerering, vilket är en metod för att skapa text givet vissa ingångsvärden, till exempel ämne, innehåll eller uppfattning. I synnerhet jämförs prestandan hos två välkända modellarkitekturer: sekvens-till-sekvens (Seq2Seq) och autoregressiv (AR). Dessa tillämpas på två uppgifter, såväl individuellt som kombinerat. Den europeiska gemensamma referensramen (CEFR) används för att bedöma texternas vokabulärnivå. I och med avsaknaden av öppet tillgängliga CEFR-märkta dataset har författaren tillsammans med värdföretaget utvecklat en ny metod för att generera lämpliga dataset. De av modellerna genererade texterna utvärderas utifrån vokabulärnivå och läsbarhet samt hur väl de uppfyller den sökta CEFR-nivån. Båda modellerna visade en hög träffsäkerhet när de klassificerar texter i olika CEFR-nivåer. Dock uppvisade samma modeller en sämre förmåga att generera meningar utifrån en önskad CEFR-nivå. Denna studie bidrar med empiriska bevis som tyder på: (1) att Seq2Seq-modeller har högre träffsäkerhet än AR-modeller när det gäller att generera engelska meningar utifrån en önskad CEFR-nivå och nyckelord; (2) att kombinera inlärning av multipla uppgifter med instruktionsjustering är ett effektivt sätt att finjustera modeller för textklassificering; (3) att man inte kan bedöma kvaliteten av datorgenererade meningar genom att endast använda läsbarhetsmått.
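
For illustration, a minimal sketch of the conditional-generation interface: a Seq2Seq model (T5) is prompted with a target CEFR level and keywords. The prompt format is invented for illustration, and the off-the-shelf "t5-small" checkpoint has not seen the thesis's CEFR fine-tuning, so a fine-tuned checkpoint would be swapped in to get genuinely level-controlled output.

```python
# Minimal sketch: prompting a Seq2Seq model with a CEFR level and keywords.
# Assumes `pip install torch transformers sentencepiece`.
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")  # placeholder checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def generate_sentence(level: str, keywords: list) -> str:
    # Illustrative prompt format; the thesis's instruction template may differ.
    prompt = f"generate a {level}-level sentence using: {', '.join(keywords)}"
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=30, num_beams=4)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(generate_sentence("B1", ["library", "weekend"]))
```
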
18

Parazitické hlasy a protetická já: Detekce post-lyrického subjektu v dílech současné digitální literatury / Parasitic Voices and Prosthetic Selves: Detecting the Post-Lyrical Subject in the Works of Contemporary Digital Literature

Suchánek, Tomáš January 2021 (has links)
This diploma thesis explores subjectivity in the domain of so-called digital writing, that is, in texts of a largely experimental nature generated by computer algorithms (or with their assistance). To do so, the thesis briefly covers the history of digital writing, its mediatic specificities, its poetics, and various theoretical and philosophical conceptualizations. Most importantly, it undertakes an analysis of the post-lyrical subject, a concept devised by Janez Strehovec, that is common to all the cases of generative writing under focus. For its comparative analysis, the thesis deals with recent works by contemporary creators who approach algorithmic textuality from variegated perspectives, including Nick Montfort, Allison Parish, Stephanie Strickland, Li Zilles, and Jörg Piringer. Texts generated by programs are conceived of as expressing a new, parasitic and prosthetic genus of cyber-textual subjectivity that defies the traditional lyric and expands its pool "by other means," as Marjorie Perloff would say. Such a tendency results in a conceptually as well as formally complex literary corpus "infected" by, to further exploit the suggested metaphor, parasitic voices and prosthetic selves. Unlike in generic lyric, the post-lyrical subject surpasses the confines of poetry as genre; it is...
19

Medical image captioning based on Deep Architectures / Medicinsk bild textning baserad på Djupa arkitekturer

Moschovis, Georgios January 2022 (has links)
Diagnostic Captioning is described as “the automatic generation of a diagnostic text from a set of medical images of a patient collected during an examination” [59]; it can assist inexperienced doctors and radiologists in reducing clinical errors, and help experienced professionals increase their productivity. In this context, tools that help medical doctors produce higher-quality reports in less time could be of great interest to medical imaging departments, and could significantly impact deep learning research within the biomedical domain, which makes the area particularly interesting for industry practitioners and researchers alike. In this work, we attempted to develop Diagnostic Captioning systems based on novel Deep Learning approaches, to investigate to what extent Neural Networks are capable of medical image tagging and of automatically generating a diagnostic text from a set of medical images. Towards this objective, the first step is concept detection, which boils down to predicting the relevant tags for X-ray images, whereas the ultimate goal is caption generation. To this end, we further participated in the ImageCLEFmedical 2022 evaluation campaign, addressing both the concept detection and the caption prediction tasks by developing baselines based on Deep Neural Networks, including image encoders, classifiers and text generators, in order to get a quantitative measure of the proposed architectures' performance [28]. My contribution to the evaluation campaign, as part of this work and on behalf of the NeuralDynamicsLab group at KTH Royal Institute of Technology, School of Electrical Engineering and Computer Science, ranked 4th in the former task and 5th in the latter [55, 68], among 12 groups whose submissions placed within the top 10 in both tasks. / Diagnostisk textning avser automatisk generering av en diagnostisk text från en uppsättning medicinska bilder av en patient som samlats in under en undersökning, och den kan hjälpa oerfarna läkare och radiologer, minska kliniska fel eller hjälpa erfarna yrkesmän att producera diagnostiska rapporter snabbare [59]. Därför kan verktyg som skulle hjälpa läkare och radiologer att producera rapporter av högre kvalitet på kortare tid vara av stort intresse för medicinska bildbehandlingsavdelningar, såväl som leda till inverkan på forskning om djupinlärning, vilket gör den domänen särskilt intressant för personer som är involverade i den biomedicinska industrin och djupinlärningsforskare. I detta arbete var mitt huvudmål att utveckla system för diagnostisk textning, med hjälp av nya tillvägagångssätt som används inom djupinlärning, för att undersöka i vilken utsträckning automatisk generering av en diagnostisk text från en uppsättning medicinska bilder är möjlig. Mot detta mål är det första steget konceptdetektering, som går ut på att förutsäga relevanta taggar för röntgenbilder, medan slutmålet är bildtextgenerering. Jag deltog i ImageCLEF Medical 2022-utvärderingskampanjen, där jag tog itu med både konceptdetektering och bildtextförutsägelse för att få ett kvantitativt mått på prestandan för mina föreslagna arkitekturer [28]. Mitt bidrag, där jag representerade forskargruppen NeuralDynamicsLab, där jag arbetade som ledande forskningsingenjör, placerade sig på 4:e plats i den förra och 5:e i den senare uppgiften [55, 68] bland 12 grupper som ingår bland de 10 bästa bidragen i båda uppgifterna.
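
For illustration, a minimal sketch of the concept-detection step: an image encoder with a sigmoid multi-label head predicts concept tags for an X-ray, trained with a binary cross-entropy objective. The ResNet-50 backbone, tag count and decision threshold are assumptions, not the thesis's exact configuration.

```python
# Minimal sketch: multi-label concept tagging of medical images.
# Assumes `pip install torch torchvision`.
import torch
import torch.nn as nn
from torchvision.models import resnet50

NUM_CONCEPTS = 100  # placeholder size of the concept vocabulary

class ConceptTagger(nn.Module):
    def __init__(self, num_concepts: int = NUM_CONCEPTS):
        super().__init__()
        self.encoder = resnet50(weights=None)  # or pretrained weights
        self.encoder.fc = nn.Linear(self.encoder.fc.in_features, num_concepts)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.encoder(images)  # raw logits, one per concept

model = ConceptTagger()
criterion = nn.BCEWithLogitsLoss()  # standard multi-label objective

x = torch.randn(2, 3, 224, 224)   # dummy batch of images
y = torch.zeros(2, NUM_CONCEPTS)  # multi-hot gold tags
y[0, [3, 17]] = 1.0
loss = criterion(model(x), y)

# At inference, tags whose sigmoid score passes a tuned threshold are kept.
tags = (torch.sigmoid(model(x)) > 0.5).nonzero()
```
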
