1

SPORK: A Summarization Pipeline for Online Repositories of Knowledge

Lyngbaek, Steffen Slyngbae, 01 June 2013
The web 2.0 era has ushered in an unprecedented amount of interactivity on the Internet, resulting in a flood of user-generated content. This content is often unstructured and comes in the form of blog posts and comment discussions. Users can no longer keep up with the amount of content available, which has led developers to rely on natural language techniques to help mitigate the problem. Although many natural language processing techniques have been employed for years, automatic text summarization in particular has recently gained traction. This research proposes a graph-based, extractive text summarization system called SPORK (Summarization Pipeline for Online Repositories of Knowledge). The goal of SPORK is to identify the key topics presented in multi-document texts, such as online comment threads. While most other automatic summarization systems simply focus on finding the top sentences represented in the text, SPORK separates the text into clusters and identifies the different topics and opinions presented in the text. In evaluation, SPORK identified 72% of the key topics present in a typical discussion and up to 80% of the key topics in a well-structured discussion.
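SPORK's full pipeline is not reproduced in the abstract, but the graph-based extractive idea it builds on can be sketched in a few lines: sentences become graph nodes, lexical-similarity scores become weighted edges, and a centrality ranking selects candidate sentences. The following is a minimal sketch assuming a simple bag-of-words similarity; all names and parameters are illustrative, not SPORK's own.

```python
# Minimal graph-based extractive summarizer in the spirit of the approach
# SPORK describes (illustrative sketch only, not the SPORK pipeline).
from collections import Counter
import math

def similarity(a, b):
    """Cosine similarity over bag-of-words counts of two sentences."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in set(ca) & set(cb))
    norm = math.sqrt(sum(v * v for v in ca.values())) * \
           math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def summarize(sentences, k=3, damping=0.85, iters=30):
    """Rank sentences by a PageRank-style centrality and return the top k."""
    n = len(sentences)
    sim = [[similarity(sentences[i], sentences[j]) if i != j else 0.0
            for j in range(n)] for i in range(n)]
    row_sum = [sum(row) for row in sim]
    scores = [1.0] * n
    for _ in range(iters):
        # Each sentence inherits score from neighbours, weighted by edge strength.
        scores = [(1 - damping) + damping * sum(
                      sim[j][i] / row_sum[j] * scores[j]
                      for j in range(n) if row_sum[j] > 0)
                  for i in range(n)]
    top = sorted(range(n), key=scores.__getitem__, reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]  # preserve original order
```

Clustering the graph first, as the abstract says SPORK does, would group sentences by topic and rank within each cluster instead of globally.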
2

Semantics-driven Abstractive Document Summarization

Alambo, Amanuel, 02 August 2022
No description available.
3

Proposition-based summarization with a coherence-driven incremental model

Fang, Yimai, January 2019
Summarization models which operate on meaning representations of documents have been neglected in the past, although they are a very promising and interesting class of methods for summarization and text understanding. In this thesis, I present one such summarizer, which uses the proposition as its meaning representation. My summarizer is an implementation of Kintsch and van Dijk's model of comprehension, which uses a tree of propositions to represent working memory. The input document is processed incrementally in iterations. In each iteration, new propositions are connected to the tree under the principle of local coherence, and then a forgetting mechanism is applied so that only a few important propositions are retained in the tree for the next iteration. A summary can be generated from the propositions which are frequently retained. Originally, this model was only worked through by hand by its inventors, using human-created propositions. In this work, I turn it into a fully automatic model using current NLP technologies. First, I create propositions by obtaining and then transforming a syntactic parse. Second, I have devised algorithms to numerically evaluate alternative ways of adding a new proposition, as well as to predict necessary changes in the tree. Third, I have compared different methods of modelling local coherence, including coreference resolution, distributional similarity, and lexical chains. In the first group of experiments, my summarizer realizes summary propositions by sentence extraction. These experiments show that my summarizer outperforms several state-of-the-art summarizers. The second group of experiments concerns abstractive generation from propositions, which is a collaborative project. I have investigated the option of compressing extracted sentences, but generation from propositions has been shown to provide better information packaging.
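The comprehension cycle described above (attach new propositions under local coherence, then forget all but a few) can be made concrete with a toy sketch. The shared-argument coherence test below is a stand-in for the richer signals the thesis compares, and all data structures and names are illustrative.

```python
# Sketch of a Kintsch-and-van-Dijk-style incremental cycle (illustrative;
# the real model uses a proposition tree and richer coherence signals).
from collections import Counter

def shared_arguments(p, q):
    """Placeholder coherence signal: number of arguments two propositions share."""
    return len(set(p["args"]) & set(q["args"]))

def run_cycles(chunks, buffer_size=2):
    """Process proposition chunks incrementally, keeping only the most
    coherent few in working memory and counting how often each survives."""
    memory, retained = [], Counter()
    for chunk in chunks:
        candidates = memory + chunk
        # Score each proposition by its coherence with the other candidates.
        scored = sorted(
            candidates,
            key=lambda p: sum(shared_arguments(p, q)
                              for q in candidates if q is not p),
            reverse=True)
        memory = scored[:buffer_size]      # forgetting mechanism
        for p in memory:
            retained[p["id"]] += 1
    # Frequently retained propositions are the summary-worthy ones.
    return retained.most_common()

chunks = [
    [{"id": 1, "args": ("mary", "book")}, {"id": 2, "args": ("book", "library")}],
    [{"id": 3, "args": ("mary", "library")}, {"id": 4, "args": ("rain",)}],
]
print(run_cycles(chunks))  # proposition 4 is incoherent and gets forgotten
```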
4

A comparative study of automatic text summarization using human evaluation and automatic measures

Wennstig, Maja, January 2023
Automatic text summarization has emerged as a promising solution to manage the vast amount of information available on the internet, enabling a wider audience to access it. Nevertheless, further development and experimentation with different approaches are still needed. This thesis explores the potential of combining extractive and abstractive approaches into a hybrid method, generating three types of summaries: extractive, abstractive, and hybrid. The news articles used in the study are from the Swedish newspaper Dagens Nyheter (DN). The quality of the summaries is assessed using various automatic measures, including ROUGE, BERTScore, and Coh-Metrix. Additionally, human evaluations are conducted to compare the different types of summaries in terms of perceived fluency, adequacy, and simplicity. The results of the human evaluation show a statistically significant difference between extractive, abstractive, and hybrid summaries with regard to fluency, adequacy, and simplicity. Specifically, there is a significant difference between abstractive and hybrid summaries in terms of fluency and simplicity, but not in adequacy. The automatic measures, however, do not show significant differences between the summary types, though they tend to give higher scores to the hybrid and abstractive summaries.
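Of the automatic measures named in the abstract, ROUGE is the most mechanical: n-gram precision and recall between a candidate and a reference. A from-scratch sketch of ROUGE-N F1 follows; real evaluations use the official packages, which add stemming and other preprocessing.

```python
# Rough ROUGE-N F1 between a candidate and a reference summary
# (a sketch of the metric's core, not the official implementation).
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=2):
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())   # clipped n-gram matches
    if not overlap:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge_n("the cat sat on the mat", "the cat lay on the mat"))  # 0.6
```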
5

Automatic movie analysis and summarisation

Gorinski, Philip John, January 2018
Automatic movie analysis is the task of applying Machine Learning methods to screenplays, movie scripts, and motion pictures to facilitate or enable various tasks throughout a movie's life-cycle. From helping with making informed decisions about a new movie script with respect to aspects such as its originality, similarity to other movies, or even commercial viability, all the way to offering consumers new and interesting ways of viewing the final movie, many stages in the life-cycle of a movie stand to benefit from Machine Learning techniques that promise to reduce human effort, time, or both. Within this field of automatic movie analysis, this thesis addresses the task of summarising the content of screenplays, enabling users at any stage to gain a broad understanding of a movie from greatly reduced data. The contributions of this thesis are four-fold: (i) We introduce ScriptBase, a new large-scale data set of original movie scripts, annotated with additional meta-information such as genre and plot tags, cast information, and log- and tag-lines. To our knowledge, ScriptBase is the largest data set of its kind, containing scripts and information for almost 1,000 Hollywood movies. (ii) We present a dynamic summarisation model for the screenplay domain, which allows for extraction of highly informative and important scenes from movie scripts. The extracted summaries leave the content of the original script largely intact and provide the user with its important parts, while greatly reducing the script-reading time. (iii) We extend our summarisation model to capture additional modalities beyond the screenplay text. The model is rendered multi-modal by introducing visual information obtained from the actual movie and by extracting scenes from the movie, allowing users to generate visual summaries of motion pictures. (iv) We devise a novel end-to-end neural network model for generating natural language screenplay overviews. This model enables the user to generate short descriptive and informative texts that capture certain aspects of a movie script, such as its genres, approximate content, or style, allowing them to gain a fast, high-level understanding of the screenplay. Multiple automatic and human evaluations were carried out to assess the performance of our models, demonstrating that they are well-suited for the tasks set out in this thesis, outperforming strong baselines. Furthermore, the ScriptBase data set has started to gain traction, and is currently used by a number of other researchers in the field to tackle various tasks relating to screenplays and their analysis.
6

Building high-quality datasets for abstractive text summarization : A filtering‐based method applied on Swedish news articles

Monsen, Julius, January 2021
With an increasing amount of information on the internet, automatic text summarization could potentially make content more readily available to a larger variety of people. Training and evaluating text summarization models requires datasets of sufficient size and quality. Today, most such datasets are in English, and for smaller languages such as Swedish it is difficult to obtain corresponding datasets with handwritten summaries. This thesis proposes methods for compiling high-quality datasets suitable for abstractive summarization from a large amount of noisy data through characterization and filtering. The data used consists of Swedish news articles and their preambles, which are here used as summaries. Different filtering techniques are applied, yielding five different datasets. Furthermore, summarization models are implemented by warm-starting an encoder-decoder model with BERT checkpoints and fine-tuning it on the different datasets. The fine-tuned models are evaluated with ROUGE metrics and BERTScore. All models achieve significantly better results when evaluated on filtered test data than on unfiltered test data. Moreover, the model trained on the most heavily filtered (and hence smallest) dataset achieves the best results on the filtered test data. The trade-off between dataset size and quality is discussed, along with other methodological implications of the data characterization, the filtering, and the model implementation, leading to suggestions for future research.
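The characterize-and-filter step can be pictured as a chain of per-pair heuristics over (article, preamble) pairs. The checks and thresholds in this sketch are invented placeholders showing the shape of such a filter, not the thesis's actual criteria.

```python
# Illustrative article/preamble filtering (thresholds and checks are
# placeholders, not the thesis's actual criteria).
def novelty(summary, article, n=2):
    """Fraction of summary n-grams absent from the article; a rough
    signal of how abstractive the pair is."""
    grams = lambda toks: {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    s, a = grams(summary.split()), grams(article.split())
    return len(s - a) / len(s) if s else 0.0

def keep_pair(article, summary, min_article_words=100,
              max_compression=0.5, min_novelty=0.1, max_novelty=0.9):
    a_len, s_len = len(article.split()), len(summary.split())
    if a_len < min_article_words:             # drop very short articles
        return False
    if s_len / a_len > max_compression:       # "summary" not much shorter
        return False
    nov = novelty(summary, article)
    return min_novelty <= nov <= max_novelty  # neither pure copy nor unrelated

# raw_pairs is assumed here: an iterable of (article, preamble) string pairs.
raw_pairs = [("a long news article " * 50, "a short handwritten preamble")]
filtered = [(a, s) for a, s in raw_pairs if keep_pair(a, s)]
```

Varying the thresholds yields progressively stricter datasets, which is one way to obtain the five differently filtered variants the abstract mentions.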
7

Evaluating Text Summarization Models on Resumes: Investigating the Quality of Generated Resume Summaries and their Suitability as Resume Introductions

Krohn, Amanda, January 2023
This thesis evaluates different abstractive text summarization models and techniques for summarizing resumes. It has two main objectives: to investigate the models' performance on resume summarization and to assess the suitability of the generated summaries as resume introductions. Although automatic abstractive text summarization has gained traction in various areas, its application in the resume domain has not yet been explored. Resumes present a unique challenge for abstractive summarization due to their diverse style, content, and length. To address these challenges, three state-of-the-art pre-trained text generation models were selected: BART, T5, and ProphetNet. Additionally, two approaches that can handle longer resumes were investigated. The first approach, named LongBART, modified the BART architecture by incorporating the Longformer's self-attention into the encoder. The second approach, named HybridBART, used an extractive-then-abstractive summarization strategy. The models were fine-tuned on a dataset of 653 resume-introduction pairs and were evaluated using automatic metrics as well as two types of human evaluation: a survey and expert interviews. None of the models demonstrated superiority across all criteria and evaluation metrics. However, the survey responses indicated that LongBART showed promising results, receiving the highest scores in three out of five criteria. ProphetNet, on the other hand, consistently received the lowest scores across all survey criteria and all automatic metrics. The expert interviews emphasized that the generated summaries cannot be considered correct summaries due to the presence of hallucinated personal attributes. However, there is potential for using the generated texts as resume introductions, provided that measures are taken to ensure the hallucinated personal attributes are sufficiently generic.
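The extractive-then-abstractive strategy behind what the thesis calls HybridBART is easy to sketch: score and keep salient sentences until the input fits the abstractive model's budget, then summarize the shortened text. The frequency-based scoring and the model checkpoint below are illustrative stand-ins, not the thesis's configuration.

```python
# Sketch of an extract-then-abstract pipeline for long inputs (scoring
# heuristic and checkpoint are illustrative, not the thesis's HybridBART).
from collections import Counter
from transformers import pipeline

def extract(text, budget_words=400):
    """Greedy extractive stage: keep the highest-scoring sentences,
    scored by summed word frequency, until the word budget is used up.
    The period-based sentence split is deliberately naive."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freq = Counter(text.lower().split())
    ranked = sorted(range(len(sentences)), reverse=True,
                    key=lambda i: sum(freq[w] for w in sentences[i].lower().split()))
    picked, used = [], 0
    for i in ranked:
        n = len(sentences[i].split())
        if used + n <= budget_words:
            picked.append(i)
            used += n
    return ". ".join(sentences[i] for i in sorted(picked)) + "."

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
resume_text = "First sentence of a long resume. Second sentence. More text."
summary = summarizer(extract(resume_text), max_length=80, min_length=20)
print(summary[0]["summary_text"])
```

The appeal of this design is that the abstractive model never sees more input than it was trained for, at the cost of discarding whatever the extractive stage mis-ranks.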
8

Training Neural Models for Abstractive Text Summarization

Kryściński, Wojciech, January 2018
Abstractive text summarization aims to condense long textual documents into a short, human-readable form while preserving the most important information from the source document. A common approach to training summarization models is maximum likelihood estimation with the teacher forcing strategy. Despite its popularity, this method has been shown to yield models with suboptimal performance at inference time. This work examines how using alternative, task-specific training signals affects the performance of summarization models. Two novel training signals are proposed and evaluated. The first is a novelty metric, measuring the overlap between n-grams in the summary and the summarized article. The second uses a discriminator model to distinguish human-written summaries from generated ones on a word level. Empirical results show that using these metrics as rewards for policy-gradient training yields significant performance gains as measured by ROUGE scores, novelty scores, and human evaluation.
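The policy-gradient training mentioned in the abstract has a compact core: sample a summary, score it with the task-specific reward (novelty or the discriminator), subtract a baseline, and weight the log-likelihood of the sampled tokens accordingly. This sketch assumes a self-critical baseline; the interface names are illustrative, not the thesis's code.

```python
# Sketch of a REINFORCE-style update with a summary-level reward
# (interface names are illustrative).
import torch

def policy_gradient_loss(sample_log_probs: torch.Tensor,
                         reward: float, baseline: float) -> torch.Tensor:
    """sample_log_probs: per-token log-probabilities of a *sampled*
    summary, shape [T]. reward/baseline: scalars, e.g. the novelty score
    of the sampled summary vs. that of the greedily decoded one."""
    advantage = reward - baseline
    # Positive advantage reinforces the sampled tokens; negative suppresses them.
    return -advantage * sample_log_probs.sum()

# Inside a training step (decoding and reward computation omitted):
#   loss = policy_gradient_loss(sampled_lp,
#                               reward=novelty(sampled, article),
#                               baseline=novelty(greedy, article))
#   loss.backward(); optimizer.step()
```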
9

Summarization of Health Science Papers in Portuguese

Dayson Nywton C. R. do Nascimento, 30 October 2023
In this work, we present a study on the fine-tuning of a pre-trained Large Language Model for abstractive summarization of long texts in Portuguese. To do so, we built a corpus gathering a collection of 7,450 public Health Sciences papers in Portuguese. We fine-tuned a pre-trained BERT model for Brazilian Portuguese (BERTimbau) on this corpus. Under similar conditions, we also trained a second model, based on Long Short-Term Memory (LSTM) networks, from scratch for comparison. Our evaluation showed that the fine-tuned model achieved higher ROUGE scores, outperforming the LSTM-based model by 30 F1 points. The fine-tuned model also stood out in a qualitative evaluation performed by human assessors, to the point that its summaries were perceived as possibly human-written, within a collection of documents specific to the Health Sciences domain.
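Neither this entry nor entry 6 spells out its training code, but warm-starting a BERT2BERT encoder-decoder from a public checkpoint, as entry 6 describes and as is plausible for the BERTimbau setup here, typically looks like the sketch below with the Hugging Face transformers library. The checkpoint id is the public BERTimbau release; everything else is an illustrative default, and real use requires fine-tuning on article/summary pairs first.

```python
# Warm-starting an encoder-decoder from BERT checkpoints (illustrative
# setup; assumes the public BERTimbau checkpoint id).
from transformers import BertTokenizerFast, EncoderDecoderModel

ckpt = "neuralmind/bert-base-portuguese-cased"   # BERTimbau base
tokenizer = BertTokenizerFast.from_pretrained(ckpt)
model = EncoderDecoderModel.from_encoder_decoder_pretrained(ckpt, ckpt)

# A BERT2BERT model needs these set before it can generate:
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# ... fine-tune on (article, summary) pairs here before generating ...

article = "Texto do artigo cientifico a ser resumido."  # placeholder input
inputs = tokenizer(article, return_tensors="pt",
                   truncation=True, max_length=512)
ids = model.generate(inputs.input_ids, max_length=128, num_beams=4)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```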
10

ResQu: A Framework for Automatic Evaluation of Knowledge-Driven Automatic Summarization

Jaykumar, Nishita, 01 June 2016
No description available.
