About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Use of Text Summarization for Supporting Event Detection

Wu, Pao-Feng 12 August 2003 (has links)
Environmental scanning, which acquires and uses information about events, trends, and changes in an organization's external environment, is an important process in the strategic management of an organization and permits the organization to adapt quickly to changes in its external environment. Event detection, which detects the onset of new events from news documents, is essential to facilitating an organization's environmental scanning activity. However, traditional feature-based event detection techniques detect events by comparing the similarity between features of news stories and incur several problems. For example, for illustration and comparison purposes, a news story may contain sentences or paragraphs that are not highly relevant to defining its event. Without removing such less relevant sentences or paragraphs before detection, the effectiveness of traditional event detection techniques may suffer. In this study, we developed a summary-based event detection (SED) technique that filters out less relevant sentences or paragraphs in a news story before performing feature-based event detection. Using a traditional feature-based event detection technique (i.e., INCR) as the benchmark, the empirical evaluation results showed that the proposed SED technique could achieve detection effectiveness (measured by miss and false alarm rates) comparable to or even better than that of the INCR technique for data corpora in which the percentage of news stories discussing old events is high.
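To make the two steps described above concrete, filtering weakly relevant sentences from a story and then comparing stories by feature similarity, the following sketch uses TF-IDF vectors and cosine similarity. It is only an illustration under assumed heuristics (the keep ratio, the centroid filter, and the novelty threshold are invented here), not the thesis's SED or INCR implementation.

```python
# Illustrative sketch only, not the thesis's SED/INCR implementation.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize_story(sentences, keep_ratio=0.6):
    """Keep the sentences most similar to the story centroid (assumed heuristic)."""
    vec = TfidfVectorizer().fit(sentences)
    X = vec.transform(sentences)
    centroid = np.asarray(X.mean(axis=0))            # dense 1 x V centroid
    scores = cosine_similarity(X, centroid).ravel()
    k = max(1, int(len(sentences) * keep_ratio))
    keep = sorted(range(len(sentences)), key=lambda i: -scores[i])[:k]
    return " ".join(sentences[i] for i in sorted(keep))

def detect_new_events(stories, threshold=0.2):
    """stories: list of lists of sentences; return indices flagged as new events."""
    summaries, new_events = [], []
    for idx, sents in enumerate(stories):
        summary = summarize_story(sents)
        if summaries:
            vec = TfidfVectorizer().fit(summaries + [summary])
            sims = cosine_similarity(vec.transform([summary]),
                                     vec.transform(summaries)).ravel()
            if sims.max() < threshold:               # dissimilar to all seen stories
                new_events.append(idx)
        else:
            new_events.append(idx)                   # first story starts a new event
        summaries.append(summary)
    return new_events
```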
22

A hybrid approach to automatic text summarization

Yuan, Li-An 18 October 2007 (has links)
Automatic text summarization can efficiently and effectively save users' time when reading text documents. The objective of automatic text summarization is to extract the essential sentences that cover almost all of the concepts of a document, so that users can comprehend the ideas the document addresses simply by reading the corresponding summary. This research focuses on developing a hybrid automatic text summarization approach, KCS, to enhance the quality of summaries. The approach consists of two major components: first, it employs the K-mixture probabilistic model to calculate term weights in a statistical sense; it then identifies the term relationships between nouns and between nouns and verbs, which yields the connective strength (CS) of nouns. With the connective strengths available, sentence scores can be calculated and ranked for extraction. We conduct three experiments to validate the proposed approach. The quality of a summary is examined by its capability to increase the accuracy of text classification, while the classifier employed, the Naïve Bayes classifier, is kept the same throughout all experiments. The results show that the K-mixture model contributes more to document classification than the traditional TFIDF weighting scheme. It is, however, still no better than CS, a more complex linguistics-based approach. More importantly, our proposed approach, KCS, performs best among all the approaches considered. This implies that KCS can extract more representative sentences from a document, and its feasibility for text summarization applications is thus justified.
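For readers unfamiliar with the K-mixture model named above, a minimal sketch of Katz's standard parameter estimates is given below (following the textbook presentation in Manning and Schütze); the actual term-weighting formula and the connective-strength computation used by KCS are not shown, and the handling of the degenerate case beta = 0 is an assumption.

```python
# Hedged sketch of Katz's K-mixture model for term occurrence, which the
# abstract cites for statistical term weighting; not the KCS implementation.
def k_mixture_params(term, documents):
    """documents: list of token lists. Returns (alpha, beta) for `term`."""
    N = len(documents)
    cf = sum(doc.count(term) for doc in documents)   # collection frequency
    df = sum(1 for doc in documents if term in doc)  # document frequency
    if df == 0:
        return 0.0, 0.0
    lam = cf / N              # expected occurrences per document
    beta = (cf - df) / df     # extra occurrences per document containing the term
    alpha = lam / beta if beta > 0 else lam          # simplification for beta == 0
    return alpha, beta

def k_mixture_prob(k, alpha, beta):
    """P(term occurs exactly k times in a document) under the K-mixture:
    P(k) = (1 - alpha) * [k == 0] + alpha / (beta + 1) * (beta / (beta + 1)) ** k"""
    p = alpha / (beta + 1) * (beta / (beta + 1)) ** k
    if k == 0:
        p += 1.0 - alpha
    return p
```

Per the abstract, KCS then combines such statistical weights with the connective strength of nouns to score and rank sentences for extraction.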
23

Crawling, Collecting, and Condensing News Comments

Gobaan, Raveendran January 2013 (has links)
Traditionally, public opinion has been measured, and policy decided, by issuing surveys and performing censuses designed to gauge what the public thinks about a certain topic. Within the past five years, social networks such as Facebook and Twitter have gained traction for collecting public opinion about current events. Academic research on Facebook data proves difficult since the platform is generally closed. Twitter, on the other hand, restricts its users' conversations, making it difficult to extract large-scale concepts from the microblogging infrastructure. News comments provide a rich source of discourse from individuals who are passionate about an issue. Due to the overhead of commenting, the population of commenters is necessarily biased towards individuals who have either strong opinions on a topic or in-depth knowledge of the given issue, and their comments are often a collection of insights derived from reading multiple articles on the topic. Unfortunately, the commenting systems employed by news companies are not implemented by a single entity, and comments are often stored and generated using AJAX, which causes traditional crawlers to ignore them. To make matters worse, they are often noisy, containing spam, poor grammar, and excessive typos. In addition, because of the anonymity of comment systems, conversations can often be derailed by malicious users or by the inherent biases of the commenters. In this thesis we discuss the design and creation of a crawler built to extract comments from domains across the Internet. For practical purposes we create a semi-automatic parser generator and describe how our system employs user feedback to predict which remote procedure calls are used to load comments. By reducing comment systems to remote procedure calls, we simplify the problem considerably, allowing us to focus on the data almost independently of its presentation and to quickly create high-fidelity parsers that extract comments from a web page. Once we have our system, we show its usefulness by attempting to extract meaningful opinions from the large collections we gather. Doing so in real time, however, foils traditional summarization systems, which are designed to handle dozens of well-formed documents. To address this problem we create a new algorithm, KLSum+, which outperforms its competitors in efficiency while generally scoring well on the ROUGE SU4 metric. The algorithm factors in background models to boost accuracy, yet runs over 50 times faster than the alternatives. Finally, using the summaries, we show that the data collected can provide useful insight into public opinion and even surface the key points of discourse.
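For context, the sketch below shows the classic KLSum greedy loop that KLSum+ builds on: repeatedly add the sentence that keeps the summary's unigram distribution closest, in KL divergence, to the distribution of the source documents. The background-model weighting and the efficiency improvements that distinguish KLSum+ are not reproduced, and the smoothing constant and length budget are assumptions.

```python
# Sketch of the baseline KLSum greedy selection loop (not the KLSum+ algorithm).
import math
from collections import Counter

def unigram_dist(tokens, vocab, smoothing=1e-3):
    counts = Counter(tokens)
    total = sum(counts.values()) + smoothing * len(vocab)
    return {w: (counts[w] + smoothing) / total for w in vocab}

def kl_divergence(p, q):
    return sum(p[w] * math.log(p[w] / q[w]) for w in p if p[w] > 0)

def klsum(sentences, max_words=100):
    """sentences: list of token lists. Greedily add the sentence that minimizes
    KL(source distribution || summary distribution)."""
    vocab = {w for s in sentences for w in s}
    source = unigram_dist([w for s in sentences for w in s], vocab)
    summary, summary_tokens = [], []
    while sum(len(s) for s in summary) < max_words:
        best, best_kl = None, float("inf")
        for s in sentences:
            if s in summary:
                continue
            cand = unigram_dist(summary_tokens + s, vocab)
            kl = kl_divergence(source, cand)
            if kl < best_kl:
                best, best_kl = s, kl
        if best is None:
            break
        summary.append(best)
        summary_tokens.extend(best)
    return summary
```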
24

Design and Evaluation of Temporal Summarization Systems

Guttikonda, Rakesh January 2014 (has links)
Temporal Summarization (TS) is a new track introduced as part of the Text REtrieval Conference (TREC) in 2013. The track aims to develop systems that can return important updates related to an event over time. In TREC 2013, the TS track specifically used disaster-related events such as earthquakes, hurricanes, and bombings. This thesis mainly focuses on building an effective TS system using a combination of Information Retrieval techniques. The developed TS system returns updates related to disaster events in a timely manner. By participating in TREC 2013 and through experiments conducted after TREC, we examine the effectiveness of techniques such as distributional similarity for term expansion, which can be employed in building TS systems. This thesis also describes the effectiveness of other techniques in our system, such as stemming, adaptive sentence selection over time, and de-duplication, by comparing it with baseline systems. The second part of the thesis examines the current methodology used for evaluating TS systems. We propose a modified evaluation method that reduces the manual effort of assessors and correlates well with the official track evaluation. We also propose a supervised-learning-based evaluation method, which correlates well with the official track evaluation of systems and could save as much as 80% of assessors' time.
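As an illustration of distributional similarity for term expansion, one technique the abstract examines, the sketch below builds window-based co-occurrence vectors from a background corpus and expands a term with its nearest neighbors under cosine similarity; the window size, corpus, and cutoff k are assumptions, and the thesis's actual expansion procedure may differ.

```python
# Hedged sketch of distributional-similarity term expansion; illustrative only.
import math
from collections import Counter, defaultdict

def cooccurrence_vectors(documents, window=5):
    """documents: list of token lists. Returns term -> Counter of context terms."""
    vectors = defaultdict(Counter)
    for doc in documents:
        for i, term in enumerate(doc):
            for ctx in doc[max(0, i - window): i + window + 1]:
                if ctx != term:
                    vectors[term][ctx] += 1
    return vectors

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[w] * b[w] for w in shared)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def expand_term(term, vectors, k=5):
    """Return the k distributionally most similar terms to `term`."""
    sims = [(other, cosine(vectors[term], vec))
            for other, vec in vectors.items() if other != term]
    return [t for t, _ in sorted(sims, key=lambda x: -x[1])[:k]]
```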
25

Topic-focused and summarized web information retrieval

Yoo, Seung Yeol, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
Since the Web keeps growing, with a rapidly increasing number of heterogeneous Web pages, Web users often suffer from two problems: P1) irrelevant information and P2) information overload. Irrelevant information refers to weak relevance between the retrieved information and a user's information need. Information overload means that the retrieved information may contain 1) redundant information (e.g., information common to two retrieved Web pages) or 2) too much information to be easily understood by a user. We consider four major causes of these two problems: first, ambiguous query terms; second, ambiguous terms in a Web page; third, a query and a Web page cannot be semantically matched because of the first and second causes; and fourth, the whole content of a Web page is too coarse a context boundary for measuring the similarity between the Web page and a query. To address the two problems P1) and P2), we consider the meanings of words in a Web page and a query to be primitive hints for understanding the related semantics of the Web page. Thus, in this dissertation, we developed three cooperative technologies: Word Sense Based Web Information Retrieval (WSBWIR), the Subjective Segment Importance Model (SSIM), and Topic Focused Web Page Summarization (TFWPS). WSBWIR allows a user to 1) describe their information needs at the sense level and 2) conceptually explore information existing within Web pages. SSIM discovers a semantic structure of a Web page; this structure respects not only the Web page author's logical presentation structure but also a user's specific topic interests in the Web page at query time. TFWPS dynamically generates extractive summaries respecting a user's topic interests. The WSBWIR, SSIM, and TFWPS technologies are implemented and evaluated through several case studies and classification and clustering tasks. Our experiments demonstrated that 1) exploring Web pages using word senses is comparably effective, and 2) the segments partitioned by SSIM and the summaries generated by TFWPS can provide more topically coherent features for classification and clustering purposes.
26

Easy to Find: Creating Query-Based Multi-Document Summaries to Enhance Web Search

Qumsiyeh, Rani Majed 15 March 2011 (has links) (PDF)
Current web search engines, such as Google, Yahoo!, and Bing, rank the set of documents S retrieved in response to a user query Q and display each document with a title and a snippet, which serves as an abstract of the corresponding document in S. Snippets, however, are not as useful as they are designed to be, i.e., they often fail to help search engine users quickly identify results of interest, if any exist, without browsing through the documents in S, since they (i) often include very similar information and (ii) do not capture the main content of the corresponding documents. Moreover, when the intended information need specified in a search query is ambiguous, it is difficult, if not impossible, for a search engine to identify precisely the set of documents that satisfy the user's intended request. Furthermore, a document title retrieved by web search engines is not always a good indicator of the content of the corresponding document, since it is not always informative. All of these design problems can be addressed by our proposed query-based web informative summarization engine, denoted Q-WISE. Q-WISE clusters the documents in S, which allows users to view segregated document collections created according to the specific topic covered in each collection, and generates a concise, comprehensive summary for each collection/cluster of documents. Q-WISE is also equipped with a query suggestion module that guides its users in formulating a keyword query, which facilitates the web search and improves the precision and recall of the search results. Experimental results show that Q-WISE is highly effective and efficient in generating a high-quality summary for each cluster of documents on a specific topic retrieved in response to a Q-WISE user's query. The empirical study also shows that Q-WISE's clustering algorithm is highly accurate, that the labels generated for the clusters are useful and often reflect the topic of the corresponding clustered documents, and that the performance of Q-WISE's query suggestion module is comparable to that of commercial web search engines.
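The sketch below illustrates the cluster-then-summarize idea the abstract describes, using k-means over TF-IDF vectors and a centroid-based sentence pick per cluster; these are generic stand-ins, not Q-WISE's actual clustering, labeling, or summarization algorithms, and the sentence splitting on periods is a simplification.

```python
# Generic cluster-then-summarize sketch; not the Q-WISE pipeline itself.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def cluster_and_summarize(documents, n_clusters=3, sentences_per_cluster=2):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(documents)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    summaries = {}
    for c in range(n_clusters):
        docs = [documents[i] for i in range(len(documents)) if labels[i] == c]
        sents = [s.strip() for d in docs for s in d.split(".") if s.strip()]
        if not sents:
            continue
        S = vec.transform(sents)
        centroid = np.asarray(S.mean(axis=0))         # mean vector of the cluster
        scores = cosine_similarity(S, centroid).ravel()
        top = sorted(range(len(sents)), key=lambda i: -scores[i])[:sentences_per_cluster]
        summaries[c] = ". ".join(sents[i] for i in sorted(top)) + "."
    return summaries
```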
27

SPORK: A Summarization Pipeline for Online Repositories of Knowledge

Lyngbaek, Steffen Slyngbae 01 June 2013 (has links) (PDF)
The Web 2.0 era has ushered in an unprecedented amount of interactivity on the Internet, resulting in a flood of user-generated content. This content is often unstructured and comes in the form of blog posts and comment discussions. Users can no longer keep up with the amount of content available, which has led developers to rely on natural language techniques to help mitigate the problem. Although many natural language processing techniques have been employed for years, automatic text summarization, in particular, has recently gained traction. This research proposes a graph-based, extractive text summarization system called SPORK (Summarization Pipeline for Online Repositories of Knowledge). The goal of SPORK is to identify the important key topics presented in multi-document texts, such as online comment threads. While most other automatic summarization systems simply focus on finding the top sentences represented in the text, SPORK separates the text into clusters and identifies the different topics and opinions presented in the text. SPORK identifies 72% of the key topics present in any discussion and up to 80% of the key topics in a well-structured discussion.
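For readers unfamiliar with graph-based extractive summarization, the sketch below shows the generic TextRank-style recipe that systems in SPORK's family start from: build a sentence similarity graph and rank sentences by PageRank centrality. SPORK's own pipeline adds comment-thread preprocessing, clustering, and topic/opinion identification that are not reproduced here.

```python
# Generic TextRank-style ranker over a sentence similarity graph; illustrative
# only, not the SPORK pipeline itself.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_sentences(sentences, top_k=3):
    X = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(X)                     # sentence-to-sentence similarity
    graph = nx.from_numpy_array(sim)               # weighted, undirected graph
    scores = nx.pagerank(graph, weight="weight")   # centrality of each sentence
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [sentences[i] for i in ranked[:top_k]]
```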
28

Improving Deposition Summarization using Enhanced Generation and Extraction of Entities and Keywords

Sumant, Aarohi Milind 01 June 2021 (has links)
In the legal domain, depositions help lawyers and paralegals record details and recall relevant information relating to a case. Depositions are conversations between a lawyer and a deponent and are generally in Question-Answer (QA) format. These documents can be lengthy, which raises the need for applying summarization methods to them. Though many automatic summarization methods are available, not all of them give good results, especially in the legal domain. This creates a need to process the QA pairs and develop methods that help summarize the deposition. For downstream tasks like summarization and insight generation, converting QA pairs to canonical, or declarative, form can be helpful. Since the transformed canonical sentences are not perfectly readable, we explore methods based on heuristics, language modeling, and deep learning to improve the quality of sentences in terms of grammaticality, sentence correctness, and relevance. Further, extracting important entities and keywords from a deposition helps rank the candidate summary sentences and assists with extractive summarization. This work investigates techniques for enhanced generation of canonical sentences and for extracting relevant entities and keywords to improve deposition summarization. / Master of Science / In the legal domain, depositions help lawyers and paralegals record details and recall relevant information relating to a case. Depositions are conversations between a lawyer and a deponent and are generally in Question-Answer format. These documents can be lengthy, which raises the need for applying summarization methods to them. Typical automatic summarization techniques perform poorly on depositions since the data format is very different from standard text documents such as news articles and blogs. To standardize the process of summary generation, we convert the Question-Answer pairs from the deposition document to their canonical, or declarative, form. We apply techniques to improve the readability of these transformed sentences. Further, we extract entities such as person names, locations, and organizations, as well as keywords, from the deposition to retrieve important sentences and help with summarization. This work describes the techniques used to correct transformed sentences and extract important entities and keywords to improve the summarization of depositions.
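As a sketch of the entity and keyword extraction step used to rank candidate summary sentences, the snippet below uses spaCy as a stand-in NLP toolkit; the model name, the entity types, and the frequency-based scoring heuristic are assumptions and do not reflect the thesis's actual pipeline.

```python
# Hedged sketch: score deposition sentences by the salient entities and keywords
# they mention; illustrative only, not the thesis's extraction method.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this spaCy model is installed

def rank_sentences_by_entities(deposition_text, top_k=5):
    """Score each sentence by how many frequent entities/keywords it mentions."""
    doc = nlp(deposition_text)
    entities = Counter(ent.text.lower() for ent in doc.ents
                       if ent.label_ in {"PERSON", "ORG", "GPE", "DATE"})
    keywords = Counter(tok.lemma_.lower() for tok in doc
                       if tok.pos_ in {"NOUN", "PROPN"} and not tok.is_stop)
    scored = []
    for sent in doc.sents:
        score = sum(entities[e.text.lower()] for e in sent.ents)
        score += sum(keywords[t.lemma_.lower()] for t in sent
                     if t.pos_ in {"NOUN", "PROPN"} and not t.is_stop)
        scored.append((score, sent.text))
    return [s for _, s in sorted(scored, key=lambda x: -x[0])[:top_k]]
```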
29

Automation of summarization evaluation methods and their application to the summarization process

Nahnsen, Thade January 2011 (has links)
Summarization is the process of creating a more compact textual representation of a document or a collection of documents. In view of the vast increase in electronically available information sources in the last decade, filters such as automatically generated summaries are becoming ever more important to facilitate the efficient acquisition and use of required information. Different methods using natural language processing (NLP) techniques are being used to this end. One of the shallowest approaches is the clustering of available documents and the representation of the resulting clusters by one of the documents; an example of this approach is the Google News website. It is also possible to augment the clustering of documents with a summarization process, which results in a more balanced representation of the information in the cluster, NewsBlaster being an example. However, while some systems are already available on the web, summarization is still considered a difficult problem in the NLP community. One of the major problems hampering the development of proficient summarization systems is the evaluation of the (true) quality of system-generated summaries. This is exemplified by the fact that the current state-of-the-art method for assessing the information content of summaries, the Pyramid evaluation scheme, is a manual procedure. In this light, this thesis has three main objectives. 1. The development of a fully automated evaluation method. The proposed scheme is rooted in the ideas underlying the Pyramid evaluation scheme and makes use of deep syntactic information and lexical semantics. Its performance improves notably over previous automated evaluation methods. 2. The development of an automatic summarization system that draws on the conceptual idea of the Pyramid evaluation scheme and the techniques developed for the proposed evaluation system. The approach features an algorithm for determining the pyramid and bases importance on the number of occurrences of the pyramid's variable-sized contributors, as opposed to the word-based methods exploited elsewhere. 3. The development of a text coherence component that can be used to obtain the best ordering of the sentences in a summary.
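For reference, the original (manual) Pyramid score that the thesis automates can be computed as below once the summary content units (SCUs) present in a peer summary have been identified; the hard part the thesis addresses, matching variable-sized contributors in the peer text using deep syntactic information and lexical semantics, is not shown here.

```python
# Simplified Pyramid score (Nenkova and Passonneau's formulation): an SCU's
# weight is the number of model summaries it appears in, and the peer summary
# is scored against the best achievable sum for the same number of SCUs.
def pyramid_score(scu_weights, observed_scus):
    """scu_weights: dict SCU id -> weight; observed_scus: SCU ids found in the peer."""
    achieved = sum(scu_weights[s] for s in observed_scus)
    best = sorted(scu_weights.values(), reverse=True)[:len(observed_scus)]
    max_achievable = sum(best)
    return achieved / max_achievable if max_achievable else 0.0
```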
30

Sumarização Automática de Atualização para a língua portuguesa / Update Summarization for the portuguese language

Nóbrega, Fernando Antônio Asevêdo 12 December 2017 (has links)
The enormous volume of textual data available on the web is an ideal scenario for countless Natural Language Processing applications, such as the task of Update Summarization (US), whose goal is the automatic generation of summaries from a collection of texts under the assumption that the reader already has some prior knowledge about the source texts. A good update summary should therefore consist of the most relevant, new, and updated information with respect to the reader's prior knowledge. This task poses several challenges, especially in the content selection and synthesis stages. Although there are numerous approaches in the literature, with different levels of theoretical and computational complexity, few of these investigations make use of deep linguistic knowledge, which can help identify more relevant and up-to-date content. In addition, summarization methods commonly employ an extractive synthesis approach, in which sentences from the source texts are selected and arranged to compose the summary without any change to their content. Such an approach can limit the informativeness of the summary, since some sentence segments may contain information that is redundant or irrelevant to the reader. Recent efforts have therefore been directed towards compressive synthesis, in which some segments of the selected sentences are removed before they are inserted into the summary. In this scenario, this doctoral research investigated the use of linguistic knowledge, such as Cross-document Structure Theory (CST), Subtopic Segmentation, and Named Entity Recognition, in distinct content selection approaches based on extractive and compressive synthesis, aiming at the production of more informative update summaries. With Portuguese as the main object of study, three new corpora were compiled: CSTNews-Update, which enables US experiments, and PCSC-Pares and G1-Pares, which contain pairs of original and compressed sentences for the development and evaluation of Sentence Compression methods. The summarization experiments were also carried out for English. The experiments showed that Subtopic Segmentation was more effective at producing informative summaries, although only in a few content selection approaches. In addition, some simplifications of the DualSum method were proposed based on the distribution of subtopics; these methods achieved very satisfactory results with lower computational complexity. Aiming at the production of compressive summaries, several Sentence Compression methods were developed using Machine Learning algorithms; the best proposed method outperformed a state-of-the-art approach based on Deep Learning algorithms. Beyond these results, it is worth noting that, prior to this work, most research on Automatic Summarization for Portuguese had addressed the generation of summaries from a single text (single-document) or from several related texts (multi-document) through extractive synthesis, largely because of the absence of resources that would allow the Automatic Summarization field to expand for this language. Thus, the contributions of this work fall into three areas: the proposed US methods based on linguistic knowledge, the Sentence Compression methods, and the resources developed for the Portuguese language.
