
Model Selection in Summary Evaluation

A difficulty in the design of automated text summarization algorithms is objective evaluation. Viewing summarization as a tradeoff between length and information content, we introduce a technique based on a hierarchy of classifiers to rank different summarization methods through model selection. This evaluation technique allows for a broader comparison of summarization methods than traditional summary evaluation techniques. We present an empirical study of two simple, albeit widely used, summarization methods that illustrates the different uses of this automated, task-based evaluation system and confirms the results obtained with human-based evaluation methods over smaller corpora.
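The following is a minimal sketch of the general idea of task-based summary evaluation described in the abstract: each summarization method is scored by how well a downstream text classifier performs when it only sees that method's summaries at a fixed length budget. It does not reproduce the report's hierarchy-of-classifiers or model-selection machinery; the summarizers (lead-sentence and random-sentence baselines), the helper names (rank_summarizers, task_score), and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def lead_summary(text, n_sentences=3):
    """Baseline summarizer: keep the first n sentences."""
    sentences = text.split(". ")
    return ". ".join(sentences[:n_sentences])


def random_summary(text, n_sentences=3, seed=0):
    """Baseline summarizer: keep n randomly chosen sentences, order preserved."""
    sentences = text.split(". ")
    rng = random.Random(seed)
    keep = sorted(rng.sample(range(len(sentences)),
                             min(n_sentences, len(sentences))))
    return ". ".join(sentences[i] for i in keep)


def task_score(summaries, labels):
    """Accuracy of a bag-of-words classifier trained and tested on summaries only."""
    x_train, x_test, y_train, y_test = train_test_split(
        summaries, labels, test_size=0.3, random_state=0)
    vec = TfidfVectorizer()
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(x_train), y_train)
    preds = clf.predict(vec.transform(x_test))
    return accuracy_score(y_test, preds)


def rank_summarizers(docs, labels, summarizers, n_sentences=3):
    """Rank summarization methods by downstream classification accuracy
    at a matched summary-length budget."""
    scores = {name: task_score([fn(d, n_sentences) for d in docs], labels)
              for name, fn in summarizers.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


Usage, assuming docs and labels come from any labelled text corpus:

ranking = rank_summarizers(docs, labels,
                           {"lead": lead_summary, "random": random_summary})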

Identifier: oai:union.ndltd.org:MIT/oai:dspace.mit.edu:1721.1/7181
Date: 01 December 2002
Creators: Perez-Breva, Luis; Yoshimi, Osamu
Source Sets: M.I.T. Theses and Dissertation
Language: en_US
Detected Language: English
Format: 1739841 bytes, 1972183 bytes, application/postscript, application/pdf
Relation: AIM-2002-023, CBCL-222