SummBank 1.0

Item Name: SummBank 1.0
Author(s): Dragomir Radev, Simone Teufel, Horacio Saggion, Wai Lam, John Blitzer, Arda Celebi, Elliott Drabek, Danyu Liu, Hong Qi, Tim Allison
LDC Catalog No.: LDC2003T16
ISBN: 1-58563-274-0
ISLRN: 352-475-235-734-5
DOI: https://doi.org/10.35111/7v71-fh28
Release Date: December 18, 2003
Member Year(s): 2003
DCMI Type(s): Text
Data Source(s): government documents
Application(s): cross-lingual information retrieval, summarization
Language(s): Yue Chinese, English
Language ID(s): yue, eng
Online Documentation: LDC2003T16 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: Radev, Dragomir, et al. SummBank 1.0 LDC2003T16. Web Download. Philadelphia: Linguistic Data Consortium, 2003.

Introduction

SummBank 1.0 was produced by the Linguistic Data Consortium (LDC) and contains 40 news clusters in English and Chinese, 360 human-written multi-document summaries, and nearly 2 million single-document and multi-document extracts created by automatic and manual methods.

The data was created for the Summer 2001 Johns Hopkins University Workshop, which focused on text summarization in a cross-lingual information retrieval framework. The goal was to gather a corpus of original documents and summaries for use as gold standards by the document summarization community.

Data

In the summer of 2001, researchers gathered at Johns Hopkins University to study cross-lingual text summarization. LDC supplied this group with a corpus of 18,147 bilingual document pairs, covering 1997-2000, drawn from the Hong Kong News Parallel Text (LDC2000T46) corpus. These document pairs were used in single-document summarization experiments. LDC also created 40 clusters of news articles from this corpus for use in multi-document summarization experiments: annotators wrote 40 queries (“Y2K readiness”, “Flower shows”, etc.), which LDC ran through its own information retrieval engine to select candidate document sets, and human judges then chose the ten most relevant documents for each cluster.

In addition to providing the raw documents and document clusters, LDC had human annotators judge each sentence’s relevance to its cluster’s query on a scale from 0 (not relevant at all) to 10 (very relevant). Five judges worked on the project, and each sentence was scored by three of them. These scores were used both to evaluate summarizer performance and to create automatic extractive summaries. Finally, the judges were also asked to write summaries of each cluster at various compression rates.
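
As an illustration of how such judgments can drive extraction, the following Python sketch averages the judges' scores for each sentence and keeps the top-scoring fraction at a chosen compression rate. The data layout and function names here are hypothetical illustrations, not the corpus's documented format.

    def build_extract(sentences, judgments, compression=0.2):
        """Select top sentences by mean judge score, in document order.

        sentences   -- list of sentence strings, in document order
        judgments   -- per-sentence score lists, e.g. [[7, 8, 6], ...]
                       (each sentence was scored 0-10 by three judges)
        compression -- fraction of sentences to keep in the extract
        """
        mean_scores = [sum(scores) / len(scores) for scores in judgments]
        n_keep = max(1, round(len(sentences) * compression))
        # Rank sentences by score, then restore document order for readability.
        ranked = sorted(range(len(sentences)),
                        key=lambda i: mean_scores[i], reverse=True)
        keep = sorted(ranked[:n_keep])
        return [sentences[i] for i in keep]

    # Example: keep the two highest-scoring of five sentences.
    extract = build_extract(
        ["S1.", "S2.", "S3.", "S4.", "S5."],
        [[9, 8, 10], [2, 1, 3], [7, 6, 8], [0, 1, 0], [5, 4, 6]],
        compression=0.4)
    # -> ["S1.", "S3."]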

This distribution includes roughly two million text files, totaling approximately 13 GB uncompressed. English files are encoded in UTF-8; Chinese files are encoded in GB or Big-5.
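
For readers processing the files programmatically, a minimal Python sketch like the following can cope with the mixed encodings. The trial-decoding order is a heuristic assumption, and gb18030 is used here as a standard superset of the GB encodings; where per-file encoding information is available in the documentation, prefer it over guessing.

    def read_corpus_file(path):
        """Decode a text file that may be UTF-8, Big-5, or GB encoded."""
        # Heuristic order: utf-8 fails loudly on most non-UTF-8 bytes,
        # so it is tried first; gb18030 accepts almost any byte
        # sequence, so it goes last as the fallback.
        for encoding in ("utf-8", "big5", "gb18030"):
            try:
                with open(path, encoding=encoding) as f:
                    return f.read()
            except UnicodeDecodeError:
                continue
        raise ValueError("could not decode %s with any expected encoding" % path)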

MEAD, the summarizer reimplemented and upgraded during the workshop, is available in various versions from the MEAD website.

Samples

For an example of the data in this corpus, please view this sample.

Updates

Additional information, updates, and bug fixes may be available on the SummBank website.
