Chinese Gigaword

Item Name: Chinese Gigaword
Author(s): David Graff, Ke Chen
LDC Catalog No.: LDC2003T09
ISBN: 1-58563-230-9
ISLRN: 251-875-847-656-5
DOI: https://doi.org/10.35111/n069-0642
Release Date: May 22, 2003
Member Year(s): 2003
DCMI Type(s): Text
Data Source(s): newswire
Project(s): TIDES, GALE, EARS
Application(s): natural language processing, language modeling, information retrieval
Language(s): Mandarin Chinese
Language ID(s): cmn
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC2003T09 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: Graff, David, and Ke Chen. Chinese Gigaword LDC2003T09. Web Download. Philadelphia: Linguistic Data Consortium, 2003.

Introduction

Chinese Gigaword was produced by the Linguistic Data Consortium (LDC) and is released under catalog number LDC2003T09 and ISBN 1-58563-230-9. It is a comprehensive archive of newswire text data acquired by the LDC from Chinese news sources over several years.

Two distinct international sources of Chinese newswire are represented here:

Central News Agency of Taiwan (cna)
Xinhua News Agency of Beijing (xin)

Some of the Xinhua content in this collection has been published previously by the LDC in other, older corpora, particularly Mandarin Chinese News Text (LDC95T13), TREC Mandarin (LDC2000T52), and the various TDT Multilanguage Text corpora. However, all of the CNA data and a significant amount of the Xinhua material are being released here for the first time.

Data

There are 286 files, totaling approximately 1.5 GB in compressed form.

The table below summarizes the contents by source. #Files is the number of data files per source; Gzip-MB is the total compressed file size; Totl-MB is the total uncompressed file size (nearly four gigabytes overall); K-wrds is actually the count of Chinese characters, in thousands, since there is no notion of "space-separated word tokens" in Chinese; and #DOCs is the number of documents.

Source   #Files   Gzip-MB   Totl-MB     K-wrds      #DOCs
CNA         144      1018      2606     735499    1649492
XIN         142       548      1331     382881     817348
TOTAL       286      1566      3937    1118380    2466840

The original data archives received by the LDC from Xinhua were encoded in GB-2312, whereas those from CNA were encoded in Big-5. To avoid the problems and confusion that could result from differences in character-set specifications, all text files in this corpus have been converted to UTF-8 character encoding. With some exceptions described in the 0readme.txt file, all characters in the text are either single-byte ASCII or multi-byte Chinese.
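For readers who need to perform this kind of conversion on legacy-encoded archives of their own, the minimal Python sketch below shows the idea. The file names and source encodings are illustrative assumptions only; the corpus itself is already distributed in UTF-8.

```python
# Illustrative only: convert a legacy-encoded Chinese text file to UTF-8.
# File names below are hypothetical, not files shipped with the corpus.

def convert_to_utf8(src_path, dst_path, src_encoding):
    """Decode a file from a legacy Chinese encoding and re-encode it as UTF-8."""
    with open(src_path, "r", encoding=src_encoding, errors="strict") as src:
        text = src.read()
    with open(dst_path, "w", encoding="utf-8") as dst:
        dst.write(text)

# e.g. a Big-5 file from CNA and a GB-2312 file from Xinhua (hypothetical names)
convert_to_utf8("cna_raw.txt", "cna_utf8.txt", "big5")
convert_to_utf8("xin_raw.txt", "xin_utf8.txt", "gb2312")
```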

Each data file name consists of a three-letter prefix, followed by a six-digit date (representing the year and month during which the file contents were generated by the respective news source), followed by a ".gz" file extension, indicating that the file contents have been compressed using the GNU "gzip" compression utility (RFC 1952). So, each file contains all the usable data received by LDC for the given month from the given news source.
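As a concrete illustration of this naming scheme, the sketch below parses a file name of that form and reads the compressed text with Python's gzip module. The example file name is hypothetical but follows the convention described above.

```python
import gzip
import re

# Hypothetical file name following the scheme described above:
# three-letter source prefix + YYYYMM date + ".gz"
filename = "cna199507.gz"

m = re.fullmatch(r"([a-z]{3})(\d{4})(\d{2})\.gz", filename)
if m:
    source, year, month = m.group(1), m.group(2), m.group(3)
    print(f"source={source}, year={year}, month={month}")

# Read the UTF-8 text inside the gzip archive (RFC 1952 format)
with gzip.open(filename, "rt", encoding="utf-8") as f:
    first_line = f.readline()
```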

All text data are presented in SGML form, using a very simple, minimal markup structure. The corpus has been fully validated by a standard SGML parser utility (nsgmls), using a DTD file provided in the corpus.

Unlike older corpora, the present corpus uses only the information structure that is common to all sources and serves a clear function: headline, dateline, and core news content (usually containing paragraphs).
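The sketch below shows one lightweight way to pull individual DOC elements out of a gzipped data file. It assumes Gigaword-style markup in which each document is enclosed in <DOC ...> ... </DOC> and carries a HEADLINE element; the authoritative element names and structure are defined by the DTD and 0readme.txt included in the corpus, so treat the tag names here as assumptions to verify.

```python
import gzip
import re

def iter_docs(path):
    """Yield the raw SGML text of each DOC element in one gzipped data file."""
    buf = []
    inside = False
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            if line.startswith("<DOC"):
                inside = True
                buf = [line]
            elif line.startswith("</DOC>"):
                buf.append(line)
                inside = False
                yield "".join(buf)
            elif inside:
                buf.append(line)

# Hypothetical file name; print the headline of each document.
headline_re = re.compile(r"<HEADLINE>(.*?)</HEADLINE>", re.S)
for doc in iter_docs("cna199507.gz"):
    m = headline_re.search(doc)
    if m:
        print(m.group(1).strip())
```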

All sources have received uniform quality-control treatment, and each DOC has been assigned to one of four distinct "types":

story: a coherent report on a particular topic or event, consisting of paragraphs and full sentences
multi: a DOC containing a series of unrelated "blurbs," each of which briefly describes a particular topic or event: "summaries of today's news," "news briefs in ..." (some general area such as finance or sports), and so on
advis: DOCs that the news service addresses to news editors; they are not intended for publication to "end users"
other: DOCs that clearly do not fall into any of the above types, such as lists of sports scores, stock prices, temperatures around the world, and so on

The general strategy for assigning DOCs to these four classes was, for each source, to identify the most common clues in the text stream that correlated with the three "non-story" types. When none of the known clues was in evidence, the DOC was classified as a "story."
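For many downstream uses (language modeling, for instance), only the "story" DOCs are wanted. The sketch below tallies documents by type in one data file; it assumes the classification is recorded as a type attribute on the opening DOC tag (e.g. type="story"), which should be verified against the DTD shipped with the corpus. The file name is again hypothetical.

```python
import gzip
import re
from collections import Counter

# Assumes the type classification appears as an attribute on the opening
# DOC tag, e.g. <DOC ... type="story">; verify against the corpus DTD.
type_re = re.compile(r'<DOC[^>]*\btype="([^"]+)"')

def count_doc_types(path):
    """Tally how many DOCs of each type appear in one gzipped data file."""
    counts = Counter()
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            m = type_re.match(line)
            if m:
                counts[m.group(1)] += 1
    return counts

print(count_doc_types("cna199507.gz"))
```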

Updates

There are no updates at this time.
