README File for the GIGAWORD MANDARIN TEXT CORPUS
=================================================
INTRODUCTION
------------
The Gigaword Mandarin Corpus is a comprehensive archive of newswire
text data that has been acquired from Chinese news sources by the
Linguistic Data Consortium (LDC) at the University of Pennsylvania.
Two distinct sources of Mandarin Chinese newswire are represented
here:
- Central News Agency of Taiwan (cna)
- Xinhua News Agency of Beijing (xin)
The three-letter abbreviations shown above serve both as the
directory names where the data files are found and as the prefix
that appears at the beginning of every file name.
Some of the Xinhua content in this collection has been published
previously by the LDC in other, older corpora, particularly Mandarin
Chinese News Text (LDC95T13), TREC Mandarin (LDC2000T52), and the
various TDT Multi-language Text corpora. But all of the CNA data, and
a significant amount of Xinhua material, is being released here for
the first time.
CHARACTER ENCODING
------------------
The original data archives received by the LDC from Xinhua were
encoded in GB-2312, whereas those from CNA were encoded in Big-5. To
avoid the problems and confusion that could result from differences in
character-set specifications, all text files in this corpus have been
converted to UTF-8 character encoding.
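For readers who want to reproduce or verify such a conversion, a
minimal Python sketch is shown below; the input file name is a
made-up example, and this illustrates the general approach rather
than the exact procedure used at the LDC:

    # Minimal sketch: convert a Big-5 encoded file to UTF-8.
    # "cna_sample.big5" is a made-up file name; use "gb2312" in
    # place of "big5" for material from Xinhua.
    with open("cna_sample.big5", "rb") as f:
        text = f.read().decode("big5")

    with open("cna_sample.utf8", "w", encoding="utf-8") as out:
        out.write(text)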
Researchers who have concerns about the comparability and
compatibility of text data from GB and Big-5 sources should consult
The Unicode Standard, version 3.0 (published by the Unicode
Consortium, http://www.unicode.org), paying special attention to
Chapter 10, "East Asian Scripts", and Appendix A, "Han Unification
History".
Owing to the use of UTF-8, the SGML tagging within each file
(described in detail in the next section) shows up as lines of
single-byte-per-character (ASCII) text, whereas lines of actual text
data, including article headlines and datelines, contain a mixture of
single-byte and multi-byte characters.
Both Big-5 and GB are designed to support ASCII single-byte character
data as well as 2-byte Chinese characters; in addition, each of these
coding standards has a section of the 2-byte character space devoted
to "full-width" renderings of the printable ASCII characters. For
example, the digits 0-9 can be presented as either single-byte ASCII
codes or as 2-byte full-width codes, as shown in the following table:
    Digit      ASCII     GB 2-byte    Big-5 2-byte
  Character     byte    code-point     code-point
  ------------------------------------------------
      0         0x30      0xA3B0         0xA2AF
      1         0x31      0xA3B1         0xA2B0
      2         0x32      0xA3B2         0xA2B1
      3         0x33      0xA3B3         0xA2B2
      4         0x34      0xA3B4         0xA2B3
      5         0x35      0xA3B5         0xA2B4
      6         0x36      0xA3B6         0xA2B5
      7         0x37      0xA3B7         0xA2B6
      8         0x38      0xA3B8         0xA2B7
      9         0x39      0xA3B9         0xA2B8
and similarly for the upper- and lower-case alphabetic characters,
brackets, quotation marks and punctuation. We found that both
archives showed somewhat free variation between single-byte and
2-byte forms when presenting alphanumerics and punctuation within
the text data. Although the Unicode Standard devotes an analogous
portion of its code table to these full-width characters (the
"Halfwidth and Fullwidth Forms" block), we decided instead to
eliminate this form of variation in the data: wherever the original
data contained 2-byte versions of characters having exact correlates
in the single-byte ASCII table, we replaced the 2-byte character with
the single-byte ASCII equivalent. As a result, many lines of text
data contain a mix of multi-byte Chinese and single-byte ASCII
content. Of course, since all the data is now presented in UTF-8
encoding, this mixture is a natural property of the data, which any
UTF-8-aware process will handle without difficulty.
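In Unicode, the full-width forms U+FF01 through U+FF5E sit at a
fixed offset of 0xFEE0 above their ASCII counterparts 0x21 through
0x7E, so a replacement of the kind described above can be sketched
in a few lines of Python (an illustration, not the LDC's actual
conversion code):

    # Map full-width forms (U+FF01..U+FF5E) to their single-byte
    # ASCII counterparts (0x21..0x7E); the code points differ by
    # a constant offset of 0xFEE0.
    FULL_TO_ASCII = {cp: cp - 0xFEE0 for cp in range(0xFF01, 0xFF5F)}

    def normalize_fullwidth(text):
        """Replace full-width ASCII variants with plain ASCII."""
        return text.translate(FULL_TO_ASCII)

    # Full-width "2003" comes out as the ASCII string "2003".
    print(normalize_fullwidth("\uff12\uff10\uff10\uff13"))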
We also found that both archives used a handful of "accented"
alphabetics and other special characters common to European character
sets. When converted to UTF-8, these characters assume their "normal"
places in the Unicode table -- e.g. the "raised circle" (degree
sign), used as a "degrees" mark in temperatures or latitude/longitude
coordinates, can be found in the Xinhua data rendered as U+00B0
(which in UTF-8 form comes out as the two-byte sequence 0xC2 0xB0).
Apart from these rare
cases, all characters in the text are either single-byte ASCII or
multi-byte Chinese.
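That byte sequence is easy to verify with any UTF-8-aware tool; in
Python, for instance:

    # The degree sign U+00B0 encodes in UTF-8 as the two bytes C2 B0.
    assert "\u00b0".encode("utf-8") == b"\xc2\xb0"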
DATA FORMAT AND SGML MARKUP
---------------------------
Each data file name consists of the 3-letter prefix, followed by a
6-digit date (representing the year and month during which the file
contents were generated by the respective news source), followed by a
".gz" file extension, indicating that the file contents have been
compressed using the GNU "gzip" compression utility (RFC 1952). So,
each file contains all the usable data received by the LDC for the given
month from the given news source.
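Under this naming scheme, a file such as "cna199512.gz" (a
constructed example) would hold the CNA material for December 1995.
A small Python sketch for reading one such file:

    import gzip
    import re

    # 3-letter source prefix + YYYYMM + ".gz", per the scheme above.
    NAME = re.compile(r"^([a-z]{3})(\d{4})(\d{2})\.gz$")

    def read_month(path):
        """Yield decoded text lines from one monthly data file."""
        with gzip.open(path, "rt", encoding="utf-8") as f:
            for line in f:
                yield line.rstrip("\n")

    m = NAME.match("cna199512.gz")       # constructed example name
    if m:
        source, year, month = m.groups()
        print(source, year, month)       # -> cna 1995 12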
All text data are presented in SGML form, using a very simple, minimal
markup structure. The file "gigaword_c.dtd" in the "docs" directory
provides the formal "Document Type Definition" for parsing the SGML
content. The corpus has been fully validated by a standard SGML
parser utility (nsgmls), using this DTD file.
The markup structure, common to all data files, can be summarized as
follows (the "..." lines stand for the actual text content):

  <DOC id="..." type="story">
  <HEADLINE>
  ...
  </HEADLINE>
  <DATELINE>
  ...
  </DATELINE>
  <TEXT>
  <P>
  ...
  </P>
  ...
  </TEXT>
  </DOC>

Paragraph tags ("<P>") are used only if the "type" attribute of the
DOC happens to be "story" -- more on the "type" attribute below.
Note that all data files use the UNIX-standard "\n" form of line
termination, and text lines are generally wrapped to a width of 80
characters or less.
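Since every tag in the markup summarized above occupies a line of its
own, the text content can be pulled out with very simple
line-oriented logic; the following Python sketch (an illustration,
not a tool included with the corpus) prints everything except the
SGML tags:

    import gzip

    def extract_text(path):
        """Print the content of a data file, skipping SGML tag lines."""
        with gzip.open(path, "rt", encoding="utf-8") as f:
            for line in f:
                if not line.startswith("<"):   # tag lines begin with "<"
                    print(line, end="")

    extract_text("cna199512.gz")               # constructed example name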
" is found only in DOCs of this type; in the other types described below, the text content is rendered with no additional tags or special characters -- just lines of ASCII tokens separated by whitespace. * multi : This type of DOC contains a series of unrelated "blurbs", each of which briefly describes a particular topic or event; this is typically applied to DOCs that contain "summaries of todays news", "news briefs in ... (some general area like finance or sports)", and so on. Each paragraph-like blurb by itself is coherent, but it does not bear any necessary relation of topicality or continuity relative to it neighbors. * advis : (short for "advisory") These are DOCs which the news service addresses to news editors -- they are not intended for publication to the "end users" (the populations who read the news); as a result, DOCs of this type tend to contain obscure abbreviations and phrases, which are familiar to news editors, but may be meaningless to the general public. We also find a lot of formulaic, repetitive content in DOCs of this type (contact phone numbers, etc). * other : This represents DOCs that clearly do not fall into any of the above types -- in general, items of this type are intended for broad circulation (they are not advisories), they may be topically coherent (unlike "multi" type DOCS), and they typically do not contain paragraphs or sentences (they aren't really "stories"); these are things like lists of sports scores, stock prices, temperatures around the world, and so on. The general strategy for categorizing DOCs into these four classes was, for each source, to discover the most common and frequent clues in the text stream that correlated with the three "non-story" types, and to apply the appropriate label for the ``type=...'' attribute whenever the DOC displayed one of these specific clues. When none of the known clues was in evidence, the DOC was classified as a "story". This means that the most frequent classification error will tend to be the use of `` type="story" '' on DOCs that are actually some other type. But the number of such errors should be fairly small, compared to the number of "non-story" DOCs that are correctly tagged as such. Note that the markup was applied algorithmically, using logic that was based on less-than-complete knowledge of the data. For the most part, the HEADLINE, DATELINE and TEXT tags have their intended content; but due to the inherent variability (and the inevitable source errors) in the data, users may find occasional mishaps where the headline and/or dateline were not successfully identified (hence show up within TEXT), or where an initial sentence or paragraph has been mistakenly tagged as the headline or dateline. DATA QUANTITIES --------------- The "docs" directory contains a set of plain-text tables (datastats.*) that describe the quantities of data by source and month (i.e. by file), broken down according to the four "type" categories. The overall totals for each source are summarized below. Note that the "Totl-MB" numbers show the amount of data you get when the files are uncompressed (i.e. nearly 4 gigabytes, total); the "Gzip-MB" column shows totals for compressed file sizes as stored on the DVD-ROM; the "K-wrds" numbers are actually the number of Chinese characters (there is no notion of "space separated word tokens" in Chinese): Source #Files Gzip-MB Totl-MB K-wrds #DOCs CNA 144 1018 2606 735499 1649492 XIE 142 548 1331 382881 817348 TOTAL 286 1566 3937 1118380 2466840 The following tables present "K-wrds" (i.e. 
DATA QUANTITIES
---------------
The "docs" directory contains a set of plain-text tables (datastats.*)
that describe the quantities of data by source and month (i.e. by
file), broken down according to the four "type" categories. The
overall totals for each source are summarized below. Note that the
"Totl-MB" numbers show the amount of data you get when the files are
uncompressed (i.e. nearly 4 gigabytes, total); the "Gzip-MB" column
shows totals for compressed file sizes as stored on the DVD-ROM; the
"K-wrds" numbers actually give counts of Chinese characters, in
thousands (there is no notion of "space-separated word tokens" in
Chinese):

  Source  #Files  Gzip-MB  Totl-MB   K-wrds    #DOCs
  CNA        144     1018     2606   735499  1649492
  XIN        142      548     1331   382881   817348
  TOTAL      286     1566     3937  1118380  2466840

The following tables present "K-wrds" (i.e. thousands of Chinese
characters) and "#DOCs" broken down by source and DOC type:

                   #DOCs    K-wrds

  type="advis":
    CNA             7962       740
    XIN             2624       412
    TOTAL          10586      1152

  type="multi":
    CNA            29750     22666
    XIN            10849      7215
    TOTAL          40599     29881

  type="other":
    CNA            93879     37253
    XIN            25515      6501
    TOTAL         119394     43754

  type="story":
    CNA          1517901    674830
    XIN           778360    368756
    TOTAL        2296261   1043586

GENERAL PROPERTIES OF THE DATA
------------------------------
Both data sets have been produced from bulk archives that were
delivered to the LDC via internet transfer. As a result, we avoided
many of the problems that commonly afflict newswire data that has
been transmitted over modems. Still, both archives contained
noticeable amounts of "noise" (unusable characters, null bytes, etc.)
which had to be filtered out for research use. One of the corpus
authors at the LDC, Ke Chen, is a native speaker of Mandarin Chinese,
and did extensive diagnosis to identify and eliminate unsuitable
content in the original archival data. To some extent, this is an
open-ended problem, and there may be kinds of error conditions that
have gone unnoticed or untreated -- this is true of any large text
collection -- but we have striven to assure that the characters
presented in all files are in fact valid and displayable, and that
the markup is fully SGML compliant.

David Graff
Ke Chen
Linguistic Data Consortium
January, 2003