README File for the ENGLISH GIGAWORD TEXT CORPUS
================================================
Fourth Edition
==============
INTRODUCTION
------------
The English Gigaword Corpus is a comprehensive archive of newswire
text data that has been acquired over several years by the Linguistic
Data Consortium (LDC) at the University of Pennsylvania. This is the
fourth edition of the English Gigaword Corpus.
This edition includes all of the contents in the previous edition
(LDC2007T07) as well as new data from the same six sources presented
there, covering the 24-month period of January 2007 through December 2008.
The six distinct international sources of English newswire included
in this edition are the following:
- Agence France-Presse, English Service (afp_eng)
- Associated Press Worldstream, English Service (apw_eng)
- Central News Agency of Taiwan, English Service (cna_eng)
- Los Angeles Times/Washington Post Newswire Service (ltw_eng)
- New York Times Newswire Service (nyt_eng)
- Xinhua News Agency, English Service (xin_eng)
The seven-character codes in parentheses above consist of the
three-letter source name abbreviation and the three-letter language
code ("eng"), separated by an underscore ("_") character. The
three-letter language code conforms to LDC's internal convention,
based on the ISO 639-3 standard.
The seven-letter codes are used in both the directory names where
the data files are found, and in the prefix that appears at the
beginning of every data file name.
As with other Gigaword releases, some of the content in this
corpus has been published previously by the LDC in a variety of other,
older corpora, particularly the North American News text corpora, the
various TDT corpora, and the AQUAINT text corpus, as well as earlier
editions of Gigaword English.
DATA FORMAT AND SGML MARKUP
---------------------------
Each data file name consists of the 7-letter prefix plus another
underscore character, followed by a 6-digit date representing the year
and month during which the file contents were generated by the
respective news source, followed by a ".gz" file extension indicating
that the file contents have been compressed using the GNU "gzip"
compression utility (RFC 1952). So, each file contains all the usable
data received by LDC for the given month from the given news source.
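For example, a file named "afp_eng_200812.gz" (a name constructed
here from the convention just described, not taken from a directory
listing) would hold the AFP English data for December 2008. A minimal
sketch of how such names can be decomposed programmatically:

  import re

  # File name convention described above: 3-letter source code,
  # "_eng", 6-digit year+month, ".gz" extension.
  NAME_RE = re.compile(r"^([a-z]{3})_(eng)_(\d{4})(\d{2})\.gz$")

  def parse_name(filename):
      """Split a data file name into (source, language, year, month)."""
      m = NAME_RE.match(filename)
      if m is None:
          raise ValueError("not a Gigaword data file name: %r" % filename)
      source, lang, year, month = m.groups()
      return source, lang, int(year), int(month)

  # parse_name("afp_eng_200812.gz") -> ("afp", "eng", 2008, 12)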
All text data are presented in SGML form, using a very simple, minimal
markup structure; all text consists of printable ASCII and whitespace.
The file "gigaword.dtd" in the "dtd" directory provides the formal
"Document Type Declaration" for parsing the SGML content. The corpus
has been fully validated by a standard SGML parser utility (nsgmls),
using this DTD file.
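LDC's validation relied on a standard SGML toolchain; for a quick
structural sanity check without one, the one-tag-per-line layout
(shown in the markup summary just below) makes it possible to verify
tag nesting with a few lines of code. This is only a sketch, not a
substitute for real DTD validation:

  def check_nesting(lines):
      """Verify that tags such as DOC, HEADLINE, DATELINE, TEXT and P
      open and close in properly nested order, assuming one tag per
      line as in the markup summary below."""
      stack = []
      for n, line in enumerate(lines, 1):
          line = line.strip()
          if line.startswith("</"):
              tag = line[2:-1]
              if not stack or stack.pop() != tag:
                  return "bad closing tag %s at line %d" % (tag, n)
          elif line.startswith("<"):
              stack.append(line[1:-1].split()[0])
      return "ok" if not stack else "unclosed: %s" % ", ".join(stack)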
The markup structure, common to all data files, can be summarized as
follows:
  <DOC id="..." type="story" >
  <HEADLINE>
  Headline text, if available
  </HEADLINE>
  <DATELINE>
  Dateline text, if available
  </DATELINE>
  <TEXT>
  <P>
  Paragraph tags are only used if the 'type' attribute of the DOC
  happens to be "story" -- more on the 'type' attribute below...
  </P>
  <P>
  Note that all data files use the UNIX-standard "\n" form of line
  termination, and text lines are generally wrapped to a width of 80
  characters or less.
  </P>
  </TEXT>
  </DOC>

For every opening tag (DOC, HEADLINE, DATELINE, TEXT, P) there is a
corresponding closing tag, and every tag appears alone on a line of
its own, separate from the text content.

The "type" attribute of the DOC tag takes one of four values:

* story : This is by far the most frequent type of DOC; it represents
a coherent report on a particular topic or event, consisting of
paragraphs and full sentences. As indicated above, the paragraph tag
"<P>" is found only in DOCs of this type; in the other types described
below, the text content is rendered with no additional tags or special
characters -- just lines of ASCII tokens separated by whitespace.
* multi : This type of DOC contains a series of unrelated "blurbs",
each of which briefly describes a particular topic or event; this is
typically applied to DOCs that contain "summaries of today's news",
"news briefs in ... (some general area like finance or sports)", and
so on. Each paragraph-like blurb by itself is coherent, but it does
not bear any necessary relation of topicality or continuity relative
to its neighboring sections.
* advis : (short for "advisory") These are DOCs which the news service
addresses to news editors -- they are not intended for publication
to the "end users" (the populations who read the news); as a result,
DOCs of this type tend to contain obscure abbreviations and phrases,
which are familiar to news editors, but may be meaningless to the
general public. We also find a lot of formulaic, repetitive content
in DOCs of this type (contact phone numbers, etc).
* other : This represents DOCs that clearly do not fall into any of
the above types -- in general, items of this type are intended for
broad circulation (they are not advisories), they may be topically
coherent (unlike "multi" type DOCs), and they typically do not
contain paragraphs or sentences (they aren't really "stories");
these are things like lists of sports scores, stock prices,
temperatures around the world, and so on. (A code sketch for reading
DOCs and filtering on these type values follows this list.)
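Given that one-tag-per-line layout, a DOC reader does not need a full
SGML toolchain. Below is a minimal sketch, assuming ASCII content and
the attribute order (id before type) shown in the markup summary
above; a production reader would want to relax those assumptions:

  import gzip
  import re

  # Opening DOC tag as shown in the markup summary; the assumed
  # attribute order (id first) may need loosening in practice.
  DOC_OPEN = re.compile(r'<DOC id="([^"]+)" type="(story|multi|advis|other)"')

  def read_docs(path):
      """Yield (doc_id, doc_type, text_lines) per DOC in one .gz file."""
      doc_id = doc_type = None
      text = []
      with gzip.open(path, "rt", encoding="ascii") as f:
          for line in f:
              m = DOC_OPEN.match(line)
              if m:
                  doc_id, doc_type = m.groups()
                  text = []
              elif line.startswith("</DOC>"):
                  yield doc_id, doc_type, text
              elif not line.startswith("<"):   # drop other markup lines
                  text.append(line.rstrip("\n"))

  # e.g. keep only coherent news reports:
  #   stories = [d for d in read_docs("afp_eng_200812.gz")
  #              if d[1] == "story"]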
The general strategy for categorizing DOCs into these four classes
was, for each source, to discover the most common and frequent clues
in the text stream that correlated with the three "non-story" types,
and to apply the appropriate label for the ``type=...'' attribute
whenever the DOC displayed one of these specific clues. When none of
the known clues was in evidence, the DOC was classified as a "story".
This means that the most frequent classification error will tend to be
the use of `` type="story" '' on DOCs that are actually some other
type. But the number of such errors should be fairly small, compared
to the number of "non-story" DOCs that are correctly tagged as such.
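In outline, that labeling logic is a cascade of source-specific
pattern tests with "story" as the fallback. The sketch below conveys
only the shape of such a cascade; the clue strings in it are invented
for illustration and are not the rules LDC actually used:

  import re

  # Hypothetical clue patterns -- illustrative only, not LDC's rules.
  CLUES = [
      ("advis", re.compile(r"\bATTN:? EDITORS\b")),
      ("multi", re.compile(r"\bNEWS BRIEFS\b")),
      ("other", re.compile(r"\b(SCOREBOARD|STOCK PRICES)\b")),
  ]

  def classify(doc_text):
      """Return the type implied by the first matching clue; with no
      known clue in evidence, default to "story"."""
      for doc_type, pattern in CLUES:
          if pattern.search(doc_text):
              return doc_type
      return "story"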
Also, since some sources tended to change their delivery methods or
format over time, the distribution of non-story types can be seen to
vary significantly by epoch and source. The various "datastats" tables
may be helpful in tracking changes in the nature of the source data
(and LDC's ability to adapt to those changes).
Note that the markup was applied algorithmically, using logic that was
based on less-than-complete knowledge of the data. For the most part,
the HEADLINE, DATELINE and TEXT tags have their intended content; but
due to the inherent variability (and the inevitable source errors) in
the data, users may find occasional mishaps where the headline and/or
dateline were not successfully identified (hence show up within TEXT),
or where an initial sentence or paragraph has been mistakenly tagged
as the headline or dateline.
DATA QUANTITIES
---------------
The "docs" directory contains a set of plain-text tables (datastats_*)
that describe the quantities of data by source and month (i.e. by
file), broken down according to the four "type" categories. The
overall totals for each source are summarized below. Note that the
"Totl-MB" numbers show the amount of data you get when the files are
uncompressed (i.e. approximately 19 gigabytes, total); the "Gzip-MB"
column shows totals for compressed file sizes as stored on the
DVD-ROM; the "K-wrds" numbers are simply the number of
whitespace-separated tokens (of all types) after all SGML tags are
eliminated.
Source #Files Gzip-MB Totl-MB K-wrds #DOCs
-----------------------------------------------
afp_eng 122 1418 4027 466718 1592309
apw_eng 169 2304 7000 849435 2272995
cna_eng 120 70 204 21657 85600
ltw_eng 115 597 1550 192650 295224
nyt_eng 173 3042 8309 1188494 1655279
xin_eng 168 700 2094 249521 1247039
TOTAL 722 6841 19391 2968475 7148446
The following tables present "Text-MB", "K-wrds" and "#DOCs" broken
down by source and DOC type; "Text-MB" represents the total number of
characters (including whitespace) after SGML tags are eliminated.
Text-MB K-wrds #DOCs
advis
afp_eng 124 17447 44886
apw_eng 181 27362 39256
cna_eng 0 17 69
ltw_eng 88 14132 28987
nyt_eng 562 89869 149751
xin_eng 12 1920 7522
TOTAL 967 150747 270471
multi
afp_eng 76 11598 32829
apw_eng 241 39503 57975
cna_eng 20 3211 17898
ltw_eng 19 3086 7020
nyt_eng 124 20435 33183
xin_eng 122 19571 84137
TOTAL 602 97404 233042
other
afp_eng 101 15084 110149
apw_eng 337 47208 273377
cna_eng 2 213 1935
ltw_eng 1 228 1063
nyt_eng 114 17248 26321
xin_eng 109 15438 135359
TOTAL 664 95419 548204
story
afp_eng 3333 556949 1836594
apw_eng 5654 937133 2383673
cna_eng 162 26149 95770
ltw_eng 1355 227532 337983
nyt_eng 7032 1194076 1617389
xin_eng 1613 262090 1251236
TOTAL 19149 3203929 7522645
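The counting conventions behind these tables are easy to restate in
code: drop the SGML tag lines, then count characters and
whitespace-separated tokens. A sketch of how per-file figures of this
kind could be recomputed, fed with the (doc_id, doc_type, lines)
triples produced by the read_docs() sketch earlier in this README:

  def file_stats(docs):
      """Tally Text-MB-, K-wrds- and #DOCs-style figures for one file.

      Characters (including whitespace) and tokens are counted after
      all SGML tag lines have been removed, per the definitions above.
      """
      chars = words = ndocs = 0
      for _doc_id, _doc_type, lines in docs:
          ndocs += 1
          for line in lines:
              chars += len(line) + 1    # +1 for the "\n" terminator
              words += len(line.split())
      return {"Text-MB": chars / 1e6, "K-wrds": words / 1e3,
              "#DOCs": ndocs}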
GENERAL AND SOURCE-SPECIFIC PROPERTIES OF THE DATA
--------------------------------------------------
Much of the text data (all of AFP_ENG, most of APW_ENG, LTW_ENG and
NYT_ENG) were received at LDC via dedicated, 24-hour/day electronic
feeds (leased phone lines in the case of APW_ENG, LTW_ENG and NYT_ENG,
a local satellite dish for AFP_ENG). These 24-hour transmission
services were all susceptible to "line noise" (occasional corruption
of text content), as well as service outages both at the data source
and at our receiving computers. Usually, the various disruptions of a
newswire data stream would leave tell-tale evidence in the form of
byte values falling outside the range of printable characters, or
recognizable patterns of anomalous ASCII strings.
All XIN_ENG data, all CNA_ENG data, and a 2-year portion of APW_ENG
were received as bulk electronic text archives via internet retrieval.
As such, they were not susceptible to modem line-noise or related
disruptions, though this does not guarantee that the source data are
free of mishaps. Also, the more recent portions of APW_ENG, LTW_ENG
and NYT_ENG have been delivered by various internet-based subscription
systems (explained in more detail in the source-specific sections
below); again, this has eliminated the various problems with modem
noise, but does not assure "perfect" data.
All the data have undergone a consistent extent of quality control,
to eliminate improper characters and other obvious forms of
corruption.
Naturally, since the source data are all generated manually on a daily
basis, there will be a small percentage of human errors common to all
sources: missing whitespace, incorrect or variant spellings, badly
formed sentences, and so on, as are normally seen in newspapers. No
attempt has been made to address this property of the data.
As indicated above, a common feature of the modem-based archives is
that stories may be repeated in the course of daily transmissions (or
daily archiving). Sometimes a later transmission of a story comes
with minor alterations (fixed spelling, one or more paragraphs added
or removed); but just as often, the collection ends up with two or
more DOCs that are fully identical. In general, though, this practice
affects a relatively small minority of the overall content. (NYT_ENG
is perhaps the worst offender in this regard, sometimes sending as
many as six copies of some featured story.) We have not attempted to
eliminate these duplications; however, we plan to make information
about duplicate and similar articles available on our web site as
supplemental information for this corpus. (See the "ADDITIONAL
INFORMATION and UPDATES" section below.)
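Byte-identical repeats of the kind described above are easy to flag
by hashing whitespace-normalized DOC text; reissues with minor edits
are not caught this way. A small sketch (illustrative only -- the
supplemental duplicate information mentioned above is the
authoritative resource), again consuming (doc_id, doc_type, lines)
triples from the earlier reader sketch:

  import hashlib

  def find_exact_duplicates(docs):
      """Group DOC ids whose whitespace-normalized text is identical.

      Near-duplicates that differ by a spelling fix or an added
      paragraph will NOT be caught by this exact-hash approach.
      """
      by_hash = {}
      for doc_id, _doc_type, lines in docs:
          text = " ".join(" ".join(line.split()) for line in lines)
          digest = hashlib.md5(text.encode("ascii")).hexdigest()
          by_hash.setdefault(digest, []).append(doc_id)
      return [ids for ids in by_hash.values() if len(ids) > 1]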
Finally, some of the modem services make a practice of breaking long
stories into chunks and sending the chunks as separate
DOC units, with each unit having the normal structural features of a
full story. (This is especially prevalent in NYT_ENG, which has the
longest average story length of all the sources.) Normally, when this
sort of splitting is done, cues are provided in the text of each chunk
that allow editors to reconstruct the full report; but these cues tend
to rely heavily on editorial skills -- it is taken for granted by each
news service that the stories will be reassembled manually as needed
-- so the process of combining the pieces into a full story is not
amenable to an algorithmic solution, and no attempt has been made to
do this. Also, some sources (especially NYT and LTW) include advisory
annotations in the longer stories, providing guidance on how such
stories can be abridged (e.g. "(STORY CAN END HERE, OPTIONAL MATERIAL
FOLLOWS)" and other such phrases, typically parenthesized and in all
caps).
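Because the exact wording of these cues varies, any attempt to detect
them must be heuristic. One possible approximation treats a line that
is entirely a parenthesized run of upper-case words as a candidate
annotation; the pattern below is an assumption, not an LDC
specification:

  import re

  # Heuristic: a line that is nothing but a parenthesized sequence of
  # upper-case words, digits and punctuation is likely an editorial cue.
  ANNOTATION = re.compile(r"^\(([A-Z0-9][A-Z0-9 ,:;.'-]*)\)$")

  def strip_annotations(lines):
      """Drop lines that look like abridgement cues; keep the rest."""
      return [ln for ln in lines if not ANNOTATION.match(ln.strip())]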
The following sections explain data properties that are particular to
each source.
AFP_ENG:
There is a gap of 54 months in the AFP_ENG collection (about four and
a half years), spanning from May 1997 to December 2001; the LDC had
discontinued its subscription to the AFP English wire service during
this period, and at the point where we restored the subscription near
the end of 2001, there was no practical means for recovering the
portion that was missed. There are also shorter gaps from September 20
to October 2, 2002; from August 6 to September 10, 2003; and from
February 13 through February 27, 2008.
During 2007, LDC's AFP feed switched to a new delivery method. Although
the data content appears to be fairly consistent with previously
collected content, LDC has not done detailed analysis to determine
the level of consistency.
Apart from these, the AFP_ENG content shows a high degree of internal
consistency (relative to APW_ENG and NYT_ENG), in terms of day-to-day
content and typographic conventions.
APW_ENG:
This service provides up to six other languages besides English on the
same modem connection, with DOCs in all languages interleaved at
random; of course, we have extracted just the English content for
publication here. The service draws news from quasi-independent
offices around the world, so there tends to be more variability here
in terms of typographic conventions; there is also a noticeably higher
percentage of non-story content, especially in the "other" category:
tables of sports results, stocks, weather, etc.
During the period between August 1999 and August 2001, the modem
service failed to deliver English content, while data in other
languages continued to flow in. (LDC was spooling the data
automatically, and during this period, alarms would be raised only if
the data flow stopped completely -- so the absence of English went
unnoticed.) On learning of this gap in the data, we were able to
recover much of the missing content with help from AP's New York City
office and from Richard Sproat at AT&T Labs -- we gratefully
acknowledge their assistance. Both were able to supply bulk archives
that covered most of the period that we had missed. In particular,
August - November 1999 and January - September 2000 were retrieved
from USENET/ClariNet and web archives that AT&T had collected for its
own research use, while the October 2000 - August 2001 data were
supplied by AP directly from their own web service archive. As a
result of the varying sources, these sub-parts of APW_ENG data tend to
differ from the rest of the collection (and from each other), in terms
of daily quantity, extent of typographic variance, and possibly the
breadth of subject matter being reported.
Among the data added in this edition, the January 2004 data were
particularly noisy due to transmission errors. We have removed
documents from that month which contained explicit evidence of noise.
Starting in May 2004, APW switched to a dedicated internet delivery
system, eliminating the problems of modem noise and also creating a
much better environment for limiting or avoiding duplicate content in
stories. This system of collection continued to operate until the
end of August, 2006. At that point, there was a brief lapse in the
collection (roughly the first half of September 2006 is missing from
our archives), and then data reception switched to a "Network News
Transfer Protocol" (NNTP, related to Usenet transmission). Under this
delivery method, we found that many stories were being delivered two
or three times each, but it has proven to be fairly easy to remove
these duplications.
CNA_ENG:
The amount of data for this source is relatively small compared to
other sources. This data set has been delivered to the LDC via
internet transfer. As a result, we avoided many of the problems that
commonly afflict newswire data collected over modems. There is a
large gap of 16 months from April 2002 to July 2003 in this data set.
When this source was first released in Gigaword English II, the data
had been incorrectly assumed to be ASCII only, and when non-ASCII
bytes were found, they were simply removed. In preparing the current
release, we found that the CNA source data actually used the Big-5
("Traditional Chinese") character set in various irregular ways,
usually to render "full-width" variants of ASCII letters, digits and
punctuation. The approach taken in the previous release caused many
of these "wide" characters to end up as data corruption, particularly
when the second byte of the Big-5 wide character happened to fall in
the ASCII range (which is common for the Big-5 "full-width" versions
of ASCII characters).
For the current release, all the CNA data has been reprocessed from
original sources and correctly converted from Big-5 to UTF-8; where
appropriate, we have normalized the "full-width character" variants to
their corresponding ASCII equivalents.
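The full-width forms occupy a contiguous Unicode block offset from
printable ASCII by a fixed amount, so the normalization described
here is mechanical once the text is decoded. The conversion shipped
in this corpus was performed at LDC; the sketch below merely
illustrates the character-level relationship:

  def normalize_fullwidth(text):
      """Map full-width ASCII variants (U+FF01..U+FF5E) to their ASCII
      equivalents, and the ideographic space (U+3000) to a space."""
      out = []
      for ch in text:
          code = ord(ch)
          if 0xFF01 <= code <= 0xFF5E:
              out.append(chr(code - 0xFF01 + 0x21))  # shift into ASCII
          elif code == 0x3000:
              out.append(" ")
          else:
              out.append(ch)
      return "".join(out)

  # e.g. normalize_fullwidth("ＡＢＣ１２３") == "ABC123"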
LTW_ENG:
There is a gap of about 62 months (mid-June 1998 through early August
2003) during which the LDC had dropped its subscription. The data
were collected via dedicated modem up until March 2004, at which point
the delivery was switched to E-mail transmission, eliminating data
loss due to modem noise. The effect of the transmission change on
duplicated material has not been determined, but this source has
tended to show a relatively low degree of duplication.
LTW provides not only the content that is specific to the daily
newspapers published in Los Angeles and Washington, D.C., but also a
sampling of newspaper content from other papers in other cities.
NYT_ENG:
Prior to 2003, there had been only a few scattered service
interruptions for NYT_ENG, and these typically involved gaps of a few
days (the longest was about two weeks). However, there was a time
period, from February 2003 to June 2004, in which pervasive modem
noise induced a significant amount of character data corruption,
affecting the control-character story-boundary markers as well as the
text content of the stories themselves. We have filtered out
documents that showed explicit evidence of corruption. As a result,
there is a smaller number of documents in this time period; in
particular, this release includes no data from June 2004 and very
little from May 2004. Also, even after
filtering out stories that showed explicit evidence of corruption
(invalid sequences of story-boundary control codes, occurrences of
inappropriate byte values), there are still likely to be
"non-explicit" cases of data corruption in the stories that remain for
this time period. On July 1, 2004, we switched to an internet-based
file transfer method to receive NYT_ENG articles, and the NYT_ENG data
after this date was not susceptible to modem line-noise.
During the preparation of the corpus, we found a number of what appear to
be incorrectly encoded characters, typically outside of the ASCII range.
Where these errors were systematic, we corrected the data by substituting
the appropriate characters. Unfortunately, in many instances, the
substitutions are non-systematic (the most common replacement being an
ASCII question mark, "?"), and automatic replacement was not practical.
It should be noted that NYT_ENG documents from 16 days in July 2002 --
all odd numbered days -- have been intentionally excluded from this
collection in order to satisfy a contractual agreement with a
partner site.
The NYT_ENG service provides not only the content that is specific to
the New York Times daily newspaper publication, but also a wide and
varied sampling of news and features from other urban and regional
newspapers around the U.S., including:
Albany Times Union
Arizona Republic
Atlanta Constitution
Bloomberg Business News
Boston Globe
Casper (Wyo.) Star-Tribune
Chicago Sun-Times
Columbia News Service
Cox News Service
Fort Worth Star-Telegram
Hearst Newspapers
Houston Chronicle
International Herald Tribune
Kansas City Star
Los Angeles Daily News
San Antonio Express-News
San Francisco Chronicle
Seattle Post-Intelligencer
States News Service
Typically, the actual source of a given DOC was indicated in the raw
data via an abbreviation (e.g. AZR, BLOOM, COX, LADN, NYT, SPI, etc)
at the end of the "slug" line that accompanies every story. (The
"slug" is a short string, usually less than 40 characters, that news
editors use to tag and sort stories and topics over the course of a
day.) Because this feature of NYT_ENG slug lines is quite consistent
and informative, the markup strategy was adapted to make sure that the
full slug line would be included as part of the content of the
"DATELINE" tag whenever possible. (Slugs were either not present or
not retained in the other three newswire sources.) Some examples: