GALE Chinese-English Parallel Aligned Treebank -- Training
Item Name: GALE Chinese-English Parallel Aligned Treebank -- Training
Author(s): Xuansong Li, Stephen Grimes, Stephanie Strassel, Xiaoyi Ma, Nianwen Xue, Mitch Marcus, Ann Taylor
LDC Catalog No.: LDC2015T06
ISBN: 1-58563-708-4
ISLRN: 041-146-278-187-2
DOI: https://doi.org/10.35111/hmtg-8q46
Release Date: March 16, 2015
Member Year(s): 2015
DCMI Type(s): Text
Data Source(s): newswire, broadcast conversation, web collection
Project(s): GALE
Application(s): automatic content extraction, cross-lingual information retrieval, information detection, machine translation
Language(s): Mandarin Chinese, Chinese, English
Language ID(s): cmn, zho, eng
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC2015T06 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: Li, Xuansong, et al. GALE Chinese-English Parallel Aligned Treebank -- Training LDC2015T06. Web Download. Philadelphia: Linguistic Data Consortium, 2015.
Introduction
GALE Chinese-English Parallel Aligned Treebank -- Training was developed by the Linguistic Data Consortium (LDC) and contains 229,249 tokens of word aligned Chinese and English parallel text with treebank annotations. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program.
Parallel aligned treebanks are treebanks annotated with morphological and syntactic structures that are aligned at both the sentence level and the sub-sentence level. Such data sets are useful for natural language processing and related fields, including automatic word alignment system training and evaluation, transfer-rule extraction, word sense disambiguation, translation lexicon extraction, and cultural heritage and cross-linguistic studies. With respect to machine translation system development, parallel aligned treebanks may improve system performance through enhanced syntactic parsers, better rules and knowledge about language pairs, and reduced word error rates.
The Chinese source data was translated into English. Chinese and English treebank annotations were performed independently. The parallel texts were then word aligned. The material in this release corresponds to portions of the Chinese treebanked data in Chinese Treebank 6.0 (LDC2007T36) (CTB), OntoNotes 3.0 (LDC2009T24) and OntoNotes 4.0 (LDC2011T03).
Data
This release consists of Chinese source broadcast programming (China Central TV, Phoenix TV), newswire (Xinhua News Agency) and web data collected by LDC. The distribution of files, words, character tokens, CTB tokens and segments by genre appears below:
| Genre | Files | Words | Character Tokens | CTB Tokens | Segments |
|-------|-------|-------|------------------|------------|----------|
| bc    | 10    | 57,571  | 86,356  | 60,270  | 3,328 |
| nw    | 172   | 64,337  | 96,505  | 57,722  | 2,092 |
| wb    | 86    | 30,925  | 46,388  | 31,240  | 1,321 |
| Total | 268   | 152,833 | 229,249 | 149,232 | 6,741 |
Note that all token and word counts are based on the Chinese data only. One character token corresponds to one Chinese character, and one word corresponds to roughly 1.5 characters on average (152,833 words versus 229,249 character tokens).
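As a quick arithmetic check of the 1.5 characters-per-word figure, the ratios can be recomputed from the counts in the table above. The short Python sketch below uses only those published numbers.

```python
# Sanity check of the genre table: word vs. character-token counts,
# with figures copied from the table above.
counts = {
    # genre: (words, character tokens)
    "bc": (57_571, 86_356),
    "nw": (64_337, 96_505),
    "wb": (30_925, 46_388),
}

total_words = sum(words for words, _ in counts.values())
total_chars = sum(chars for _, chars in counts.values())

for genre, (words, chars) in counts.items():
    print(f"{genre}: {chars / words:.2f} characters per word")

print(f"total: {total_words} words, {total_chars} character tokens, "
      f"{total_chars / total_words:.2f} characters per word")  # ~1.50
```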
The Chinese word alignment task consisted of the following components (a schematic sketch of the resulting records follows the list):
- Identifying, aligning, and tagging eight different types of links
- Identifying, attaching, and tagging local-level unmatched words
- Identifying and tagging sentence/discourse-level unmatched words
- Identifying and tagging all instances of Chinese 的 (DE) except when they were a part of a semantic link
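The exact alignment file format is specified in the release documentation. Purely as an illustration of the kinds of records the task above produces, the hypothetical Python structure below models an alignment link carrying a link type plus optional tags for unmatched words and for Chinese 的 (DE). The class and field names are assumptions for exposition, not the release's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical illustration only: the actual alignment file format is defined
# in the release documentation, not by this class.
@dataclass
class AlignmentLink:
    chinese_token_ids: List[int]   # positions in the Chinese token sequence
    english_token_ids: List[int]   # positions in the English token sequence
    link_type: str                 # one of the eight link types tagged in the task
    unmatched_tags: List[str] = field(default_factory=list)  # tags for attached local-level unmatched words
    de_tag: Optional[str] = None   # tag for 的 (DE) when it is not part of a semantic link

# Invented example record, for shape only: one Chinese token aligned to two English tokens.
example = AlignmentLink(chinese_token_ids=[3], english_token_ids=[5, 6], link_type="semantic")
```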
This release contains nine types of files: Chinese raw source files, English raw translation files, Chinese character tokenized files, Chinese CTB tokenized files, English tokenized files, Chinese treebank files, English treebank files, character-based word alignment files, and CTB-based word alignment files.
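Because the nine file types are parallel views of the same documents, a typical first step when working with the corpus is to group the files by document ID across types. The directory names and filename conventions in the sketch below are assumptions for illustration only; the actual layout is given in the release documentation.

```python
import os
from collections import defaultdict

# Hypothetical layout: one subdirectory per file type, files named by document ID.
# The real directory names and extensions are defined in the release documentation.
FILE_TYPES = [
    "chinese_raw", "english_raw", "chinese_char_tokenized", "chinese_ctb_tokenized",
    "english_tokenized", "chinese_treebank", "english_treebank",
    "char_word_alignment", "ctb_word_alignment",
]

def group_by_document(root: str) -> dict:
    """Map each document ID to the files available for it across the nine types."""
    docs = defaultdict(dict)
    for file_type in FILE_TYPES:
        type_dir = os.path.join(root, file_type)
        if not os.path.isdir(type_dir):
            continue
        for name in os.listdir(type_dir):
            doc_id, _ = os.path.splitext(name)
            docs[doc_id][file_type] = os.path.join(type_dir, name)
    return docs
```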
Samples
Please view the following samples:
- English Raw
- English Token
- English Treebank
- Chinese Raw
- Chinese Token
- Chinese Treebank
- Character-Based Word Alignment
- Chinese CTB Token
- CTB-Based Word Alignment
Sponsorship
This work was supported in part by the Defense Advanced Research Projects Agency, GALE Program Grant No. HR0011-06-1-0003. The content of this publication does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
Updates
None at this time.