GALE Phase 1 Arabic Newsgroup Parallel Text - Part 2
Item Name: | GALE Phase 1 Arabic Newsgroup Parallel Text - Part 2 |
Author(s): | Xiaoyi Ma, Dalal Zakhary, Stephanie Strassel |
LDC Catalog No.: | LDC2009T09 |
ISBN: | 1-58563-512-X |
ISLRN: | 131-245-215-805-9 |
DOI: | https://doi.org/10.35111/7kbj-n795 |
Release Date: | May 22, 2009 |
Member Year(s): | 2009 |
DCMI Type(s): | Text |
Data Source(s): | newsgroups |
Project(s): | GALE |
Application(s): | machine translation |
Language(s): | English, Arabic |
Language ID(s): | eng, ara |
License(s): | LDC User Agreement for Non-Members |
Online Documentation: | LDC2009T09 Documents |
Licensing Instructions: | Subscription & Standard Members, and Non-Members |
Citation: | Ma, Xiaoyi, Dalal Zakhary, and Stephanie Strassel. GALE Phase 1 Arabic Newsgroup Parallel Text - Part 2 LDC2009T09. Web Download. Philadelphia: Linguistic Data Consortium, 2009. |
Related Works: | View |
Introduction
GALE Phase 1 Arabic Newsgroup Parallel Text - Part 2 was prepared by the Linguistic Data Consortium (LDC) and contains a total of 145,000 words (263 files) of Arabic newsgroup text and its English translation, selected from thirty-five sources. Newsgroups consist of posts to electronic bulletin boards, Usenet newsgroups, discussion groups and similar forums. This release was used as training data in Phase 1 (year 1) of the DARPA-funded GALE program. This is the second of a two-part release; GALE Phase 1 Arabic Newsgroup Parallel Text - Part 1 was released in early 2009.
LDC has released the following GALE Phase 1 & 2 Arabic Parallel Text data sets:
- GALE Phase 1 Arabic Broadcast News Parallel Text - Part 1 (LDC2007T24)
- GALE Phase 1 Arabic Broadcast News Parallel Text - Part 2 (LDC2008T09)
- GALE Phase 1 Arabic Blog Parallel Text (LDC2008T02)
- GALE Phase 1 Arabic Newsgroup Parallel Text - Part 1 (LDC2009T03)
- GALE Phase 1 Arabic Newsgroup Parallel Text - Part 2 (LDC2009T09)
- GALE Phase 2 Arabic Broadcast Conversation Parallel Text Part 1 (LDC2012T06)
- GALE Phase 2 Arabic Broadcast Conversation Parallel Text Part 2 (LDC2012T14)
- GALE Phase 2 Arabic Newswire Parallel Text (LDC2012T17)
- GALE Phase 2 Arabic Broadcast News Parallel Text (LDC2012T18)
- GALE Phase 2 Arabic Web Parallel Text (LDC2013T01)
Source Data
Preparing the source data involved four stages of work: data scouting, data harvesting, formatting and data selection.
Data scouting involved manually searching the web for suitable newsgroup text. Data scouts were assigned particular topics and genres along with a production target in order to focus their web search. Formal annotation guidelines and a customized annotation toolkit helped data scouts to manage the search process and to track progress. The data scouting process is described in the GALE task specification.
Data scouts logged their decisions about potential text of interest (sites, threads and posts) to a database. A nightly process queried the annotation database and harvested all designated URLs. Whenever possible, the entire site was downloaded, not just the individual thread or post located by the data scout.
Once the text was downloaded, its format was standardized (by running various scripts) so that the data could be more easily integrated into downstream annotation processes. Original-format versions of each document were also preserved. Typically, a new script was required for each new domain name that was identified. After scripts were run, an optional manual process corrected any remaining formatting problems.
The selected documents were then reviewed for content suitability using a semi-automatic process. A statistical approach was used to rank a document's relevance to a set of already-selected documents labeled as "good." An annotator then reviewed the list of relevance-ranked documents and selected those which were suitable for a particular annotation task or for annotation in general. These newly-judged documents in turn provided additional input for the generation of new ranked lists.
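The documentation does not name the exact statistical model used for relevance ranking. A minimal sketch of one common approach to this kind of task, ranking candidates by cosine similarity to the centroid of the already-selected "good" documents under a simple bag-of-words representation, might look like the following (all function names here are illustrative, not from the GALE tooling):

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words term frequencies (a stand-in for real feature extraction)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_candidates(good_docs, candidates):
    """Rank candidate documents by similarity to the centroid of 'good' documents."""
    centroid = Counter()
    for doc in good_docs:
        centroid.update(bow(doc))
    return sorted(candidates, key=lambda d: cosine(bow(d), centroid), reverse=True)
```

An annotator would then review the top-ranked candidates, and their judgments would feed back into the "good" set for the next ranking pass.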
Manual sentence unit/segment (SU) annotation was also performed on a subset of files following LDC's Quick Rich Transcription guidelines. Three types of end-of-sentence SU were identified:
- statement SU
- question SU
- incomplete SU
Translation
After files were selected, they were reformatted into a human-readable translation format and assigned to professional translators for careful translation. Translators followed LDC's GALE translation guidelines, which describe the makeup of the translation team, the source data format, the translation data format, best practices for translating certain linguistic features (such as names and speech disfluencies) and quality control procedures applied to completed translations.
Final Data
A source file and its translation share the same file name across directories.
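Because a source file and its translation share a file name, the two sides of the corpus can be paired by basename. A minimal sketch, assuming hypothetical directory paths (the actual directory layout is described in the corpus documentation):

```python
from pathlib import Path

def pair_files(source_dir, translation_dir):
    """Match each source file to its translation by shared file name."""
    src = {p.name: p for p in Path(source_dir).iterdir() if p.is_file()}
    trans = {p.name: p for p in Path(translation_dir).iterdir() if p.is_file()}
    # Only names present on both sides form a source/translation pair.
    return [(src[n], trans[n]) for n in sorted(src.keys() & trans.keys())]
```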
TDF Format
All final data are presented in Tab Delimited Format (TDF). TDF is compatible with other transcription formats, such as the Transcriber format and AG format, making it easy to process.
Each line of a TDF file corresponds to a segment and contains 13 tab-delimited fields:
field | name | data_type |
1 | file | unicode |
2 | channel | int |
3 | start | float |
4 | end | float |
5 | speaker | unicode |
6 | speakerType | unicode |
7 | speakerDialect | unicode |
8 | transcript | unicode |
9 | section | int |
10 | turn | int |
11 | segment | int |
12 | sectionType | unicode |
13 | suType | unicode |
A source TDF file and its translation are the same except that the transcript in the source TDF is replaced by its English translation.
Some fields are inapplicable to newsgroup text, namely the channel, start time, end time and speaker dialect fields. These fields are either empty or contain placeholder values.
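The 13-field layout above lends itself to straightforward line-by-line parsing. A minimal reader sketch (field names taken from the table above; the function name is illustrative):

```python
import csv

# The 13 TDF field names, in column order (from the table above).
TDF_FIELDS = [
    "file", "channel", "start", "end", "speaker", "speakerType",
    "speakerDialect", "transcript", "section", "turn", "segment",
    "sectionType", "suType",
]

def read_tdf(path):
    """Yield one dict per segment line of a TDF file."""
    with open(path, encoding="utf-8", newline="") as f:
        reader = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
        for row in reader:
            if len(row) < len(TDF_FIELDS):
                continue  # skip header or malformed lines
            yield dict(zip(TDF_FIELDS, row[:len(TDF_FIELDS)]))
```

Aligning a source file with its translation then reduces to zipping the two segment streams, since only the transcript field differs between them.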
Encoding
All data are encoded in UTF-8.
Sponsorship
This work was supported in part by the Defense Advanced Research Projects Agency, GALE Program Grant No. HR0011-06-1-0003. The content of this publication does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
Samples
For an example of the data in this corpus, please examine these images of a source document and its translation.