GALE Phase 1 Arabic Blog Parallel Text

Item Name: GALE Phase 1 Arabic Blog Parallel Text
Author(s): Xiaoyi Ma, Dalal Zakhary, Stephanie Strassel
LDC Catalog No.: LDC2008T02
ISBN: 1-58563-462-X
ISLRN: 461-663-437-911-1
DOI: https://doi.org/10.35111/x6pk-3q51
Release Date: March 19, 2008
Member Year(s): 2008
DCMI Type(s): Text
Data Source(s): weblogs
Project(s): GALE
Application(s): machine translation, language modeling
Language(s): Standard Arabic, English
Language ID(s): arb, eng
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC2008T02 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: Ma, Xiaoyi, Dalal Zakhary, and Stephanie Strassel. GALE Phase 1 Arabic Blog Parallel Text LDC2008T02. Web Download. Philadelphia: Linguistic Data Consortium, 2008.

Introduction

This file contains the documentation for GALE Phase 1 Arabic Blog Parallel Text, Linguistic Data Consortium (LDC) catalog number LDC2008T02, ISBN 1-58563-462-X.

Blogs are posts to informal web-based journals of varying topical content. GALE Phase 1 Arabic Blog Parallel Text was prepared by the LDC and consists of 102K words (222 files) of Arabic blog text and its English translation from thirty-three sources. This release was used as training data in Phase 1 of the DARPA-funded GALE program.

LDC has released the following GALE Phase 1 & 2 Arabic Parallel Text data sets:

  • GALE Phase 1 Arabic Broadcast News Parallel Text - Part 1 (LDC2007T24)
  • GALE Phase 1 Arabic Broadcast News Parallel Text - Part 2 (LDC2008T09)
  • GALE Phase 1 Arabic Blog Parallel Text (LDC2008T02)
  • GALE Phase 1 Arabic Newsgroup Parallel Text - Part 1 (LDC2009T03)
  • GALE Phase 1 Arabic Newsgroup Parallel Text - Part 2 (LDC2009T09)
  • GALE Phase 2 Arabic Broadcast Conversation Parallel Text Part 1 (LDC2012T06)
  • GALE Phase 2 Arabic Broadcast Conversation Parallel Text Part 2 (LDC2012T14)
  • GALE Phase 2 Arabic Newswire Parallel Text (LDC2012T17)
  • GALE Phase 2 Arabic Broadcast News Parallel Text (LDC2012T18)
  • GALE Phase 2 Arabic Web Parallel Text (LDC2013T01)

Source Data

The task of preparing this corpus involved four stages of work: data scouting, data harvesting, formatting, and data selection.

Data scouting involved manually searching the web for suitable blog text. Data scouts were assigned particular topics and genres along with a production target in order to focus their web search. Formal annotation guidelines and a customized annotation toolkit helped data scouts to manage the search process and to track progress.

Data scouts logged their decisions about potential text of interest (sites, threads and posts) to a database. A nightly process queried the annotation database and harvested all designated URLs. Whenever possible, the entire site was downloaded, not just the individual thread or post located by the data scout.
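The annotation database and harvesting tools themselves are not part of this release. As a rough illustration only, the sketch below shows what such a nightly harvesting pass might look like, assuming a hypothetical SQLite table of scouted URLs:

  # Minimal sketch of a nightly harvesting pass.  The "scouted_urls" table and
  # database path are hypothetical; LDC's actual annotation database and
  # harvesting tooling are not distributed with this corpus.
  import sqlite3
  import urllib.request
  from pathlib import Path

  DB_PATH = "annotation.db"      # hypothetical database of data-scout decisions
  OUT_DIR = Path("harvested")    # hypothetical download directory

  def nightly_harvest():
      OUT_DIR.mkdir(exist_ok=True)
      conn = sqlite3.connect(DB_PATH)
      # Fetch every URL a data scout flagged that has not yet been downloaded.
      rows = conn.execute(
          "SELECT id, url FROM scouted_urls WHERE harvested = 0"
      ).fetchall()
      for row_id, url in rows:
          try:
              with urllib.request.urlopen(url, timeout=30) as resp:
                  (OUT_DIR / f"{row_id}.html").write_bytes(resp.read())
              conn.execute("UPDATE scouted_urls SET harvested = 1 WHERE id = ?", (row_id,))
          except OSError as exc:
              print(f"failed to fetch {url}: {exc}")
      conn.commit()
      conn.close()

  if __name__ == "__main__":
      nightly_harvest()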

Once the text was downloaded, its format was standardized (by running various scripts) so that the data could be more easily integrated into downstream annotation processes. Original-format versions of each document were also preserved. Typically a new script was required for each new domain name that was identified. After scripts were run, an optional manual process corrected any remaining formatting problems.

The selected documents were then reviewed for content suitability using a semi-automatic process. A statistical approach was used to rank a document's relevance to a set of already-selected documents labeled as "good". An annotator then reviewed the list of relevance-ranked documents and selected those which were suitable for a particular annotation task or for annotation in general. Those newly judged documents in turn provided additional input for the generation of new ranked lists.
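The specific ranking method is not documented in this release; as one plausible illustration, the sketch below scores candidate documents by cosine similarity between their word-count vectors and the centroid of the documents already judged good:

  # Rough illustration of centroid-based relevance ranking (not LDC's actual tool):
  # candidates are scored by cosine similarity between their word-count vectors
  # and the centroid of documents already judged "good".
  import math
  from collections import Counter

  def vectorize(text):
      return Counter(text.lower().split())

  def cosine(a, b):
      dot = sum(a[w] * b[w] for w in set(a) & set(b))
      norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
      return dot / norm if norm else 0.0

  def rank_candidates(good_docs, candidates):
      centroid = Counter()
      for doc in good_docs:
          centroid.update(vectorize(doc))
      # Highest-scoring candidates are shown to the annotator first.
      return sorted(((cosine(centroid, vectorize(d)), d) for d in candidates), reverse=True)

  # Toy example: a politics post outranks an unrelated recipe post.
  good = ["elections and parliament debate", "government policy discussion"]
  new = ["parliament passed a new policy", "my favorite couscous recipe"]
  for score, doc in rank_candidates(good, new):
      print(f"{score:.2f}  {doc}")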

Manual sentence unit/segment (SU) annotation was also performed on a subset of files following LDC's Quick Rich Transcription specification. Three types of end-of-sentence SU are identified:

  • statement SU
  • question SU
  • incomplete SU

Translation

After files were selected, they were reformatted into a human-readable translation format and assigned to professional translators for careful translation. Translators followed LDC's GALE Translation guidelines, which describe the makeup of the translation team, the source data format, the translation data format, best practices for translating certain linguistic features (such as names and speech disfluencies), and the quality control procedures applied to completed translations.

Translators were instructed to return a 50-sentence sample as soon as it was completed. The sample was reviewed by LDC's bilingual language specialists. Subsequent deliveries were subject to the quality controls described in the translation guidelines. Low-quality translations were returned to the translators for revision.

TDF Format

All final data are in Tab Delimited Format (TDF). TDF is compatible with other transcription formats, such as the Transcriber format and AG format, and it is easy to process.

Each line of a TDF file corresponds to a speech segment and contains 13 tab-delimited fields:

  #   field            data_type
 ---  ---------------  ---------
   1  file             unicode
   2  channel          int
   3  start            float
   4  end              float
   5  speaker          unicode
   6  speakerType      unicode
   7  speakerDialect   unicode
   8  transcript       unicode
   9  section          int
  10  turn             int
  11  segment          int
  12  sectionType      unicode
  13  suType           unicode
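As a convenience, the sketch below reads a TDF file into named records following the 13-field layout above; files are read as UTF-8 (see the Encoding section). The handling of header and comment lines is an assumption, not part of the documented format:

  # Read a TDF file into named records using the 13-field layout shown above.
  # Skipping a possible column-header line and ";;"-prefixed lines is an
  # assumption about the files, not part of the documented format.
  from collections import namedtuple

  FIELDS = ["file", "channel", "start", "end", "speaker", "speakerType",
            "speakerDialect", "transcript", "section", "turn", "segment",
            "sectionType", "suType"]
  Segment = namedtuple("Segment", FIELDS)

  def read_tdf(path):
      segments = []
      with open(path, encoding="utf-8") as fh:
          for line in fh:
              parts = line.rstrip("\n").split("\t")
              if len(parts) != 13 or parts[0] == "file" or parts[0].startswith(";;"):
                  continue  # skip blank, header, or comment lines
              segments.append(Segment(*parts))
      return segments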

A translation TDF file is identical to its source TDF file except that the transcript field contains the English translation of the Arabic source text.
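Building on the reader above, source and translation segments can therefore be paired positionally; the file names in the commented usage are hypothetical:

  # Pair each Arabic source segment with its English translation.  Because a
  # translation TDF mirrors its source TDF line for line, segments can be
  # aligned positionally; the index check below is just a safeguard.
  def bitext_pairs(source_path, translation_path):
      pairs = []
      for s, t in zip(read_tdf(source_path), read_tdf(translation_path)):
          assert (s.section, s.turn, s.segment) == (t.section, t.turn, t.segment)
          pairs.append((s.transcript, t.transcript))
      return pairs

  # Hypothetical file names, for illustration only:
  # for arabic, english in bitext_pairs("blog_arb.tdf", "blog_eng.tdf"):
  #     print(arabic, "=>", english)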

Encoding

All data are encoded in UTF-8.

Sponsorship

This work was supported in part by the Defense Advanced Research Projects Agency, GALE Program Grant No. HR0011-06-1-0003. The content of this publication does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.

Samples

For an example of the data in this corpus, please examine these screen captures (jpg) of the text.
