Fisher and CALLHOME Spanish--English Speech Translation

Item Name: Fisher and CALLHOME Spanish--English Speech Translation
Author(s): Matt Post, Gaurav Kumar, Adam Lopez, Damianos Karakos, Chris Callison-Burch, Sanjeev Khudanpur
LDC Catalog No.: LDC2014T23
ISBN: 1-58563-694-0
ISLRN: 221-795-248-256-0
DOI: https://doi.org/10.35111/m9me-vh08
Release Date: November 15, 2014
Member Year(s): 2014
DCMI Type(s): Text
Data Source(s): telephone conversations, transcribed speech
Application(s): speech recognition, machine translation
Language(s): Spanish, English
Language ID(s): spa, eng
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC2014T23 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: Post, Matt, et al. Fisher and CALLHOME Spanish--English Speech Translation LDC2014T23. Web Download. Philadelphia: Linguistic Data Consortium, 2014.

Introduction

Fisher and CALLHOME Spanish-English Speech Translation was developed at Johns Hopkins University and contains English reference translations and speech recognizer output (in various forms) that complement the LDC Fisher Spanish (LDC2010T04) and CALLHOME Spanish audio and transcript releases (LDC96T17). Together, they make a four-way parallel text dataset representing approximately 38 hours of speech, with defined training, development, and held-out test sets.

Data

The source data are the Fisher Spanish and CALLHOME Spanish corpora developed by LDC, comprising transcribed telephone conversations between (mostly native) Spanish speakers in a variety of dialects. The Fisher Spanish corpus consists of 819 transcribed conversations on an assortment of provided topics, primarily between strangers, yielding approximately 160 hours of speech aligned at the utterance level and approximately 1.5 million tokens of transcribed text. The CALLHOME Spanish corpus comprises 120 transcripts of spontaneous conversations, primarily between friends and family members, yielding approximately 20 hours of speech aligned at the utterance level and just over 200,000 tokens of transcribed text.
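The quoted sizes imply broadly similar speaking rates in the two corpora; a rough back-of-the-envelope check in Python, using only the figures stated above:

```python
# Token-rate check using the corpus statistics quoted above.
fisher_hours, fisher_tokens = 160, 1_500_000
callhome_hours, callhome_tokens = 20, 200_000

fisher_rate = fisher_tokens / fisher_hours        # tokens per hour of speech
callhome_rate = callhome_tokens / callhome_hours  # tokens per hour of speech

print(f"Fisher:   {fisher_rate:,.0f} tokens/hour")
print(f"CALLHOME: {callhome_rate:,.0f} tokens/hour")
```

Both work out to roughly 9,000-10,000 tokens per hour, consistent with conversational telephone speech.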

Translations were obtained by crowdsourcing on Amazon Mechanical Turk, after which the data were split into training, development, and test sets. The CALLHOME corpus defines its own data splits (train, devtest, and evltest), which were retained here. For the Fisher material, four data splits were produced: a large training section and three test sets, the latter corresponding to portions of the data for which four translations exist.
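Because the corpus is aligned at the utterance level, each split can be handled as line-parallel data across the four views (transcript, recognizer output, and translations). The sketch below uses invented sample utterances and a hypothetical record type, not the corpus's actual file layout, to show how such four-way parallel records might be assembled and validated:

```python
from dataclasses import dataclass

@dataclass
class ParallelUtterance:
    """One utterance-level record in a four-way parallel split.

    The field names are illustrative; the actual corpus distributes
    these views as separate aligned files rather than a single record.
    """
    utterance_id: str
    spanish_transcript: str   # reference transcript
    asr_output: str           # speech recognizer output
    english_translation: str  # crowdsourced reference translation

def zip_parallel(ids, transcripts, asr, translations):
    """Pair up line-parallel lists, insisting they align one-to-one."""
    if not (len(ids) == len(transcripts) == len(asr) == len(translations)):
        raise ValueError("split files are not line-parallel")
    return [ParallelUtterance(i, s, a, e)
            for i, s, a, e in zip(ids, transcripts, asr, translations)]

# Invented sample data for illustration only.
records = zip_parallel(
    ["utt_001", "utt_002"],
    ["hola buenos días", "cómo estás"],
    ["hola buenos dias", "como estas"],
    ["hello good morning", "how are you"],
)
print(len(records), "records;", records[0].english_translation)
```

The length check matters in practice: a dropped or duplicated line in any one file silently misaligns every utterance after it.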

Samples

Please view this corpus sample and mapping sample.

Updates

None at this time.

Available Media

Web Download