Fisher English Training Part 2, Speech

Item Name: Fisher English Training Part 2, Speech
Author(s): Christopher Cieri, David Graff, Owen Kimball, Dave Miller, Kevin Walker
LDC Catalog No.: LDC2005S13
ISBN: 1-58563-335-6
ISLRN: 050-970-085-362-2
Release Date: April 15, 2005
Member Year(s): 2005
DCMI Type(s): Sound
Sample Type: 2-channel ulaw
Sample Rate: 8000
Data Source(s): telephone conversations
Project(s): EARS, GALE
Application(s): speech recognition
Language(s): English
Language ID(s): eng
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC2005S13 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: Cieri, Christopher, et al. Fisher English Training Part 2, Speech LDC2005S13. Web Download. Philadelphia: Linguistic Data Consortium, 2005.


Fisher English Training Part 2, Speech was developed by the Linguistic Data Consortium (LDC) and contains 975 hours of English conversational telephone speech (CTS) in 5,849 audio files, each holding a full conversation of up to 10 minutes.

The corresponding transcripts for these speech files are available in Fisher English Training Part 2, Transcripts (LDC2005T19), which includes additional information about the speakers involved and the types of telephones used.

These two corpora represent the second half of a CTS collection that was created at LDC during 2003. The first half of the collection, released in 2004, comprises Fisher English Training Speech Part 1 Speech (LDC2004S13) and Fisher English Training Speech Part 1 Transcripts (LDC2004T19). Taken as a whole, the two parts contain 11,699 recorded telephone conversations totalling approximately 1,960 hours.

The Fisher telephone conversation collection protocol was created at LDC to address a critical need of developers building robust automatic speech recognition (ASR) systems. Previous collection protocols such as CALLFRIEND and Switchboard-II, and the resulting corpora, have been adapted for ASR research but were in fact developed for language identification and speaker identification, respectively. Although the CALLHOME protocol and corpora were developed to support ASR technology, they feature small numbers of speakers making telephone calls of relatively long duration, with narrow vocabulary across the collection; the conversations are also challengingly natural and intimate. Under the Fisher protocol, a large number of participants each converse with another participant, whom they typically do not know, for a short period of time to discuss an assigned topic. This maximizes inter-speaker variation and vocabulary breadth while also increasing formality.

Previous protocols such as CALLHOME, CALLFRIEND, and Switchboard relied on participant activity to drive the collection. Fisher is unique in being platform-driven rather than participant-driven. Participants who wish to initiate a call may do so; however, the collection platform initiates the majority of calls. Participants need only answer their phones at the times they specified when registering for the study.

To encourage a broad range of vocabulary, Fisher participants are asked to speak on an assigned topic randomly selected from a list, which changes every 24 hours and is assigned to all subjects paired on that day. Some topics are inherited or refined from previous Switchboard studies while others were developed specifically for the Fisher protocol.


The individual audio files are presented in NIST SPHERE format and contain two-channel mu-law sample data covering both sides of the call. Shorten compression has been applied to all files.
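A SPHERE file opens with a plain-ASCII header (conventionally 1,024 bytes) of "name type value" lines terminated by an end_head marker, followed by the sample data. The sketch below shows one way to read that header; it is illustrative only (the function name and the demo header values are assumptions, not part of the corpus tooling), and in practice the sph2pipe or NIST SPHERE utilities handle this, including Shorten decompression.

```python
def parse_sphere_header(raw: bytes) -> dict:
    """Parse the ASCII header that opens a NIST SPHERE file.

    Header lines have the form "name -t value", where -i marks an
    integer, -r a real, and -sN a string of N characters; the header
    ends with the line "end_head".
    """
    lines = raw[:1024].decode("ascii", errors="replace").splitlines()
    if not lines or lines[0].strip() != "NIST_1A":
        raise ValueError("not a NIST SPHERE header")
    fields = {}
    for line in lines[2:]:           # lines[1] holds the header size
        line = line.strip()
        if line == "end_head":
            break
        parts = line.split(None, 2)  # name, type flag, value
        if len(parts) != 3:
            continue
        name, ftype, value = parts
        if ftype == "-i":
            fields[name] = int(value)
        elif ftype == "-r":
            fields[name] = float(value)
        else:                        # -sN: string value
            fields[name] = value
    return fields


# Hypothetical header mirroring this corpus's parameters
# (2-channel ulaw at 8000 Hz).
demo = (b"NIST_1A\n   1024\n"
        b"channel_count -i 2\n"
        b"sample_rate -i 8000\n"
        b"sample_coding -s4 ulaw\n"
        b"end_head\n").ljust(1024, b"\x00")
info = parse_sphere_header(demo)
```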

Across the entire collection, the participant gender breakdown is 6,813 female and 5,104 male speakers.

The telephone calls were recorded digitally from a T-1 trunk line terminating at a host computer at the LDC (the "robot operator"). The T-1 circuit dedicated to Fisher English collection was configured so that some lines serviced people who dialed in to the system, while other lines were used for dialing out to people according to the hours of availability they provided at enrollment. Whenever any two active lines (dial-in or dial-out) reached the point where the participants on them were ready to proceed with a conversation, the robot operator bridged the two lines, announced the topic of the day to both parties, and began recording by copying the digital mu-law sample data from each line directly to disk files.
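Each stored byte is an 8-bit G.711 mu-law code word; converting it to 16-bit linear PCM for processing follows the standard expansion rule. A minimal sketch of that expansion (the function name is illustrative, not part of the corpus tooling):

```python
def ulaw_to_linear(code: int) -> int:
    """Expand one 8-bit G.711 mu-law code word to a 16-bit linear sample."""
    code = ~code & 0xFF              # code words are stored bit-inverted
    sign = code & 0x80               # top bit selects the sign
    exponent = (code >> 4) & 0x07    # 3-bit segment number
    mantissa = code & 0x0F           # 4-bit step within the segment
    # Reconstruct the magnitude, removing the 0x84 encoding bias.
    sample = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -sample if sign else sample
```

For example, the code word 0xFF expands to 0, and 0x80 expands to +32124, the largest magnitude mu-law can represent.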

Data collection and transcription were sponsored by DARPA and the U.S. Department of Defense, as part of the EARS project for research and development in automatic speech recognition.


