Fisher English Training Speech Part 1 Transcripts
|Christopher Cieri, David Graff, Owen Kimball, Dave Miller, Kevin Walker
|LDC Catalog No.: LDC2004T19
|December 15, 2004
|Cieri, Christopher, et al. Fisher English Training Speech Part 1 Transcripts LDC2004T19. Web Download. Philadelphia: Linguistic Data Consortium, 2004.
Fisher English Training Speech Part 1 Transcripts was developed by the Linguistic Data Consortium (LDC) and contains time-aligned transcript data for 5,850 telephone conversations (984 hours) in English. In addition to the transcriptions, there is a complete set of tables describing the speakers, the properties of the telephone calls, and the set of topics that were used to initiate the conversations. The corresponding speech files for these transcripts are contained in Fisher English Training Speech Part 1 Speech (LDC2004S13).
These two corpora represent the first half of a conversational telephone speech (CTS) collection that was created at LDC during 2003. The second half of the collection, released in 2005, comprises Fisher English Training Part 2, Transcripts (LDC2005T19) and Fisher English Training Part 2, Speech (LDC2005S13). Taken as a whole, the two parts contain 11,699 recorded telephone conversations totalling approximately 1,960 hours.
The Fisher telephone conversation collection protocol was created at LDC to address a critical need of developers trying to build robust automatic speech recognition (ASR) systems. Previous collection protocols, such as CALLFRIEND and Switchboard-II, and the resulting corpora have been adapted for ASR research but were in fact developed for language and speaker identification, respectively. Although the CALLHOME protocol and corpora were developed to support ASR technology, they feature small numbers of speakers making telephone calls of relatively long duration with narrow vocabulary across the collection. CALLHOME conversations are challengingly natural and intimate. Under the Fisher protocol, a large number of participants are allowed to make up to three 10-minute calls. For each call, a participant is paired with another participant, whom they typically do not know, to discuss assigned topics. This maximizes inter-speaker variation and vocabulary breadth while also increasing formality.
Previous protocols such as CALLHOME, CALLFRIEND, and Switchboard relied upon participant activity to drive the collection. Fisher is unique in being platform-driven rather than participant-driven. Participants who wish to initiate a call may do so, however, the collection platform initiates the majority of calls. Participants need only answer their phones at the times they specified when registering for the study.
To encourage a broad range of vocabulary, Fisher participants are asked to speak about an assigned topic chosen from a randomly generated list that changes every 24 hours; all participants paired on a given day are assigned topics from that day's list. Some topics are inherited or refined from previous Switchboard studies, while others were developed specifically for the Fisher protocol.
The individual audio files are presented in NIST SPHERE format and contain two-channel mu-law sample data; Shorten compression has been applied to all files. Transcription files are stored as plain-text (.txt) files that include speaker turns and timestamps.
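As a rough illustration of working with the time-aligned transcripts, the sketch below parses turn lines of the general shape "start end channel: text" and skips "#" comment headers. The exact field layout, file naming, and header conventions are assumptions here; the readme files in the online documentation are the authoritative specification.

```python
import re

# Assumed turn-line format for illustration: "<start> <end> <channel>: <text>",
# e.g. "0.71 2.10 A: hello", with header/comment lines beginning with "#".
LINE_RE = re.compile(r"^(\d+\.\d+)\s+(\d+\.\d+)\s+([AB]):\s*(.*)$")

def parse_transcript(text):
    """Return a list of (start_sec, end_sec, channel, utterance) tuples."""
    turns = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and header comments
        m = LINE_RE.match(line)
        if m:
            start, end, channel, utt = m.groups()
            turns.append((float(start), float(end), channel, utt))
    return turns

# Hypothetical excerpt, not taken from the corpus itself.
sample = """# fe_03_xxxxx.txt (hypothetical excerpt)
0.71 2.10 A: hello
2.30 4.85 B: hi how are you
"""
print(parse_transcript(sample))
```

With both channels' turns in hand, sorting the combined list by start time reconstructs the conversation flow across the two speakers.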
Across the entire collection, the gender breakdown of participants is 6,813 female and 5,104 male.
Overall, about 12% of the conversations were transcribed at LDC; the rest were transcribed by BBN and WordWave using a significantly different approach to the task. A central goal in both efforts was to maximize the speed and economy of the transcription process, which in turn involved trade-offs in certain aspects of mark-up detail and quality control that had been common in previous, smaller corpora. For more details about both the BBN/WordWave and LDC transcription approaches, please refer to the readme files in the online documentation.
Data collection and transcription were sponsored by DARPA and the U.S. Department of Defense, as part of the EARS project for research and development in automatic speech recognition.
For an example of the data in this corpus, please view this sample (TXT).
As of 6/14/2017, 'fe_03_p1_calldata.tbl' was updated to correct mislabeled topics for some calls. All downloads made after this date will have the corrected file.