2008 NIST Speaker Recognition Evaluation Training Set Part 1
Item Name: 2008 NIST Speaker Recognition Evaluation Training Set Part 1
Author(s): NIST Multimodal Information Group
LDC Catalog No.: LDC2011S05
ISBN: 1-58563-587-1
ISLRN: 531-416-977-177-6
DOI: https://doi.org/10.35111/pr4h-n676
Release Date: August 15, 2011
Member Year(s): 2011
DCMI Type(s): Sound
Sample Type: ulaw
Sample Rate: 8000 Hz
Data Source(s): microphone speech, telephone speech
Project(s): NIST SRE
Application(s): speaker identification
Language(s): Urdu, Tigrinya, Thai, Tagalog, Spanish, Russian, Panjabi, Min Nan Chinese, Lao, Korean, Khmer, Georgian, Japanese, Italian, Hindi, Persian, English, Mandarin Chinese, Bengali, Egyptian Arabic, Moroccan Arabic, Dari, Iranian Persian, Arabic
Language ID(s): urd, tir, tha, tgl, spa, rus, pan, nan, lao, kor, khm, kat, jpn, ita, hin, fas, eng, cmn, ben, arz, ary, prs, pes, ara
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC2011S05 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: NIST Multimodal Information Group. 2008 NIST Speaker Recognition Evaluation Training Set Part 1 LDC2011S05. Web Download. Philadelphia: Linguistic Data Consortium, 2011.
Introduction
2008 NIST Speaker Recognition Evaluation Training Set Part 1 was developed by LDC and NIST (National Institute of Standards and Technology). It contains 640 hours of multilingual telephone speech and English interview speech along with time-aligned transcripts and other materials used as training data in the 2008 NIST Speaker Recognition Evaluation (SRE).
SRE is part of an ongoing series of evaluations conducted by NIST. These evaluations are an important contribution to the direction of research efforts and the calibration of technical capabilities. They are intended to be of interest to all researchers working on the general problem of text-independent speaker recognition. To this end, the evaluation is designed to be simple, to focus on core technology issues, to be fully supported, and to be accessible to those wishing to participate.
The 2008 evaluation was distinguished from prior evaluations, in particular those in 2005 and 2006, by including not only conversational telephone speech but also conversational speech of comparable duration recorded over a microphone channel in an interview scenario.
Data
The speech data in this release was collected in 2007 by LDC at its Human Subjects Collection facility in Philadelphia and by the International Computer Science Institute (ICSI) at the University of California, Berkeley. The collection was part of the Mixer 5 project, which was designed to support the development of robust speaker recognition technology by providing carefully collected and audited speech from a large pool of speakers recorded simultaneously across numerous microphones and in different communicative situations and/or in multiple languages.
Mixer participants were native English and bilingual English speakers. The telephone speech in this corpus is predominantly English, but it also includes the languages identified above. All interview segments are in English. Telephone speech accounts for approximately 565 hours of the data; microphone speech accounts for the remaining 75 hours.
The telephone speech segments are excerpts of approximately 5 minutes and 10 seconds taken from longer original conversations. The interview material consists of short interview segments of approximately three minutes excerpted from longer interview sessions. As in prior evaluations, intervals of silence were not removed. Also, two separate conversation channels are provided (to aid systems in echo cancellation, dialog analysis, etc.). Approximately six files distributed as part of SRE08 consist of a 1024-byte header with no audio; these files were not included in the trials or keys distributed in the SRE08 aggregate corpus.
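Because a handful of files consist of only the 1024-byte header, a quick size check can flag them before processing. Below is a minimal sketch; the directory path and the .sph extension are illustrative assumptions, not corpus specifics.

```python
from pathlib import Path

SPHERE_HEADER_SIZE = 1024  # fixed-length NIST SPHERE ASCII header

def find_header_only_files(root):
    """Return audio files whose size leaves no room for samples after the header."""
    return [p for p in Path(root).rglob("*.sph")
            if p.stat().st_size <= SPHERE_HEADER_SIZE]

# Hypothetical usage: the directory name is illustrative only.
for empty in find_header_only_files("sre08_train_part1/data"):
    print(f"header-only file, skipping: {empty}")
```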
English language transcripts in .cfm format were produced using an automatic speech recognition (ASR) system.
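The precise .cfm layout is described in the corpus documentation. As an illustration only, the sketch below parses the common CTM-style convention for time-aligned ASR output (one token per line: source file, channel, start time, duration, word); this field layout is an assumption, not the documented .cfm specification.

```python
from dataclasses import dataclass

@dataclass
class Token:
    source: str      # audio file identifier
    channel: str     # conversation side, e.g. "A" or "B"
    start: float     # token start time in seconds
    duration: float  # token duration in seconds
    word: str

def parse_time_aligned(lines):
    """Parse CTM-style time-aligned tokens; this layout is assumed --
    consult the corpus documentation for the actual .cfm format."""
    tokens = []
    for line in lines:
        if line.startswith(";;"):
            continue  # comment line
        parts = line.split()
        if len(parts) < 5:
            continue  # skip malformed lines
        src, chan, start, dur, word = parts[:5]
        tokens.append(Token(src, chan, float(start), float(dur), word))
    return tokens
```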
Samples
For an example of the data contained in this corpus, please listen to this sample (WAV).
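For programmatic inspection of the audio, the sketch below decodes the catalog's 8 kHz u-law samples to 16-bit linear PCM with NumPy. It assumes an uncompressed NIST SPHERE file (a 1024-byte ASCII header followed by sample data); the file name is hypothetical, and compressed files would first need to be decompressed (for example with NIST's sph2pipe tool).

```python
import numpy as np

MULAW_BIAS = 0x84  # 132, per ITU-T G.711

def mulaw_to_pcm16(raw):
    """Decode 8-bit G.711 mu-law bytes to 16-bit linear PCM samples."""
    u = ~np.frombuffer(raw, dtype=np.uint8)  # G.711 stores samples bit-inverted
    sign = (u & 0x80) != 0
    exponent = (u >> 4) & 0x07
    mantissa = (u & 0x0F).astype(np.int32)
    magnitude = (((mantissa << 3) + MULAW_BIAS) << exponent) - MULAW_BIAS
    return np.where(sign, -magnitude, magnitude).astype(np.int16)

def read_sphere_ulaw(path):
    """Read an uncompressed mu-law SPHERE file, skipping its 1024-byte header."""
    with open(path, "rb") as f:
        f.seek(1024)  # fixed-size ASCII header
        return mulaw_to_pcm16(f.read())

# Hypothetical usage; for two-channel files the samples are interleaved,
# so divide the duration by the channel count.
samples = read_sphere_ulaw("example.sph")
print(f"{len(samples) / 8000:.1f} seconds of 8 kHz audio")
```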
Updates
None at this time.