2008 NIST Speaker Recognition Evaluation Test Set

Item Name: 2008 NIST Speaker Recognition Evaluation Test Set
Author(s): NIST Multimodal Information Group
LDC Catalog No.: LDC2011S08
ISBN: 1-58563-594-4
ISLRN: 289-720-923-302-6
Release Date: October 21, 2011
Member Year(s): 2011
DCMI Type(s): Sound
Sample Type: ulaw
Sample Rate: 8000
Data Source(s): telephone speech, microphone speech
Project(s): NIST SRE
Application(s): speech recognition
Language(s): Yue Chinese, Wu Chinese, Vietnamese, Uzbek, Urdu, Thai, Tagalog, Tamil, Russian, Panjabi, Min Nan Chinese, Lao, Korean, Japanese, Italian, Hindi, Persian, Mandarin Chinese, Bengali, Egyptian Arabic, Moroccan Arabic, Dari, Iranian Persian, English, Chinese, Arabic
Language ID(s): yue, wuu, vie, uzb, urd, tha, tgl, tam, rus, pan, nan, lao, kor, jpn, ita, hin, fas, cmn, ben, arz, ary, prs, pes, eng, zho, ara
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC2011S08 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: NIST Multimodal Information Group. 2008 NIST Speaker Recognition Evaluation Test Set LDC2011S08. Web Download. Philadelphia: Linguistic Data Consortium, 2011.
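The metadata above notes that the audio is 8 kHz μ-law (u-law). As a minimal sketch of how such samples can be decoded to 16-bit linear PCM, the following implements the standard G.711 μ-law expansion; the sample values in the demo buffer are hypothetical, not taken from the corpus:

```python
def ulaw_to_linear(byte):
    """Decode one G.711 mu-law byte to a 16-bit linear PCM sample."""
    byte = ~byte & 0xFF              # mu-law bytes are stored bit-complemented
    sign = byte & 0x80
    exponent = (byte >> 4) & 0x07
    mantissa = byte & 0x0F
    sample = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -sample if sign else sample

# Hypothetical mu-law frame: silence, positive full scale, negative full scale.
raw = bytes([0xFF, 0x80, 0x00])
pcm = [ulaw_to_linear(b) for b in raw]   # -> [0, 32124, -32124]
```

Standard audio libraries can of course do this conversion as well; the sketch is only meant to make the sample format concrete.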


2008 NIST Speaker Recognition Evaluation Test Set, Linguistic Data Consortium (LDC) catalog number LDC2011S08 and ISBN 1-58563-594-4, was developed by LDC and NIST (National Institute of Standards and Technology). It contains 942 hours of multilingual telephone speech and English interview speech along with transcripts and other materials used as test data in the 2008 NIST Speaker Recognition Evaluation (SRE).


NIST SRE is part of an ongoing series of evaluations conducted by NIST. These evaluations are an important contribution to the direction of research efforts and the calibration of technical capabilities. They are intended to be of interest to all researchers working on the general problem of text-independent speaker recognition. To this end, the evaluation is designed to be simple, to focus on core technology issues, to be fully supported, and to be accessible to those wishing to participate.

The 2008 evaluation was distinguished from prior evaluations, in particular those of 2005 and 2006, by including not only conversational telephone speech but also conversational speech of comparable duration recorded over a microphone channel in an interview scenario.

LDC previously released the 2008 NIST SRE Training Set in two parts as LDC2011S05 and LDC2011S07.

Additional documentation is available at the NIST web site for the 2008 SRE and within the 2008 SRE Evaluation Plan.


The speech data in this release was collected in 2007 by LDC at its Human Subjects Data Collection Laboratories in Philadelphia and by the International Computer Science Institute (ICSI) at the University of California, Berkeley. The collection was part of the Mixer 5 project, which was designed to support the development of robust speaker recognition technology by providing carefully collected and audited speech from a large pool of speakers recorded simultaneously across numerous microphones and in different communicative situations and/or in multiple languages. Mixer participants were native English and bilingual English speakers. The telephone speech in this corpus is predominantly English but also includes the languages listed above. All interview segments are in English. Telephone speech accounts for approximately 368 hours of the data; microphone speech accounts for the remaining 574 hours.

The telephone speech segments include two-channel excerpts of approximately 10 seconds and 5 minutes in duration, as well as summed-channel excerpts of approximately 5 minutes. The microphone excerpts are either 3 or 8 minutes in length. As in prior evaluations, intervals of silence were not removed. Approximately six files distributed as part of SRE08 consist of a 1024-byte header with no audio data; these files were not included in the trials or keys distributed in the SRE08 aggregate corpus.
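Since a handful of files are a bare 1024-byte header with no audio, users may want to flag them before processing. The following is a minimal sketch that scans a hypothetical corpus directory for `.sph` files whose size is exactly one header; the directory path and extension are assumptions, not paths documented in the release:

```python
import os

HEADER_SIZE = 1024  # header size of the audio-less files noted above


def find_header_only_files(root):
    """Return paths of .sph files that are exactly one header long (no audio)."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".sph"):
                path = os.path.join(dirpath, name)
                if os.path.getsize(path) == HEADER_SIZE:
                    hits.append(path)
    return hits


if __name__ == "__main__":
    # "data/" is a hypothetical location for the extracted corpus.
    for path in find_header_only_files("data"):
        print("header-only file:", path)
```

A check like this is cheap (file sizes only, no audio parsing) and can be run once after extracting the corpus.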

English language transcripts in .cfm format were produced using an automatic speech recognition (ASR) system.


