2006 NIST Speaker Recognition Evaluation Test Set Part 2

Item Name: 2006 NIST Speaker Recognition Evaluation Test Set Part 2
Author(s): NIST Multimodal Information Group
LDC Catalog No.: LDC2012S01
ISBN: 1-58563-602-9
ISLRN: 125-164-075-830-3
DOI: https://doi.org/10.35111/9d4j-4908
Release Date: January 19, 2012
Member Year(s): 2012
DCMI Type(s): Sound
Sample Type: ulaw
Sample Rate: 8000
Data Source(s): microphone speech, telephone speech
Project(s): NIST SRE
Application(s): speaker identification
Language(s): Yue Chinese, Urdu, Thai, Spanish, Russian, Korean, Hindi, Persian, English, Mandarin Chinese, Bengali, Standard Arabic, Dari, Iranian Persian, Chinese, Arabic
Language ID(s): yue, urd, tha, spa, rus, kor, hin, fas, eng, cmn, ben, arb, prs, pes, zho, ara
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC2012S01 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: NIST Multimodal Information Group. 2006 NIST Speaker Recognition Evaluation Test Set Part 2 LDC2012S01. Web Download. Philadelphia: Linguistic Data Consortium, 2012.


2006 NIST Speaker Recognition Evaluation Test Set Part 2 was developed by LDC and NIST (National Institute of Standards and Technology). It contains 568 hours of conversational telephone and microphone speech in English, Arabic, Bengali, Chinese, Farsi, Hindi, Korean, Russian, Spanish, Thai, and Urdu, along with associated English transcripts, used as test data in the NIST-sponsored 2006 Speaker Recognition Evaluation (SRE).

The ongoing series of yearly SRE evaluations conducted by NIST is intended to be of interest to researchers working on the general problem of text-independent speaker recognition. To that end, the evaluations are designed to be simple, to focus on core technology issues, to be fully supported, and to be accessible to those wishing to participate.

The task of the 2006 SRE was speaker detection: determining whether a specified speaker is speaking during a given segment of conversational telephone speech. The task was divided into 15 distinct tests, each pairing one of five training conditions with one of four test conditions. Further information about the test conditions and additional documentation is available in the 2006 SRE Evaluation Plan.
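Speaker detection systems in the SRE series are scored with a weighted detection cost function over miss and false-alarm rates. As a rough illustration only, the sketch below uses parameter values (CMiss = 10, CFA = 1, PTarget = 0.01) of the kind typically specified in NIST SRE evaluation plans; consult the 2006 SRE Evaluation Plan for the actual scoring rules.

```python
def detection_cost(p_miss: float, p_fa: float,
                   c_miss: float = 10.0, c_fa: float = 1.0,
                   p_target: float = 0.01) -> float:
    """NIST-style detection cost function (parameter values assumed).

    Weights the miss rate and false-alarm rate by their costs and by
    the prior probability that a trial involves the target speaker.
    """
    return c_miss * p_miss * p_target + c_fa * p_fa * (1.0 - p_target)

# A hypothetical system missing 10% of target trials with a 1%
# false-alarm rate on non-target trials:
print(round(detection_cost(0.10, 0.01), 4))  # 0.0199
```

Systems are typically compared on the minimum of this cost over all decision thresholds, alongside detection error trade-off (DET) curves.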

LDC previously published 2006 NIST Speaker Recognition Evaluation Training Set (LDC2011S09) and 2006 NIST Speaker Recognition Evaluation Test Set Part 1 (LDC2011S10).


The speech data in this release was collected by LDC as part of the Mixer project, in particular Mixer Phases 1, 2, and 3. The Mixer project supports the development of robust speaker recognition technology by providing carefully collected and audited speech from a large pool of speakers recorded simultaneously across numerous microphones and in different communicative situations and/or in multiple languages. The data is mostly English speech, but includes some speech in Arabic, Bengali, Chinese, Farsi, Hindi, Korean, Russian, Spanish, Thai, and Urdu.

The telephone speech segments are multi-channel data collected simultaneously from a number of auxiliary microphones. The files are organized into four types: two-channel excerpts of approximately 10 seconds; two-channel conversations of approximately five minutes; summed-channel conversations, also of approximately five minutes; and two-channel conversations in which the usual telephone speech in the putative target speaker channel is replaced by auxiliary microphone data, again of approximately five minutes in length.

The speech files are stored as 8-bit u-law speech signals in separate SPHERE files. In addition to the standard header fields, the SPHERE header for each file contains some auxiliary information such as the language of the conversation.
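A SPHERE file begins with a plain-text header (magic string `NIST_1A`, a header size, then `key type value` triples terminated by `end_head`) ahead of the audio samples. The following minimal sketch parses such a header; the standard fields `sample_rate` and `sample_coding` appear in typical SPHERE files, while the `conversation_language` field name and the example values are assumptions for illustration, not taken from this corpus's actual headers.

```python
def parse_sphere_header(raw: bytes) -> dict:
    """Parse the plain-text NIST SPHERE header into a dict."""
    lines = raw.decode("ascii", errors="replace").splitlines()
    assert lines[0].strip() == "NIST_1A"   # SPHERE magic marker
    fields = {}
    for line in lines[2:]:                 # line 1 holds the header size
        if line.strip() == "end_head":
            break
        key, type_code, value = line.split(None, 2)
        # "-i" marks integer fields; others are kept as strings.
        fields[key] = int(value) if type_code.startswith("-i") else value
    return fields

# Synthetic header in the style of this corpus (field values assumed):
example = (b"NIST_1A\n   1024\n"
           b"sample_rate -i 8000\n"
           b"sample_coding -s4 ulaw\n"
           b"conversation_language -s7 english\n"
           b"end_head\n")
print(parse_sphere_header(example)["sample_rate"])  # 8000
```

In practice a library such as NIST's SPHERE tools (or `sox` with SPHERE support) would be used rather than hand-rolled parsing.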

English language time-aligned transcripts in .ctm format were produced using an automatic speech recognition (ASR) system.
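CTM is a simple line-oriented format in which each line carries a file identifier, a channel, a start time, a duration, and a word (optionally followed by a confidence score). A minimal parsing sketch, with a hypothetical file identifier invented for illustration:

```python
from typing import NamedTuple

class CtmToken(NamedTuple):
    file_id: str
    channel: str
    start: float     # seconds
    duration: float  # seconds
    word: str

def parse_ctm(text: str) -> list[CtmToken]:
    """Parse CTM lines: <file> <channel> <start> <dur> <word> [...]."""
    tokens = []
    for line in text.splitlines():
        if not line.strip() or line.startswith(";;"):
            continue                       # skip blanks and comments
        parts = line.split()
        tokens.append(CtmToken(parts[0], parts[1],
                               float(parts[2]), float(parts[3]), parts[4]))
    return tokens

# Hypothetical lines in the style of ASR output (file id invented):
sample = "xabc A 0.21 0.30 hello\nxabc A 0.55 0.42 there\n"
for tok in parse_ctm(sample):
    print(tok.word, tok.start)
```

Time-aligned tokens of this kind make it straightforward to locate a given word within the corresponding SPHERE audio segment.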


For an example of the data contained in this corpus, listen to this sample (WAV).


Updates: None at this time.
