2006 NIST Speaker Recognition Evaluation Test Set Part 1
Item Name: | 2006 NIST Speaker Recognition Evaluation Test Set Part 1 |
Author(s): | NIST Multimodal Information Group |
LDC Catalog No.: | LDC2011S10 |
ISBN: | 1-58563-600-2 |
ISLRN: | 293-615-042-213-8 |
DOI: | https://doi.org/10.35111/rq3p-1h62 |
Release Date: | December 15, 2011 |
Member Year(s): | 2011 |
DCMI Type(s): | Sound |
Sample Type: | ulaw |
Sample Rate: | 8000 |
Data Source(s): | microphone speech, telephone speech |
Project(s): | NIST SRE |
Application(s): | speaker identification |
Language(s): | Yue Chinese, Urdu, Thai, Spanish, Russian, Korean, Hindi, Persian, English, Mandarin Chinese, Bengali, Standard Arabic, Dari, Iranian Persian, Chinese, Arabic |
Language ID(s): | yue, urd, tha, spa, rus, kor, hin, fas, eng, cmn, ben, arb, prs, pes, zho, ara |
License(s): | LDC User Agreement for Non-Members |
Online Documentation: | LDC2011S10 Documents |
Licensing Instructions: | Subscription & Standard Members, and Non-Members |
Citation: | NIST Multimodal Information Group. 2006 NIST Speaker Recognition Evaluation Test Set Part 1 LDC2011S10. Web Download. Philadelphia: Linguistic Data Consortium, 2011. |
Introduction
2006 NIST Speaker Recognition Evaluation Test Set Part 1 was developed by the Linguistic Data Consortium (LDC) and NIST (National Institute of Standards and Technology). It contains 437 hours of conversational telephone and microphone speech in English, Arabic, Bengali, Chinese, Farsi, Hindi, Korean, Russian, Spanish, Thai, and Urdu and associated English transcripts used as test data in the NIST-sponsored 2006 Speaker Recognition Evaluation (SRE).
The ongoing series of yearly SRE evaluations conducted by NIST is intended to be of interest to researchers working on the general problem of text-independent speaker recognition. To that end, the evaluations are designed to be simple, to focus on core technology issues, to be fully supported, and to be accessible to those wishing to participate.
The task of the 2006 SRE was speaker detection, that is, to determine whether a specified speaker is speaking during a given segment of conversational telephone speech. The task was divided into 15 distinct tests, each involving one of five training conditions and one of four test conditions. Further information about the test conditions and additional documentation is available in the 2006 SRE Evaluation Plan.
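The basic unit of the evaluation is a trial: a hypothesized target speaker (defined by the training data) paired with a test segment, for which a system must output a detection score and a hard target/nontarget decision. The sketch below is a hypothetical illustration of that trial structure, not part of the evaluation plan; the identifiers, scores, and threshold are placeholders.

```python
# Hypothetical illustration of speaker-detection trials: each trial pairs a
# target speaker model with a test segment; the system returns a score and a
# yes/no decision against a threshold. All values here are placeholders.
def decide(score: float, threshold: float = 0.0) -> bool:
    """Accept the trial (target speaker present) when the score exceeds the threshold."""
    return score > threshold

# (model_id, test_segment, system score) -> decision
trials = [("spk_0001", "tabc.sph", 1.7), ("spk_0002", "txyz.sph", -0.4)]
for model_id, segment, score in trials:
    print(model_id, segment, "target" if decide(score) else "nontarget")
```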
LDC also previously released 2006 NIST Speaker Recognition Evaluation Training Set (LDC2011S09) and later released 2006 NIST Speaker Recognition Evaluation Test Set Part 2 (LDC2012S01).
Data
The speech data in this release was collected by LDC as part of the Mixer project, in particular Mixer Phases 1, 2, and 3. The Mixer project supports the development of robust speaker recognition technology by providing carefully collected and audited speech from a large pool of speakers recorded simultaneously across numerous microphones and in different communicative situations and/or in multiple languages. The data is mostly English speech, but includes some speech in Arabic, Bengali, Chinese, Farsi, Hindi, Korean, Russian, Spanish, Thai, and Urdu.
The telephone speech segments are multi-channel data collected simultaneously from a number of auxiliary microphones. The files are organized into four types: two-channel excerpts of approximately 10 seconds, two-channel conversations of approximately five minutes, summed-channel conversations also of approximately five minutes, and two-channel conversations in which the usual telephone speech in the putative target speaker's channel is replaced by auxiliary microphone data. The auxiliary microphone conversations are also approximately five minutes long.
The speech files are stored as 8-bit u-law speech signals in separate SPHERE files. In addition to the standard header fields, the SPHERE header for each file contains some auxiliary information such as the language of the conversation.
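As a rough illustration of how those header fields can be inspected, the following Python sketch parses the plain-text SPHERE header directly; it assumes the standard NIST_1A layout, and the exact auxiliary field names (for example, the one holding the conversation language) may differ in this corpus.

```python
# Minimal sketch: read the plain-text header of a NIST SPHERE (.sph) file.
# Assumes the standard NIST_1A layout; auxiliary field names are corpus-specific.
def read_sphere_header(path):
    with open(path, "rb") as f:
        if f.readline().strip() != b"NIST_1A":
            raise ValueError("not a SPHERE file")
        header_size = int(f.readline().strip())   # total header size in bytes
        f.seek(0)
        header_text = f.read(header_size).decode("ascii", errors="replace")

    fields = {}
    for line in header_text.splitlines()[2:]:     # skip the magic and size lines
        if line.strip() == "end_head":
            break
        name, _type, value = line.split(None, 2)  # e.g. "sample_rate -i 8000"
        fields[name] = value
    return fields

hdr = read_sphere_header("example.sph")
print(hdr.get("sample_rate"), hdr.get("sample_coding"), hdr.get("channel_count"))
```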
English language transcripts in .ctm format were produced using an automatic speech recognition (ASR) system.
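The CTM convention is one space-delimited line per recognized token. The sketch below assumes the common layout (file, channel, start time, duration, word, optional confidence); the confidence column and the ";;" comment convention are assumptions, not verified against these particular files.

```python
# Minimal sketch: parse a .ctm transcript assuming the common NIST layout
# "file channel start duration word [confidence]".
from collections import namedtuple

CtmToken = namedtuple("CtmToken", "file channel start duration word confidence")

def parse_ctm(path):
    tokens = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(";;"):   # skip blanks and comments
                continue
            parts = line.split()
            conf = float(parts[5]) if len(parts) > 5 else None
            tokens.append(CtmToken(parts[0], parts[1], float(parts[2]),
                                   float(parts[3]), parts[4], conf))
    return tokens
```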
Samples
For an example of the data contained in this corpus, please listen to this sample (WAV).
Updates
None at this time.