2008 NIST Speaker Recognition Evaluation Training Set Part 2
Item Name: | 2008 NIST Speaker Recognition Evaluation Training Set Part 2 |
Author(s): | NIST Multimodal Information Group |
LDC Catalog No.: | LDC2011S07 |
ISBN: | 1-58563-591-X |
ISLRN: | 956-489-013-269-1 |
DOI: | https://doi.org/10.35111/04ab-xb41 |
Release Date: | September 15, 2011 |
Member Year(s): | 2011 |
DCMI Type(s): | Sound |
Sample Type: | ulaw |
Sample Rate: | 8000 |
Data Source(s): | telephone speech, microphone speech |
Project(s): | NIST SRE |
Application(s): | speaker identification |
Language(s): | Urdu, Tigrinya, Thai, Tagalog, Spanish, Russian, Panjabi, Min Nan Chinese, Lao, Korean, Khmer, Georgian, Japanese, Italian, Hindi, Persian, English, Mandarin Chinese, Bengali, Egyptian Arabic, Moroccan Arabic, Dari, Iranian Persian, Arabic |
Language ID(s): | urd, tir, tha, tgl, spa, rus, pan, nan, lao, kor, khm, kat, jpn, ita, hin, fas, eng, cmn, ben, arz, ary, prs, pes, ara |
License(s): | LDC User Agreement for Non-Members |
Online Documentation: | LDC2011S07 Documents |
Licensing Instructions: | Subscription & Standard Members, and Non-Members |
Citation: | NIST Multimodal Information Group. 2008 NIST Speaker Recognition Evaluation Training Set Part 2 LDC2011S07. Web Download. Philadelphia: Linguistic Data Consortium, 2011. |
Related Works: | View |
Introduction
2008 NIST Speaker Recognition Evaluation Training Set Part 2, Linguistic Data Consortium (LDC) catalog number LDC2011S07 and ISBN 1-58563-591-X, was developed by LDC and NIST (National Institute of Standards and Technology). It contains 950 hours of multilingual telephone speech and English interview speech along with transcripts and other materials used as training data in the 2008 NIST Speaker Recognition Evaluation (SRE).
SRE is part of an ongoing series of evaluations conducted by NIST. These evaluations are an important contribution to the direction of research efforts and the calibration of technical capabilities. They are intended to be of interest to all researchers working on the general problem of text-independent speaker recognition. To this end, the evaluation is designed to be simple, to focus on core technology issues, to be fully supported, and to be accessible to those wishing to participate.
The 2008 evaluation was distinguished from prior evaluations, in particular those in 2005 and 2006, by including not only conversational telephone speech data but also conversational speech data of comparable duration recorded over a microphone channel involving an interview scenario.
Additional documentation is in the 2008 SRE Evaluation Plan.
Data
The speech data in this release was collected in 2007 by LDC at its Human Subjects Data Collection Laboratories in Philadelphia and by the International Computer Science Institute (ICSI) at the University of California, Berkeley. This collection was part of the Mixer 5 project, which was designed to support the development of robust speaker recognition technology by providing carefully collected and audited speech from a large pool of speakers recorded simultaneously across numerous microphones and in different communicative situations and/or in multiple languages. Mixer participants were native English speakers and bilingual English speakers. The telephone speech in this corpus is predominantly English, but it also includes the languages listed above. All interview segments are in English. Telephone speech accounts for approximately 523 hours of the data; microphone speech accounts for the remaining 427 hours.
The telephone speech segments include summed-channel excerpts of approximately 5 minutes taken from longer original conversations. The interview material includes single-channel conversational interview segments of at least 8 minutes taken from longer interview sessions. As in prior evaluations, intervals of silence were not removed.
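Because the audio is 8 kHz µ-law (one byte per sample per channel, per the Sample Type and Sample Rate fields above), segment durations can be estimated directly from file sizes. A minimal sketch, assuming headered audio files with a fixed 1024-byte NIST SPHERE header; the function name and default channel count are illustrative:

```python
def ulaw_duration_seconds(file_size_bytes, header_bytes=1024,
                          sample_rate=8000, channels=1):
    """Estimate the duration of headered 8 kHz u-law audio.

    u-law encodes one byte per sample per channel, so duration is
    simply (payload bytes) / (sample_rate * channels).
    """
    data_bytes = max(file_size_bytes - header_bytes, 0)
    return data_bytes / (sample_rate * channels)

# A 5-minute single-channel excerpt: 5 * 60 * 8000 = 2,400,000 payload bytes.
print(ulaw_duration_seconds(2_400_000 + 1024))  # → 300.0
```

For a summed-channel telephone excerpt stored as a single channel of mixed audio, `channels=1` applies; a two-channel file of the same duration would have twice the payload.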
English language transcripts in .cfm format were produced using an automatic speech recognition (ASR) system. Approximately six files distributed as part of SRE08 consist of a 1024-byte header with no audio data. These files were not included in the trials or keys distributed in the SRE08 aggregate corpus.
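Such header-only files can be screened out by size before processing. A minimal sketch, using the 1024-byte header size stated above; the function name is illustrative:

```python
import os

HEADER_BYTES = 1024  # NIST SPHERE header size, per the note above


def has_audio(path):
    """Return True if the file contains payload beyond the fixed header."""
    return os.path.getsize(path) > HEADER_BYTES
```

Running this check over a directory of .sph files before feature extraction avoids passing empty segments to downstream tools.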
Samples
For an example of the data contained in this corpus, review this audio sample.