2005 NIST Speaker Recognition Evaluation Training Data

Item Name: 2005 NIST Speaker Recognition Evaluation Training Data
Author(s): NIST Multimodal Information Group
LDC Catalog No.: LDC2011S01
ISBN: 1-58563-580-4
ISLRN: 778-313-260-404-1
DOI: https://doi.org/10.35111/d9yp-vk77
Release Date: May 24, 2011
Member Year(s): 2011
DCMI Type(s): Sound
Sample Type: ulaw
Sample Rate: 8000
Data Source(s): telephone speech
Project(s): NIST SRE
Application(s): speaker identification
Language(s): Spanish, Russian, English, Mandarin Chinese, Arabic
Language ID(s): spa, rus, eng, cmn, ara
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC2011S01 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: NIST Multimodal Information Group. 2005 NIST Speaker Recognition Evaluation Training Data LDC2011S01. Web Download. Philadelphia: Linguistic Data Consortium, 2011.

Introduction

2005 NIST Speaker Recognition Evaluation Training Data was developed at the Linguistic Data Consortium (LDC) and NIST (National Institute of Standards and Technology). It consists of 392 hours of conversational telephone speech in English, Arabic, Mandarin Chinese, Russian, and Spanish, along with associated English transcripts, used as training data in the NIST-sponsored 2005 Speaker Recognition Evaluation (SRE).

The ongoing series of yearly SRE evaluations conducted by NIST is intended to be of interest to researchers working on the general problem of text-independent speaker recognition. To that end, the evaluations are designed to be simple, to focus on core technology issues, to be fully supported, and to be accessible to those wishing to participate.

The task of the 2005 SRE was speaker detection: determining whether a specified speaker is speaking during a given segment of conversational speech. The task was divided into 20 distinct tests, each defined by one of five training conditions and one of four test conditions. Further information about the task conditions is contained in the NIST Year 2005 Speaker Recognition Evaluation Plan.

Data

The speech data consists of conversational telephone speech, with multi-channel data collected simultaneously from a number of auxiliary microphones. The files are organized into two types of segments: 10-second two-channel excerpts (continuous segments from single conversations estimated to contain approximately 10 seconds of actual speech in the channel of interest) and five-minute two-channel conversations.

The speech files are stored as 8-bit u-law speech signals in separate SPHERE files. In addition to the standard header fields, the SPHERE header for each file contains some auxiliary information that includes the language of the conversation and whether the data was recorded over a telephone line.
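Because the SPHERE header is plain ASCII, the auxiliary fields can be inspected without special tooling. The sketch below assumes the standard NIST SPHERE layout (a magic line `NIST_1A`, a line giving the fixed header size in bytes, then `name -type value` records terminated by `end_head`) and standard G.711 u-law companding; the field names shown in the comments are illustrative examples, not a guaranteed inventory of this corpus's headers:

```python
def read_sphere_header(path):
    """Parse the ASCII header of a NIST SPHERE file into a dict."""
    with open(path, "rb") as f:
        magic = f.readline().strip()
        if magic != b"NIST_1A":
            raise ValueError("not a SPHERE file")
        header_size = int(f.readline().strip())  # e.g. 1024
        f.seek(0)
        header = f.read(header_size).decode("ascii", errors="replace")

    fields = {}
    # Skip the magic and header-size lines; each record looks like
    #   sample_rate -i 8000
    #   conversation_language -s7 english
    for line in header.splitlines()[2:]:
        if line.strip() == "end_head":
            break
        parts = line.split(None, 2)
        if len(parts) == 3:
            name, ftype, value = parts
            fields[name] = int(value) if ftype.startswith("-i") else value
    return fields

def ulaw_to_linear(byte):
    """Decode one G.711 u-law byte to a 16-bit linear PCM sample."""
    BIAS = 0x84
    u = ~byte & 0xFF                         # u-law bytes are stored complemented
    t = ((u & 0x0F) << 3) + BIAS             # mantissa plus bias
    t <<= (u & 0x70) >> 4                    # scale by the segment (exponent)
    return (BIAS - t) if (u & 0x80) else (t - BIAS)
```

The samples that follow the header can then be decoded one byte at a time with `ulaw_to_linear` (for example, the u-law byte 0xFF decodes to 0, i.e. silence).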

English-language word transcripts in .cmt format were produced using an automatic speech recognition (ASR) system, with word error rates in the range of 15-30%.

Samples

For an example of the data contained in this corpus, please listen to this sample (WAV).

Updates

None at this time.
