2005 NIST Speaker Recognition Evaluation Training Data


Item Name: 2005 NIST Speaker Recognition Evaluation Training Data
Authors: NIST Multimodal Information Group
LDC Catalog No.: LDC2011S01
ISBN: 1-58563-580-4
Release Date: May 24, 2011
Data Type: speech
Sample Rate: 8000 Hz
Sampling Format: ulaw
Data Source(s): telephone speech
Project(s): NIST SRE
Application(s): speaker identification
Language(s): Arabic, English, Mandarin Chinese, Russian, Spanish
Language ID(s): ara, cmn, eng, rus, spa
Distribution: 6 DVD
Member fee: $0 for 2011 members
Non-member Fee: US $2000.00
Reduced-License Fee: US $1200.00
Extra-Copy Fee: US $1200.00
Non-member License: yes
Online documentation: yes
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: NIST Multimodal Information Group (2011). 2005 NIST Speaker Recognition Evaluation Training Data. Linguistic Data Consortium, Philadelphia.

Introduction

2005 NIST Speaker Recognition Evaluation Training Data, Linguistic Data Consortium (LDC) catalog number LDC2011S01 and ISBN 1-58563-580-4, was developed at LDC and NIST (National Institute of Standards and Technology). It consists of 392 hours of conversational telephone speech in English, Arabic, Mandarin Chinese, Russian and Spanish, along with associated English transcripts, used as training data in the NIST-sponsored 2005 Speaker Recognition Evaluation (SRE). The ongoing series of yearly SRE evaluations conducted by NIST is intended to be of interest to researchers working on the general problem of text-independent speaker recognition. To that end, the evaluations are designed to be simple, to focus on core technology issues, to be fully supported and to be accessible to those wishing to participate.

The task of the 2005 SRE was speaker detection, that is, to determine whether a specified speaker is speaking during a given segment of conversational speech. The task was divided into 20 distinct tests, each combining one of five training conditions with one of four test conditions. Further information about the task conditions is contained in The NIST Year 2005 Speaker Recognition Evaluation Plan.
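Speaker detection systems are typically scored by applying a decision threshold to per-trial scores and combining the resulting miss and false-alarm rates into a detection cost. The sketch below is illustrative only and is not code from the evaluation: the trial scores, labels and threshold are made up, and the cost parameters (C_miss = 10, C_fa = 1, P_target = 0.01) reflect the weights commonly used in the NIST SRE detection cost function; consult the 2005 evaluation plan for the authoritative definition.

    # Illustrative scoring of speaker detection trials (hypothetical data).
    def detection_cost(scores, labels, threshold,
                       c_miss=10.0, c_fa=1.0, p_target=0.01):
        """labels: True for target trials (the specified speaker is present)."""
        misses = sum(1 for s, t in zip(scores, labels) if t and s < threshold)
        false_alarms = sum(1 for s, t in zip(scores, labels) if not t and s >= threshold)
        n_target = sum(labels)
        n_nontarget = len(labels) - n_target
        p_miss = misses / n_target          # fraction of target trials rejected
        p_fa = false_alarms / n_nontarget   # fraction of non-target trials accepted
        return c_miss * p_miss * p_target + c_fa * p_fa * (1 - p_target)

    # Hypothetical trial scores: higher means "more likely the target speaker".
    scores = [2.1, -0.3, 0.4, -1.5, 1.9, 0.7]
    labels = [True, False, True, False, True, False]
    print(detection_cost(scores, labels, threshold=0.5))   # one miss, one false alarm

In practice participants reported a score per trial and the evaluation swept the threshold to trace out a detection error tradeoff curve; the single-threshold cost above is the simplest view of that procedure.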

Data

The speech data consists of conversational telephone speech, with multi-channel data collected simultaneously from a number of auxiliary microphones. The files are organized into two segment types: 10-second two-channel excerpts (continuous segments from single conversations that are estimated to contain approximately 10 seconds of actual speech in the channel of interest) and 5-minute two-channel conversations.

The speech files are stored as 8-bit u-law speech signals in separate SPHERE files. In addition to the standard header fields, the SPHERE header for each file contains auxiliary information, including the language of the conversation and whether the data was recorded over a telephone line.
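For readers unfamiliar with the SPHERE container, the following is a minimal sketch of reading its ASCII header, assuming the standard NIST_1A layout (a fixed-size header of "name -type value" lines terminated by "end_head"). The filename is hypothetical, and the exact names of the corpus-specific auxiliary fields (such as the one carrying the conversation language) are not assumed here.

    # Minimal SPHERE header reader, assuming the standard NIST_1A layout.
    def read_sphere_header(path):
        with open(path, "rb") as f:
            magic = f.readline().strip()             # expected: b"NIST_1A"
            if magic != b"NIST_1A":
                raise ValueError("not a SPHERE file: %r" % magic)
            header_size = int(f.readline().strip())  # total header size in bytes
            f.seek(0)
            header = f.read(header_size).decode("ascii", errors="replace")

        fields = {}
        for line in header.splitlines()[2:]:
            if not line.strip():
                continue
            if line.strip() == "end_head":
                break
            # Lines look like: "sample_rate -i 8000" or "sample_coding -s4 ulaw"
            name, ftype, value = line.split(None, 2)
            fields[name] = int(value) if ftype == "-i" else value
        return header_size, fields

    if __name__ == "__main__":
        size, fields = read_sphere_header("example.sph")   # hypothetical filename
        print("header bytes:", size)
        for key in ("sample_rate", "channel_count", "sample_coding"):
            print(key, "=", fields.get(key))
        # The corpus-specific auxiliary fields described above would also
        # appear in `fields`; print the full dictionary to inspect them.

The audio samples begin immediately after the header (i.e., at byte offset header_size) and can then be decoded from 8-bit u-law according to the sample_rate and channel_count fields.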

English language word transcripts in .cmt format were produced using an automatic speech recognition (ASR) system, with word error rates in the range of 15-30%.
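As a point of reference, a word error rate such as the 15-30% quoted above is the word-level edit distance (substitutions, insertions and deletions) between a reference transcript and the ASR hypothesis, divided by the number of reference words. The sketch below illustrates the computation; the example sentences are hypothetical and not drawn from the corpus transcripts or its scoring tools.

    # Word error rate via word-level edit distance (illustrative only).
    def word_error_rate(reference, hypothesis):
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                               dp[i][j - 1] + 1,        # insertion
                               dp[i - 1][j - 1] + sub)  # substitution / match
        return dp[-1][-1] / len(ref)

    # Hypothetical reference vs. ASR hypothesis: one substitution out of six words.
    print(word_error_rate("well i think that is right",
                          "well i think it is right"))   # 0.1666...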

Samples

For an example of the data contained in this corpus, review this audio sample.

Content Copyright

Portions © 2004-2005, 2011 Trustees of the University of Pennsylvania