2003 NIST Speaker Recognition Evaluation

Item Name: 2003 NIST Speaker Recognition Evaluation
Author(s): NIST Multimodal Information Group
LDC Catalog No.: LDC2010S03
ISBN: 1-58563-547-2
ISLRN: 015-759-255-644-6
DOI: https://doi.org/10.35111/fbrr-qd98
Release Date: May 14, 2010
Member Year(s): 2010
DCMI Type(s): Sound
Sample Type: 8 bit u-law
Sample Rate: 8000
Data Source(s): telephone conversations
Project(s): NIST SRE
Application(s): speaker identification
Language(s): English
Language ID(s): eng
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC2010S03 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: NIST Multimodal Information Group. 2003 NIST Speaker Recognition Evaluation LDC2010S03. Web Download. Philadelphia: Linguistic Data Consortium, 2010.


2003 NIST Speaker Recognition Evaluation was developed by the Linguistic Data Consortium (LDC) and NIST (National Institute of Standards and Technology). It consists of just over 120 hours of English conversational telephone speech collected by LDC and used as training and test data in the NIST-sponsored 2003 Speaker Recognition Evaluation (SRE), along with evaluation metadata and test set answer keys.
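The metadata above lists the audio as 8-bit u-law at 8000 Hz, the standard G.711 mu-law encoding used for telephone speech. As a rough sketch of what that encoding entails (this decoder is illustrative only; in practice one would use an audio library that handles NIST SPHERE or WAV headers):

```python
def ulaw_decode(code):
    """Decode one 8-bit G.711 mu-law byte to a linear 16-bit sample."""
    code = ~code & 0xFF              # mu-law bytes are stored bit-inverted
    sign = code & 0x80               # top bit carries the sign
    exponent = (code >> 4) & 0x07    # 3-bit exponent (segment)
    mantissa = code & 0x0F           # 4-bit mantissa within the segment
    sample = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -sample if sign else sample

# The full dynamic range maps to roughly +/-32124 on a 16-bit scale:
print(ulaw_decode(0xFF))  # silence -> 0
print(ulaw_decode(0x80))  # loudest positive sample
```

The logarithmic segment/mantissa structure is why 8-bit mu-law covers nearly the dynamic range of 14-bit linear audio, at the cost of resolution for loud samples.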

The ongoing series of yearly evaluations conducted by NIST provides an important contribution to the direction of research efforts and the calibration of technical capabilities. They are intended to be of interest to all researchers working on the general problem of text-independent speaker recognition. To this end, the evaluation was designed to be simple, to focus on core technology issues, to be fully supported, and to be accessible to those wishing to participate.

This speaker recognition evaluation focused on the task of 1-speaker and 2-speaker detection, in the context of conversational telephone speech. The evaluation was designed to foster research progress, with the goals of:

  • Exploring promising new ideas in speaker recognition.
  • Developing advanced technology incorporating these ideas.
  • Measuring the performance of this technology.

The original evaluation consisted of three parts: 1-speaker detection "limited data", 2-speaker detection "limited data", and 1-speaker detection "extended data". This corpus contains training and test data and supporting metadata (including answer keys) for only the 1-speaker "limited data" and 2-speaker "limited data" components of the original evaluation. The 1-speaker "extended data" component of the original evaluation (not included in this corpus) provided metadata only, to be used in conjunction with data from Switchboard-2 Phase II (LDC99S79) and Switchboard-2 Phase III Audio (LDC2002S06). See the original evaluation plan, included with the documentation for this corpus, for more detailed information.


The data in this corpus is a subset of data first made available to the public as Switchboard Cellular Part 2 Audio (LDC2004S07), reorganized (as described below) specifically for use in the 2003 NIST SRE. For details on data collection methodology, see the documentation for the above corpus.

In the 1-speaker "limited data" component, concatenated turns of a single side of a conversation were presented. In the 2-speaker "limited data" component, the two sides of a conversation were summed together, so both the model speaker and that speaker's conversation partner were represented in the resulting audio file.
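The documentation says only that the two sides were "summed together"; the exact gain handling is not specified here, but as a minimal sketch, assuming both sides have already been decoded to 16-bit linear samples:

```python
def sum_sides(side_a, side_b):
    """Mix two mono 16-bit sample streams by summation, clipping to the
    16-bit range. A hypothetical helper for illustration only."""
    return [max(-32768, min(32767, a + b)) for a, b in zip(side_a, side_b)]

mixed = sum_sides([1000, 30000], [500, 10000])
print(mixed)  # the second sample saturates at 32767
```

Summing the sides discards the channel separation, which is precisely what makes the 2-speaker task harder: the system must recover the speakers from a single mixed channel.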

For the 1-speaker "limited data" component, 2 minutes of concatenated turns from a single conversation were used for training, and 15-45 seconds of concatenated turns from a 1-minute excerpt of conversation were used for testing.

For the 2-speaker "limited data" component, three whole conversations per participant (minus some introductory comments) were used for training, and 1-minute conversation excerpts were used for testing. In the 2-speaker detection task, the evaluation participant was required to separate the speech of the two speakers and then correctly decide which one is the model speaker. To make this challenge feasible, the training conversations were chosen so that every speaker other than the model speaker appeared in only one conversation; the model speaker is thus the only speaker represented in more than one (in fact, all three) of the training conversations.


For an example of the data in this corpus, please listen to this sample (WAV).

