2000 NIST Speaker Recognition Evaluation
Item Name: 2000 NIST Speaker Recognition Evaluation
Author(s): Mark Przybocki, Alvin Martin
LDC Catalog No.: LDC2001S97
ISBN: 1-58563-192-2
ISLRN: 919-164-226-906-5
DOI: https://doi.org/10.35111/ex24-j205
Member Year(s): 2001
DCMI Type(s): Sound
Data Source(s): telephone speech
Project(s): NIST SRE
Application(s): speaker identification
Language(s): English
Language ID(s): eng
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC2001S97 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: Przybocki, Mark, and Alvin Martin. 2000 NIST Speaker Recognition Evaluation LDC2001S97. Web Download. Philadelphia: Linguistic Data Consortium, 2001.
Introduction
2000 NIST Speaker Recognition Evaluation was developed by the Linguistic Data Consortium (LDC) and the National Institute of Standards and Technology (NIST). It contains approximately 150 hours of English conversational telephone speech collected by LDC and used as training and test data in the NIST-sponsored 2000 Speaker Recognition Evaluation.
The ongoing series of yearly evaluations conducted by NIST provides an important contribution to the direction of research efforts and the calibration of technical capabilities. The evaluations are intended to be of interest to all researchers working on the general problem of text-independent speaker recognition. To this end, the evaluation was designed to be simple, to focus on core technology issues, to be fully supported, and to be accessible.
Data
This publication consists of 10,328 single-channel SPHERE files encoded in 8-bit mu-law, comprising approximately 4.31 GB of data and 148.9 hours of speech.
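As a practical note on the file format, the sketch below shows one way to parse the ASCII header of a SPHERE file and decode its 8-bit mu-law samples to linear PCM. It assumes the standard NIST SPHERE layout (a header beginning with "NIST_1A" followed by "name -type value" lines up to "end_head") and ITU-T G.711 mu-law encoding; the exact header fields present can vary per corpus.

```python
# Sketch: SPHERE header parsing and G.711 mu-law decoding.
# Assumes the standard NIST SPHERE header layout and 8-bit mu-law
# samples as described in this catalog entry; field names are examples.

def decode_mulaw(byte_val):
    """Decode one 8-bit mu-law byte to a 16-bit linear PCM sample."""
    byte_val = ~byte_val & 0xFF            # mu-law bytes are stored complemented
    sign = byte_val & 0x80
    exponent = (byte_val >> 4) & 0x07
    mantissa = byte_val & 0x0F
    sample = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -sample if sign else sample

def parse_sphere_header(raw):
    """Parse the ASCII SPHERE header from the first bytes of a file."""
    fields = {}
    lines = raw.decode("ascii").split("\n")
    assert lines[0] == "NIST_1A"           # SPHERE magic string
    for line in lines[2:]:                 # skip magic and header-size lines
        if line.strip() == "end_head":
            break
        name, ftype, value = line.split(" ", 2)
        fields[name] = int(value) if ftype == "-i" else value
    return fields
```

For example, the mu-law byte 0xFF decodes to 0 (silence), and 0x80 decodes to the maximum positive sample, 32124.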
The data is divided into male and female training data and test data for various speaker recognition tasks. Whereas the training for the 1999 evaluation used two-session data, meaning the training files were taken from two conversations for each subject, the 2000 evaluation used one-session data, meaning all the training files for each subject were taken from a single conversation. Evaluation was performed separately for each of the following speaker recognition tasks:
- One-speaker detection
- Two-speaker detection
- Speaker tracking
- Speaker segmentation
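For the detection tasks above, NIST evaluations score systems with a weighted detection cost that combines miss and false-alarm rates. The sketch below illustrates the idea; the cost weights and target prior shown are illustrative assumptions, not values taken from this evaluation's plan.

```python
# Sketch: scoring a speaker-detection task with a NIST-style detection
# cost function. C_MISS, C_FA, and P_TARGET are assumed values for
# illustration only.

C_MISS, C_FA, P_TARGET = 10.0, 1.0, 0.01

def detection_cost(p_miss, p_fa):
    """Weighted cost combining miss and false-alarm probabilities."""
    return C_MISS * p_miss * P_TARGET + C_FA * p_fa * (1.0 - P_TARGET)

def error_rates(scores, labels, threshold):
    """Miss and false-alarm rates at a fixed decision threshold.
    labels: True for target trials, False for impostor trials."""
    targets = [s for s, l in zip(scores, labels) if l]
    impostors = [s for s, l in zip(scores, labels) if not l]
    p_miss = sum(s < threshold for s in targets) / len(targets)
    p_fa = sum(s >= threshold for s in impostors) / len(impostors)
    return p_miss, p_fa
```

Sweeping the threshold and plotting p_miss against p_fa yields the familiar detection error tradeoff (DET) curve used to compare systems.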
An additional corpus, the AHUMADA corpus, was included as an optional Spanish component for the one-speaker detection task in the 2000 evaluation. The results of that task were evaluated separately from the mandatory English component. Information about this corpus can be obtained from Javier Ortega-Garcia, Universidad Politecnica de Madrid.
The primary development data for this evaluation was the 1999 Speaker Recognition Benchmark (LDC99S81).
Samples
For an example of the data contained in this corpus, please listen to this sample (SPH).
Updates
On June 27, 2017, 1,426 files that had been omitted from the original release were added to the corpus. Downloads after that date contain the complete data set.