1999 Speaker Recognition Benchmark
Item Name: | 1999 Speaker Recognition Benchmark |
Author(s): | Mark Przybocki, Alvin Martin |
LDC Catalog No.: | LDC99S81 |
ISBN: | 1-58563-152-3 |
ISLRN: | 282-712-829-978-2 |
DOI: | https://doi.org/10.35111/fj0k-d582 |
Member Year(s): | 1999 |
DCMI Type(s): | Sound |
Sample Type: | 1-channel ulaw |
Sample Rate: | 8000 |
Data Source(s): | transcribed speech |
Project(s): | NIST SRE |
Application(s): | speaker identification |
Language(s): | English |
Language ID(s): | eng |
License(s): |
LDC User Agreement for Non-Members |
Online Documentation: | LDC99S81 Documents |
Licensing Instructions: | Subscription & Standard Members, and Non-Members |
Citation: | Przybocki, Mark, and Alvin Martin. 1999 Speaker Recognition Benchmark LDC99S81. Web Download. Philadelphia: Linguistic Data Consortium, 1999. |
Related Works: | View |
Introduction
1999 Speaker Recognition Benchmark was developed by the Linguistic Data Consortium (LDC) and the National Institute of Standards and Technology (NIST). It contains approximately 76 hours of English conversational telephone speech collected by LDC and used in the NIST-sponsored 1999 Speaker Recognition Evaluation.
The ongoing series of yearly evaluations conducted by NIST provides an important contribution to the direction of research efforts and the calibration of technical capabilities. They are intended to be of interest to all researchers working on the general problem of text-independent speaker recognition. To this end, the evaluation was designed to be simple, to focus on core technology issues, to be fully supported, and to be accessible.
The technical objectives of the 1999 speaker recognition evaluation were:
1. Exploring promising new ideas in speaker recognition
2. Developing advanced technology incorporating these ideas
3. Measuring the performance of this technology
Data
The evaluation data was drawn from Switchboard-2 Phase III Audio (LDC2002S06) and is divided into training and test data. Both training and test segments were constructed by concatenating consecutive turns for the desired speaker, similar to what was done in 1996. Each segment is stored as a continuous speech signal in a separate SPHERE file in 8-bit mulaw format.
The training data comprises 1,078 SPHERE files totaling about 18 hours. The test data comprises 5,150 SPHERE files totaling about 58 hours.
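As a rough illustration of working with this data, the sketch below parses the ASCII header of a NIST SPHERE file and decodes 8-bit µ-law samples to 16-bit linear PCM. It assumes the standard SPHERE layout (a `NIST_1A` magic line, the header size in bytes, then `name -type value` fields terminated by `end_head`) and the G.711 µ-law decoding rule; the corpus itself does not document a reader, so treat this as a starting point rather than an official tool.

```python
def parse_sphere_header(data: bytes) -> dict:
    """Parse the ASCII header of a NIST SPHERE file into a dict of fields.

    Assumes the standard layout: "NIST_1A", the header size in bytes,
    then "name -type value" lines, terminated by "end_head".
    """
    lines = data.decode("ascii", errors="replace").splitlines()
    if lines[0] != "NIST_1A":
        raise ValueError("not a SPHERE file")
    fields = {"header_size": int(lines[1])}  # total header length in bytes
    for line in lines[2:]:
        line = line.strip()
        if line == "end_head":
            break
        name, ftype, value = line.split(None, 2)
        # "-i" marks integer fields; strings ("-sN") and reals ("-r")
        # are kept as text in this sketch.
        fields[name] = int(value) if ftype.startswith("-i") else value
    return fields


def ulaw_to_linear(u: int) -> int:
    """Decode one 8-bit mu-law sample to 16-bit linear PCM (G.711)."""
    u = ~u & 0xFF                         # mu-law bytes are stored inverted
    sign = u & 0x80
    exponent = (u >> 4) & 0x07
    mantissa = u & 0x0F
    sample = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -sample if sign else sample
```

For a corpus file, one would read the first kilobyte, parse the header to find `sample_rate` (8000 here), `channel_count`, and `sample_coding` (`ulaw`), then decode the remaining bytes with `ulaw_to_linear`.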
Samples
For an example of the data contained in this corpus, please listen to this audio sample (SPH).
Updates
None at this time.