1997 Speaker Recognition Benchmark
Item Name: 1997 Speaker Recognition Benchmark
Author(s): Mark Przybocki, Alvin Martin
LDC Catalog No.: LDC99S80
ISBN: 1-58563-142-6
ISLRN: 095-881-879-489-0
DOI: https://doi.org/10.35111/s6aq-4f17
Member Year(s): 1999
DCMI Type(s): Sound
Sample Type: 1-channel ulaw
Sample Rate: 8000
Data Source(s): transcribed speech
Project(s): NIST SRE
Application(s): speaker identification
Language(s): English
Language ID(s): eng
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC99S80 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: Przybocki, Mark, and Alvin Martin. 1997 Speaker Recognition Benchmark LDC99S80. Web Download. Philadelphia: Linguistic Data Consortium, 1999.
Introduction
1997 Speaker Recognition Benchmark was developed by the Linguistic Data Consortium (LDC) and the National Institute of Standards and Technology (NIST). It contains approximately 105 hours of English conversational telephone speech collected by LDC and used in the NIST-sponsored 1997 Speaker Recognition Evaluation.
The ongoing series of yearly evaluations conducted by NIST provides an important contribution to the direction of research efforts and the calibration of technical capabilities. The evaluations are intended to be of interest to all researchers working on the general problem of text-independent speaker recognition. To this end, the evaluation was designed to be simple, to focus on core technology issues, to be fully supported, and to be accessible.
The technical objectives of the 1997 speaker recognition evaluation were:
1. Exploring promising new ideas in speaker recognition
2. Developing advanced technology incorporating these ideas
3. Measuring the performance of this technology (see the illustrative sketch after this list)
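The corpus documentation defines the official scoring procedure, which is not reproduced here. Purely as a hedged illustration of how detection performance of this kind might be measured, the sketch below computes miss and false-alarm rates and an approximate equal error rate from invented trial scores; every name and number in it is hypothetical rather than taken from the evaluation.

```python
import numpy as np

def miss_false_alarm(target_scores, nontarget_scores):
    """Miss and false-alarm rates swept over every observed score threshold."""
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
    p_miss = np.array([(target_scores < t).mean() for t in thresholds])
    p_fa = np.array([(nontarget_scores >= t).mean() for t in thresholds])
    return p_miss, p_fa

def equal_error_rate(target_scores, nontarget_scores):
    """Approximate the operating point where miss and false-alarm rates are equal."""
    p_miss, p_fa = miss_false_alarm(target_scores, nontarget_scores)
    i = np.argmin(np.abs(p_miss - p_fa))
    return (p_miss[i] + p_fa[i]) / 2.0

# Invented scores for illustration only: higher means "more likely the claimed speaker".
rng = np.random.default_rng(0)
target = rng.normal(1.0, 1.0, 500)        # same-speaker trials
nontarget = rng.normal(-1.0, 1.0, 5000)   # different-speaker trials
print(f"illustrative EER ~ {equal_error_rate(target, nontarget):.3f}")
```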
Data
The evaluation data was drawn from Switchboard-2 Phase I and is divided into training and test sets. Both training and test segments were constructed by concatenating consecutive turns for the desired speaker, similar to what was done in 1996. Each segment is stored as a continuous speech signal in a separate SPHERE file. The speech data is stored in 8-bit mulaw format.
The training set comprises 1,604 .wav files totalling 27 hours, and the test set comprises 18,351 .wav files totalling 78 hours.
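The SPHERE files mentioned above carry a plain-text NIST header (a "NIST_1A" line, the header size, and key-value fields ending in "end_head") followed by the raw samples. As a minimal sketch, assuming the samples are stored as uncompressed 1-channel 8-bit mu-law at 8000 Hz (files using embedded shorten compression would first need to be decompressed with the NIST SPHERE tools), a segment could be decoded to 16-bit PCM in Python roughly as follows; the path `segment.sph` is only a placeholder.

```python
import numpy as np

def ulaw_to_linear(ulaw_bytes: np.ndarray) -> np.ndarray:
    """Decode 8-bit mu-law samples (G.711) to 16-bit linear PCM."""
    u = (~ulaw_bytes.astype(np.uint8)).astype(np.int32)  # complement to raw mu-law value
    t = ((u & 0x0F) << 3) + 0x84                          # biased mantissa
    t <<= (u & 0x70) >> 4                                 # shift by segment number
    return np.where(u & 0x80, 0x84 - t, t - 0x84).astype(np.int16)

def read_sphere_ulaw(path: str):
    """Read an uncompressed 1-channel 8-bit mu-law NIST SPHERE file."""
    with open(path, "rb") as f:
        if not f.readline().startswith(b"NIST_1A"):
            raise ValueError("not a NIST SPHERE file")
        header_size = int(f.readline().strip())           # e.g. 1024
        f.seek(0)
        header = f.read(header_size).decode("ascii", errors="replace")
        raw = np.frombuffer(f.read(), dtype=np.uint8)     # everything after the header

    fields = {}
    for line in header.splitlines()[2:]:
        if line.strip() == "end_head":
            break
        parts = line.split(None, 2)                       # field name, type code, value
        if len(parts) == 3:
            fields[parts[0]] = parts[2]

    return ulaw_to_linear(raw), int(fields.get("sample_rate", 8000))

# Placeholder path; any training or test segment from the corpus would do.
pcm, rate = read_sphere_ulaw("segment.sph")
print(f"{len(pcm) / rate:.1f} seconds at {rate} Hz")
```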
Samples
For an example of the data contained in this corpus, please listen to this audio sample (SPH).
Updates
None at this time.