2004 NIST Speaker Recognition Evaluation
Item Name: 2004 NIST Speaker Recognition Evaluation
Author(s): Alvin Martin, Mark Przybocki
LDC Catalog No.: LDC2006S44
ISBN: 1-58563-402-6
ISLRN: 214-123-995-004-3
DOI: https://doi.org/10.35111/rawd-1051
Release Date: October 25, 2006
Member Year(s): 2006
DCMI Type(s): Sound, Text
Sample Type: ulaw
Sample Rate: 8000
Data Source(s): telephone speech
Project(s): NIST SRE
Application(s): speaker identification
Language(s): English
Language ID(s): eng
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC2006S44 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: Martin, Alvin, and Mark Przybocki. 2004 NIST Speaker Recognition Evaluation LDC2006S44. Web Download. Philadelphia: Linguistic Data Consortium, 2006.
Introduction
2004 NIST Speaker Recognition Evaluation was developed by the Linguistic Data Consortium (LDC) and the National Institute of Standards and Technology (NIST). It contains approximately 666 hours of English conversational telephone speech collected by LDC, along with corresponding transcripts, used as development and test data in the 2004 evaluation conducted by NIST.
This release is part of an ongoing series of yearly Speaker Recognition Evaluations (SRE) conducted by NIST since 1996. These evaluations make an important contribution to the direction of research efforts and the calibration of technical capabilities. They are intended to be of interest to all researchers working on the general problem of text-independent speaker recognition. To this end, the evaluation was designed to be simple, to focus on core technology issues, to be fully supported, and to be accessible.
The SRE task is speaker detection: determining whether a specified target speaker is speaking during a given segment of speech. Each evaluation begins with the announcement of the official evaluation plan, which clearly states the rules and tasks involved, and culminates in a follow-up workshop where NIST reports the official results and researchers share their findings.
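In implementation terms, each detection trial reduces to comparing a system score for a (target model, test segment) pair against a fixed threshold. The sketch below shows only that decision step; the scoring function and threshold are hypothetical placeholders, since the evaluation does not prescribe any particular modeling approach.

    THRESHOLD = 0.0  # hypothetical decision threshold, typically tuned on development data

    def detect(score_trial, target_model, test_segment):
        """Return (decision, score) for one trial: is the target speaker present?"""
        score = score_trial(target_model, test_segment)  # any scoring back-end
        return score >= THRESHOLD, score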
Data
All speech data used for this evaluation was recorded as part of LDC's Mixer Project. The evaluation included 28 different speaker detection tests defined by the duration and type of the training and test segments. The files are presented in 8 kHz, ulaw NIST SPHERE format; segments are generally 10 seconds, 30 seconds, or a little over five minutes long, with a few files of other durations.
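For readers who want to inspect the audio directly, the following is a minimal sketch, assuming an uncompressed single-channel ulaw SPHERE file, of parsing the ASCII header and decoding the samples with Python's standard library. The file name is hypothetical, and in practice tools such as NIST's sph2pipe are normally used; note also that the audioop module is deprecated in recent Python versions.

    import audioop

    def read_sphere_ulaw(path):
        """Parse a NIST SPHERE header and decode ulaw samples to 16-bit PCM."""
        with open(path, "rb") as f:
            magic = f.readline()             # b"NIST_1A\n"
            header_size = int(f.readline())  # total header size in bytes, e.g. 1024
            f.seek(0)
            header = f.read(header_size).decode("ascii", errors="replace")
            ulaw_bytes = f.read()            # remainder of the file is audio data
        fields = {}
        for line in header.splitlines():
            parts = line.split()
            if len(parts) >= 3 and parts[1].startswith("-"):
                fields[parts[0]] = parts[2]  # e.g. "sample_rate -i 8000"
        pcm = audioop.ulaw2lin(ulaw_bytes, 2)  # 8-bit ulaw -> 16-bit linear PCM
        return int(fields.get("sample_rate", 8000)), pcm

    rate, pcm = read_sphere_ulaw("example.sph")  # hypothetical file name
    print(rate, len(pcm) // 2, "samples")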
The training data in this corpus accounts for 444 hours of the total audio data, and the test data accounts for the remaining 222 hours.
In addition to development and evaluation data, this corpus also contains answer keys, trial and train files, and development and evaluation documentation.
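As an illustration of how the answer keys and trial files are typically used, here is a minimal sketch that tallies miss and false-alarm rates. It assumes a simplified whitespace-separated key with one trial per line (model ID, test segment, and a target/nontarget label); the actual field layout is specified in the evaluation documentation included with this release, and the file name and decisions mapping below are hypothetical.

    def error_rates(key_path, decisions):
        """Compute (miss rate, false-alarm rate) from an answer key and a
        dict mapping (model, segment) -> accept/reject decision."""
        misses = false_alarms = targets = nontargets = 0
        with open(key_path) as key:
            for line in key:
                model, segment, label = line.split()[:3]
                accepted = decisions.get((model, segment), False)
                if label == "target":
                    targets += 1
                    misses += not accepted
                else:
                    nontargets += 1
                    false_alarms += accepted
        return misses / max(targets, 1), false_alarms / max(nontargets, 1)

    # Hypothetical usage:
    # p_miss, p_fa = error_rates("sre04_key.txt", system_decisions)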
Samples
For an example of the data in this corpus, please listen to this audio sample (WAV) and view this transcript sample (TXT).
Updates
None at this time.