2021 NIST Speaker Recognition Evaluation Development and Test Set
| Item Name: | 2021 NIST Speaker Recognition Evaluation Development and Test Set |
| Author(s): | Omid Sadjadi, Craig Greenberg, Kevin Walker, Karen Jones, Christopher Caruso, Stephanie Strassel |
| LDC Catalog No.: | LDC2025S11 |
| ISLRN: | 222-339-765-002-6 |
| DOI: | https://doi.org/10.35111/d8rq-7s56 |
| Release Date: | December 15, 2025 |
| Member Year(s): | 2025 |
| DCMI Type(s): | Image, MovingImage, Sound, StillImage, Text |
| Sample Type: | alaw |
| Sample Rate: | 8000 |
| Data Source(s): | photograph, telephone conversations, video |
| Project(s): | NIST SRE |
| Application(s): | speaker identification, speaker verification |
| Language(s): | Mandarin Chinese, English, Yue Chinese |
| Language ID(s): | cmn, eng, yue |
| License(s): | LDC User Agreement for Non-Members |
| Online Documentation: | LDC2025S11 Documents |
| Licensing Instructions: | Subscription & Standard Members, and Non-Members |
| Citation: | Sadjadi, Omid, et al. 2021 NIST Speaker Recognition Evaluation Development and Test Set LDC2025S11. Web Download. Philadelphia: Linguistic Data Consortium, 2025. |
Introduction
The 2021 NIST Speaker Recognition Evaluation Development and Test Set was developed by the Linguistic Data Consortium (LDC) and the National Institute of Standards and Technology (NIST). It contains approximately 447 hours of Cantonese, Mandarin, and English conversational telephone speech (CTS), audio from video (AfV), and image data for development and test, along with answer keys, enrollment and trial files, and documentation from the NIST-sponsored 2021 Speaker Recognition Evaluation (SRE21).
The ongoing series of SRE evaluations conducted by NIST is intended to be of interest to researchers working on the general problem of text-independent speaker recognition. To this end, the evaluations are designed to be simple, to focus on core technology issues, to be fully supported, and to be accessible to those wishing to participate.
The SRE task is speaker detection, that is, determining whether a specified target speaker is speaking during a given segment of speech. SRE21 focused on telephone speech and audio from video and included close-up images of participants. The evaluation also featured cross-lingual trials, that is, trials in which the enrollment and test segments are spoken in different languages. Further information about the evaluation is contained in the SRE21 evaluation plan included in this release.
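As a rough illustration of the detection task, the sketch below scores a single enrollment/test trial by comparing fixed-length speaker embeddings with cosine similarity and thresholding the score. The embedding dimensionality, the random placeholder embeddings, and the threshold are illustrative assumptions only; the actual SRE21 protocol and metrics are specified in the evaluation plan.

```python
import numpy as np


def cosine_score(enroll_emb: np.ndarray, test_emb: np.ndarray) -> float:
    """Cosine similarity between an enrollment embedding and a test embedding."""
    return float(np.dot(enroll_emb, test_emb)
                 / (np.linalg.norm(enroll_emb) * np.linalg.norm(test_emb)))


# Placeholder vectors standing in for speaker embeddings (e.g., x-vectors)
# extracted from an enrollment segment and a test segment; values here are
# random, purely for demonstration.
rng = np.random.default_rng(0)
enroll_emb = rng.standard_normal(256)
test_emb = rng.standard_normal(256)

score = cosine_score(enroll_emb, test_emb)
threshold = 0.5  # hypothetical decision threshold
print(f"score={score:.3f} -> {'target' if score >= threshold else 'nontarget'}")
```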
Data
The data was drawn from the WeCanTalk corpus, collected by LDC, in which speakers called friends or relatives who agreed to record their telephone conversations, each lasting 8-10 minutes. Subjects contributed multiple conversational telephone speech recordings and video recordings in which they were talking, plus a single selfie image. Recordings were manually audited to verify speaker, language, and quality.
The corpus contains approximately 355 hours of CTS audio, 53 hours of AfV segments, 39 hours of video clips and 202 selfie images.
The CTS data is presented as NIST SPHERE files in 8 kHz A-law format, AfV segments are presented as 16 kHz FLAC-compressed MS-WAV files, videos are presented in MP4 format, and images are presented in JPG format.
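If it helps to see how these files might be read, the following minimal sketch uses the third-party soundfile library for the FLAC-compressed audio and NIST's sph2pipe tool (assumed to be installed separately) to convert a SPHERE file to PCM WAV before loading. The filenames and extensions are hypothetical placeholders, not paths from the release.

```python
import subprocess

import soundfile as sf  # third-party: pip install soundfile

# AfV segment: 16 kHz FLAC-compressed audio, readable directly by libsndfile.
afv_audio, afv_sr = sf.read("example_afv.flac")  # hypothetical filename

# CTS segment: 8 kHz A-law SPHERE file. One common route is to convert it to
# 16-bit linear PCM WAV with NIST's sph2pipe utility, then load the result.
subprocess.run(
    ["sph2pipe", "-f", "wav", "-p", "example_cts.sph", "example_cts.wav"],
    check=True,
)
cts_audio, cts_sr = sf.read("example_cts.wav")

print(afv_sr, cts_sr)  # expected sample rates: 16000 and 8000
```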
In addition to the development and evaluation data, this corpus also contains answer keys, enrollment and trial files, and documentation.
Samples
Please view these samples:
Updates
No updates at this time.