2005 NIST Language Recognition Evaluation
Item Name: | 2005 NIST Language Recognition Evaluation |
Author(s): | Audrey Le, Alvin Martin, Hannah Hadfield, Jacques de Villiers, John-Paul Hosom, Jan van Santen |
LDC Catalog No.: | LDC2008S05 |
ISBN: | 1-58563-477-8 |
ISLRN: | 747-471-848-124-3 |
DOI: | https://doi.org/10.35111/1y55-wx32 |
Release Date: | June 16, 2008 |
Member Year(s): | 2008 |
DCMI Type(s): | Sound |
Sample Type: | ulaw |
Sample Rate: | 8000 |
Data Source(s): | telephone conversations |
Project(s): | NIST LRE |
Application(s): | language identification |
Language(s): | Tamil, Korean, Japanese, Hindi, English, Spanish, Mandarin Chinese |
Language ID(s): | tam, kor, jpn, hin, eng, spa, cmn |
License(s): | LDC User Agreement for Non-Members
Online Documentation: | LDC2008S05 Documents |
Licensing Instructions: | Subscription & Standard Members, and Non-Members |
Citation: | Le, Audrey, et al. 2005 NIST Language Recognition Evaluation LDC2008S05. Web Download. Philadelphia: Linguistic Data Consortium, 2008. |
Introduction
2005 NIST Language Recognition Evaluation was developed by the Linguistic Data Consortium (LDC) and the National Institute of Standards and Technology (NIST). It contains 73 hours of conversational telephone speech in the following languages: English (American), English (Indian), Hindi, Japanese, Korean, Mandarin (Mainland), Mandarin (Taiwan), Spanish (Mexican), and Tamil.
The goal of NIST's Language Recognition Evaluation (LRE) is to establish a baseline of current performance capability for language recognition of conversational telephone speech and to lay the groundwork for further research in the field. NIST conducted two previous evaluations, in 1996 and 2003. The 2005 LRE emphasized research toward a general base of technology that can be ported to various language recognition tasks with minimal effort, and toward the ability to make more difficult discriminations between similar languages and between dialects of the same language. That focus augmented the traditional evaluation goals:
- to drive the technology forward
- to measure the state-of-the-art
- to find the most promising algorithmic approaches
The task evaluated was the detection of a given target language or dialect: presented with a test segment of speech and a hypothesized target language or dialect, the system under evaluation had to decide whether the segment was spoken in that language or dialect. The 2005 NIST Language Recognition Evaluation Plan, which includes a description of the evaluation tasks, is included with this release.
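As a concrete illustration of this detection framework, the Python sketch below scores a set of trials: each trial pairs a test segment's true language with a hypothesized target, and the system's accept/reject decision is compared against the truth to obtain miss and false-alarm rates. The trial representation and the example cost weights are illustrative assumptions, not the official metric; the authoritative scoring rules and parameter values are those given in the evaluation plan included with this release.

    # Illustrative scoring of language-detection trials (not the official NIST tool).
    # Each trial is (target_language, true_language, system_accepted).
    def error_rates(trials):
        misses = false_alarms = n_target = n_nontarget = 0
        for target, truth, accepted in trials:
            if truth == target:
                n_target += 1
                if not accepted:
                    misses += 1          # target trial rejected
            else:
                n_nontarget += 1
                if accepted:
                    false_alarms += 1    # non-target trial accepted
        p_miss = misses / n_target if n_target else 0.0
        p_fa = false_alarms / n_nontarget if n_nontarget else 0.0
        return p_miss, p_fa

    # A weighted detection cost of the general form used in NIST evaluations.
    # The default weights here are placeholders; see the evaluation plan for
    # the official definitions and values.
    def detection_cost(p_miss, p_fa, c_miss=1.0, c_fa=1.0, p_target=0.5):
        return c_miss * p_target * p_miss + c_fa * (1.0 - p_target) * p_fa

    # Example: two target trials and one non-target trial for Hindi detection.
    trials = [("hin", "hin", True), ("hin", "hin", False), ("hin", "tam", False)]
    print(detection_cost(*error_rates(trials)))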
Other NIST Language Recognition Evaluation data sets released by LDC include:
- 2003 NIST Language Recognition Evaluation (LDC2006S31)
- 2007 NIST Language Recognition Evaluation Test Set (LDC2009S04)
- 2007 NIST Language Recognition Evaluation Supplemental Training Set (LDC2009S05)
- 2009 NIST Language Recognition Evaluation Test Set (LDC2014S06)
- 2011 NIST Language Recognition Evaluation Test Set (LDC2018S06)
Data
Each speech file is one side of a "4-wire" telephone conversation represented as 8-bit 8-kHz mulaw data. There are 11,106 speech files in SPHERE (.sph) format for a total of 73.2 hours of speech. The speech data was compiled from LDC's CALLFRIEND corpora and from data collected by Oregon Health and Science University (OHSU), Beaverton, Oregon.
Each test segment was prepared using an automatic speech activity detection algorithm to identify areas and durations of speech. The test segments were stored in SPHERE file format, one segment per file. Unlike previous evaluations, areas of silence were not removed from the segments. Segments were chosen to contain a specified approximate duration of actual speech. Auxiliary information was included in the SPHERE headers to document the source file, start time, and duration of all excerpts that were used to construct the segment.
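The SPHERE files can be read with standard tools such as LDC's sph2pipe or the NIST SPHERE library. As a self-contained illustration, the Python sketch below parses the ASCII NIST_1A header (which carries fields such as the sample rate along with the auxiliary segment-provenance information described above) and expands the 8-bit mu-law samples to 16-bit linear PCM. It assumes the sample data is stored uncompressed; shorten-compressed SPHERE files would need to be converted first (for example with sph2pipe). The file name is a placeholder.

    # Illustrative SPHERE (.sph) reader: parse the NIST_1A header and decode
    # mu-law samples. Assumes uncompressed sample data; field handling is minimal.
    def read_sphere(path):
        with open(path, "rb") as f:
            if f.readline().strip() != b"NIST_1A":
                raise ValueError("not a NIST SPHERE file")
            header_size = int(f.readline().strip())   # total header length in bytes
            f.seek(0)
            header_text = f.read(header_size).decode("ascii", errors="replace")
            fields = {}
            for line in header_text.splitlines()[2:]:
                line = line.strip()
                if not line or line == "end_head":
                    continue
                name, ftype, value = line.split(None, 2)
                fields[name] = int(value) if ftype == "-i" else value
            return fields, f.read()                    # header dict, raw sample bytes

    def ulaw_to_linear(u):
        """Expand one 8-bit mu-law code to a signed 16-bit PCM value (G.711)."""
        u = ~u & 0xFF
        t = (((u & 0x0F) << 3) + 0x84) << ((u & 0x70) >> 4)
        return (0x84 - t) if (u & 0x80) else (t - 0x84)

    # Example: decode the first second of audio (8000 samples at 8 kHz).
    # "segment.sph" is a placeholder file name.
    fields, data = read_sphere("segment.sph")
    pcm = [ulaw_to_linear(b) for b in data[:8000]]
    print(fields.get("sample_rate"), len(pcm))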
The test segments contain three nominal durations of speech: 3 seconds, 10 seconds, and 30 seconds. Actual speech durations vary, but were constrained to be within the ranges of 2-4 seconds, 7-13 seconds, and 25-35 seconds, respectively. Note that this refers to duration of actual speech contained in segments as determined by the speech activity detection algorithm; signal durations in general are longer due to areas of silence in the segments. Shorter speech duration test segments are subsets of longer speech duration test segments; i.e., each 10-second test segment is a subset of a corresponding 30-second test segment, and each 3-second test segment is a subset of a corresponding 10-second segment. Performance was evaluated separately for test segments of each duration.
NIST recommends using data from the 1996 and 2003 evaluations as development data. This data may be found in 2003 NIST Language Recognition Evaluation (LDC2006S31). Because the 1996 and 2003 evaluations did not cover Indian-accented English, this release includes a development data set of Indian-accented English.
Samples
For an example of the data in this corpus, please listen to the following samples:
Updates
None at this time.