2005 NIST Language Recognition Evaluation
|Item Name:||2005 NIST Language Recognition Evaluation|
|Author(s):||Audrey Le, Alvin Martin, Hannah Hadfield, Jacques de Villiers, John-Paul Hosom, Jan van Santen|
|LDC Catalog No.:||LDC2008S05|
|Release Date:||June 16, 2008|
|Data Source(s):||telephone conversations|
|Application(s):||speech recognition, language identification|
|Language(s):||Tamil, Korean, Japanese, Hindi, English, Spanish, Mandarin Chinese|
|Language ID(s):||tam, kor, jpn, hin, eng, spa, cmn|
|License(s):||LDC User Agreement for Non-Members|
|Online Documentation:||LDC2008S05 Documents|
|Licensing Instructions:||Subscription & Standard Members, and Non-Members|
|Citation:||Le, Audrey, et al. 2005 NIST Language Recognition Evaluation LDC2008S05. Web Download. Philadelphia: Linguistic Data Consortium, 2008.|
2005 NIST Language Recognition Evaluation was prepared by NIST.
The goal of the NIST (National Institute of Standards and Technology) Language Recognition Evaluation (LRE) is to establish the baseline of current performance capability for language recognition of conversational telephone speech and to lay the groundwork for further research efforts in the field. NIST conducted two previous evaluations, in 1996 and 2003. For the 2005 LRE, the emphasis was on research toward a general base of technology that could be ported to various language recognition tasks with minimal effort, and on developing the ability to make more difficult discriminations between similar languages and between dialects of the same language. That focus augmented the traditional evaluation goals:
- to drive the technology forward
- to measure the state-of-the-art
- to find the most promising algorithmic approaches
The task evaluated was the detection of a given target language or dialect: given a test segment of speech and a target language or dialect, the system under evaluation determined whether the segment was spoken in that target language or dialect. The evaluation included speech from the seven languages listed above, with the following dialect distinctions:
- English (American)
- English (Indian)
- Mandarin (Mainland)
- Mandarin (Taiwan)
- Spanish (Mexican)
The 2005 NIST Language Recognition Evaluation Plan, which includes a description of the evaluation tasks, is included with this release.
Each speech file is one side of a "4-wire" telephone conversation represented as 8-bit, 8 kHz μ-law data. There are 11,106 speech files in SPHERE (.sph) format, for a total of 73.2 hours of speech. The speech data was compiled from LDC's CALLFRIEND corpora and from data collected by Oregon Health and Science University, Beaverton, Oregon.
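As a rough illustration (not part of the release), 8-bit μ-law samples of the kind used here can be expanded to 16-bit linear PCM with the standard G.711 μ-law expansion. The sketch below assumes the raw sample bytes have already been extracted from a file:

```python
def mulaw_decode(code):
    """Expand one G.711 mu-law byte to a 16-bit linear PCM sample."""
    code = ~code & 0xFF            # mu-law bytes are stored complemented
    sign = code & 0x80             # top bit is the sign
    exponent = (code >> 4) & 0x07  # 3-bit segment (exponent)
    mantissa = code & 0x0F         # 4-bit step within the segment
    # Reconstruct the magnitude, then remove the encoder's bias of 0x84 (132)
    magnitude = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -magnitude if sign else magnitude
```

With this expansion, code 0xFF maps to 0 and code 0x00 to the negative full-scale value -32124, matching the usual G.711 tables.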
Each test segment was prepared using an automatic speech activity detection algorithm to identify areas and durations of speech. The test segments were stored in SPHERE file format, one segment per file. Unlike previous evaluations, areas of silence were not removed from the segments. Segments were chosen to contain a specified approximate duration of actual speech. Auxiliary information was included in the SPHERE headers to document the source file, start time, and duration of all excerpts that were used to construct the segment.
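SPHERE headers are plain ASCII text at the front of each .sph file, so the auxiliary fields mentioned above can be inspected without special tools. The following is a minimal sketch of a header reader, assuming the standard SPHERE layout (a `NIST_1A` magic line, a fixed header size on the second line, `name -type value` fields, and a terminating `end_head`); the exact field names present depend on the file:

```python
def read_sphere_header(path):
    """Parse the plain-text header of a NIST SPHERE (.sph) file into a dict."""
    with open(path, "rb") as f:
        magic = f.readline().strip()
        if magic != b"NIST_1A":
            raise ValueError("not a SPHERE file")
        header_size = int(f.readline().strip())  # total header bytes
        f.seek(0)
        header = f.read(header_size).decode("ascii", errors="replace")
    fields = {}
    # Skip the magic and size lines, then read "name -type value" fields
    for line in header.splitlines()[2:]:
        parts = line.split(None, 2)
        if not parts or parts[0] == "end_head":
            break
        if len(parts) == 3:
            name, ftype, value = parts
            fields[name] = int(value) if ftype.startswith("-i") else value
    return fields
```

For the files in this release one would expect, e.g., `sample_rate` of 8000 and `channel_count` of 1, alongside the segment-provenance fields described above.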
The test segments contain three nominal durations of speech: 3 seconds, 10 seconds, and 30 seconds. Actual speech durations vary, but were constrained to be within the ranges of 2-4 seconds, 7-13 seconds, and 25-35 seconds, respectively. Note that this refers to duration of actual speech contained in segments as determined by the speech activity detection algorithm; signal durations in general are longer due to areas of silence in the segments. Shorter speech duration test segments are subsets of longer speech duration test segments; i.e., each 10-second test segment is a subset of a corresponding 30-second test segment, and each 3-second test segment is a subset of a corresponding 10-second segment. Performance was evaluated separately for test segments of each duration.
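The duration constraints above can be captured in a tiny hypothetical helper that maps a measured speech duration (as a speech activity detector would report it) to its nominal bucket; the function name and structure are illustrative, not part of the evaluation tooling:

```python
# Nominal duration (seconds) -> allowed range of actual speech (seconds),
# per the 2005 LRE segment construction described above.
DURATION_RANGES = {3: (2, 4), 10: (7, 13), 30: (25, 35)}

def nominal_duration(speech_seconds):
    """Return the nominal bucket for a measured speech duration, or None."""
    for nominal, (lo, hi) in DURATION_RANGES.items():
        if lo <= speech_seconds <= hi:
            return nominal
    return None  # falls in a gap between the defined ranges
```

Note that the ranges do not tile the number line (e.g. 5 seconds of speech fits no bucket), reflecting that segments were deliberately constructed to land inside one of the three ranges.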
NIST recommends using data from the 1996 and 2003 evaluations as development data. This data may be found in 2003 NIST Language Recognition Evaluation, LDC2006S31. Because the 1996 and 2003 evaluations did not cover Indian-accented English, this release includes a development data set of Indian-accented English.
For an example of the data in this corpus, please review the following audio samples (WAV format):