2006 NIST Spoken Term Detection Development Set
Item Name: | 2006 NIST Spoken Term Detection Development Set |
Author(s): | NIST Multimodal Information Group |
LDC Catalog No.: | LDC2011S02 |
ISBN: | 1-58563-583-9 |
ISLRN: | 560-424-742-579-6 |
DOI: | https://doi.org/10.35111/ydyw-sd24 |
Release Date: | June 17, 2011 |
Member Year(s): | 2011 |
DCMI Type(s): | Sound |
Sample Type: | ulaw |
Sample Rate: | 8000 |
Data Source(s): | broadcast news, meeting speech, microphone conversation, telephone conversations |
Application(s): | spoken term detection |
Language(s): | English, Mandarin Chinese, Standard Arabic, Arabic |
Language ID(s): | eng, cmn, arb, ara |
License(s): | LDC User Agreement for Non-Members |
Online Documentation: | LDC2011S02 Documents |
Licensing Instructions: | Subscription & Standard Members, and Non-Members |
Citation: | NIST Multimodal Information Group. 2006 NIST Spoken Term Detection Development Set LDC2011S02. Web Download. Philadelphia: Linguistic Data Consortium, 2011. |
Introduction
2006 NIST Spoken Term Detection Development Set, Linguistic Data Consortium (LDC) catalog number LDC2011S02 and ISBN 1-58563-583-9, was compiled by researchers at NIST (National Institute of Standards and Technology). It contains approximately eighteen hours of Arabic, Chinese and English broadcast news, English conversational telephone speech and English meeting room speech used in NIST's 2006 Spoken Term Detection (STD) evaluation. The STD initiative is designed to facilitate research and development of technology for retrieving information from archives of speech data, with the goals of exploring promising new ideas in spoken term detection, developing advanced technology incorporating those ideas, measuring the performance of that technology and establishing a community for the exchange of research results and technical insights.
The 2006 STD task was to find all occurrences of a specified term (a sequence of one or more words) in a given corpus of speech data. The evaluation was intended to develop technology for rapidly searching very large quantities of audio data. Although the evaluation used modest amounts of data, it was structured to simulate the very-large-data situation and to make it possible to extrapolate speed measurements to much larger data sets. Systems were therefore implemented in two phases: indexing and searching. In the indexing phase, the system processes the speech data without knowledge of the terms. In the searching phase, the system uses the terms, the index and, optionally, the audio to detect term occurrences.
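The indexing/searching split can be illustrated with a small sketch. The Python snippet below is a minimal, hypothetical example assuming word-level transcripts of the form (word, start time, duration); it is not the evaluation's reference or scoring code, and all names (build_index, search, the example file ID) are invented for illustration.

```python
from collections import defaultdict

def build_index(transcripts):
    """Indexing phase: run once over the archive with no knowledge of the
    query terms, producing a word -> occurrence-list inverted index."""
    index = defaultdict(list)
    for file_id, words in transcripts.items():
        for position, (word, start, dur) in enumerate(words):
            index[word.lower()].append((file_id, position, start, dur))
    return index

def search(index, transcripts, term):
    """Searching phase: use the term and the index (plus the stored word
    sequences) to locate putative occurrences of a one-or-more-word term."""
    tokens = term.lower().split()
    hits = []
    for file_id, position, start, _ in index.get(tokens[0], []):
        span = transcripts[file_id][position:position + len(tokens)]
        if len(span) == len(tokens) and [w.lower() for w, _, _ in span] == tokens:
            end_time = span[-1][1] + span[-1][2]  # start + duration of last word
            hits.append((file_id, start, end_time))
    return hits

if __name__ == "__main__":
    # Hypothetical word-level transcript: (word, start time in s, duration in s).
    transcripts = {
        "bnews_example": [("spoken", 12.10, 0.42), ("term", 12.55, 0.31),
                          ("detection", 12.90, 0.58), ("systems", 13.60, 0.45)],
    }
    index = build_index(transcripts)                            # term-independent
    print(search(index, transcripts, "spoken term detection"))  # term-dependent
    # -> [('bnews_example', 12.1, 13.48)]
```

The point of the split is that the indexing cost is paid once per archive, independent of the queries, while each search touches only the comparatively small index, which is what allows speed measurements to be extrapolated to much larger data sets.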
Data
The development corpus consists of three data genres: broadcast news (BNews), conversational telephone speech (CTS) and conference room meetings (CONFMTG). The broadcast news material was collected in 2001 by LDC's broadcast collection system from the following sources: ABC (English), China Broadcasting System (Chinese), China Central TV (Chinese), China National Radio (Chinese), China Television System (Chinese), CNN (English), MSNBC/NBC (English), Nile TV (Arabic), Public Radio International (English) and Voice of America (Arabic, Chinese, English). The CTS data was taken from the Switchboard data sets (e.g., Switchboard-2 Phase 1 LDC98S75, Switchboard-2 Phase 2 LDC99S79) and the Fisher corpora (e.g., Fisher English Training Speech Part 1 LDC2004S13), also collected by LDC. The conference room meeting material consists of goal-oriented, small-group roundtable meetings and was collected in 2001, 2004 and 2005 by NIST, the International Computer Science Institute (Berkeley, CA), Carnegie Mellon University (Pittsburgh, PA) and Virginia Polytechnic Institute and State University (Blacksburg, VA) as part of the AMI corpus project.
Each BNews recording is a single-channel, PCM-encoded, 16 kHz, SPHERE-formatted file. CTS recordings are 2-channel, u-law encoded, 8 kHz, SPHERE-formatted files. The CONFMTG files contain a single recorded channel.
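For readers who want to inspect these files programmatically, the sketch below parses the ASCII header that begins every SPHERE (.sph) file. The header keys shown (sample_rate, channel_count, sample_coding) are standard SPHERE fields, but the file path is hypothetical; the NIST SPHERE utilities or sox can equally be used to read or convert these files.

```python
def read_sphere_header(path):
    """Parse the ASCII header at the start of a NIST SPHERE (.sph) file and
    return its fields, e.g. sample_rate, channel_count, sample_coding."""
    with open(path, "rb") as f:
        magic = f.readline().strip()             # first line: b"NIST_1A"
        if magic != b"NIST_1A":
            raise ValueError(f"{path} does not look like a SPHERE file")
        header_size = int(f.readline().strip())  # second line: header length in bytes
        f.seek(0)
        header = f.read(header_size).decode("ascii", errors="replace")

    fields = {}
    for line in header.splitlines()[2:]:         # skip the magic and size lines
        if line.strip() == "end_head":
            break
        parts = line.split(None, 2)              # key, type flag (-i/-r/-sN), value
        if len(parts) == 3:
            key, type_flag, value = parts
            fields[key] = int(value) if type_flag == "-i" else value
    return fields

# Hypothetical usage (file name invented for illustration):
# info = read_sphere_header("cts/example.sph")
# print(info["sample_rate"], info["channel_count"], info.get("sample_coding"))
```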
Samples
For an example of the data in this corpus, please review this audio sample (wav).