NIST Meeting Pilot Corpus Speech
| Field | Value |
|---|---|
| Item Name | NIST Meeting Pilot Corpus Speech |
| Author(s) | John S. Garofolo, Martial Michel, Vincent M. Stanford, Elham Tabassi, Jonathan G. Fiscus, Christophe D. Laprun, Nicolas Pratz, Jerome Lard |
| LDC Catalog No. | LDC2004S09 |
| ISBN | 1-58563-302-X |
| ISLRN | 706-538-229-826-0 |
| DOI | https://doi.org/10.35111/800p-fv08 |
| Release Date | July 12, 2004 |
| Member Year(s) | 2004 |
| DCMI Type(s) | Sound |
| Sample Type | pcm |
| Sample Rate | 16000 |
| Data Source(s) | meeting speech, microphone conversation, microphone speech |
| Project(s) | NIST Automatic Meeting Recognition |
| Application(s) | automatic content extraction, discourse analysis, information retrieval, language modeling, speaker identification, speaker verification, speech recognition |
| Language(s) | English |
| Language ID(s) | eng |
| License(s) | LDC User Agreement for Non-Members |
| Online Documentation | LDC2004S09 Documents |
| Licensing Instructions | Subscription & Standard Members, and Non-Members |
| Citation | Garofolo, John S., et al. NIST Meeting Pilot Corpus Speech LDC2004S09. Web Download. Philadelphia: Linguistic Data Consortium, 2004. |
| Related Works | View |
Introduction
NIST Meeting Pilot Corpus Speech was developed by the National Institute of Standards and Technology (NIST) and contains approximately 15 hours of English meeting speech.
The corresponding transcripts for these speech files are available as NIST Meeting Pilot Corpus Transcripts and Metadata (LDC2004T13).
Huge efforts are being expended on mining information from newswire, news broadcasts, and conversational speech; however, little has been done to address such applications in the more challenging and equally important meeting domain. Meetings have several important properties not found in other domains: they vary widely in formality and vocabulary, are highly interactive across multiple participants, are captured with distant microphones and overlapping camera views, and require multi-media information integration.
The development of smart meeting room core technologies that can automatically recognize and extract important information from multi-media sensor inputs will provide an invaluable resource for a variety of business, academic, and governmental applications.
Data
The data for the NIST Automatic Meeting Recognition Project was collected at the NIST Gaithersburg, MD, Meeting Data Collection Laboratory. This release contains 369 SPHERE audio files generated from 19 meetings (comprising about 15 hours of meeting room data and amounting to about 32 GB) recorded between November 2001 and December 2003.
Each meeting participant wore two wireless "personal" microphones: a close-talking noise-cancelling boom mic and an omni-directional lapel mic. Each meeting was also recorded using three omni-directional table mics and a four-channel directional table mic covering 360 degrees (each channel is recorded in a separate file). Each individual channel was converted from its 48 kHz, 24-bit, linear PCM source format to 16 kHz, 16-bit, linear PCM audio in SPHERE-formatted files.
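For orientation, the sketch below shows one way to read a single-channel, uncompressed SPHERE file of this kind using only the Python standard library. It is a minimal illustration under stated assumptions, not part of the corpus tooling: the file name `example.sph` is a placeholder, and any files distributed with embedded "shorten" compression would first need to be decompressed with the NIST SPHERE utilities or an equivalent tool.

```python
# Minimal sketch (not corpus tooling): parse a NIST SPHERE header and load
# 16-bit linear PCM samples. Assumes an uncompressed, single-channel file as
# described above; "example.sph" is a placeholder name, not a corpus file.
import struct

def read_sphere(path):
    with open(path, "rb") as f:
        f.readline()                                      # b"NIST_1A\n" magic line
        header_size = int(f.readline().decode().strip())  # header size, usually 1024
        f.seek(0)
        header_text = f.read(header_size).decode("ascii", errors="replace")
        header = {}
        for line in header_text.splitlines()[2:]:         # skip magic and size lines
            if not line.strip():
                continue
            if line.strip() == "end_head":
                break
            name, _type, value = line.split(None, 2)      # e.g. "sample_rate -i 16000"
            header[name] = value
        f.seek(header_size)
        raw = f.read()                                    # sample data follows the header

    n = int(header["sample_count"])
    # sample_byte_format "01" = little-endian, "10" = big-endian
    endian = "<" if header.get("sample_byte_format", "01") == "01" else ">"
    samples = struct.unpack(endian + str(n) + "h", raw[: 2 * n])
    return header, samples

header, samples = read_sphere("example.sph")
print(header.get("sample_rate"), len(samples))
```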
A total of 61 subjects were involved in these meetings. The following is a breakdown by participant origin and gender:
| Origin | # Male Instances | # Unique Males | # Female Instances | # Unique Females | Total Participant Instances | Total Unique Participants |
|---|---|---|---|---|---|---|
| Native | 54 | 30 | 33 | 15 | 87 | 45 |
| Non-Native | 18 | 11 | 10 | 5 | 28 | 16 |
| Total | 72 | 41 | 43 | 20 | 115 | 61 |
Updates
There are no updates available at this time.