Speech Controlled Computing
Item Name: Speech Controlled Computing
Author(s): Christopher Cieri, David Miller, Nii O. Martey, Kazuaki Maeda
LDC Catalog No.: LDC2006S30
ISBN: 1-58563-380-1
ISLRN: 185-835-412-868-8
DOI: https://doi.org/10.35111/6kce-vx52
Release Date: March 24, 2006
Member Year(s): 2006
DCMI Type(s): Sound
Sample Type: pcm
Sample Rate: 48000
Data Source(s): microphone speech
Application(s): machine learning, speech recognition
Language(s): English
Language ID(s): eng
License(s): Speech Controlled Computing (Non-Members)
Online Documentation: LDC2006S30 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: Cieri, Christopher, et al. Speech Controlled Computing LDC2006S30. Web Download. Philadelphia: Linguistic Data Consortium, 2006.
Introduction
Speech Controlled Computing was developed by the Linguistic Data Consortium (LDC) and consists of 261,535 audio files, each containing a single American English utterance.
The Speech Controlled Computing corpus was designed to support the development of small-footprint, embedded ASR applications in the domain of voice control for the home. It consists of audio files generated from recording sessions that took place between December 2003 and July 2004 with 125 speakers of American English, drawn from four dialect regions, three age groups, and two gender groups, pronouncing isolated words. The four dialect regions covered by the corpus are North, South, West, and Midland, as defined in William Labov's Atlas of North American English. The three age groups covered by the corpus are 18-29, 30-49, and 50+.
Data
The recordings were conducted in a sound-attenuated room at LDC using an AKG C4000B studio condenser microphone in its omni-directional mode. Each speaker read a randomized word list of 2,100 tokens (100 distinct words, each appearing 21 times). Utterances were digitized and recorded both to DAT and to a hard disk drive via the Townshend DATLINK+ digital audio interface. All of the audio files are single-channel, 48 kHz, 16-bit PCM WAV files.
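Because every file in the corpus shares this format, the properties are easy to verify programmatically. The following minimal sketch uses Python's standard wave module to check the channel count, sample width, and sample rate of one file; the filename is a hypothetical placeholder, not the corpus's actual naming scheme.

    import wave

    # "utterance_0001.wav" is a placeholder name for illustration only.
    with wave.open("utterance_0001.wav", "rb") as wf:
        assert wf.getnchannels() == 1      # single channel
        assert wf.getsampwidth() == 2      # 16-bit PCM (2 bytes per sample)
        assert wf.getframerate() == 48000  # 48 kHz sample rate
        n = wf.getnframes()
        print(f"{n} samples, {n / 48000:.3f} seconds of audio")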
Utterances were audited as they were recorded, and any that the recording staff judged to be unclear or incorrectly spoken were re-recorded. This included utterances corrupted by extraneous clicks, coughs, sighs, or audible breathing. Utterances spoken too softly or too loudly were also re-recorded.
The digitized utterances were automatically segmented and aligned to the word list. Each utterance was then audited, and its segmentation checked and corrected where necessary, by an annotator using an auditing and segmenting tool developed at LDC.
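The auditing and segmenting tool itself is not part of the distribution, but the flavor of automatic segmentation can be suggested with a simple energy-based endpoint detector. The sketch below is an illustrative stand-in, not LDC's method: it marks 20 ms frames whose RMS energy rises above a threshold relative to the file's peak, then merges consecutive speech frames into candidate utterance spans.

    import numpy as np

    def segment_by_energy(samples, rate=48000, frame_ms=20, threshold_db=-35.0):
        """Rough endpoint detection over int16 PCM samples. Returns a list
        of (start, end) sample indices for candidate utterances."""
        frame_len = int(rate * frame_ms / 1000)
        n_frames = len(samples) // frame_len
        frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
        rms = np.sqrt(np.mean(frames.astype(np.float64) ** 2, axis=1) + 1e-12)
        speech = 20 * np.log10(rms / rms.max()) > threshold_db
        segments, start = [], None
        for i, is_speech in enumerate(speech):
            if is_speech and start is None:
                start = i                          # speech run begins
            elif not is_speech and start is not None:
                segments.append((start * frame_len, i * frame_len))
                start = None                       # speech run ends
        if start is not None:
            segments.append((start * frame_len, n_frames * frame_len))
        return segments

In practice the candidate spans would then be aligned against the known word list, and that alignment is what the annotators subsequently audited.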
Finally, sound files containing individual utterances were generated from the alignment and segmentation information, with 100 ms of silence before and after each utterance. Any files containing noticeable clipping were automatically removed, which presumably accounts for the corpus holding 261,535 files rather than the full 262,500 elicited tokens (125 speakers × 2,100 words each).
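The final cutting step can likewise be sketched: pad each segment with 100 ms of silence (4,800 samples at 48 kHz) and screen the result for clipping. The clipping test below, which flags runs of consecutive samples pinned at the 16-bit limits, is one plausible automatic criterion; the exact test LDC applied is not documented here.

    import numpy as np

    RATE = 48000
    PAD = int(0.100 * RATE)  # 100 ms of silence = 4,800 samples at 48 kHz

    def cut_utterance(samples, start, end):
        """Excise one utterance from int16 PCM samples and surround it
        with 100 ms of digital silence, as described above."""
        pad = np.zeros(PAD, dtype=np.int16)
        return np.concatenate([pad, samples[start:end], pad])

    def is_clipped(samples, limit=32767, run=3):
        """Assumed clipping test: any run of `run` consecutive samples at
        or beyond the 16-bit rails counts as clipping."""
        at_rail = np.abs(samples.astype(np.int32)) >= limit
        return any(at_rail[i : i + run].all()
                   for i in range(len(at_rail) - run + 1))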
Samples
For an example of the data in this corpus, please listen to this sample (WAV).
Updates
None at this time.