Speech Controlled Computing

Item Name: Speech Controlled Computing
Author(s): Christopher Cieri, David Miller, Nii Martey, Kazuaki Maeda
LDC Catalog No.: LDC2006S30
ISBN: 1-58563-380-1
ISLRN: 185-835-412-868-8
Release Date: March 24, 2006
Member Year(s): 2006
DCMI Type(s): Sound
Sample Type: pcm
Sample Rate: 48000
Data Source(s): microphone speech
Application(s): machine learning, speech recognition
Language(s): English
Language ID(s): eng
License(s): Speech Controlled Computing (Non-Members)
Online Documentation: LDC2006S30 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: Cieri, Christopher, et al. Speech Controlled Computing LDC2006S30. DVD. Philadelphia: Linguistic Data Consortium, 2006.

Introduction

This file contains documentation on Speech Controlled Computing, Linguistic Data Consortium (LDC) catalog number LDC2006S30 and ISBN 1-58563-380-1.

The Speech Controlled Computing corpus was designed to support the development of small-footprint, embedded ASR applications in the domain of voice control for the home. It consists of recordings of 125 speakers of American English from four dialect regions, three age groups and two gender groups, pronouncing isolated words. The four primary dialect regions covered by the corpus are North, South, West and Midland, as defined by William Labov's Atlas of North American English. The three primary age groups covered by the corpus are 18-29, 30-49 and 50+.
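As a rough illustration of the stratification described above, the following Python sketch enumerates the region-by-age-by-gender cells. The labels follow this description, but the per-cell speaker counts are not part of the corpus documentation and are therefore not shown.

    from itertools import product

    # Demographic factors as described above; cell-by-cell speaker
    # counts are not specified in this documentation.
    regions = ["North", "South", "West", "Midland"]  # per Labov's Atlas
    age_groups = ["18-29", "30-49", "50+"]
    genders = ["female", "male"]

    cells = list(product(regions, age_groups, genders))
    print(f"{len(cells)} demographic cells covering 125 speakers")  # 24 cells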

The recordings were conducted in a sound-attenuated room at LDC using an AKG C4000B studio condenser microphone operating in omni-directional mode. Each speaker read a randomized word list consisting of 2,100 words (100 distinct words appearing 21 times each). Speech utterances were digitized and recorded to DAT, as well as to a hard disk drive via the Townshend DATLINK+ digital audio interface.
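The structure of the reading script can be sketched in Python: 100 distinct words, each repeated 21 times, shuffled into a 2,100-prompt list. The placeholder vocabulary and fixed seed below are illustrative assumptions; the actual word list is part of the corpus documentation.

    import random

    words = [f"word{i:03d}" for i in range(100)]  # placeholder for the 100 words
    script = words * 21                           # 2,100 prompts in total

    random.seed(42)                               # assumed seed, for repeatability
    random.shuffle(script)
    assert len(script) == 2100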

Speech utterances were audited as they were recorded, and any utterances that were not spoken clearly or correctly were re-recorded. This included utterances corrupted by extraneous clicks, coughs, sighs or breathing. Utterances that were spoken too softly or too loudly were also re-recorded.
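A level check of the kind applied during auditing might look like the following Python sketch, which assumes 16-bit PCM samples loaded as a NumPy array. The dB thresholds are illustrative assumptions, not values taken from the corpus documentation.

    import numpy as np

    def level_flags(samples: np.ndarray, full_scale: float = 32768.0) -> dict:
        """Flag an utterance that is too soft, too loud, or clipped."""
        rms_db = 20 * np.log10(np.sqrt(np.mean((samples / full_scale) ** 2)) + 1e-12)
        clipped = np.any(np.abs(samples) >= full_scale - 1)
        return {
            "too_soft": bool(rms_db < -40.0),  # assumed floor
            "too_loud": bool(rms_db > -6.0),   # assumed ceiling
            "clipped": bool(clipped),
        }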

The digitized utterances were automatically segmented and aligned to the word list. An annotator then audited each utterance and checked the segmentation, correcting it where necessary, using an auditing and segmentation tool developed by LDC.
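LDC's segmenter is not described here, but simple energy-based endpointing of the following kind is one way utterance boundaries could be proposed before alignment to the word list. The frame size and energy threshold are assumptions for illustration.

    import numpy as np

    def endpoints(samples: np.ndarray, rate: int,
                  frame_ms: int = 10, thresh: float = 0.02):
        """Return (start, end) sample offsets of the active region, or None."""
        frame = int(rate * frame_ms / 1000)
        n = len(samples) // frame
        # Mean absolute amplitude per frame, normalized to [0, 1] for 16-bit PCM.
        energy = np.abs(samples[: n * frame].reshape(n, frame) / 32768.0).mean(axis=1)
        active = np.where(energy > thresh)[0]
        if active.size == 0:
            return None
        return active[0] * frame, (active[-1] + 1) * frame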

Finally, sound files containing the individual utterances were generated from the alignment and segmentation information. Each sound file was created with 100 ms of silence before and after the utterance, and any files that contained noticeable clipping were automatically removed.
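Generating a padded utterance file from the segmentation can be sketched as follows, assuming 16-bit PCM WAV output; the function name and file paths are hypothetical.

    import wave
    import numpy as np

    def write_utterance(samples: np.ndarray, start: int, end: int,
                        rate: int, path: str) -> None:
        """Write one utterance with 100 ms of silence before and after."""
        pad = np.zeros(int(0.1 * rate), dtype=np.int16)  # 100 ms of silence
        cut = np.concatenate([pad, samples[start:end].astype(np.int16), pad])
        with wave.open(path, "wb") as out:
            out.setnchannels(1)    # mono
            out.setsampwidth(2)    # 16-bit PCM
            out.setframerate(rate)
            out.writeframes(cut.tobytes())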

Samples

An audio sample from this corpus is available on the LDC2006S30 catalog page.
