|Author(s):||Jon Barker, Ricard Marxer, Emmanuel Vincent, Shinji Watanabe|
|LDC Catalog No.:||LDC2017S24|
|Release Date:||December 15, 2017|
|DCMI Type(s):||Sound, Text|
|Data Source(s):||microphone speech|
|License(s):||LDC User Agreement for Non-Members|
|Licensing Instructions:||Subscription & Standard Members, and Non-Members|
|Citation:||Barker, Jon, et al. CHiME3 LDC2017S24. USB Flash Drive. Philadelphia: Linguistic Data Consortium, 2017.|
CHiME3 was developed as part of The 3rd CHiME Speech Separation and Recognition Challenge and contains approximately 342 hours of English speech and transcripts recorded in noisy environments, plus 50 hours of noisy-environment background audio. The CHiME Challenges focus on distant-microphone automatic speech recognition (ASR) in real-world environments. See the CHiME3 home page for more information.
The task in CHiME3 was similar to the medium vocabulary track of the CHiME2 Challenge in that the target utterances were taken from CSR-I (WSJ0) Complete (LDC93S6A), specifically, the 5,000-word subset of read speech from Wall Street Journal news text. CHiME3 involved two types of data: speech recorded in very noisy environments (on a bus, in a cafe, in a pedestrian area, and at a street junction) and noisy utterances generated by artificially mixing clean speech with noisy backgrounds.
Data is divided into training, development and test sets. All data is provided as 16-bit WAV files sampled at 16 kHz. The audio data consists of background noise recordings, speech enhanced with the baseline speech enhancement technique, unsegmented noisy speech, and segmented noisy speech.
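As a quick sanity check on the stated audio format, the sketch below writes a one-second synthetic 16-bit, 16 kHz mono WAV file with Python's standard-library wave module and reads its parameters back; the filename is illustrative only, not an actual CHiME3 path.

```python
import math
import struct
import wave

SAMPLE_RATE = 16000  # CHiME3 audio is sampled at 16 kHz
N_SAMPLES = SAMPLE_RATE  # one second of audio

# Write a synthetic 440 Hz tone with the same parameters as CHiME3 data.
# "example_16k.wav" is a hypothetical filename used only for this demo.
with wave.open("example_16k.wav", "wb") as out:
    out.setnchannels(1)   # mono
    out.setsampwidth(2)   # 16-bit samples -> 2 bytes each
    out.setframerate(SAMPLE_RATE)
    frames = b"".join(
        struct.pack("<h", int(8000 * math.sin(2 * math.pi * 440 * t / SAMPLE_RATE)))
        for t in range(N_SAMPLES)
    )
    out.writeframes(frames)

# Read the file back and confirm the format matches the catalog description.
with wave.open("example_16k.wav", "rb") as wav:
    params = (wav.getframerate(), wav.getsampwidth() * 8, wav.getnframes())
print(params)  # (sample rate in Hz, bit depth, frame count)
```

The same read-side check applies to real CHiME3 WAV files: each should report a 16000 Hz frame rate and a 2-byte (16-bit) sample width.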