| Item Name: | CHiME2 Grid |
| Author(s): | Emmanuel Vincent, Jon Barker, Shinji Watanabe, Jonathan Le Roux, Francesco Nesta, Marco Matassoni |
| LDC Catalog No.: | LDC2017S07 |
| Release Date: | April 17, 2017 |
| Data Source(s): | microphone speech |
| License(s): | LDC User Agreement for Non-Members |
| Online Documentation: | LDC2017S07 Documents |
| Licensing Instructions: | Subscription & Standard Members, and Non-Members |
| Citation: | Vincent, Emmanuel, et al. CHiME2 Grid LDC2017S07. Web Download. Philadelphia: Linguistic Data Consortium, 2017. |
CHiME2 Grid was developed as part of The 2nd CHiME Speech Separation and Recognition Challenge and contains approximately 120 hours of English speech from a noisy living room environment. The CHiME Challenges focus on distant-microphone automatic speech recognition (ASR) in real-world environments.
CHiME2 Grid reflects the small vocabulary track of the CHiME2 Challenge. The target utterances were taken from the Grid corpus and consist of simple six-word command sequences read by 34 speakers.
LDC also released CHiME2 WSJ0 (LDC2017S10) and CHiME3 (LDC2017S24).
Data is divided into training, development and test sets. All audio is provided as 16-bit WAV files sampled at 16 kHz. The noisy utterances are provided in both isolated and embedded form: embedded training utterances include five seconds of background noise before and after the utterance, while embedded development and test utterances are mixed into continuous five-minute background noise recordings. Seven hours of background noise not used in the training set are also included. The data is accompanied by one annotation file per speaker that includes additional technical information.
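The relationship between the embedded and isolated forms of a training utterance can be sketched as below. This is a hedged illustration, not part of the corpus tools: it assumes only what the description above states (16 kHz audio, five seconds of noise context on each side of a training utterance), and the function name is hypothetical.

```python
# Hedged sketch: recovering the isolated utterance from an embedded
# training-set signal, which per the corpus description carries five
# seconds of background noise before and after the utterance.
# The function name and the synthetic example are hypothetical.

SAMPLE_RATE = 16000      # all CHiME2 Grid audio is 16 kHz, 16-bit
CONTEXT_SECONDS = 5      # noise padding on each side (training set)

def trim_embedded(samples, sample_rate=SAMPLE_RATE, context=CONTEXT_SECONDS):
    """Drop the leading and trailing noise context from an embedded utterance."""
    pad = context * sample_rate
    if len(samples) <= 2 * pad:
        raise ValueError("signal is shorter than its noise context")
    return samples[pad:len(samples) - pad]

# Example with a synthetic 12-second signal (5 s noise + 2 s utterance + 5 s noise):
signal = [0] * (12 * SAMPLE_RATE)
utterance = trim_embedded(signal)
print(len(utterance) / SAMPLE_RATE)  # 2.0
```

For the development and test sets this simple slicing does not apply, since those utterances are mixed at arbitrary positions within continuous five-minute noise recordings and must be located via the per-speaker annotation files.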
Also included are a baseline Hidden Markov Model (HMM)-based speech recogniser and a scoring tool designed for the 2nd CHiME Challenge. These allow users to obtain keyword recognition scores from formatted result files, run recognition and scoring on the challenge data, and estimate the parameters of speaker-dependent HMMs.
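The keyword scoring idea can be illustrated as follows. In the Grid small-vocabulary task, each six-word command contains a letter and a digit keyword, and performance is typically reported as the percentage of those keywords recognised correctly. The sketch below is a hedged stand-in for the corpus's actual scoring tool: the word-position convention and function names are assumptions for illustration only.

```python
# Hedged sketch of keyword scoring for the Grid small-vocabulary task.
# Assumes the six-word Grid command structure
#   verb colour preposition LETTER DIGIT adverb
# so the letter and digit keywords sit at word positions 3 and 4.
# This is NOT the corpus's scoring tool; names are hypothetical.

def keywords(utterance):
    """Return the (letter, digit) keyword pair from a Grid command."""
    words = utterance.lower().split()
    return words[3], words[4]

def keyword_accuracy(refs, hyps):
    """Fraction of letter/digit keywords recognised correctly."""
    correct = total = 0
    for ref, hyp in zip(refs, hyps):
        for r, h in zip(keywords(ref), keywords(hyp)):
            total += 1
            correct += r == h
    return correct / total

refs = ["place blue at f nine now", "set red by m five please"]
hyps = ["place blue at f five now", "set red by m five please"]
print(keyword_accuracy(refs, hyps))  # 0.75
```

The second hypothesis matches both keywords; the first matches only the letter, giving 3 of 4 keywords correct.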