First DIHARD Challenge Development - Eight Sources
|Item Name:||First DIHARD Challenge Development - Eight Sources|
|Author(s):||Neville Ryant, Mark Liberman, James Fiumara, Christopher Cieri|
|LDC Catalog No.:||LDC2019S09|
|Release Date:||June 17, 2019|
|DCMI Type(s):||Sound, Text, Software|
|Data Source(s):||microphone speech, broadcast conversation, meeting speech, web collection|
|Application(s):||speech activity detection, diarization|
|Language(s):||English, Mandarin Chinese|
|Language ID(s):||eng, cmn|
|License(s):||LDC User Agreement for Non-Members|
|Online Documentation:||LDC2019S09 Documents|
|Licensing Instructions:||Subscription & Standard Members, and Non-Members|
|Citation:||Ryant, Neville, et al. First DIHARD Challenge Development - Eight Sources LDC2019S09. Web Download. Philadelphia: Linguistic Data Consortium, 2019.|
First DIHARD Challenge Development - Eight Sources was developed by the Linguistic Data Consortium (LDC) and contains approximately 17 hours of English and Chinese speech data along with corresponding annotations used in support of the First DIHARD Challenge.
The First DIHARD Challenge was an attempt to reinvigorate work on diarization through a shared task focusing on "hard" diarization; that is, speech diarization for challenging corpora where there was an expectation that existing state-of-the-art systems would fare poorly. As such, it included speech from a wide sampling of domains representing diversity in number of speakers, speaker demographics, interaction style, recording quality, and environmental conditions, including, but not limited to: clinical interviews, extended child language acquisition recordings, YouTube recordings, and conversations collected in restaurants.
This release, when combined with First DIHARD Challenge Development - SEEDLingS (LDC2019S10), contains the development set audio data and annotation as well as the official scoring tool. The evaluation data for the First DIHARD Challenge is also available from LDC as Nine Sources (LDC2019S12) and SEEDLingS (LDC2019S13).
The source data was drawn from the following (all sources are in English unless otherwise indicated):
- Autism Diagnostic Observation Schedule (ADOS) interviews
- DCIEM/HCRC map task (LDC96S38)
- Audiobook recordings from LibriVox
- Meeting speech from 2004 Spring NIST Rich Transcription (RT-04S) Development (LDC2007S11) and Evaluation (LDC2007S12) releases
- 2001 U.S. Supreme Court oral arguments
- Sociolinguistic interviews from SLX Corpus of Classic Sociolinguistic Interviews (LDC2003T15)
- Chinese video collected by LDC as part of the Video Annotation for Speech Technologies (VAST) project
- YouthPoint radio interviews
All audio is provided in the form of 16 kHz, mono-channel FLAC files. The diarization for each recording is stored as a NIST Rich Transcription Time Marked (RTTM) file. RTTM files are space-separated text files containing one turn per line. Segmentation files are stored as HTK label files, each containing one speech segment per line. Both annotation file types are encoded as UTF-8. More information about the file formats and data sources is available in the included documentation.
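As a rough illustration of the RTTM layout described above, the sketch below parses the space-separated turn lines with standard-library Python. It assumes the conventional 10-field RTTM layout (type, file ID, channel, onset, duration, orthography, speaker type, speaker name, confidence, signal lookahead); the recording and speaker IDs in the sample are hypothetical, and the corpus's own documentation should be consulted for the authoritative field definitions.

```python
# Minimal sketch of reading SPEAKER turns from an RTTM file.
# Assumes the standard 10-field space-delimited layout; fields we
# do not need here (e.g. orthography, confidence) are skipped.
from collections import namedtuple

Turn = namedtuple("Turn", ["file_id", "onset", "duration", "speaker"])

def parse_rttm(lines):
    """Yield one Turn per SPEAKER line in an RTTM file."""
    for line in lines:
        fields = line.split()
        if not fields or fields[0] != "SPEAKER":
            continue
        yield Turn(
            file_id=fields[1],       # recording identifier
            onset=float(fields[3]),  # turn start, in seconds
            duration=float(fields[4]),
            speaker=fields[7],       # speaker label
        )

# Hypothetical example lines (recording and speaker IDs invented):
sample = [
    "SPEAKER rec_01 1 0.50 2.30 <NA> <NA> spk_A <NA> <NA>",
    "SPEAKER rec_01 1 2.80 1.10 <NA> <NA> spk_B <NA> <NA>",
]
turns = list(parse_rttm(sample))
```

In practice the lines would come from an open file handle rather than an in-memory list; the parser works unchanged either way since it only iterates over lines.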