First DIHARD Challenge Evaluation - Nine Sources
|Item Name:||First DIHARD Challenge Evaluation - Nine Sources|
|Author(s):||Neville Ryant, Mark Liberman, James Fiumara, Christopher Cieri|
|LDC Catalog No.:||LDC2019S12|
|Release Date:||July 15, 2019|
|DCMI Type(s):||Sound, Text, Software|
|Data Source(s):||microphone speech, broadcast conversation, meeting speech, web collection|
|Application(s):||speech activity detection, diarization|
|Language(s):||English, Mandarin Chinese|
|Language ID(s):||eng, cmn|
|Online Documentation:||LDC2019S12 Documents|
|Licensing Instructions:||Subscription & Standard Members, and Non-Members|
|Citation:||Ryant, Neville, et al. First DIHARD Challenge Evaluation - Nine Sources LDC2019S12. Web Download. Philadelphia: Linguistic Data Consortium, 2019.|
First DIHARD Challenge Evaluation - Nine Sources was developed by the Linguistic Data Consortium (LDC) and contains approximately 18 hours of English and Chinese speech data along with corresponding annotations used in support of the First DIHARD Challenge.
The First DIHARD Challenge was an attempt to reinvigorate work on diarization through a shared task focusing on "hard" diarization; that is, diarization of challenging corpora on which existing state-of-the-art systems were expected to fare poorly. As such, it included speech from a wide sampling of domains representing diversity in number of speakers, speaker demographics, interaction style, recording quality, and environmental conditions, including, but not limited to: clinical interviews, extended child language acquisition recordings, YouTube recordings, and conversations collected in restaurants.
This release, when combined with First DIHARD Challenge Evaluation - SEEDLingS (LDC2019S13), contains the evaluation set audio data and annotation as well as the official scoring tool. The development data for the First DIHARD Challenge is also available from LDC as Eight Sources (LDC2019S09) and SEEDLingS (LDC2019S10).
The source data was drawn from the following (all sources are in English unless otherwise indicated):
- Autism Diagnostic Observation Schedule (ADOS) interviews
- Conversations in Restaurants
- DCIEM/HCRC map task (LDC96S38)
- Audiobook recordings from LibriVox
- Meeting speech collected by LDC in 2001 for the ROAR project (see, e.g., ISL Meeting Speech Part 1 (LDC2004S05))
- 2001 U.S. Supreme Court oral arguments
- Mixer 6 Speech (LDC2013S02)
- Chinese video collected by LDC as part of the Video Annotation for Speech Technologies (VAST) project
- YouthPoint radio interviews
All audio is provided in the form of 16 kHz, single-channel FLAC files. The diarization for each recording is stored as a NIST Rich Transcription Time Marked (RTTM) file; RTTM files are space-delimited text files containing one turn per line. Segmentation files are stored as HTK label files, each containing one speech segment per line. Both annotation file types are encoded as UTF-8. More information about the file formats is in the included documentation.
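The turn lines in an RTTM file can be read with a few lines of Python. The sketch below assumes the standard ten-field NIST RTTM layout (type, file ID, channel, onset, duration, then speaker ID in the eighth field); the recording and speaker names in the sample are invented for illustration:

```python
# Minimal sketch: parse speaker turns from NIST RTTM lines.
# Assumed field layout (standard RTTM convention):
#   SPEAKER <file-id> <channel> <onset> <duration> <NA> <NA> <speaker-id> <NA> <NA>
from collections import namedtuple

Turn = namedtuple("Turn", ["file_id", "onset", "duration", "speaker"])

def parse_rttm_lines(lines):
    """Yield one Turn per SPEAKER line in an iterable of RTTM lines."""
    for line in lines:
        fields = line.split()
        if not fields or fields[0] != "SPEAKER":
            continue  # skip blank lines and non-turn entries
        yield Turn(
            file_id=fields[1],
            onset=float(fields[3]),
            duration=float(fields[4]),
            speaker=fields[7],
        )

# Hypothetical two-turn recording:
sample = [
    "SPEAKER rec_01 1 0.50 2.25 <NA> <NA> spkr_A <NA> <NA>",
    "SPEAKER rec_01 1 2.75 1.10 <NA> <NA> spkr_B <NA> <NA>",
]
turns = list(parse_rttm_lines(sample))
```

Grouping the resulting turns by `speaker` and summing `duration` gives per-speaker speaking time, which is the basic quantity a diarization scorer compares against system output.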
Samples: none available at this time.