IWSLT 2022-2023 Shared Task Training, Development and Test Set
Item Name: IWSLT 2022-2023 Shared Task Training, Development and Test Set
Author(s): Michael Arrigo, Dana Delgado, Stephanie Strassel, David Graff
LDC Catalog No.: LDC2025S05
ISLRN: 259-142-542-173-8
DOI: https://doi.org/10.35111/h29h-2f84
Release Date: June 16, 2025
Member Year(s): 2025
DCMI Type(s): Sound, Text
Sample Type: PCM
Sample Rate: 8000 Hz
Data Source(s): telephone conversations
Project(s): NIST SRE
Application(s): cross-lingual information retrieval, information retrieval, machine translation, speaker identification
Language(s): English, Tunisian Arabic
Language ID(s): eng, aeb
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC2025S05 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: Arrigo, Michael, et al. IWSLT 2022-2023 Shared Task Training, Development and Test Set LDC2025S05. Web Download. Philadelphia: Linguistic Data Consortium, 2025.
Introduction
IWSLT 2022-2023 Shared Task Training, Development and Test Set was developed by the Linguistic Data Consortium (LDC). It contains 210 hours of Tunisian Arabic conversational telephone speech, transcripts and English translations covering 175 hours of that speech, speaker metadata, and documentation. This material constitutes the training, development and test data used in the International Conference on Spoken Language Translation (IWSLT) Dialectal Speech Translation task (2022) and the Dialectal and Low-resource track (2023).
Data
The telephone speech was collected by LDC in 2016-2017 from native speakers of Tunisian Arabic in Tunis. Speakers were recruited to call people in their social networks under a variety of noise conditions and using a range of handsets. With the informed consent of participants, the calls were recorded by a robot operator system that captured digital audio samples directly from the regional public telephone network. The collection comprises 1,188 conversations, each captured as a two-channel recording.
Transcripts are orthographic and follow the Buckwalter transliteration scheme. IPA (International Phonetic Alphabet) transcripts were added to a subset of the data. All transcribed segments were translated into English. Further information on the transcription and translation methodologies is contained in the documentation accompanying this release.
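For reference, Buckwalter transliteration maps each Arabic letter to a single ASCII character. The following is a minimal Python sketch of converting Buckwalter text back to Arabic script; the mapping table is deliberately partial and for illustration only, and the transcription conventions actually used are described in the release documentation.

    # Partial Buckwalter-to-Arabic mapping, for illustration only.
    BUCKWALTER_TO_ARABIC = {
        "A": "\u0627",  # alef
        "b": "\u0628",  # beh
        "t": "\u062A",  # teh
        "s": "\u0633",  # seen
        "l": "\u0644",  # lam
        "m": "\u0645",  # meem
        "n": "\u0646",  # noon
        "h": "\u0647",  # heh
        "w": "\u0648",  # waw
        "y": "\u064A",  # yeh
    }

    def buckwalter_to_arabic(text):
        # Map each Buckwalter character to Arabic script, leaving
        # unmapped characters (spaces, punctuation) unchanged.
        return "".join(BUCKWALTER_TO_ARABIC.get(ch, ch) for ch in text)

    print(buckwalter_to_arabic("slAm"))  # prints: سلام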
The audio, transcripts and translations are stored as pairs of single-channel files representing the two sides ("A" and "B") of each conversation. Speech data is presented as FLAC-compressed MS-WAV files in 16-bit 8 kHz PCM format. All text data is UTF-8 encoded.
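As a quick illustration of this layout, the sketch below loads one single-channel side of a conversation with the third-party Python soundfile library and checks its properties; the file names are hypothetical, and the actual naming scheme and directory structure are given in the documentation.

    import soundfile as sf

    # Read one side ("A") of a conversation; soundfile decodes FLAC directly.
    audio, sample_rate = sf.read("conversation_0001_A.flac")  # hypothetical name
    assert sample_rate == 8000   # 8 kHz PCM, per the catalog metadata
    assert audio.ndim == 1       # single-channel side of the call
    print(f"duration: {len(audio) / sample_rate:.1f} s")

    # Transcripts and translations are UTF-8 text files (path also hypothetical).
    with open("conversation_0001_A.txt", encoding="utf-8") as f:
        for line in f:
            print(line.rstrip())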
Updates
No updates at this time.