|Item Name:||CIEMPIESS Light|
|Author(s):||Carlos Daniel Hernández Mena, Abel Herrera|
|LDC Catalog No.:||LDC2017S23|
|Release Date:||November 17, 2017|
|DCMI Type(s):||Sound, Text|
|Data Source(s):||broadcast conversation|
|License(s):||LDC User Agreement for Non-Members|
|Online Documentation:||LDC2017S23 Documents|
|Licensing Instructions:||Subscription & Standard Members, and Non-Members|
|Citation:||Mena, Carlos Daniel Hernández, and Abel Herrera. CIEMPIESS Light LDC2017S23. Web Download. Philadelphia: Linguistic Data Consortium, 2017.|
CIEMPIESS (Corpus de Investigación en Español de México del Posgrado de Ingeniería Eléctrica y Servicio Social) Light was developed by the Speech Processing Laboratory of the Faculty of Engineering at the National Autonomous University of Mexico (UNAM) and consists of approximately 18 hours of Mexican Spanish radio and television speech and associated transcripts. The goal of this work was to create acoustic models for automatic speech recognition. For more information and documentation see the CIEMPIESS-UNAM Project website.
CIEMPIESS Light is an updated version of CIEMPIESS, released by LDC as LDC2015S07. This "light" version contains speech and transcripts presented in a revised directory structure that allows for use with the Kaldi toolkit.
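Kaldi expects each dataset to be presented as a data directory containing a small set of plain-text mapping files. As a rough sketch of what "Kaldi-ready" means (the file names below are Kaldi's standard conventions, not a listing of this corpus's actual layout):

```
data/train/
  wav.scp    # utterance/recording ID -> path to the audio file (or a decoding pipe)
  text       # utterance ID -> transcript
  utt2spk    # utterance ID -> speaker ID
  spk2utt    # speaker ID -> list of its utterance IDs (derivable from utt2spk)
```

Consult the corpus documentation for the exact directory structure used in this release.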
CIEMPIESS Balance (LDC2018S11) is a companion to this corpus; combined, the two form a gender-balanced corpus.
LDC has released the following data sets in the CIEMPIESS series:
- CIEMPIESS (LDC2015S07)
- CHM150 (LDC2016S04)
- CIEMPIESS Balance (LDC2018S11)
- CIEMPIESS Experimentation (LDC2019S07)
The speech recordings were collected from Podcast UNAM, a program created by Radio-IUS, and Mirador Universitario, a TV program broadcast by UNAM. They consist of spontaneous conversations in Mexican Spanish between a moderator and guests. Approximately 75% of the speakers were male and 25% were female.
The audio was recorded in MP3 stereo format, using a 44.1 kHz sample rate and a bit rate of 128 kbps or higher. Only "clean" utterances were selected from the raw data, meaning utterances spoken by a single person with no background noise, whispers, music, foreign accents, white noise, or static. The audio files were converted to 16 kHz, 16-bit PCM and stored as FLAC files for this release.
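The documentation does not state which tool was used for the conversion, but the same transformation (stereo MP3 at 44.1 kHz to mono 16 kHz, 16-bit FLAC) can be reproduced with a standard utility such as SoX; the file names below are placeholders:

```shell
# Hypothetical one-file conversion (not necessarily the authors' pipeline):
# resample to 16 kHz, quantize to 16-bit samples, downmix to one channel.
sox input.mp3 -r 16000 -b 16 -c 1 output.flac
```

ffmpeg (`ffmpeg -i input.mp3 -ar 16000 -ac 1 -sample_fmt s16 output.flac`) would achieve the same result.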
Transcripts are presented as UTF-8 encoded plain text.
The authors would like to thank Alejandro V. Mena, Elena Vera, Angélica Gutiérrez and Beatriz Ancira for their support with the social service program "Desarrollo de Tecnologías del Habla." They would also like to thank the social service students for their hard work.
None at this time.