The DKU-JNU-EMA Electromagnetic Articulography Database

Item Name: The DKU-JNU-EMA Electromagnetic Articulography Database
Author(s): Xiaoyi Qin, Xinzhong Liu, Zexin Cai, Ming Li
LDC Catalog No.: LDC2019S14
ISBN: 1-58563-894-3
ISLRN: 147-070-436-975-2
Release Date: July 15, 2019
Member Year(s): 2019
DCMI Type(s): Sound, Text
Sample Type: pcm
Sample Rate: 16000
Data Source(s): microphone speech
Application(s): language identification, phonetics, pronunciation modeling
Language(s): Yue Chinese, Hakka Chinese, Min Nan Chinese, Mandarin Chinese
Language ID(s): yue, hak, nan, cmn
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC2019S14 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: Qin, Xiaoyi, et al. The DKU-JNU-EMA Electromagnetic Articulography Database LDC2019S14. Web Download. Philadelphia: Linguistic Data Consortium, 2019.

Introduction

The DKU-JNU-EMA Electromagnetic Articulography Database was developed by Duke Kunshan University and Jinan University and contains approximately 10 hours of articulography and speech data in Mandarin, Cantonese, Hakka, and Teochew Chinese from two to seven native speakers for each dialect.

Electromagnetic articulography (EMA) is a method of measuring the position of parts of the mouth and their movement over time during speech and swallowing. Measurements are made from sensors placed in the mouth to capture real-time vocal tract variable trajectories. EMA is used in linguistics and language-related research to study phonetics, in particular, articulation (how sounds are made).

Data

Articulatory measurements were made using the NDI Wave electromagnetic articulography research system. Subjects had six sensors placed in various locations in their mouth, and one reference sensor was placed on the bridge of their nose. For simultaneous recording of speech signals, subjects also wore a head-mounted close-talk microphone.

Speakers engaged in four types of recording sessions: one in which they read complete sentences or short texts, and three in which they read sets of words sharing a specific consonant, vowel, or tone.

Audio data is presented as single-channel, 16 kHz, 16-bit FLAC-compressed audio files. Articulography data is stored as UTF-8 plain text files.
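Since the articulography data is distributed as UTF-8 plain text, a user would typically parse each file into per-frame sensor coordinate vectors before analysis. The sketch below is a minimal, hedged example: it assumes a whitespace-separated layout of one frame per line, which is an illustrative assumption, not the database's documented column format (consult the corpus documentation for the actual layout).

```python
from pathlib import Path

def load_ema_trajectory(path):
    """Parse an articulography text file into a list of per-frame
    float vectors.

    NOTE: the whitespace-separated, one-frame-per-line layout assumed
    here is hypothetical; check the corpus documentation for the real
    column format before using this on the released files.
    """
    frames = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        frames.append([float(v) for v in line.split()])
    return frames
```

For example, a file containing two frames of three coordinates each would load as a list of two three-element vectors, ready to be stacked into an array for trajectory analysis.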


Updates

None at this time.
