Documentation for Speech in Noisy Environments (SPINE) Evaluation Audio

Introduction

This publication contains the Speech in Noisy Environments (SPINE) Evaluation Audio Corpus created for the Department of Defense (DoD) Digital Voice Processing Consortium (DDVPC) by Arcon Corp., and produced by the Linguistic Data Consortium (LDC) as catalog number LDC2000S96, ISBN 1-58563-188-4. A companion corpus, Speech in Noisy Environments (SPINE) Audio Transcripts, was also produced by the LDC as catalog number LDC2000T54, ISBN 1-58563-189-2. These corpora support the 2000 Speech in Noisy Environments (SPINE1) evaluation. There are 120 files in total, one conversation each, for approximately 9 hours and 22 minutes (2.2 gigabytes) of audio data.

The 2000 Speech in Noisy Environments Evaluation (SPINE1) is a first attempt to assess the state of the art and practice in speech recognition technology in noisy military environments and to exchange information on innovative speech recognition technology in the context of fully implemented systems that perform realistic tasks. It is intended to be of interest to all university, industrial, and commercial speech system developers working on the problem of robust speech recognition. The evaluation is flexible, allowing participants to take part in a way suited to their development needs and abilities.

Technical Objective

The SPINE1 evaluation focuses on the task of transcribing speech produced in noisy environments with emphasis on noisy military environments. The evaluation is designed to promote research progress in this area, to provide the opportunity for participants to try out new ideas for developing robust speech recognition systems that are of both scientific and practical interest, and to measure the performance of this technology.

Task

The evaluation task is to transcribe speech produced in noisy environments. The training and test speech data used for this evaluation were generated by ARCON Corp. for the DoD Digital Voice Processing Consortium (DDVPC) under controlled conditions. The speech data consists of conversations between two communicators working on a collaborative, Battleship-like task in which they seek and shoot at targets (ARCON Communicability Exercise, ACE). Participants may talk freely, but the total vocabulary used is fairly limited. Each person is seated in a sound chamber in which a previously recorded military background noise environment is accurately reproduced. The participants use the handsets and transmission channels associated with that particular environment. The evaluation data includes twenty talker-pairs with six five-minute conversations per talker-pair (about 600 minutes total), drawn from the set of four scenarios described below.

The speech data is viewed as a sequence of "turns," where each turn is the period of time when one speaker is speaking. By its nature, the task induces short utterances separated by relatively long periods of silence. A speaker may take several turns in succession, i.e., successive turns do not necessarily alternate between the two conversation participants. The transcription task is to produce the correct transcription for each of the specified turns.

Please see file.tbl for the directory structure of this publication, as well as a complete list of files in the Audio corpus.

Data Format

The audio files in this corpus are two-channel, 16 kHz, 16-bit linear SPHERE files.
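
As an illustration, the sketch below reads the ASCII header of one of these SPHERE files to confirm its channel count, sample rate, and sample size. It assumes the standard NIST SPHERE layout (a fixed-size ASCII header beginning with "NIST_1A" and ending with "end_head"); the filename is only an example taken from eval_list.tbl, and the standard NIST SPHERE tools could be used instead.

  # Minimal sketch: inspect the ASCII header of a NIST SPHERE (.sph) file.
  # Assumes the standard layout: line 1 is "NIST_1A", line 2 is the header
  # size in bytes, followed by "name -type value" fields up to "end_head".

  def read_sphere_header(path):
      """Return the SPHERE header fields as a dict of strings."""
      fields = {}
      with open(path, "rb") as f:
          if f.readline().strip() != b"NIST_1A":
              raise ValueError("not a NIST SPHERE file")
          header_size = int(f.readline().decode("ascii").strip())  # typically 1024
          f.seek(0)
          header = f.read(header_size).decode("ascii", errors="replace")
      for line in header.splitlines()[2:]:
          if line.strip() == "end_head":
              break
          parts = line.split(None, 2)            # e.g. "sample_rate -i 16000"
          if len(parts) == 3:
              name, _type, value = parts
              fields[name] = value
      return fields

  # Example (illustrative filename):
  # read_sphere_header("spine1_eval_001.sph") should report
  # channel_count 2, sample_rate 16000, sample_n_bytes 2.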

The file eval_list.tbl has file information in six tab-separated columns, as follows:

# file              pair     spkr1    spkr2    scen   vocoder
# ---------------   ------   -----    -----    ----   -------
  spine1_eval_001   pair02   1266=A   1884=B   AF     LPC 
  spine1_eval_002   pair02   1884=B   1266=A   AF     LPC 
  spine1_eval_003   pair02   1266=A   1884=B   Army   CVSD 
  spine1_eval_004   pair02   1884=B   1266=A   Army   CVSD 


"File" contains the filename, without the .sph or .typ extension that indicate Sphere audio files and transcripts respectively. "Pair" contains the speaker pair number, while "Spkr 1" and "Spkr 2" contain the individual speaker id's and which environment they were in. Scenarios and Environments are described below. "Scen" contains the scenario, and "Vocoder" contains the vocoder type. The file name format used in the NRL SPINE1 Training Corpus contained information which is either redundant or irrelevant; in the publication of the evaluation data, this column has been replaced by totally arbitrary filenames.

There are 20 speaker pairs in the evaluation set, numbered 2-22. The individual speaker IDs can be used to look up speaker information in speaker.tbl, which has seven tab-separated columns, e.g.:

# Pair	Speaker	Age	Sex	Educ	Dialect		Native
# ----	-------	---	---	----	-------		------
  02	1266	34	M	16	New England	Y
  02	1884	28	M	16	New England	Y
  03	9693	32	F	12	New England	Y
  03	2788	30	F		New England	Y

"Pair" contains the pair number, and "Speaker" contains the individual speaker ID. "Age" contains the age, calculated as of 9/1/1995; the recordings were made in the fall of 1995. "Sex" contains the sex, "Educ" contains the years of education, "Dialect" contains the region or dialect, and "Native" indicates whether or not the speaker is a native speaker of English. The information is self-reported; gaps indicate information that was not given.

The following Environment table shows the channel, the two noise environments, and the two handsets used in the recordings for each scenario.

Environment Table

# Name        Environment A        Handset A  Environment B  Handset B Channel
# ----        -------------        ---------  -------------  --------- -------
  DOD         Quiet                STU-III    Office         STU-III   POTS/STU-III
  Navy        Aircraft Carrier CIC TA840      Office         STU-III   HF
  Army        HMMWV                H250       Quiet          STU-III   Satellite Delay
  Air Force   E3A AWACS            R215       MCE            EV M87    JTIDS

The combination of environment and scenario can be used to determine, from the Environment Table, the noise environment, handset, and channel involved in each recording.
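
As an illustration, the sketch below hand-transcribes the Environment Table and looks up the recording conditions for one speaker, given the scenario and the environment letter (A or B) from eval_list.tbl. Note that the eval_list sample above abbreviates at least one scenario name ("AF" for Air Force), so a small mapping of abbreviations may be needed.

  # Minimal sketch: the Environment Table above, transcribed by hand and keyed
  # by scenario name. Each side (A or B) maps to (noise environment, handset);
  # the channel is shared by both sides of the conversation.
  ENVIRONMENTS = {
      "DOD":       {"A": ("Quiet", "STU-III"),
                    "B": ("Office", "STU-III"),
                    "channel": "POTS/STU-III"},
      "Navy":      {"A": ("Aircraft Carrier CIC", "TA840"),
                    "B": ("Office", "STU-III"),
                    "channel": "HF"},
      "Army":      {"A": ("HMMWV", "H250"),
                    "B": ("Quiet", "STU-III"),
                    "channel": "Satellite Delay"},
      "Air Force": {"A": ("E3A AWACS", "R215"),
                    "B": ("MCE", "EV M87"),
                    "channel": "JTIDS"},
  }

  def recording_conditions(scenario, side):
      """Return (noise environment, handset, channel) for one speaker."""
      entry = ENVIRONMENTS[scenario]
      noise, handset = entry[side]
      return noise, handset, entry["channel"]

  # e.g. recording_conditions("Army", "A") -> ("HMMWV", "H250", "Satellite Delay")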

Updates

Should any additional information, updates or bug fixes become available, they will appear in the LDC catalog entry for this corpus: LDC2000S96.



Contact: ldc@ldc.upenn.edu
© 1996-2000 Linguistic Data Consortium, University of Pennsylvania. All Rights Reserved.