TRECVID 2003 Keyframes & Transcripts

Item Name: TRECVID 2003 Keyframes & Transcripts
Author(s): Georges Quenot, Paul Over, Kevin Walker
LDC Catalog No.: LDC2007V02
ISBN: 1-58563-436-0
ISLRN: 558-793-302-438-0
DOI: https://doi.org/10.35111/0kxe-zq83
Release Date: April 18, 2007
Member Year(s): 2007
DCMI Type(s): MovingImage
Data Source(s): broadcast news
Project(s): TDT, TREC
Application(s): content-based retrieval
Language(s): English
Language ID(s): eng
License(s): LDC User Agreement for Non-Members
Online Documentation: LDC2007V02 Documents
Licensing Instructions: Subscription & Standard Members, and Non-Members
Citation: Quenot, Georges, Paul Over, and Kevin Walker. TRECVID 2003 Keyframes & Transcripts LDC2007V02. Web Download. Philadelphia: Linguistic Data Consortium, 2007.

Introduction

The TREC Video Retrieval Evaluation (TRECVID) was sponsored by the National Institute of Standards and Technology (NIST) to promote progress in content-based retrieval from digital video via open, metrics-based evaluation. The keyframes in this release were extracted for use in the NIST TRECVID 2003 Evaluation.

TRECVID was a laboratory-style evaluation that attempted to model real-world situations or significant component tasks involved in such situations. In 2003 there were four main tasks with associated tests:

  • shot boundary determination
  • story segmentation
  • high-level feature extraction
  • search (interactive and manual)

For a detailed description of the TRECVID Evaluation Tasks, please refer to the NIST TRECVID 2003 Evaluation Description.

Data

The source data is English-language broadcast programming collected by the Linguistic Data Consortium in 1998 from ABC ("World News Tonight") and CNN ("CNN Headline News").

Shots are fundamental units of video, useful for higher-level processing. To create the master list of shots, the video was segmented; the results of this first pass are called subshots. Because the master shot reference is designed for use in manual assessment, a second pass over the segmentation was made to create master shots of at least 2 seconds in length. These master shots are the ones used in submitting results for the feature and search tasks in the evaluation. In the second pass, starting at the beginning of each file, subshots were aggregated, if necessary, until the current shot was at least 2 seconds in duration, at which point the aggregation began anew with the next subshot.
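
As a rough illustration of this second pass, the sketch below merges consecutive subshots until each master shot reaches the 2-second minimum. The (start, end) representation in seconds and the handling of a short trailing aggregate are assumptions made for illustration only; they do not describe the actual tools used to build the reference.

    # Minimal sketch of the master-shot aggregation pass, assuming each subshot
    # is a (start_seconds, end_seconds) pair in file order.

    MIN_MASTER_SHOT_SECONDS = 2.0

    def aggregate_subshots(subshots):
        """Merge consecutive subshots until each master shot lasts >= 2 seconds."""
        master_shots = []
        current_start = current_end = None
        for start, end in subshots:
            if current_start is None:
                current_start, current_end = start, end
            else:
                current_end = end  # extend the current master shot
            if current_end - current_start >= MIN_MASTER_SHOT_SECONDS:
                master_shots.append((current_start, current_end))
                current_start = current_end = None  # aggregation begins anew
        if current_start is not None:
            # A short trailing aggregate is kept as a final shot here; the
            # evaluation's handling of this edge case is not documented above.
            master_shots.append((current_start, current_end))
        return master_shots

    # Example: a 2.5 s subshot stands alone; the next two (0.5 s and 2.5 s) merge.
    print(aggregate_subshots([(0.0, 2.5), (2.5, 3.0), (3.0, 5.5)]))
    # [(0.0, 2.5), (2.5, 5.5)]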

The keyframes were selected by going to the middle frame of the shot and then searching left and right of that frame to locate the nearest I-frame; that frame became the keyframe and was extracted. Keyframes are provided at both the subshot (NRKF) and master shot (RKF) levels.

In a small number of cases (all of them subshots) there was no I-frame within the subshot boundaries; when this occurred, the middle frame itself was selected. There is one anomaly: at the end of the first video in the test collection, a subshot occurs outside a master shot.
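
The following sketch illustrates this selection rule under some assumptions: shots are given as frame-number ranges, the set of I-frame indices is known from the decoder, and "nearest" is measured in frame counts. The helper name and data layout are illustrative only, not the actual extraction tools.

    # Minimal sketch of keyframe selection: take the middle frame of the shot,
    # then pick the nearest I-frame inside the shot, falling back to the middle
    # frame itself when the shot contains no I-frame.

    def select_keyframe(first_frame, last_frame, i_frames):
        middle = (first_frame + last_frame) // 2
        candidates = [f for f in i_frames if first_frame <= f <= last_frame]
        if not candidates:
            return middle  # fallback used for the few subshots with no I-frame
        return min(candidates, key=lambda f: abs(f - middle))

    # Example: shot spanning frames 100-220 with an I-frame every 15 frames.
    print(select_keyframe(100, 220, set(range(0, 1000, 15))))  # -> 165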

The emphasis in the common shot boundary reference is on the shots, not the transitions: the shots are contiguous, with no gaps between them and no overlaps. The media time format is based on the Gregorian day time (ISO 8601) norm, with sub-second positions expressed as counts of a pre-specified fraction of a second.
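
As a hedged illustration, the sketch below converts a media time of the form T<hh>:<mm>:<ss>:<nnn>F<rate>, where the trailing F<rate> gives the number of fractions per second, into seconds. This layout is an assumption modeled on MPEG-7 mediaTimePoint values; the exact string format used in the reference files may differ.

    import re

    def media_time_to_seconds(value):
        """Convert e.g. 'T00:01:05:150F1000' to 65.15 seconds (assumed format)."""
        match = re.fullmatch(r"T(\d+):(\d+):(\d+):(\d+)F(\d+)", value)
        if match is None:
            raise ValueError("unrecognized media time: %r" % value)
        hours, minutes, seconds, fraction, rate = map(int, match.groups())
        return hours * 3600 + minutes * 60 + seconds + fraction / rate

    print(media_time_to_seconds("T00:01:05:150F1000"))  # 65.15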

Samples

For an example of the data in this corpus, please see the sample keyframe and annotation files available from the LDC catalog page for this release.
