TRECVID 2005 Keyframes & Transcripts
|Item Name:||TRECVID 2005 Keyframes & Transcripts|
|Author(s):||Peter Wilkins, Christian Petersohn, Kevin Walker|
|LDC Catalog No.:||LDC2007V01|
|Release Date:||March 16, 2007|
|Data Source(s):||broadcast news|
|Application(s):||content-based retrieval, event detection, information extraction|
|Language(s):||English, Standard Arabic, Mandarin Chinese|
|Language ID(s):||eng, arb, cmn|
|License(s):||LDC User Agreement for Non-Members|
|Online Documentation:||LDC2007V01 Documents|
|Licensing Instructions:||Subscription & Standard Members, and Non-Members|
|Citation:||Wilkins, Peter, Christian Petersohn, and Kevin Walker. TRECVID 2005 Keyframes & Transcripts LDC2007V01. Web Download. Philadelphia: Linguistic Data Consortium, 2007.|
TREC Video Retrieval Evaluation (TRECVID) was sponsored by the National Institute of Standards and Technology (NIST) to promote progress in content-based retrieval from digital video via open, metrics-based evaluation. TRECVID 2005 Keyframes & Transcripts was developed for use in the NIST TRECVID 2005 Evaluation.
TRECVID was a laboratory-style evaluation that attempted to model real-world situations or significant component tasks involved in such situations. In 2005 there were four main tasks with associated tests:
- shot boundary determination
- low-level feature extraction
- high-level feature extraction
- search (interactive, manual, and automatic)
For a detailed description of the TRECVID Evaluation Tasks, please refer to the NIST TRECVID 2005 Evaluation Description.
The source data is Arabic, Chinese and English language broadcast programming collected in November 2004 from the following sources: Lebanese Broadcasting Corp. (Arabic); China Central TV and New Tang Dynasty TV (Chinese); and CNN and MSNBC/NBC (English).
Shots are fundamental units of video, useful for higher-level processing. To create the master list of shots, the video was first segmented; the results of this pass are called subshots. Because the master shot reference is designed for use in manual assessment, a second pass over the segmentation was made to create master shots of at least 2 seconds in length. These master shots were the ones used in submitting results for the feature and search tasks in the evaluation. In the second pass, starting at the beginning of each file, subshots were aggregated, if necessary, until the current shot was at least 2 seconds in duration, at which point the aggregation began anew with the next subshot.
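The second-pass aggregation described above can be sketched as follows. This is a minimal illustration only: the `Shot` structure, the timing fields, and the handling of a short remainder at the end of a file are assumptions, not part of the released reference format.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    start: float  # seconds from the start of the file (assumed unit)
    end: float    # seconds

MIN_DURATION = 2.0  # master shots must be at least 2 seconds long

def aggregate(subshots):
    """Merge consecutive subshots into master shots of >= 2 seconds."""
    masters = []
    current = None
    for sub in subshots:
        if current is None:
            current = Shot(sub.start, sub.end)
        else:
            current.end = sub.end  # extend the running shot with the next subshot
        if current.end - current.start >= MIN_DURATION:
            masters.append(current)
            current = None  # aggregation begins anew with the next subshot
    if current is not None:
        # Trailing subshots shorter than the minimum: kept as a final
        # (short) shot here, an assumption not specified in the corpus docs.
        masters.append(current)
    return masters
```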
Keyframes were selected by taking the middle frame of each shot, then searching left and right of that frame to locate the nearest I-frame, which was then extracted as the keyframe. Keyframes are provided at both the subshot (NRKF) and master shot (RKF) levels.
In a small number of cases (all of them subshots) there was no I-frame within the subshot boundaries; when this occurred, the middle frame itself was selected. There is one anomaly: at the end of the first video in the test collection, a subshot occurs outside a master shot.
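The selection rule, including the middle-frame fallback, can be sketched as follows. The frame list and the `is_iframe` predicate are placeholders for illustration; they are not part of the released tools.

```python
def select_keyframe(frames, is_iframe):
    """Pick the I-frame nearest the middle frame of a shot.

    `frames` is the list of frame indices in the shot; `is_iframe(i)`
    reports whether frame i is an I-frame (both are placeholders).
    Falls back to the middle frame when the shot contains no I-frame.
    """
    mid = len(frames) // 2
    # Search outward from the middle: offset 0, then +/-1, +/-2, ...
    for offset in range(len(frames)):
        for candidate in (mid - offset, mid + offset):
            if 0 <= candidate < len(frames) and is_iframe(frames[candidate]):
                return frames[candidate]
    return frames[mid]  # no I-frame in the shot: use the middle frame
```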
The emphasis in the common shot boundary reference is on the shots, not the transitions: the shots are contiguous, with no gaps between them, and they do not overlap. The media time format is based on Gregorian day time (ISO 8601); sub-second precision is expressed by counting a pre-specified number of fractions of a second.
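Assuming the MPEG-7 media time conventions that TRECVID shot references commonly follow, a time point such as `T00:01:02:15F30` would read as hours:minutes:seconds plus 15 fractions out of 30 per second. The exact string layout here is an assumption based on those conventions, not a quote from the corpus documentation; under that assumption a small converter might look like:

```python
import re

# Assumed MPEG-7-style media time point, e.g. "T00:01:02:15F30":
# hours:minutes:seconds, then a fraction count ("15") out of a
# per-second denominator ("F30"). This layout is an assumption, not
# taken from the corpus documentation.
_TIME = re.compile(r"T(\d+):(\d+):(\d+):(\d+)F(\d+)")

def media_time_to_seconds(timepoint: str) -> float:
    """Convert an assumed MPEG-7-style media time point to seconds."""
    m = _TIME.fullmatch(timepoint)
    if m is None:
        raise ValueError(f"unrecognized media time point: {timepoint!r}")
    hours, minutes, seconds, count, per_second = map(int, m.groups())
    return hours * 3600 + minutes * 60 + seconds + count / per_second
```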