2006 NIST/USF Evaluation Resources for the VACE Program - Meeting Data Test Set Part 1
|Item Name:||2006 NIST/USF Evaluation Resources for the VACE Program - Meeting Data Test Set Part 1|
|Author(s):||Rangachar Kasturi, Dmitry Goldgof, Vasant Manohar, Padmanabhan Soundararajan, John Garofolo, Rachel Bowers, Travis Rose, Jonathan Fiscus, Martial Michel|
|LDC Catalog No.:||LDC2011V05|
|Release Date:||September 15, 2011|
|Sample Type:||720x480, 29.97 fps|
|Data Source(s):||meeting speech|
|Application(s):||event detection, information extraction, content-based retrieval|
|Non-Member License(s):||LDC User Agreement for Non-Members|
|Online Documentation:||LDC2011V05 Documents|
|Licensing Instructions:||Subscription & Standard Members, and Non-Members|
|Citation:||Kasturi, Rangachar, et al. 2006 NIST/USF Evaluation Resources for the VACE Program - Meeting Data Test Set Part 1 LDC2011V05. Web Download. Philadelphia: Linguistic Data Consortium, 2011.|
2006 NIST/USF Evaluation Resources for the VACE Program - Meeting Data Test Set Part 1, Linguistic Data Consortium (LDC) catalog number LDC2011V05 and ISBN 1-58563-576-6, was developed by researchers at the Department of Computer Science and Engineering, University of South Florida (USF), Tampa, Florida, and the Multimodal Information Group at the National Institute of Standards and Technology (NIST). It contains approximately fifteen hours of meeting room video data collected in 2005 and 2006 and annotated for the VACE (Video Analysis and Content Extraction) 2006 face and person tracking tasks.
The VACE program was established to develop novel algorithms for automatic video content extraction, multi-modal fusion, and event understanding. During VACE Phases I and II, the program made significant progress in the automated detection and tracking of moving objects, including faces, hands, people, vehicles, and text, in four primary video domains: broadcast news, meetings, street surveillance, and unmanned aerial vehicle motion imagery. Initial results were also obtained on automatic analysis of human activities and understanding of video sequences.
Three performance evaluations were conducted under the auspices of the VACE program between 2004 and 2007. In 2006, the VACE program and the European Union's Computers in the Human Interaction Loop (CHIL) program collaborated to hold the CLassification of Events, Activities and Relationships (CLEAR) Evaluation. This was an international effort to evaluate systems designed to analyze people, their identities, activities, interactions and relationships in human-human interaction scenarios, as well as related scenarios. The VACE program contributed the evaluation infrastructure (e.g., data, scoring, tools) for a specific set of tasks, and the CHIL consortium, coordinated by the Karlsruhe Institute of Technology, contributed a separate set of evaluation infrastructure. To the extent possible, the VACE and CHIL programs harmonized their evaluation protocols and metrics.
LDC has previously released NIST/USF Evaluation Resources for the VACE Program -- Meeting Data Training Set Part 1 (LDC2011V01), NIST/USF Evaluation Resources for the VACE Program -- Meeting Data Training Set Part 2 (LDC2011V02), NIST/USF Evaluation Resources for the VACE Program -- Meeting Data Test Set Part 1 (LDC2011V03) and NIST/USF Evaluation Resources for the VACE Program -- Meeting Data Test Set Part 2 (LDC2011V04).
The meeting room data used for the 2006 test set was collected by the following sites in 2005 and 2006: Carnegie Mellon University (USA), University of Edinburgh (Scotland), IDIAP Research Institute (Switzerland), NIST (USA), Netherlands Organization for Applied Scientific Research (Netherlands) and Virginia Polytechnic Institute and State University (USA). Each site had its own independent camera setup, illumination, viewpoints, people and topics. Most of the datasets included High-Definition (HD) recordings, but those were subsequently converted to MPEG-2 for the evaluation.
The VACE evaluation tools have been integrated into NIST's downloadable Framework for Detection Evaluation (F4DE) Toolkit. The toolkit contains small example files for each of the task/object/domain scoring combinations.