Discipline: Computer Sciences and Information Management
Subcategory: Computer Science & Information Systems
Willie McClinton - National Institute of Standards and Technology
Co-Author(s): David Joy, National Institute of Standards and Technology, Gaithersburg, MD; Jonathan Fiscus, National Institute of Standards and Technology, Gaithersburg, MD.
TREC Video Retrieval Evaluation (TRECVID) is an independent evaluation that stemmed from the TREC conference series, whose goal is to support research in information retrieval by providing collections of data and uniform scoring measures, comparing results, and connecting organizations interested in working together. Multimedia event detection is a growing technology, and clearly defined metrics help contributors compare and evaluate their algorithms. One of the TRECVID tracks is the Multimedia Event Detection (MED) evaluation, which provides an open forum to catalyze the advancement of event detection systems. The MED evaluation analyzes event detection technologies that can quickly and accurately search large datasets of multimedia clips based on user-defined queries. These systems are evaluated primarily on Mean Inferred Average Precision and on the speed of their processing modules. The evaluation defines a set of events, each a complex, directly observable activity involving people interacting with other people and/or objects. These events are precisely defined and vetted to avoid semantic or interpretive complications. This year's MED17 evaluation changes from the previously labeled HAVIC Progress set to a newer, unlabeled subset of Yahoo's YFCC100M dataset. Because the YFCC100M collection is unlabeled, manually searching the new dataset for events and examples was infeasible. Our solution was to use previous years' event detection systems to rank the items in the dataset. This reduced the large search space into manageable subsets in which the most useful videos were brought to the surface; these subsets could then be manually probed for new events and examples.
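The ranking step described above can be sketched in a few lines of Python. This is only an illustrative sketch: the `detector_score` callable stands in for any prior-year MED system's confidence output, and the clip identifiers are hypothetical placeholders, not real YFCC100M IDs.

```python
def rank_collection(clip_ids, detector_score, top_k=100):
    """Rank unlabeled clips by a prior detector's confidence score
    (highest first) and return the top_k most promising clips for
    manual probing."""
    ranked = sorted(clip_ids, key=detector_score, reverse=True)
    return ranked[:top_k]

# Hypothetical usage: placeholder clip IDs and confidences stand in
# for a real unlabeled collection and a real detection system.
clips = ["vid_%04d" % i for i in range(1000)]
scores = {c: (hash(c) % 997) / 997.0 for c in clips}
subset = rank_collection(clips, scores.get, top_k=25)
```

The manageable subset (`subset` here) is what annotators would then inspect by hand for new events and examples.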
Along with the new dataset, this year's MED17 will add 10 new Ad-Hoc events. This reboot will give the evaluation more breadth, allowing a larger variety of tests that more accurately reveal the full capabilities of these state-of-the-art systems. This project and the right to use the data mentioned have been reviewed under ITL-17-0025.
Funder Acknowledgement(s): Thank you to the National Institute of Standards and Technology and its contributors.
Faculty Advisor: David Joy, firstname.lastname@example.org
Role: I helped with this year's MED 2017 evaluation by creating new Ad-Hoc Event Kits during the transition from the previously labeled HAVIC Progress set to a newer, unlabeled subset of Yahoo's YFCC100M dataset. I also wrote scripts to score and process the results for the evaluation.
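As an illustration of the kind of scoring such evaluation scripts perform, here is a minimal average-precision computation in Python. This is a sketch of the standard (non-inferred) metric only; the actual MED evaluation uses Mean Inferred Average Precision, which estimates AP from a sampled subset of relevance judgments rather than exhaustive labels.

```python
def average_precision(ranked_relevance):
    """Standard average precision over a system's ranked list, given
    a boolean per rank (True = clip is relevant to the event)."""
    hits, precision_sum = 0, 0.0
    for rank, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            precision_sum += hits / rank  # precision at this rank
    return precision_sum / hits if hits else 0.0

def mean_average_precision(per_event_lists):
    """Mean of per-event AP values (one ranked relevance list per event)."""
    return sum(average_precision(r) for r in per_event_lists) / len(per_event_lists)
```

For example, a ranked list `[True, False, True]` scores (1/1 + 2/3) / 2 ≈ 0.833, and the mean is then taken across all evaluated events.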