The ICCV 2015 paper Objects2action: Classifying and localizing actions without any video example by Mihir Jain, Jan van Gemert, Thomas Mensink, and Cees Snoek is now available. The goal of this paper is to recognize actions in video without the need for examples. Different from traditional zero-shot approaches, the authors do not demand the design and specification of attribute classifiers and class-to-attribute mappings to allow transfer from seen classes to unseen classes. The key contribution is objects2action, a semantic word embedding spanned by a skip-gram model of thousands of object categories. Action labels are assigned to an object encoding of unseen video based on a convex combination of action and object affinities. The semantic embedding has three main characteristics to accommodate the specifics of actions. First, the authors propose a mechanism to exploit multiple-word descriptions of actions and objects. Second, they incorporate automated selection of the most responsive objects per action. Finally, they demonstrate how to extend their zero-shot approach to the spatio-temporal localization of actions in video. Experiments on four action datasets demonstrate the potential of the approach.
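To make the convex-combination idea concrete, here is a minimal toy sketch, not the authors' implementation: the random word vectors stand in for a real skip-gram model, the hand-set p_obj stands in for the output of an object classifier run on a video, and the vocabulary, top_k parameter, and all function names are invented for illustration. Multi-word descriptions are handled by averaging word vectors, and each action keeps only its most responsive objects, mirroring the three characteristics described above.

```python
# Toy sketch of objects2action-style zero-shot action scoring (assumptions
# noted inline; this is not the paper's code or model).
import numpy as np

rng = np.random.default_rng(0)
DIM = 50

# Hypothetical skip-gram vocabulary: one random vector per word as a
# placeholder for real pretrained embeddings.
vocab = ["horse", "ride", "saddle", "ball", "kick", "field", "person"]
word_vec = {w: rng.normal(size=DIM) for w in vocab}

def embed(phrase):
    """Average word vectors to handle multiple-word descriptions."""
    vecs = [word_vec[w] for w in phrase.split() if w in word_vec]
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

def cosine(a, b):
    # Inputs are unit-normalized, so the dot product is the cosine.
    return float(a @ b)

objects = ["horse", "saddle", "ball", "field", "person"]
actions = ["ride horse", "kick ball"]

# Placeholder object probabilities p(object | video); in the paper these
# would come from a deep object classifier applied to the video.
p_obj = np.array([0.45, 0.25, 0.05, 0.10, 0.15])

def action_score(action, top_k=3):
    """Convex combination of object probabilities, weighted by the semantic
    affinity between each object and the action, restricted to the top-k
    most responsive objects for that action."""
    a = embed(action)
    aff = np.array([cosine(a, embed(o)) for o in objects])
    keep = np.argsort(aff)[-top_k:]        # most responsive objects
    w = np.zeros_like(aff)
    w[keep] = np.maximum(aff[keep], 0)
    w /= w.sum() + 1e-12                   # normalize to convex weights
    return float(w @ p_obj)

# Assign the unseen video to the highest-scoring action label.
best = max(actions, key=action_score)
print({a: round(action_score(a), 3) for a in actions}, "->", best)
```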