Describing Unseen Classes by Exemplars: Zero-Shot Learning Using Grouped Simile Ensemble

Yang Long, Ling Shao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

17 Citations (Scopus)

Abstract

Learning visual attributes is an effective approach to zero-shot recognition. However, existing methods are restricted to explicitly nameable attributes and cannot tell which attributes matter most to the recognition task. In this paper, we propose a unified framework named Grouped Simile Ensemble (GSE). Our contributions are as follows. 1) We propose to substitute explicit attribute annotation with similes, which are more natural expressions that can describe complex unseen classes. Similes involve no extra attribute concepts, i.e., only exemplars of seen classes are needed. We provide an efficient scenario for annotating similes on two benchmark datasets, AwA and aPY. 2) We propose a graph-cut-based class clustering algorithm to effectively discover implicit attributes from the similes. 3) GSE automatically finds the most effective simile groups for making predictions. On both datasets, extensive experimental results demonstrate that our approach significantly improves over the state-of-the-art methods.
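The abstract does not detail the paper's graph-cut clustering algorithm. As a rough, hypothetical illustration of the general idea — partitioning classes by cutting a class-similarity graph — a minimal spectral bisection (one standard flavor of graph cut) over an assumed class-similarity matrix `S` might look like:

```python
import numpy as np

def two_way_graph_cut(S):
    """Split classes into two groups by spectral bisection of a
    symmetric, nonnegative class-similarity matrix S (a simplified
    stand-in for a graph-cut clustering step; not the paper's method).
    Returns a 0/1 label per class."""
    d = S.sum(axis=1)                     # node degrees
    L = np.diag(d) - S                    # unnormalized graph Laplacian
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = D_inv_sqrt @ L @ D_inv_sqrt   # normalized Laplacian
    # eigh returns eigenvalues in ascending order; the eigenvector for
    # the second-smallest eigenvalue (Fiedler vector) encodes the cut.
    _, eigvecs = np.linalg.eigh(L_sym)
    fiedler = D_inv_sqrt @ eigvecs[:, 1]
    # Threshold at the median for a balanced two-way partition.
    return (fiedler > np.median(fiedler)).astype(int)
```

Given, say, six classes whose simile annotations make classes 0-2 mutually similar and classes 3-5 mutually similar, the Fiedler vector separates the two blocks; recursing on each side would yield finer groups, each of which can then be read as one implicit attribute.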
Original language: English
Title of host publication: 2017 IEEE Winter Conference on Applications of Computer Vision (WACV)
Publisher: IEEE Press
ISBN (Electronic): 978-1-5090-4822-9
ISBN (Print): 978-1-5090-4822-9
DOIs
Publication status: Published - 15 May 2017