Adversarial unseen visual feature synthesis for Zero-shot Learning

Haofeng Zhang, Yang Long, Li Liu, Ling Shao

Research output: Contribution to journal › Article › peer-review

48 Citations (Scopus)
9 Downloads (Pure)

Abstract

Due to the extreme imbalance of training data between seen and unseen classes, most existing methods fail to achieve satisfactory results on the challenging task of Zero-shot Learning (ZSL). To avoid the need for labelled data of unseen classes, in this paper we investigate how to synthesize visual features for the ZSL problem. The key challenge is how to capture the realistic feature distribution of unseen classes without training samples. To this end, we propose a hybrid model consisting of Random Attribute Selection (RAS) and a conditional Generative Adversarial Network (cGAN). RAS aims to learn realistic attribute generation from the natural correlations among attributes. To improve discrimination over the large number of classes, we add a reconstruction loss to the generative network, which can solve the domain shift problem and significantly improve classification accuracy. Extensive experiments on four benchmarks demonstrate that our method outperforms all state-of-the-art methods. Qualitative results show that, compared to conventional generative models, our method captures a more realistic distribution and remarkably improves the variability of the synthesized data.
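To make the abstract's pipeline concrete, below is a minimal sketch (not the authors' released code) of the core idea in PyTorch: class attributes are randomly sub-sampled as a crude stand-in for Random Attribute Selection, fed with noise into a conditional generator that synthesizes visual features, and a reconstruction branch maps generated features back to their conditioning attributes, the term the paper credits with countering domain shift. All module names, dimensions, and the masking scheme are illustrative assumptions, not the published architecture.

```python
# Illustrative sketch only -- not the authors' code.
# Dimensions (attr_dim, feat_dim, noise_dim) are hypothetical.
import torch
import torch.nn as nn

attr_dim, feat_dim, noise_dim = 85, 2048, 100

class Generator(nn.Module):
    """Conditional generator: noise + class attributes -> visual feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + attr_dim, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, feat_dim), nn.ReLU())

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=1))

class Regressor(nn.Module):
    """Maps a visual feature back to attributes (reconstruction branch)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(feat_dim, attr_dim)

    def forward(self, x):
        return self.net(x)

def random_attribute_selection(a, keep_prob=0.7):
    """Hypothetical stand-in for RAS: randomly mask attribute entries so the
    generator is conditioned on varied attribute combinations."""
    mask = (torch.rand_like(a) < keep_prob).float()
    return a * mask

G, R = Generator(), Regressor()
z = torch.randn(32, noise_dim)        # noise batch
a = torch.rand(32, attr_dim)          # class attribute vectors
a_sel = random_attribute_selection(a)
x_fake = G(z, a_sel)                  # synthesized visual features
# Reconstruction loss: generated features should predict the attributes
# they were conditioned on.
recon_loss = nn.functional.mse_loss(R(x_fake), a_sel)
# (The adversarial loss against a discriminator on real seen-class
# features is omitted here for brevity.)
```

In the full model this reconstruction term would be added to the usual cGAN adversarial objective; the synthesized unseen-class features can then train an ordinary classifier, which is the standard use of such generative ZSL methods.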
Original language: English
Pages (from-to): 12-20
Number of pages: 9
Journal: Neurocomputing
Volume: 329
Early online date: 24 Oct 2018
DOIs
Publication status: Published - 15 Feb 2019

Keywords

  • Zero Shot Learning
  • Generative Adversarial Network
  • Random Attribute Selection