A region-based image caption generator with refined descriptions

Philip Kinghorn, Li Zhang, Ling Shao

Research output: Contribution to journal › Article › peer-review

82 Citations (Scopus)
10 Downloads (Pure)

Abstract

Describing the content of an image is a challenging task. Producing a detailed description requires the detection and recognition of objects, people, relationships and associated attributes. Currently, the majority of existing research relies on holistic techniques, which may lose details relating to important aspects of a scene. To address this challenge, we propose a novel region-based deep learning architecture for image description generation. It employs a regional object detector, recurrent neural network (RNN)-based attribute prediction, and an encoder-decoder language generator embedded with two RNNs to produce refined and detailed descriptions of a given image. Most importantly, the proposed system improves upon existing holistic methods by taking a local, region-based approach that attends specifically to image regions containing people and objects. Evaluated on the IAPR TC-12 dataset, the proposed system shows impressive performance and outperforms state-of-the-art methods across various evaluation metrics. In particular, it shows superiority over existing methods when dealing with cross-domain indoor scene images.
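The abstract gives no implementation details, so the following is only a minimal sketch of the pipeline it outlines: per-region features from an object detector feed an RNN attribute predictor, whose outputs drive a two-RNN encoder-decoder language generator. All module names, layer sizes, and the choice of GRUs here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the region-based captioning pipeline described in
# the abstract: region features -> RNN attribute prediction -> two-RNN
# encoder-decoder language generator. Sizes and GRUs are assumptions.
import torch
import torch.nn as nn

class AttributePredictor(nn.Module):
    """Unrolls an RNN over each region feature to emit attribute logits."""
    def __init__(self, feat_dim=2048, hidden=512, n_attributes=100, steps=5):
        super().__init__()
        self.steps = steps
        self.proj = nn.Linear(feat_dim, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_attributes)

    def forward(self, region_feats):                    # (B, R, feat_dim)
        B, R, _ = region_feats.shape
        x = self.proj(region_feats).view(B * R, 1, -1)
        x = x.repeat(1, self.steps, 1)                  # a few prediction steps
        h, _ = self.rnn(x)
        return self.out(h).view(B, R, self.steps, -1)   # attribute logits

class CaptionGenerator(nn.Module):
    """Encoder RNN summarises region attributes; decoder RNN emits words."""
    def __init__(self, n_attributes=100, vocab=10000, hidden=512):
        super().__init__()
        self.attr_embed = nn.Linear(n_attributes, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.word_embed = nn.Embedding(vocab, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, attr_logits, captions):           # (B, R, S, A), (B, T)
        attrs = attr_logits.softmax(-1).flatten(1, 2)   # (B, R*S, A)
        _, h = self.encoder(self.attr_embed(attrs))     # h: (1, B, hidden)
        w = self.word_embed(captions)                   # teacher-forced words
        dec, _ = self.decoder(w, h)                     # decoder seeded with h
        return self.out(dec)                            # per-step word logits

# Toy forward pass with random "detector" outputs.
feats = torch.randn(2, 4, 2048)                         # 2 images, 4 regions
caps = torch.randint(0, 10000, (2, 12))
logits = CaptionGenerator()(AttributePredictor()(feats), caps)
print(logits.shape)                                     # torch.Size([2, 12, 10000])
```

The design point the abstract stresses is that attributes are predicted per region rather than from one holistic image feature, so details about individual people and objects survive into the language generator's input.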
Original language: English
Pages (from-to): 416-424
Journal: Neurocomputing
Volume: 272
Early online date: 12 Jul 2017
DOIs
Publication status: Published - 10 Jan 2018

Keywords

  • Image Description Generation
  • Convolutional and Recurrent Neural Networks
  • Description Generation