Abstract
Cross-modal hashing is widely regarded as an effective technique for large-scale textual-visual cross-modal retrieval, where data from different modalities are mapped into a shared Hamming space for matching. Most traditional textual-visual binary encoding methods only consider holistic image representations and fail to model descriptive sentences. This renders existing methods inappropriate for handling the rich semantics of informative cross-modal data in high-quality textual-visual search tasks. To address the problem of hashing cross-modal data with semantic-rich cues, this paper develops a novel integrated deep architecture, named Textual-Visual Deep Binaries (TVDB), that effectively encodes the detailed semantics of informative images and long descriptive sentences. In particular, region-based convolutional networks with long short-term memory units are introduced to fully explore image regional details, while semantic cues of sentences are modeled by a text convolutional network. Additionally, we propose a stochastic batch-wise training routine in which high-quality binary codes and deep encoding functions are efficiently optimized in an alternating manner. Experiments are conducted on three multimedia datasets, i.e., Microsoft COCO, IAPR TC-12, and INRIA Web Queries, where the proposed TVDB model significantly outperforms state-of-the-art binary coding methods in the task of cross-modal retrieval.
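The abstract's retrieval step rests on matching binary codes in a shared Hamming space. As a minimal illustrative sketch (not TVDB's actual encoders), the following assumes images and a sentence have already been hashed to short binary codes, here toy 8-bit integers, and ranks images by Hamming distance to the sentence code:

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two binary codes (XOR + popcount)."""
    return bin(a ^ b).count("1")

def retrieve(query_code: int, gallery: dict) -> list:
    """Rank gallery item names by Hamming distance to the query code."""
    return sorted(gallery, key=lambda name: hamming_distance(query_code, gallery[name]))

# Toy 8-bit codes standing in for learned image hashes (hypothetical values).
images = {"img_a": 0b10110010, "img_b": 0b10110011, "img_c": 0b01001100}

# A sentence mapped to a binary code by some text encoder (hypothetical value).
sentence_code = 0b10110110

print(retrieve(sentence_code, images))  # nearest image first
```

Because Hamming distance reduces to an XOR and a population count, matching scales to large galleries far more cheaply than comparing real-valued embeddings, which is the motivation for hashing-based cross-modal retrieval.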
| Original language | English |
|---|---|
| Title of host publication | IEEE International Conference on Computer Vision |
| Publisher | The Institute of Electrical and Electronics Engineers (IEEE) |
| Pages | 4117-4126 |
| Number of pages | 10 |
| DOIs | |
| Publication status | Published - 25 Dec 2017 |
| Event | 2017 IEEE International Conference on Computer Vision - Venice, Italy. Duration: 22 Oct 2017 → 29 Oct 2017 |
Conference

| Conference | 2017 IEEE International Conference on Computer Vision |
|---|---|
| Abbreviated title | ICCV |
| Country/Territory | Italy |
| City | Venice |
| Period | 22/10/17 → 29/10/17 |