Abstract
This paper addresses the object recognition problem using multiple-domain inputs. We present a novel approach that utilizes labeled RGB-D data in the training stage, where depth features are extracted to enhance the discriminative capability of the original learning system, which relies on RGB images alone. The highly dissimilar source- and target-domain data are mapped into a unified feature space through transfer at both the feature and classifier levels. To alleviate the cross-domain discrepancy, we employ a state-of-the-art domain-adaptive dictionary learning algorithm that simultaneously updates the image representations in both domains and the classifier parameters. The proposed method is trained on an RGB-D Object dataset and evaluated on the Caltech-256 dataset. Experimental results suggest that our approach leads to significant performance gains over state-of-the-art methods.
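To make the abstract's core mechanism concrete, the sketch below illustrates one way a shared dictionary and a linear classifier can be updated jointly over a labeled source domain and an unlabeled target domain. It is a minimal, illustrative NumPy example only: the variable names, dimensions, and the ridge-style coding and update steps are assumptions for exposition, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical shapes and data; these are illustrative, not from the paper.
np.random.seed(0)
n_atoms, dim, n_src, n_tgt, n_classes = 64, 128, 200, 150, 10

X_src = np.random.randn(dim, n_src)   # source-domain features (e.g. RGB-D derived)
X_tgt = np.random.randn(dim, n_tgt)   # target-domain features (RGB only)
y_src = np.random.randint(n_classes, size=n_src)
Y_src = np.eye(n_classes)[y_src].T    # one-hot labels, shape (n_classes, n_src)

D = np.random.randn(dim, n_atoms)     # shared dictionary spanning both domains
D /= np.linalg.norm(D, axis=0)
W = np.zeros((n_classes, n_atoms))    # linear classifier acting on codes
lam, gamma = 0.1, 0.05                # regularization weights (assumed)

def encode(D, X, lam):
    """Ridge coding as a stand-in for sparse coding: A = (D^T D + lam*I)^-1 D^T X."""
    k = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ X)

for it in range(20):
    # 1) Code both domains with the current shared dictionary.
    A_src = encode(D, X_src, lam)
    A_tgt = encode(D, X_tgt, lam)

    # 2) Update the classifier on labeled source codes (ridge regression).
    W = Y_src @ A_src.T @ np.linalg.inv(A_src @ A_src.T + gamma * np.eye(n_atoms))

    # 3) Update the dictionary so it reconstructs both domains, coupling
    #    source and target data in a single least-squares problem.
    A_all = np.hstack([A_src, A_tgt])
    X_all = np.hstack([X_src, X_tgt])
    D = X_all @ A_all.T @ np.linalg.inv(A_all @ A_all.T + 1e-6 * np.eye(n_atoms))
    D /= np.linalg.norm(D, axis=0) + 1e-12

# Predict target-domain labels from their codes with the jointly learned classifier.
pred_tgt = (W @ encode(D, X_tgt, lam)).argmax(axis=0)
```

The design point the sketch captures is the alternation: representations in both domains and the classifier are refreshed within the same loop, so the dictionary is shaped by the target data while the classifier is fitted only on the labeled source codes.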
Original language | English |
---|---|
Title of host publication | 2016 IEEE International Conference on Robotics and Automation (ICRA) |
Publisher | The Institute of Electrical and Electronics Engineers (IEEE) |
Pages | 1672-1677 |
Number of pages | 6 |
DOIs | |
Publication status | Published - 9 Jun 2016 |
Event | 2016 IEEE International Conference on Robotics and Automation (ICRA) (http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=7478842), Stockholm, Sweden; Duration: 16 May 2016 → 21 May 2016 |
Conference
Conference | 2016 IEEE International Conference on Robotics and Automation (ICRA) |
---|---|
Country/Territory | Sweden |
City | Stockholm |
Period | 16/05/16 → 21/05/16 |