Action recognition from arbitrary views using transferable dictionary learning

Jingtian Zhang, Hubert P. H. Shum, Jungong Han, Ling Shao

Research output: Contribution to journal › Article › peer-review


Abstract

Human action recognition is crucial to many practical applications, ranging from human-computer interaction to video surveillance. Most approaches either recognize human actions from a fixed view or require knowledge of the view angle, which is usually unavailable in practical applications. In this paper, we propose a novel end-to-end framework to jointly learn a view-invariant transfer dictionary and a view-invariant classifier. The result of this process is a dictionary that projects real-world 2D video into a view-invariant sparse representation, and a classifier that recognizes actions from arbitrary views. The main feature of our algorithm is the use of synthetic data to learn the view-invariance between 3D and 2D videos during the pre-training phase. This guarantees the availability of training data and removes the hassle of obtaining real-world videos at specific viewing angles. Additionally, to better describe the actions in 3D videos, we introduce a new feature set, called 3D dense trajectories, which effectively encodes trajectory information extracted from 3D videos. Experimental results on the IXMAS, N-UCLA, i3DPost, and UWA3DII data sets show improvements over existing algorithms.
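To make the core idea concrete, below is a minimal two-stage sketch in Python using scikit-learn. It is not the paper's method: the paper optimizes the transfer dictionary and classifier jointly and uses 3D dense trajectory features, whereas this sketch learns a single shared dictionary over paired 2D/3D descriptors and then trains a linear classifier on the resulting sparse codes. The arrays `features_2d`, `features_3d`, and `labels` are hypothetical placeholders standing in for descriptors extracted from synthetic renderings of the same actions.

```python
# Hedged sketch: shared dictionary over 2D and 3D descriptors, then a
# classifier on the sparse codes. NOT the paper's joint optimization.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_clips, dim = 200, 128
features_2d = rng.standard_normal((n_clips, dim))  # hypothetical 2D descriptors
features_3d = rng.standard_normal((n_clips, dim))  # matching 3D descriptors
labels = rng.integers(0, 5, size=n_clips)          # hypothetical action labels

# Stage 1: learn one dictionary over both modalities, so paired 2D/3D
# samples are encoded with the same atoms -- a rough stand-in for the
# view-invariance transfer that the paper learns jointly.
X = np.vstack([features_2d, features_3d])
dico = DictionaryLearning(n_components=64, transform_algorithm="omp",
                          transform_n_nonzero_coefs=10, max_iter=20,
                          random_state=0)
codes = dico.fit(X).transform(features_2d)

# Stage 2: train a classifier on the view-invariant sparse codes.
clf = LinearSVC().fit(codes, labels)

# At test time, a clip from an unseen view is encoded with the same
# dictionary and classified from its sparse code.
test_codes = dico.transform(rng.standard_normal((5, dim)))
print(clf.predict(test_codes))
```

In this two-stage approximation the dictionary is fixed before the classifier is trained; the paper's contribution is precisely to couple these two steps so that the sparse codes are simultaneously view-invariant and discriminative.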
Original language: English
Pages (from-to): 4709-4723
Number of pages: 15
Journal: IEEE Transactions on Image Processing
Volume: 27
Issue number: 10
Early online date: 15 May 2018
DOIs
Publication status: Published - 1 Oct 2018
