Extending Temporal Data Augmentation for Video Action Recognition

Artjoms Gorpincenko, Michal Mackiewicz

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Pixel-space augmentation has grown in popularity across many deep learning areas due to its effectiveness, simplicity, and low computational cost. Data augmentation for videos, however, remains an under-explored research topic, as most works treat inputs as stacks of static images rather than temporally linked series of data. Recently, it has been shown that involving the time dimension when designing augmentations can be superior to spatial-only variants for video action recognition. In this paper, we propose several novel enhancements to these techniques that strengthen the relationship between the spatial and temporal domains and achieve a deeper level of perturbation. Our techniques outperform their respective variants in Top-1 and Top-5 video action recognition accuracy on the UCF-101 and HMDB-51 datasets.
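To illustrate the kind of time-aware perturbation the abstract contrasts with frame-wise (spatial-only) augmentation, here is a minimal, hypothetical sketch of a temporal cutout: a contiguous span of frames is masked, so the perturbation depends on the temporal axis rather than being applied to each frame independently. The function name, parameters, and masking choice are illustrative assumptions, not the techniques proposed in the paper.

```python
import numpy as np

def temporal_cutout(clip, max_span=4, rng=None):
    """Zero out a contiguous span of frames in a video clip.

    Hypothetical example of a temporal augmentation: unlike a
    per-frame (spatial) cutout, the masked region is defined
    along the time dimension, linking consecutive frames.

    clip: array of shape (T, H, W, C) -- frames, height, width, channels
    max_span: maximum number of consecutive frames to mask
    """
    rng = rng or np.random.default_rng()
    num_frames = clip.shape[0]
    # Pick a random span length and a random start so the span fits.
    span = int(rng.integers(1, max_span + 1))
    start = int(rng.integers(0, num_frames - span + 1))
    out = clip.copy()
    out[start:start + span] = 0  # mask the chosen frames entirely
    return out
```

A spatial-only counterpart would mask an image patch in every frame independently; the temporal variant instead removes short stretches of motion, forcing the model to rely on the surrounding temporal context.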
Original language: English
Title of host publication: Image and Vision Computing
Subtitle of host publication: 37th International Conference, IVCNZ 2022, Auckland, New Zealand, November 24–25, 2022, Revised Selected Papers
Editors: Wei Qi Yan, Minh Nguyen, Martin Stommel
Publisher: Springer
Pages: 104-118
Number of pages: 15
ISBN (Electronic): 978-3-031-25825-1
ISBN (Print): 978-3-031-25824-4
DOIs
Publication status: Published - 2023

Publication series

Name: Lecture Notes in Computer Science

Keywords

  • Action recognition
  • Data augmentation
  • Temporal domain
