TY - GEN
T1 - SVW-UCF Dataset for Video Domain Adaptation
AU - Gorpincenko, Artjoms
AU - Mackiewicz, Michal
N1 - Funding Information:
The project was jointly funded by Innovate UK (grant #102072), Cefas, Cefas Technology Limited and EDF Energy, and has also been supported by the Natural Environment Research Council and Engineering and Physical Sciences Research Council through the NEXUSS Centre for Doctoral Training (grant #NE/RO12156/1).
Publisher Copyright:
Copyright © 2021 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved.
PY - 2021/4
Y1 - 2021/4
N2 - Unsupervised video domain adaptation (DA) has recently seen a lot of success, achieving near-perfect, if not perfect, results on the majority of benchmark datasets. Therefore, the next natural step for the field is to come up with new, more challenging problems that call for creative solutions. By combining two well-known datasets - SVW and UCF - we propose a large-scale video domain adaptation dataset that is not only larger in terms of samples and average video length, but also presents additional obstacles, such as orientation and intra-class variations, differences in resolution, and greater domain discrepancy, both in terms of content and capturing conditions. We perform an accuracy gap comparison which shows that both SVW→UCF and UCF→SVW are empirically more difficult to solve than existing adaptation paths. Finally, we evaluate two state-of-the-art video DA algorithms on the dataset to present the benchmark results and provide a discussion on the properties which create the most confusion for modern video domain adaptation methods.
AB - Unsupervised video domain adaptation (DA) has recently seen a lot of success, achieving near-perfect, if not perfect, results on the majority of benchmark datasets. Therefore, the next natural step for the field is to come up with new, more challenging problems that call for creative solutions. By combining two well-known datasets - SVW and UCF - we propose a large-scale video domain adaptation dataset that is not only larger in terms of samples and average video length, but also presents additional obstacles, such as orientation and intra-class variations, differences in resolution, and greater domain discrepancy, both in terms of content and capturing conditions. We perform an accuracy gap comparison which shows that both SVW→UCF and UCF→SVW are empirically more difficult to solve than existing adaptation paths. Finally, we evaluate two state-of-the-art video DA algorithms on the dataset to present the benchmark results and provide a discussion on the properties which create the most confusion for modern video domain adaptation methods.
KW - Dataset
KW - Deep Learning
KW - Domain Adaptation
KW - Video
UR - http://www.scopus.com/inward/record.url?scp=85125173008&partnerID=8YFLogxK
U2 - 10.5220/0010460901070111
DO - 10.5220/0010460901070111
M3 - Conference contribution
T3 - Proceedings of the International Conference on Image Processing and Vision Engineering, IMPROVE 2021
SP - 107
EP - 111
BT - Proceedings of the International Conference on Image Processing and Vision Engineering (IMPROVE 2021)
A2 - Imai, Francisco
A2 - Distante, Cosimo
A2 - Battiato, Sebastiano
ER -