K-centered Patch Sampling for Efficient Video Recognition

Cited 5 times in Web of Science; 0 times in Scopus
For decades, it has been common practice to choose a subset of video frames to reduce the computational burden of a video understanding model. In this paper, we argue that this popular heuristic may be sub-optimal for recent transformer-based models. Specifically, motivated by the fact that transformers operate on patches of video frames, we propose to sample patches rather than frames using greedy K-center search: the patch farthest from those chosen so far is selected iteratively. We then show that a transformer trained with the selected video patches can outperform its baseline trained with frames sampled in the traditional way. Furthermore, by adding a certain spatiotemporal structuredness condition, the proposed K-centered patch sampling can even be applied to recent sophisticated video transformers, boosting their performance further. We demonstrate the superiority of our method on the Something-Something and Kinetics datasets.
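The greedy K-center search the abstract describes is the classic farthest-point sampling rule. Below is a minimal NumPy sketch of that selection rule only; the patch feature representation, the Euclidean distance, the random initialization, and all names are illustrative assumptions rather than the paper's exact design, which additionally imposes a spatiotemporal structuredness condition not modeled here.

```python
import numpy as np

def greedy_k_center(feats, k, seed=0):
    """Greedy K-center (farthest-point) sampling over patch features.

    feats: (N, D) array, one row per patch.
    Returns the indices of the k selected patches.
    """
    rng = np.random.default_rng(seed)
    n = feats.shape[0]
    first = int(rng.integers(n))          # arbitrary first center (assumption)
    selected = [first]
    # Distance from every patch to its nearest selected center so far.
    nearest = np.linalg.norm(feats - feats[first], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(nearest))     # farthest patch from current centers
        selected.append(nxt)
        # Update each patch's distance to its nearest center.
        nearest = np.minimum(nearest, np.linalg.norm(feats - feats[nxt], axis=1))
    return selected

# Toy usage: a 16-frame clip of 14x14 patches, each a 768-d token,
# reduced to k = 512 patches (shapes are illustrative).
patches = np.random.randn(16 * 14 * 14, 768).astype(np.float32)
idx = greedy_k_center(patches, k=512)
sampled = patches[idx]                    # (512, 768)
```

Each iteration scans all N patches once, so selecting k patches costs O(k * N * D), which is why the greedy approximation is used rather than exact K-center.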
Publisher
SPRINGER INTERNATIONAL PUBLISHING AG
Issue Date
2022-10
Language
English
Citation
17th European Conference on Computer Vision (ECCV), pp. 160–176
ISSN
0302-9743
DOI
10.1007/978-3-031-19833-5_10
URI
http://hdl.handle.net/10203/305865
Appears in Collection
AI-Conference Papers (Conference Papers)