Defending Video Recognition Model against Adversarial Perturbations via Defense Patterns

Deep Neural Networks (DNNs) have achieved wide success across many domains, yet they remain vulnerable to adversarial attacks. Recent studies have shown that video recognition models are likewise susceptible to adversarial perturbations, but existing image-domain defense strategies transfer poorly to the video domain: they do not account for temporal dynamics, and they incur a high computational cost when training video recognition models. This paper first investigates the temporal vulnerability of video recognition models by quantifying the effect of temporal perturbations on model performance. Based on these investigations, we propose Defense Patterns (DPs), which protect video recognition models when added to the input video frames. DPs are generated on top of a pre-trained model, eliminating the need for retraining or fine-tuning and thereby significantly reducing the computational cost. Experimental results on two benchmark datasets and various action recognition models demonstrate that the proposed method enhances the robustness of video recognition models.
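The abstract does not give implementation details, but the core idea of adding a precomputed defense pattern to incoming video frames can be sketched as follows. This is a minimal illustration only: the function name, the per-frame broadcasting, and the perturbation budget `epsilon` are all assumptions, not the paper's actual method.

```python
import numpy as np

def apply_defense_pattern(frames: np.ndarray, pattern: np.ndarray,
                          epsilon: float = 8 / 255) -> np.ndarray:
    """Add a (precomputed) defense pattern to each frame of a video clip.

    frames:  (T, H, W, C) array of pixel values in [0, 1].
    pattern: (H, W, C) additive pattern broadcast over time,
             or (T, H, W, C) for a per-frame pattern.
    epsilon: illustrative bound on the pattern magnitude.
    """
    # Keep the pattern within a small perturbation budget.
    pattern = np.clip(pattern, -epsilon, epsilon)
    # Broadcast-add the pattern and keep pixels in the valid range,
    # so the defended clip can be fed to the model unchanged.
    return np.clip(frames + pattern, 0.0, 1.0)

# Example: a 16-frame RGB clip with a random (purely illustrative) pattern.
rng = np.random.default_rng(0)
frames = rng.uniform(0.0, 1.0, size=(16, 32, 32, 3))
pattern = rng.uniform(-1.0, 1.0, size=(32, 32, 3))
defended = apply_defense_pattern(frames, pattern)
```

Because the pattern is applied at the input only, the pre-trained recognition model itself is untouched, which is consistent with the abstract's claim that no retraining or fine-tuning is required.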
Publisher
IEEE COMPUTER SOC
Issue Date
2024-07
Language
English
Citation
IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, v.21, pp.4110 - 4121
ISSN
1545-5971
DOI
10.1109/TDSC.2023.3346064
URI
http://hdl.handle.net/10203/321170
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
