Adaptive Warping Network for Transferable Adversarial Attacks

Deep Neural Networks (DNNs) are highly susceptible to adversarial examples, which are crafted by intentionally adding imperceptible perturbations to clean images. Because adversarial attacks pose practical threats, black-box transfer-based attacks have been studied extensively to expose the vulnerability of DNNs. Unfortunately, transfer-based attacks often fail to achieve high transferability because the adversarial examples tend to overfit the source model. Applying an input transformation is one of the most effective ways to avoid such overfitting. However, most previous input transformation methods achieve only limited transferability because they apply a fixed transformation to all images. To address this problem, we propose an Adaptive Warping Network (AWN), which searches for an appropriate warping for each individual input. Specifically, at each iteration AWN optimizes a warping that mitigates the effect of the adversarial perturbations, and the adversarial examples are then updated to remain robust against such strong transformations. Extensive experiments on the ImageNet dataset demonstrate that AWN outperforms existing input transformation methods in terms of transferability.
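The abstract describes an alternating procedure: an inner step searches for a per-image warping that weakens the current perturbation, and an outer step updates the perturbation so that it survives that warping. The following PyTorch sketch illustrates this idea under stated assumptions; the flow-field parameterization via F.grid_sample, the step sizes, and the loop counts are illustrative choices, not the authors' implementation.

```python
# Minimal, hypothetical sketch of the adaptive-warping attack loop described in
# the abstract. All hyperparameters and the grid_sample-based warping are assumptions.
import torch
import torch.nn.functional as F

def awn_attack(model, x, y, epsilon=8/255, alpha=2/255,
               attack_steps=10, warp_steps=5, warp_lr=0.01):
    """Generate adversarial examples that stay effective under a per-image adaptive warping."""
    b, c, h, w = x.shape
    # Identity sampling grid in [-1, 1]; the warping is a small learnable offset field.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    identity = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1).to(x.device)

    delta = torch.zeros_like(x, requires_grad=True)   # adversarial perturbation
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(attack_steps):
        # Inner step (assumed form): find a warping that mitigates the current
        # perturbation, i.e. minimizes the attack loss on the warped adversarial image.
        offset = torch.zeros(b, h, w, 2, device=x.device, requires_grad=True)
        for _ in range(warp_steps):
            warped = F.grid_sample(x + delta, identity + offset, align_corners=True)
            loss = loss_fn(model(warped), y)
            grad, = torch.autograd.grad(loss, offset)
            offset = (offset - warp_lr * grad).detach().requires_grad_(True)

        # Outer step: update the perturbation so the attack is robust to that warping.
        warped = F.grid_sample(x + delta, identity + offset.detach(), align_corners=True)
        loss = loss_fn(model(warped), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon)
        delta = ((x + delta).clamp(0, 1) - x).detach().requires_grad_(True)

    return (x + delta).detach()
```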
Publisher
IEEE
Issue Date
2022-10
Language
English
Citation

IEEE International Conference on Image Processing (ICIP 2022), pp. 3056-3060

ISSN
1522-4880
DOI
10.1109/ICIP46576.2022.9897701
URI
http://hdl.handle.net/10203/300310
Appears in Collection
EE-Conference Papers (Conference Papers)