Adaptive Warping Network for Transferable Adversarial Attacks

DC Field | Value | Language
dc.contributor.author | Son, Minji | ko
dc.contributor.author | Kwon, Myung Joon | ko
dc.contributor.author | Kim, Hee-Seon | ko
dc.contributor.author | BYUN, JUNYOUNG | ko
dc.contributor.author | Cho, Seungju | ko
dc.contributor.author | Kim, Changick | ko
dc.date.accessioned | 2022-11-21T08:01:52Z | -
dc.date.available | 2022-11-21T08:01:52Z | -
dc.date.created | 2022-11-18 | -
dc.date.issued | 2022-10 | -
dc.identifier.citation | IEEE International Conference on Image Processing, ICIP 2022, pp. 3056 - 3060 | -
dc.identifier.issn | 1522-4880 | -
dc.identifier.uri | http://hdl.handle.net/10203/300310 | -
dc.description.abstract | Deep Neural Networks (DNNs) are extremely susceptible to adversarial examples, which are crafted by intentionally adding imperceptible perturbations to clean images. Because adversarial attacks pose a real threat in practice, black-box transfer-based attacks have been studied carefully to identify the vulnerability of DNNs. Unfortunately, transfer-based attacks often fail to achieve high transferability because the adversarial examples tend to overfit the source model. Applying an input transformation is one of the most effective ways to avoid such overfitting. However, most previous input transformation methods achieve only limited transferability because they apply a fixed transformation to all images. To solve this problem, we propose an Adaptive Warping Network (AWN), which searches for an appropriate warping for each individual input. Specifically, at each iteration AWN optimizes a warping that mitigates the effect of the adversarial perturbation, and the adversarial example is then updated so that it remains effective against such a strong transformation. Extensive experimental results on the ImageNet dataset demonstrate that AWN outperforms existing input transformation methods in terms of transferability. | -
dc.language | English | -
dc.publisher | IEEE | -
dc.title | Adaptive Warping Network for Transferable Adversarial Attacks | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-85146732188 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 3056 | -
dc.citation.endingpage | 3060 | -
dc.citation.publicationname | IEEE International Conference on Image Processing, ICIP 2022 | -
dc.identifier.conferencecountry | FR | -
dc.identifier.conferencelocation | Bordeaux | -
dc.identifier.doi | 10.1109/ICIP46576.2022.9897701 | -
dc.contributor.localauthor | Kim, Changick | -
dc.contributor.nonIdAuthor | Son, Minji | -
dc.contributor.nonIdAuthor | Kim, Hee-Seon | -
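
A minimal sketch of the attack loop described in the abstract, assuming a PyTorch setting: an inner loop optimizes a warping that mitigates the current perturbation (pushes the warped adversarial image back toward the correct prediction), and an outer step then updates the adversarial example so it stays effective under that warping. The function name, step sizes, iteration counts, and the dense flow-field parameterization are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def adaptive_warping_attack(model, x, y, eps=16/255, alpha=2/255,
                            steps=10, warp_steps=3, warp_lr=0.01):
    # Hypothetical sketch of an adaptive-warping transfer attack; not the official AWN code.
    x_adv = x.clone().detach()
    B, C, H, W = x.shape
    # Identity sampling grid for grid_sample (coordinates in [-1, 1]).
    theta = torch.eye(2, 3, device=x.device).unsqueeze(0).repeat(B, 1, 1)
    base_grid = F.affine_grid(theta, x.shape, align_corners=False)
    for _ in range(steps):
        # Inner loop: find a warping that mitigates the current perturbation,
        # i.e., minimizes the classification loss of the warped adversarial image.
        flow = torch.zeros(B, H, W, 2, device=x.device, requires_grad=True)
        for _ in range(warp_steps):
            warped = F.grid_sample(x_adv, base_grid + flow, align_corners=False)
            mitigate_loss = F.cross_entropy(model(warped), y)
            grad, = torch.autograd.grad(mitigate_loss, flow)
            flow = (flow - warp_lr * grad.sign()).detach().requires_grad_(True)
        # Outer step: update the adversarial example so it still fools the model
        # under the warping found above, i.e., it becomes robust to that transformation.
        x_adv.requires_grad_(True)
        warped = F.grid_sample(x_adv, base_grid + flow.detach(), align_corners=False)
        attack_loss = F.cross_entropy(model(warped), y)
        grad, = torch.autograd.grad(attack_loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

This sketch only illustrates the min-max structure of adapting the transformation per image and per iteration; the paper's actual warping parameterization and hyperparameters may differ.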
Appears in Collection
EE-Conference Papers (학술회의논문)
Files in This Item
There are no files associated with this item.
