DIVERSE GENERATIVE PERTURBATIONS ON ATTENTION SPACE FOR TRANSFERABLE ADVERSARIAL ATTACKS

Adversarial attacks with improved transferability - the ability of an adversarial example crafted on a known model to also fool unknown models - have recently received much attention due to their practicality. Nevertheless, existing transferable attacks craft perturbations in a deterministic manner and often fail to fully explore the loss surface, thus falling into a poor local optimum and suffering from low transferability. To solve this problem, we propose Attentive-Diversity Attack (ADA), which disrupts diverse salient features in a stochastic manner to improve transferability. First, we perturb the image attention to disrupt universal features shared by different models. Then, to effectively avoid poor local optima, we disrupt these features in a stochastic manner and explore the search space of transferable perturbations more exhaustively. More specifically, we use a generator to produce adversarial perturbations, each of which disturbs features in a different way depending on an input latent code. Extensive experimental evaluations demonstrate the effectiveness of our method, which outperforms the transferability of state-of-the-art methods. Code is available at https://github.com/wkim97/ADA.
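The sketch below illustrates the idea described in the abstract: a generator conditioned on a random latent code produces a bounded perturbation, and the training objective pushes the surrogate model's attention away from that of the clean image, so that different latent codes yield different feature disruptions. This is a minimal, hypothetical sketch only; the class and function names (PerturbationGenerator, attention_map), the ResNet-18 surrogate, the channel-averaged attention proxy, the loss formulation, and the perturbation budget are assumptions for illustration and are not taken from the paper or the released code.

```python
# Hypothetical sketch of the ADA idea: a latent-code-conditioned generator
# produces a bounded perturbation, trained to disrupt the surrogate model's
# attention. All names and design choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class PerturbationGenerator(nn.Module):
    """Maps (image, latent code z) to an adversarial image within an L-inf budget."""

    def __init__(self, z_dim=16, eps=16 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3 + z_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x, z):
        # Broadcast the latent code over spatial dims and concatenate with the image,
        # so different codes steer the generator toward different perturbations.
        z_map = z[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
        delta = torch.tanh(self.net(torch.cat([x, z_map], dim=1))) * self.eps
        return torch.clamp(x + delta, 0, 1)


# Surrogate (white-box) model; its intermediate feature maps stand in for "attention".
surrogate = models.resnet18(weights=None).eval()
feat_extractor = nn.Sequential(*list(surrogate.children())[:-2])
for p in feat_extractor.parameters():
    p.requires_grad_(False)


def attention_map(x):
    # A simple channel-averaged activation map as a stand-in for the paper's
    # attention; the actual formulation in the paper may differ.
    feats = feat_extractor(x)
    return F.normalize(feats.abs().mean(dim=1).flatten(1), dim=1)


gen = PerturbationGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)

x = torch.rand(4, 3, 224, 224)   # dummy clean batch
z = torch.randn(4, 16)           # random latent codes -> diverse perturbations
x_adv = gen(x, z)

# Attention-disruption objective: make the adversarial attention dissimilar
# from the clean attention (maximize the distance, i.e. minimize its negative).
loss = -F.mse_loss(attention_map(x_adv), attention_map(x).detach())
opt.zero_grad()
loss.backward()
opt.step()
```

At attack time, sampling several latent codes z for the same image would give a set of diverse candidate perturbations rather than a single deterministic one, which is the stochastic exploration the abstract argues improves transferability.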
Publisher
IEEE Computer Society
Issue Date
2022-10
Language
English
Citation
29th IEEE International Conference on Image Processing (ICIP 2022), pp. 281-285
ISSN
1522-4880
DOI
10.1109/ICIP46576.2022.9897346
URI
http://hdl.handle.net/10203/312093
Appears in Collection
CS-Conference Papers (Conference Papers)