Diverse generative perturbations on attention space for transferable adversarial attacks

Improving adversarial attack transferability, i.e., the ability of an adversarial example crafted on a known model to also fool unknown models, has recently received much attention due to its practicality in real-world scenarios. However, existing methods that try to improve attack transferability craft perturbations in a deterministic manner. Adversarial examples crafted in this way often fail to fully explore the loss surface and fall into a poor local optimum, suffering from low transferability. To solve this problem, we propose the Attentive-Diversity Attack (ADA), which disrupts diverse salient features in a stochastic manner to improve transferability. We first disrupt the image attention to perturb universal features shared by different models. We also disturb these features stochastically to explore the search space of transferable perturbations more exhaustively and thus avoid poor local optima. To this end, we use a generator to produce adversarial perturbations, each of which disturbs features in a different way depending on an input latent code. Extensive experimental evaluations demonstrate the effectiveness of our method, which outperforms the transferability of state-of-the-art methods.
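The abstract describes two ingredients: a latent-code-conditioned generator that produces diverse perturbations for the same image, and an objective that disrupts the surrogate model's attention. The sketch below illustrates that idea in NumPy on a toy linear "model"; the shapes, the attention definition, the fixed random generator weights, and the search over latent codes (instead of the trained generator used in the thesis) are all hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D, Z = 16, 8     # toy flattened-image and latent-code dimensions (hypothetical)
EPS = 0.1        # L_inf perturbation budget

# Toy surrogate "model": a fixed random linear feature extractor.
W = rng.normal(size=(D, D))

def attention(x):
    # Toy attention map: normalized magnitudes of intermediate features.
    a = np.abs(W @ x)
    return a / (a.sum() + 1e-8)

# Latent-conditioned generator with fixed random weights: the same image x
# with different latent codes z yields *different* bounded perturbations.
G_x = rng.normal(size=(D, D))
G_z = rng.normal(size=(D, Z))

def generate_perturbation(x, z):
    # tanh keeps every coordinate strictly inside the L_inf ball of radius EPS.
    return EPS * np.tanh(G_x @ x + G_z @ z)

def ada_attack(x, n_codes=32):
    """Sample several latent codes and keep the perturbation that most
    disrupts the surrogate's attention map (stochastic search over codes)."""
    a_clean = attention(x)
    best_delta, best_gap = None, -1.0
    for _ in range(n_codes):
        z = rng.normal(size=Z)
        delta = generate_perturbation(x, z)
        gap = np.linalg.norm(attention(x + delta) - a_clean)
        if gap > best_gap:
            best_delta, best_gap = delta, gap
    return x + best_delta, best_delta, best_gap

x = rng.normal(size=D)
x_adv, delta, gap = ada_attack(x)
```

In the thesis the generator is trained with gradients so that every latent code yields a strong, transferable perturbation; here the random-weight generator plus best-of-n selection only demonstrates the diversity-then-select principle.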
Advisors
Yoon, Sung-Eui
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: School of Computing, 2023.2, [iii, 20 p.]

Keywords

Deep learning; Computer vision; Adversarial attack

URI
http://hdl.handle.net/10203/309559
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032962&flag=dissertation
Appears in Collection
CS-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
