Adversarial attack with frequency blended image

Deep neural networks perform well on various computer vision tasks, but they are vulnerable to adversarial attacks. Because adversarial examples transfer between models, an adversary can induce a model to predict the wrong class even when the target model's internals are hidden. For such transfer-based attacks, the central challenge in improving transferability is preventing adversarial examples from overfitting the source model. To improve adversarial transferability, I introduce the Frequency-Blended Image (FBI) method, which diversifies the inputs used when generating an adversarial image. Specifically, frequency components to which the model is sensitive and insensitive are extracted from the original image and mixed back into it with different weights, further increasing input diversity. Extensive experiments on the ImageNet dataset show that attacks using FBI significantly improve transfer-based attack success rates.
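
The abstract does not specify how the frequency components are extracted or blended. Below is a minimal PyTorch sketch of one plausible reading, using low- and high-frequency bands (split with a circular FFT low-pass mask) as stand-ins for the model-sensitive and model-insensitive components described above, mixed back into the image with different weights as an input transformation inside an iterative FGSM attack. The names frequency_blend and fbi_ifgsm, the mask radius, the weights w_low and w_high, and the attack hyperparameters are all illustrative assumptions, not values taken from the thesis.

    import torch
    import torch.nn.functional as F

    def frequency_blend(image, radius=16, w_low=0.7, w_high=0.3):
        # Hypothetical FBI-style transform: split the image into low- and
        # high-frequency bands with a circular FFT mask, then mix the bands
        # back into the original with different weights. The radius and the
        # weights are illustrative assumptions, not values from the thesis.
        freq = torch.fft.fftshift(torch.fft.fft2(image), dim=(-2, -1))
        h, w = image.shape[-2:]
        ys = torch.arange(h, device=image.device).view(-1, 1) - h // 2
        xs = torch.arange(w, device=image.device).view(1, -1) - w // 2
        low_mask = ((ys**2 + xs**2) <= radius**2).to(image.dtype)
        low = torch.fft.ifft2(
            torch.fft.ifftshift(freq * low_mask, dim=(-2, -1))).real
        high = torch.fft.ifft2(
            torch.fft.ifftshift(freq * (1 - low_mask), dim=(-2, -1))).real
        # Mix the weighted bands into the original image to diversify it.
        return (image + w_low * low + w_high * high).clamp(0, 1)

    def fbi_ifgsm(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Assumed integration: apply the blend to the input at every step
        # of a standard I-FGSM attack on the white-box source model.
        adv = x.clone().detach()
        for _ in range(steps):
            adv.requires_grad_(True)
            loss = F.cross_entropy(model(frequency_blend(adv)), y)
            grad = torch.autograd.grad(loss, adv)[0]
            adv = adv.detach() + alpha * grad.sign()
            adv = torch.max(torch.min(adv, x + eps), x - eps).clamp(0, 1)
        return adv

The adversarial images crafted on the source model would then be evaluated against unseen target models to measure transferability; how the thesis actually identifies the sensitive and insensitive frequency bands is not described in this record.
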
Advisors
Kim, Changick (김창익)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2023.2, [iii, 19 p.]

Keywords

Adversarial attacks; Transfer-based attack; Transferability; Input transformations

URI
http://hdl.handle.net/10203/309876
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032882&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
