Algorithm / hardware co-design for adversarial training acceleration through selective computations

The robustness of neural networks has drawn attention since early works showed that small perturbations of the input can cause misclassification. Among the many defense methods proposed against adversarial attacks, adversarial training is one of the most effective: it generates adversarial examples during training and learns from them so that the model classifies such examples accurately. Generating adversarial examples, however, requires multiple steps of forward pass and backpropagation per input, which incurs enormous computation cost and latency. This work accelerates adversarial training by selectively computing only the features whose gradients are large. It also proposes a fast gradient top-k method that exploits the distribution of FP32 exponents to identify the large gradients and compute only those selectively. Efficient hardware supporting the selective computation, the exponent approximation, and the gradient top-k is designed with small overhead.
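The cost the abstract refers to comes from the iterative attack inside the training loop: each attack step needs one full forward and one full backward pass. The thesis does not spell out its generation procedure here, so the following is a minimal PyTorch sketch of the standard multi-step PGD attack commonly used in adversarial training, assuming a classifier and inputs in [0, 1] (the function name and hyperparameters are illustrative):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """L-infinity PGD: each of the `steps` iterations costs one forward
    pass and one backward pass, which dominates adversarial training time."""
    # Random start inside the epsilon ball.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)       # forward pass
        grad, = torch.autograd.grad(loss, x_adv)      # backward pass
        x_adv = x_adv.detach() + alpha * grad.sign()  # ascent step
        # Project back into the epsilon ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

The fast gradient top-k idea can likewise be sketched from the abstract's description: the magnitude of an FP32 value is dominated by its 8-bit exponent field, so an approximate top-k by magnitude can be found from a 256-bin exponent histogram instead of a full sort. The sketch below is an assumption-laden illustration of that general technique, not the thesis's actual algorithm or hardware; `exponent_topk_mask` is a hypothetical name.

```python
import torch

def exponent_topk_mask(grad: torch.Tensor, k: int) -> torch.Tensor:
    """Approximate top-k-by-magnitude selection using only FP32 exponent bits.

    Rather than sorting all |grad| values, build a 256-bin histogram over the
    8-bit exponent field and pick the largest exponent threshold whose
    cumulative count (from the top bin downward) still covers k elements.
    Illustrative sketch, assuming `grad` is an FP32 tensor."""
    # Reinterpret the FP32 bits as int32 and isolate bits 23..30 (exponent).
    bits = grad.detach().float().contiguous().view(torch.int32)
    exponents = (bits >> 23) & 0xFF  # 0..255; the sign bit is masked away

    # Histogram of exponent values, accumulated from the largest bin down:
    # from_top[e] = number of elements with exponent >= e.
    hist = torch.bincount(exponents.flatten(), minlength=256)
    from_top = torch.flip(torch.cumsum(torch.flip(hist, [0]), 0), [0])

    # Largest exponent threshold that still covers at least k elements.
    covered = (from_top >= k).nonzero()
    threshold = int(covered.max()) if covered.numel() > 0 else 0

    return exponents >= threshold  # boolean mask of gradients to compute in FP32
```

A histogram-and-threshold flow like this maps naturally to hardware as simple counters over the exponent field, which is presumably what makes exponent-based gradient selection cheap to support; the design in the thesis itself may differ.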
Advisors
Kim, Lee-Sup (김이섭)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2023.2, [iv, 35 p.]

Keywords

CNN; Adversarial attack; Adversarial training; Selective activation computation; Selective gradient computation

URI
http://hdl.handle.net/10203/309838
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1033102&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
