Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network

Adversarial examples undermine the reliability of deep neural networks and raise potential security issues. Although adversarial training has been widely studied as a way to improve adversarial robustness, it operates in an over-parameterized regime and requires heavy computation and large memory budgets. To bridge adversarial robustness and model compression, we propose a novel adversarial pruning method, Masking Adversarial Damage (MAD), that employs second-order information of the adversarial loss. Using it, we can accurately estimate the adversarial saliency of model parameters and determine which parameters can be pruned without weakening adversarial robustness. Furthermore, we reveal that the model parameters of the initial layers are highly sensitive to adversarial examples and show that the compressed feature representation retains semantic information about the target objects. Through extensive experiments on three public datasets, we demonstrate that MAD effectively prunes adversarially trained networks without losing adversarial robustness and outperforms previous adversarial pruning methods.
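The abstract describes scoring parameters with second-order information of the adversarial loss and pruning those with low adversarial saliency. The snippet below is only a minimal, hedged sketch of that general idea, not the paper's MAD algorithm: it uses a one-step FGSM attack and an empirical Fisher diagonal (squared gradients) as a cheap stand-in for the Hessian, and the model, data, and perturbation budget are placeholder assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Placeholder model and data (assumptions, not from the paper).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.randn(128, 32)
y = torch.randint(0, 10, (128,))
eps = 0.03  # FGSM perturbation budget (assumed value)

# 1) Craft adversarial examples with a one-step FGSM attack.
x_adv = x.clone().requires_grad_(True)
F.cross_entropy(model(x_adv), y).backward()
x_adv = (x_adv + eps * x_adv.grad.sign()).detach()

# 2) Squared gradients of the adversarial loss serve as a diagonal
#    second-order proxy (empirical Fisher ~ Hessian diagonal).
model.zero_grad()
F.cross_entropy(model(x_adv), y).backward()
saliency = {
    # OBD-style score: roughly H_ii * w_i^2 (constant factor dropped).
    name: (p.grad ** 2) * (p.detach() ** 2)
    for name, p in model.named_parameters() if p.grad is not None
}

# 3) Mask out the fraction of weights with the lowest adversarial saliency.
prune_ratio = 0.5
all_scores = torch.cat([s.flatten() for s in saliency.values()])
threshold = torch.quantile(all_scores, prune_ratio)
with torch.no_grad():
    for name, p in model.named_parameters():
        if name in saliency:
            p.mul_((saliency[name] > threshold).float())

In the actual method, the second-order estimate and the pruning criterion are more refined than this diagonal approximation; the sketch is meant only to make the "saliency from second-order information of the adversarial loss" idea concrete.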
Publisher
Computer Vision Foundation, IEEE Computer Society
Issue Date
2022-06-21
Language
English
Citation

IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, pp. 15105-15115

ISSN
1063-6919
DOI
10.1109/CVPR52688.2022.01470
URI
http://hdl.handle.net/10203/299895
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.