DAPAS: Denoising Autoencoder to Prevent Adversarial Attack in Semantic Segmentation

Cited 11 times in Web of Science · Cited 9 times in Scopus
Deep learning techniques now achieve dramatic performance in computer vision, even outperforming humans on complex tasks such as ImageNet classification. However, deep learning based models have turned out to be vulnerable to small perturbations known as adversarial attacks. This is a problem for the safety and security of artificial intelligence and has recently been studied extensively. Such attacks can easily fool models for image classification, semantic segmentation, and object detection. We focus on adversarial attacks in the semantic segmentation task, since little work has been done in this setting. We point out that these attacks can be defended against with a denoising autoencoder, which removes the adversarial perturbation and restores the original image. We build a deep denoising autoencoder model to remove the adversarial perturbation and restore the clean image, experiment with various noise distributions, and verify the effect of the denoising autoencoder against adversarial attacks on the semantic segmentation task.
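The paper's own DAPAS architecture and training setup are not included in this record, but the defense the abstract describes — train a denoising autoencoder on clean/corrupted image pairs, then place it in front of the segmentation network at test time — can be sketched. Below is a minimal sketch assuming PyTorch; the layer sizes, the Gaussian noise model (`noise_std`), and the names `DenoisingAutoencoder` and `train_step` are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Illustrative convolutional encoder-decoder; NOT the DAPAS model."""
    def __init__(self):
        super().__init__()
        # Encoder: downsample the (possibly perturbed) input image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to a reconstruction of the clean image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, optimizer, clean_images, noise_std=0.1):
    # Corrupt clean images with Gaussian noise (one of the several noise
    # distributions the abstract mentions) and learn to reconstruct them.
    noisy = (clean_images + noise_std * torch.randn_like(clean_images)).clamp(0.0, 1.0)
    reconstruction = model(noisy)
    loss = nn.functional.mse_loss(reconstruction, clean_images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time the trained autoencoder would be prepended to the frozen segmentation network, e.g. `prediction = segmentation_model(dae(adversarial_image))`, so perturbations are removed before segmentation; `segmentation_model` here stands for any pretrained segmentation network and is an assumed name.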
Publisher
IEEE Computational Intelligence Society (CIS)
Issue Date
2020-07-20
Language
English
Citation
2020 International Joint Conference on Neural Networks (IJCNN 2020)
DOI
10.1109/IJCNN48605.2020.9207291
URI
http://hdl.handle.net/10203/277510
Appears in Collection
CS-Conference Papers (학술회의논문; conference papers)
Files in This Item
There are no files associated with this item.