Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example

Cited 10 times in Web of Science; cited 11 times in Scopus
  • Hit: 565
  • Download: 319
DC Field: Value (Language)
dc.contributor.author: Kwon, Hyun (ko)
dc.contributor.author: Yoon, Hyunsoo (ko)
dc.contributor.author: Choi, Daeseon (ko)
dc.date.accessioned: 2019-06-17T01:30:02Z
dc.date.available: 2019-06-17T01:30:02Z
dc.date.created: 2019-05-26
dc.date.issued: 2019-05
dc.identifier.citation: IEEE ACCESS, v.7, pp.60908 - 60919
dc.identifier.issn: 2169-3536
dc.identifier.uri: http://hdl.handle.net/10203/262605
dc.description.abstract: Deep neural networks (DNNs) show superior performance in image and speech recognition. However, adversarial examples, created by adding a small amount of noise to an original sample, can cause misclassification by a DNN. Conventional studies on adversarial examples have focused on causing misclassification by perturbing the entire image. In some cases, however, a restricted adversarial example may be required, in which only certain parts of the image are modified rather than the whole, yet the result still causes misclassification by the DNN. For example, when a road sign has already been installed, an attack may need to change only a specific part of the sign, such as by placing a sticker on it, to cause misidentification of the entire image. As another example, an attack may need to cause a DNN to misinterpret an image through minimal modulation of the image's outer border. In this paper, we propose a new restricted adversarial example that modifies only a restricted area to cause misclassification by a DNN while minimizing distortion from the original sample; the size of the restricted area can also be selected. We used the CIFAR10 and ImageNet datasets to evaluate performance, measuring the attack success rate and distortion of the restricted adversarial example while adjusting the size, shape, and position of the restricted area. The results show that the proposed scheme generates restricted adversarial examples with a 100% attack success rate while modifying only a small fraction of the whole image (approximately 14% for CIFAR10 and 1.07% for ImageNet) and minimizing the distortion distance.
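The abstract describes confining an adversarial perturbation to a chosen region while keeping distortion low. Below is a minimal, hypothetical sketch of that idea in PyTorch using a masked, gradient-based update: gradients are applied only where a binary mask is 1, so the perturbation stays inside the restricted area. The function name restricted_attack, the Adam optimizer, the L2 penalty weight, and the step count are illustrative assumptions, not the authors' published optimization, which may differ in its loss terms and update rule.

    import torch
    import torch.nn.functional as F

    def restricted_attack(model, x, target, mask, steps=200, lr=0.01, l2_weight=0.1):
        """Perturb x only where mask == 1 until model predicts target.

        x:      input image tensor, shape (1, C, H, W), values in [0, 1]
        target: desired (wrong) class index, LongTensor of shape (1,)
        mask:   binary tensor broadcastable to x; 1 marks modifiable pixels
        """
        # delta holds the perturbation; the mask zeroes it outside the
        # restricted area at every use, so untouched pixels never change.
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            adv = torch.clamp(x + delta * mask, 0.0, 1.0)  # keep valid pixel range
            logits = model(adv)
            if logits.argmax(dim=1).item() == target.item():
                break  # stop once the model outputs the attacker's target class
            # Targeted cross-entropy drives the misclassification; the L2 term
            # keeps the distortion inside the mask small.
            loss = F.cross_entropy(logits, target) + l2_weight * (delta * mask).pow(2).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
        return torch.clamp(x + delta.detach() * mask, 0.0, 1.0)

A mask matching the CIFAR10 fraction quoted in the abstract could, for instance, be a sticker-like square patch: a 12x12 region on a 32x32 image covers 144/1024, or roughly 14%, of the pixels.

    # Illustrative mask: a 12x12 patch on a 32x32 RGB CIFAR10 image (~14% of pixels).
    mask = torch.zeros(1, 3, 32, 32)
    mask[:, :, 10:22, 10:22] = 1.0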
dc.language: English
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title: Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example
dc.type: Article
dc.identifier.wosid: 000469416600001
dc.identifier.scopusid: 2-s2.0-85066832747
dc.type.rims: ART
dc.citation.volume: 7
dc.citation.beginningpage: 60908
dc.citation.endingpage: 60919
dc.citation.publicationname: IEEE ACCESS
dc.identifier.doi: 10.1109/ACCESS.2019.2915971
dc.contributor.localauthor: Yoon, Hyunsoo
dc.contributor.nonIdAuthor: Choi, Daeseon
dc.description.isOpenAccess: Y
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Deep neural network (DNN)
dc.subject.keywordAuthor: adversarial example
dc.subject.keywordAuthor: machine learning
dc.subject.keywordAuthor: evasion attack
dc.subject.keywordAuthor: restricted area