Selective Untargeted Evasion Attack: An Adversarial Example That Will Not Be Classified as Certain Avoided Classes

Cited 3 times in Web of Science, 3 times in Scopus
  • Hits: 630
  • Downloads: 410
DC Field | Value | Language
dc.contributor.author | Kwon, Hyun | ko
dc.contributor.author | Kim, Yongchul | ko
dc.contributor.author | Yoon, Hyunsoo | ko
dc.contributor.author | Choi, Daeseon | ko
dc.date.accessioned | 2019-07-05T02:30:17Z | -
dc.date.available | 2019-07-05T02:30:17Z | -
dc.date.created | 2019-06-14 | -
dc.date.issued | 2019-06 | -
dc.identifier.citation | IEEE ACCESS, v.7, pp.73493 - 73503 | -
dc.identifier.issn | 2169-3536 | -
dc.identifier.uri | http://hdl.handle.net/10203/262945 | -
dc.description.abstract | Deep neural networks (DNNs) have useful applications in machine learning tasks involving recognition and pattern analysis. Despite these favorable applications, DNNs can be exploited by adversarial examples. An adversarial example, created by adding a small amount of noise to an original sample, can cause a DNN to misclassify that sample. Under specific circumstances, it may be necessary to create a selective untargeted adversarial example that will not be classified as certain avoided classes. Such is the case, for example, when a modified tank cover causes misclassification by a DNN: the enemy equipped with the DNN must misclassify the modified tank as a class other than certain avoided classes, such as a tank, armored vehicle, or self-propelled gun. That is, a selective untargeted adversarial example is needed that will not be perceived as any of those avoided classes. In this study, we propose a selective untargeted adversarial example that achieves 100% attack success with minimal distortion. The proposed scheme creates a selective untargeted adversarial example that will not be classified as certain avoided classes while minimizing the distortion of the original sample. To generate the adversarial example, a transformation is performed that jointly minimizes the probability of the avoided classes and the distortion of the original sample. We evaluated the scheme on the MNIST and CIFAR-10 datasets using the TensorFlow library. The experimental results demonstrate that the proposed scheme creates selective untargeted adversarial examples with a 100% attack success rate and minimal distortion (1.325 and 34.762 for MNIST and CIFAR-10, respectively). | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Selective Untargeted Evasion Attack: An Adversarial Example That Will Not Be Classified as Certain Avoided Classes | -
dc.type | Article | -
dc.identifier.wosid | 000472205800001 | -
dc.identifier.scopusid | 2-s2.0-85067693598 | -
dc.type.rims | ART | -
dc.citation.volume | 7 | -
dc.citation.beginningpage | 73493 | -
dc.citation.endingpage | 73503 | -
dc.citation.publicationname | IEEE ACCESS | -
dc.identifier.doi | 10.1109/ACCESS.2019.2920410 | -
dc.contributor.localauthor | Yoon, Hyunsoo | -
dc.contributor.nonIdAuthor | Kim, Yongchul | -
dc.contributor.nonIdAuthor | Choi, Daeseon | -
dc.description.isOpenAccess | Y | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Machine learning | -
dc.subject.keywordAuthor | adversarial example | -
dc.subject.keywordAuthor | deep neural network (DNN) | -
dc.subject.keywordAuthor | avoided classes | -
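The dc.description.abstract field above describes a transformation that jointly minimizes the probability of certain avoided classes and the distortion of the original sample. The sketch below illustrates one plausible Carlini-Wagner-style formulation of that idea in TensorFlow; the function name, the logit-margin loss, and the fixed trade-off constant c are illustrative assumptions, not the authors' published method (which, per the abstract, would tune the transformation to reach 100% attack success at minimal distortion).

import tensorflow as tf

def selective_untargeted_attack(model, x, avoided, steps=500, lr=0.01, c=1.0):
    # Perturb x so that the model's top class falls outside `avoided`,
    # while keeping the L2 distortion to the original sample small.
    # Assumes a built Keras model whose output layer returns logits.
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    delta = tf.Variable(tf.zeros_like(x))         # perturbation to optimize
    num_classes = model.output_shape[-1]
    # 1.0 for avoided classes, 0.0 for allowed ones
    avoided_mask = tf.reduce_sum(tf.one_hot(avoided, num_classes), axis=0)
    opt = tf.keras.optimizers.Adam(lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            x_adv = tf.clip_by_value(x + delta, 0.0, 1.0)  # valid pixel range
            logits = model(x_adv)
            # Best logit among avoided classes vs. best among allowed classes
            z_avoided = tf.reduce_max(logits - 1e9 * (1.0 - avoided_mask), axis=-1)
            z_allowed = tf.reduce_max(logits - 1e9 * avoided_mask, axis=-1)
            # Zero once some allowed class outscores every avoided class
            attack_loss = tf.reduce_sum(tf.maximum(z_avoided - z_allowed, 0.0))
            distortion = tf.reduce_sum(tf.square(x_adv - x))
            loss = distortion + c * attack_loss
        grads = tape.gradient(loss, [delta])
        opt.apply_gradients(zip(grads, [delta]))
    return tf.clip_by_value(x + delta, 0.0, 1.0)

For an MNIST classifier, for instance, selective_untargeted_attack(model, x_batch, avoided=[0, 6, 8]) would search for perturbed images that the model classifies as anything but 0, 6, or 8. In practice c would be tuned (e.g., by binary search over its value, as in the Carlini-Wagner attack) to trade distortion against attack success.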