Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input

Cited 23 times in Web of Science · Cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Byun, JunYoung | ko
dc.contributor.author | Cho, Seungju | ko
dc.contributor.author | Kwon, Myung Joon | ko
dc.contributor.author | Kim, Hee-Seon | ko
dc.contributor.author | Kim, Changick | ko
dc.date.accessioned | 2022-11-25T03:00:28Z | -
dc.date.available | 2022-11-25T03:00:28Z | -
dc.date.created | 2022-11-18 | -
dc.date.issued | 2022-06-23 | -
dc.identifier.citation | 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, pp.15223 - 15232 | -
dc.identifier.issn | 1063-6919 | -
dc.identifier.uri | http://hdl.handle.net/10203/300964 | -
dc.description.abstract | The transferability of adversarial examples makes it possible to deceive black-box models, and transfer-based targeted attacks have attracted considerable interest due to their practical applicability. To maximize the transfer success rate, adversarial examples should avoid overfitting to the source model, and image augmentation is one of the primary approaches to this end. However, prior works rely on simple image transformations such as resizing, which limits input diversity. To overcome this limitation, we propose the object-based diverse input (ODI) method, which draws an adversarial image on a 3D object and induces the rendered image to be classified as the target class. Our motivation comes from humans' superior perception of an image printed on a 3D object: if the image is clear enough, humans can recognize its content under a variety of viewing conditions. Likewise, if an adversarial example looks like the target class to the model, the model should also classify the rendered image of the 3D object as the target class. The ODI method effectively diversifies the input by leveraging an ensemble of multiple source objects and randomizing viewing conditions. In our experiments on the ImageNet-Compatible dataset, ODI boosts the average targeted attack success rate from 28.3% to 47.0% compared to state-of-the-art methods. We also demonstrate the applicability of the ODI method to adversarial examples for the face verification task, where it again yields a substantial performance improvement. Our code is available at https://github.com/dreamflake/ODI. | -
dc.language | English | -
dc.publisher | IEEE Conference on Computer Vision and Pattern Recognition | -
dc.title | Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input | -
dc.type | Conference | -
dc.identifier.wosid | 000870783001004 | -
dc.identifier.scopusid | 2-s2.0-85139309775 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 15223 | -
dc.citation.endingpage | 15232 | -
dc.citation.publicationname | 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022 | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | New Orleans | -
dc.identifier.doi | 10.1109/CVPR52688.2022.01481 | -
dc.contributor.localauthor | Kim, Changick | -
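The abstract describes an iterative targeted attack in which the adversarial image is passed through a randomized transformation (in the paper, rendering onto a 3D object under random viewing conditions) before each gradient step. The toy numpy sketch below illustrates only that general structure; the linear "model", the `random_view` transform (a crude linear stand-in for 3D rendering), and all parameter values are illustrative assumptions, not the paper's actual implementation, which uses differentiable rendering and CNN classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "source model": logits = W @ x (stand-in for a CNN classifier).
num_classes, dim = 5, 32
W = rng.normal(size=(num_classes, dim))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def random_view(x, rng):
    # Stand-in for ODI's randomized 3D rendering: a random linear "viewing
    # transform" close to the identity, returned together with its matrix so
    # the gradient can be propagated through it.
    A = np.eye(dim) + 0.05 * rng.normal(size=(dim, dim))
    return A @ x, A

def odi_style_attack(x, target, steps=200, eps=0.5, alpha=0.02):
    # Iterative targeted attack: at each step, transform the current
    # adversarial input, push the transformed input toward the target class,
    # and project the perturbation back into an L_inf ball of radius eps.
    x_adv = x.copy()
    onehot = np.zeros(num_classes)
    onehot[target] = 1.0
    for _ in range(steps):
        x_view, A = random_view(x_adv, rng)
        p = softmax(W @ x_view)
        # Gradient of the targeted cross-entropy -log p_t w.r.t. x_adv:
        # logits z = W A x, dL/dz = p - onehot, so dL/dx = A^T W^T (p - onehot).
        grad = A.T @ W.T @ (p - onehot)
        x_adv = x_adv - alpha * np.sign(grad)       # descend the target loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # project into the eps-ball
    return x_adv

x = rng.normal(size=dim)
target = int(np.argmin(W @ x))   # attack toward the least-likely class
x_adv = odi_style_attack(x, target)
```

Because the perturbation is optimized under many random views rather than a single fixed input, it is less tailored to one exact forward pass, which is the intuition behind using input diversity to improve transfer to unseen black-box models.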
Appears in Collection: EE-Conference Papers (학술회의논문)
Files in This Item
There are no files associated with this item.