PSAT-GAN: Efficient Adversarial Attacks against Holistic Scene Understanding

DC Field | Value | Language
dc.contributor.author | Lin, Wang | ko
dc.contributor.author | Yoon, Kuk-Jin | ko
dc.date.accessioned | 2021-09-24T01:30:11Z | -
dc.date.available | 2021-09-24T01:30:11Z | -
dc.date.created | 2021-04-21 | -
dc.date.issued | 2021-08 | -
dc.identifier.citation | IEEE TRANSACTIONS ON IMAGE PROCESSING, v.30, pp.7541 - 7553 | -
dc.identifier.issn | 1057-7149 | -
dc.identifier.uri | http://hdl.handle.net/10203/287814 | -
dc.description.abstract | Recent advances in deep neural networks (DNNs) have facilitated high-end applications, including holistic scene understanding (HSU), in which many tasks run in parallel with the same visual input. Following this trend, various methods have been proposed to use DNNs to perform multiple vision tasks. However, these methods are task-specific and less effective when considering multiple HSU tasks. End-to-end demonstrations of adversarial examples, which generate one-to-many heterogeneous adversarial examples in parallel from the same input, are scarce. Additionally, one-to-many mapping of adversarial examples for HSU usually requires joint representation learning and flexible constraints on magnitude, which can render the prevalent attack methods ineffective. In this paper, we propose PSAT-GAN, an end-to-end framework that follows the pipeline of HSU. It is based on a mixture of generative models and an adversarial classifier that employs partial weight sharing to learn a one-to-many mapping of adversarial examples in parallel, each of which is effective for its corresponding task in HSU attacks. PSAT-GAN is further enhanced by applying novel adversarial and soft-constraint losses to generate effective perturbations and avoid studying transferability. Experimental results indicate that our method is efficient in generating both universal and image-dependent adversarial examples to fool HSU tasks under either targeted or non-targeted settings. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | PSAT-GAN: Efficient Adversarial Attacks against Holistic Scene Understanding | -
dc.type | Article | -
dc.identifier.wosid | 000693758500002 | -
dc.identifier.scopusid | 2-s2.0-85114651183 | -
dc.type.rims | ART | -
dc.citation.volume | 30 | -
dc.citation.beginningpage | 7541 | -
dc.citation.endingpage | 7553 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON IMAGE PROCESSING | -
dc.identifier.doi | 10.1109/TIP.2021.3106807 | -
dc.embargo.liftdate | 9999-12-31 | -
dc.embargo.terms | 9999-12-31 | -
dc.contributor.localauthor | Yoon, Kuk-Jin | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Task analysis | -
dc.subject.keywordAuthor | Perturbation methods | -
dc.subject.keywordAuthor | Visualization | -
dc.subject.keywordAuthor | Pipelines | -
dc.subject.keywordAuthor | Autonomous vehicles | -
dc.subject.keywordAuthor | Semantics | -
dc.subject.keywordAuthor | Generative adversarial networks | -
dc.subject.keywordAuthor | Adversarial attack | -
dc.subject.keywordAuthor | holistic scene understanding | -
dc.subject.keywordAuthor | multi-task learning | -
dc.subject.keywordAuthor | generative model | -
Appears in Collection
ME-Journal Papers (Journal Papers)
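
The abstract describes a one-to-many generator in which task-specific heads share part of their weights with a common trunk and are trained with a per-task adversarial loss plus a soft constraint on perturbation magnitude. The PyTorch sketch below illustrates that general idea only; it is a minimal approximation under stated assumptions, not the authors' PSAT-GAN implementation, and every module, function, and hyperparameter name (SharedTrunkGenerator, soft_constraint, attack_loss, budget, lam) is hypothetical.

```python
# Minimal sketch of partial weight sharing for one-to-many perturbation
# generation, as gestured at in the abstract. Not the authors' code.
import torch
import torch.nn as nn


class SharedTrunkGenerator(nn.Module):
    """Shared trunk + one lightweight head per HSU task (hypothetical design)."""

    def __init__(self, num_tasks: int, channels: int = 3):
        super().__init__()
        # Shared layers: the "partial weight sharing" across task-specific heads.
        self.trunk = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # One head per task realizes the one-to-many mapping from a single input.
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Conv2d(64, channels, 3, padding=1), nn.Tanh())
            for _ in range(num_tasks)
        ])

    def forward(self, x):
        feats = self.trunk(x)
        # Returns one perturbation per task, all computed in a single forward pass.
        return [head(feats) for head in self.heads]


def soft_constraint(perturbations, budget: float = 8.0 / 255.0):
    # Soft penalty on perturbation magnitude instead of a hard clip, loosely
    # mirroring the "soft-constraint loss" mentioned in the abstract.
    return sum(torch.relu(p.abs() - budget).mean() for p in perturbations)


def attack_loss(task_models, x, perturbations, targets, lam: float = 10.0):
    # Non-targeted variant: push each task model away from its label by
    # maximizing its task loss (i.e., minimizing the negated loss here).
    # Each task model is assumed to expose a per-task loss(x, y) method.
    adv = 0.0
    for model, delta, y in zip(task_models, perturbations, targets):
        adv = adv - model.loss(x + delta, y)
    return adv + lam * soft_constraint(perturbations)
```

Keeping the trunk shared and the heads lightweight is what would let all task-specific perturbations be produced in parallel from the same input, which is the property the abstract emphasizes.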