Interpreting Deep Neural Networks with Relative Sectional Propagation by Analyzing Comparative Gradients and Hostile Activations

Cited 2 times in Web of Science; cited 0 times in Scopus.
DC Field: Value (Language)
dc.contributor.author: Nam, Woo Jeong (ko)
dc.contributor.author: Choi, Jaesik (ko)
dc.contributor.author: Lee, Seong-Whan (ko)
dc.date.accessioned: 2021-07-07T05:10:16Z
dc.date.available: 2021-07-07T05:10:16Z
dc.date.created: 2021-07-07
dc.date.issued: 2021-02-06
dc.identifier.citation: AAAI Conference on Artificial Intelligence, pp.11604 - 11612
dc.identifier.issn: 2159-5399
dc.identifier.uri: http://hdl.handle.net/10203/286471
dc.description.abstract: The transparency of Deep Neural Networks (DNNs) is hampered by their complex internal structures and the nonlinear transformations along their deep hierarchies. In this paper, we propose a new attribution method, Relative Sectional Propagation (RSP), for fully decomposing output predictions into attributions that are class-discriminative and exhibit clear objectness. We revisit the shortcomings of backpropagation-based attribution methods, which face trade-offs when decomposing DNNs. We define a hostile factor as an element that interferes with finding the attributions of the target, and we propagate it separately to overcome the non-suppressed nature of activated neurons. As a result, we can assign bi-polar relevance scores, positive for target attributions and negative for hostile ones, while keeping each attribution aligned with its importance. We also present purging techniques that prevent the gap between the relevance scores of target and hostile attributions from shrinking during backward propagation, by eliminating units that conflict with the channel-wise attribution map. Our method therefore decomposes the predictions of DNNs with clearer class-discriminativeness and more detailed elucidation of activated neurons than conventional attribution methods. In a verified experimental environment, we report results on three assessments, (i) the Pointing Game, (ii) mIoU, and (iii) Model Sensitivity, using the PASCAL VOC 2007, MS COCO 2014, and ImageNet datasets. The results demonstrate that our method outperforms existing backward decomposition methods while yielding distinctive and intuitive visualizations.
dc.language: English
dc.publisher: Association for the Advancement of Artificial Intelligence
dc.title: Interpreting Deep Neural Networks with Relative Sectional Propagation by Analyzing Comparative Gradients and Hostile Activations
dc.type: Conference
dc.identifier.wosid: 000681269803032
dc.identifier.scopusid: 2-s2.0-85111434655
dc.type.rims: CONF
dc.citation.beginningpage: 11604
dc.citation.endingpage: 11612
dc.citation.publicationname: AAAI Conference on Artificial Intelligence
dc.identifier.conferencecountry: CN
dc.identifier.conferencelocation: Virtual
dc.contributor.localauthor: Choi, Jaesik
dc.contributor.nonIdAuthor: Nam, Woo Jeong
dc.contributor.nonIdAuthor: Lee, Seong-Whan
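The abstract describes assigning bi-polar relevance scores, positive for target attributions and negative for hostile ones, via a backward decomposition. The following is a toy sketch of that general idea for a single linear layer, in the style of LRP-like relevance propagation; it is not the paper's RSP implementation, and the function name, the eps stabilizer, and the example tensors are illustrative assumptions.

```python
import numpy as np

def bipolar_relevance(x, W, R_out, eps=1e-9):
    """Redistribute output relevance R_out back to the inputs x through
    weights W, keeping excitatory and inhibitory contributions separate.
    Illustrative sketch only; not the RSP algorithm itself."""
    z = x[:, None] * W                      # contribution of input i to output j
    z_pos = np.clip(z, 0.0, None)           # excitatory (target-aligned) parts
    z_neg = np.clip(z, None, 0.0)           # inhibitory ("hostile") parts
    # Positive relevance: each output's relevance is shared among the
    # excitatory contributions in proportion to their magnitude.
    R_pos = (z_pos / (z_pos.sum(axis=0) + eps)) @ R_out
    # Negative relevance: relevance routed through inhibitory paths is
    # given a negative sign to mark it as hostile evidence.
    R_neg = -(z_neg / (z_neg.sum(axis=0) - eps)) @ R_out
    return R_pos, R_neg

x = np.array([1.0, 2.0, -1.0])              # toy input activations
W = np.array([[0.5, -0.2],
              [0.3,  0.8],
              [-0.4, 0.1]])                 # toy weights: 3 inputs, 2 outputs
R_out = np.array([1.0, 0.5])                # relevance assigned to the outputs
R_pos, R_neg = bipolar_relevance(x, W, R_out)
```

In this sketch, the positive scores conserve the total output relevance (`R_pos` sums to the sum of `R_out`), while `R_neg` flags inputs that acted against the target outputs; RSP's actual sectional propagation and purging steps are considerably more involved.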
Appears in Collection
AI-Conference Papers(학술대회논문)
Files in This Item
There are no files associated with this item.