Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks

Cited 32 times in Web of Science; cited 0 times in Scopus
  • Hits: 196
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Nam, Woo-Jeoung | ko
dc.contributor.author | Gur, Shir | ko
dc.contributor.author | Choi, Jaesik | ko
dc.contributor.author | Wolf, Lior | ko
dc.contributor.author | Lee, Seong-Whan | ko
dc.date.accessioned | 2020-12-29T02:50:11Z | -
dc.date.available | 2020-12-29T02:50:11Z | -
dc.date.created | 2020-12-02 | -
dc.date.issued | 2020-02-11 | -
dc.identifier.citation | 34th AAAI Conference on Artificial Intelligence, AAAI 2020, pp. 2501-2508 | -
dc.identifier.issn | 2159-5399 | -
dc.identifier.uri | http://hdl.handle.net/10203/279219 | -
dc.description.abstract | As Deep Neural Networks (DNNs) have demonstrated superhuman performance in a variety of fields, there is an increasing interest in understanding the complex internal mechanisms of DNNs. In this paper, we propose Relative Attributing Propagation (RAP), which decomposes the output predictions of DNNs with a new perspective of separating the relevant (positive) and irrelevant (negative) attributions according to the relative influence between the layers. The relevance of each neuron is identified with respect to its degree of contribution, separated into positive and negative, while preserving the conservation rule. Considering the relevance assigned to neurons in terms of relative priority, RAP allows each neuron to be assigned with a bi-polar importance score concerning the output: from highly relevant to highly irrelevant. Therefore, our method makes it possible to interpret DNNs with much clearer and attentive visualizations of the separated attributions than the conventional explaining methods. To verify that the attributions propagated by RAP correctly account for each meaning, we utilize the evaluation metrics: (i) Outside-inside relevance ratio, (ii) Segmentation mIOU and (iii) Region perturbation. In all experiments and metrics, we present a sizable gap in comparison to the existing literature. | -
dc.language | English | -
dc.publisher | AAAI | -
dc.title | Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks | -
dc.type | Conference | -
dc.identifier.wosid | 000667722802070 | -
dc.identifier.scopusid | 2-s2.0-85090402924 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 2501 | -
dc.citation.endingpage | 2508 | -
dc.citation.publicationname | 34th AAAI Conference on Artificial Intelligence, AAAI 2020 | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | Hilton New York Midtown | -
dc.contributor.localauthor | Choi, Jaesik | -
dc.contributor.nonIdAuthor | Nam, Woo-Jeoung | -
dc.contributor.nonIdAuthor | Gur, Shir | -
dc.contributor.nonIdAuthor | Wolf, Lior | -
dc.contributor.nonIdAuthor | Lee, Seong-Whan | -
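
To make the abstract's description of relevance propagation concrete, the sketch below redistributes an output relevance score back through a single fully connected layer while keeping positive and negative contributions separate. This is a minimal illustration of the general idea, not the exact RAP rule from the paper: the function name, the epsilon stabilizer, and the per-polarity normalization are assumptions made for the example, which relies only on NumPy.

import numpy as np

def propagate_relevance(a, W, R_out, eps=1e-9):
    """Redistribute output relevance R_out to the inputs of one linear layer.

    a     : (n_in,)        input activations
    W     : (n_in, n_out)  layer weights
    R_out : (n_out,)       relevance assigned to the layer's outputs
    Returns (R_pos, R_neg): positive and negative relevance per input unit.
    """
    z = a[:, None] * W                    # per-connection contributions z_jk = a_j * w_jk
    z_pos = np.clip(z, 0.0, None)         # relevant (positive) contributions
    z_neg = np.clip(z, None, 0.0)         # irrelevant (negative) contributions
    # Normalize each polarity by its column sum so the redistributed relevance
    # is conserved within that polarity (a simplified conservation rule).
    R_pos = z_pos @ (R_out / (z_pos.sum(axis=0) + eps))
    R_neg = z_neg @ (R_out / (z_neg.sum(axis=0) - eps))
    return R_pos, R_neg

# Toy usage: one layer, relevance initialized at the output unit being explained.
rng = np.random.default_rng(0)
a = rng.random(4)                         # hypothetical input activations
W = rng.normal(size=(4, 3))               # hypothetical weights of one linear layer
R_out = np.array([1.0, 0.0, 0.0])         # explain the first output unit
R_pos, R_neg = propagate_relevance(a, W, R_out)
print("positive relevance:", R_pos)
print("negative relevance:", R_neg)

In a full network this step would be applied layer by layer from the output back to the input. RAP's actual rules for prioritizing and redistributing positive versus negative relevance differ in detail; see the paper cited above for the precise formulation.
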
Appears in Collection
RIMS Conference Papers
Files in This Item
There are no files associated with this item.
