Separable explanations of deep neural network decisions

DC Field: Value
dc.contributor.advisor: Kim, Kee Eung
dc.contributor.advisor: 김기응
dc.contributor.advisor: Cheong, Otfried
dc.contributor.advisor: 정지원
dc.contributor.author: Rieger, Laura Simone
dc.date.accessioned: 2019-09-04T02:45:55Z
dc.date.available: 2019-09-04T02:45:55Z
dc.date.issued: 2018
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=734104&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/267005
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): School of Computing, 2018.2, [iv, 46 p.]
dc.description.abstract: In this thesis we examine an effect that occurs when applying Deep Taylor Decomposition to neural networks. Deep Taylor Decomposition is a method for explaining the decisions of a neural network by mapping the output decision back onto the input. When it is applied to a non-dominant class, the resulting heatmap looks identical to the heatmap for the dominant class with the highest output; it is therefore not a meaningful explanation of the non-dominant class. We examine this diffusion of explanations by training neural networks with varying numbers of classes and varying network structures, and we identify two potential causes. We then explore and analyze strategies to counter these causes and reduce diffusion. All strategies follow one of two approaches: either the structure of the network is changed before training to encourage separation of heatmaps, or the relevance propagation rules are changed (an illustrative sketch of such a rule follows the field listing below). To objectively rate and compare these strategies, we introduce a metric that measures the amount of diffusion present and validate it against visible differences between explanations. The metric, alongside exemplary images, is used to judge the success of the mitigation strategies. For large numbers of classes, we found no efficient way to reduce diffusion by changing the network structure, but changed relevance propagation rules can be applied and lead to sensible explanations. For small numbers of classes, it is possible to change the network structure without large losses in performance or training cost and thereby acquire sensible explanations for non-dominant classes.
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: neural networks; explainability; deep learning; interpretability; deep Taylor
dc.subject: 뉴럴 네트워크; 설명 가능성; 딥러닝; 해석 가능성; 딥 테일러
dc.title: Separable explanations of deep neural network decisions
dc.title.alternative: 심층 신경망 의사 결정의 분리 가능한 설명
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST): School of Computing
dc.contributor.alternativeauthor: 리거 로라
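
Note on the method (illustrative, not taken from the thesis itself): the relevance propagation referred to in the abstract is, in Deep Taylor Decomposition, commonly instantiated by the z+ rule. For a fully connected ReLU layer with non-negative inputs x and weights W, the relevance R_j of each output is redistributed to the inputs as R_i = sum_j (x_i * w_ij^+ / sum_i' x_i' * w_i'j^+) * R_j, where w^+ denotes the positive part of the weights. A minimal NumPy sketch under these assumptions (all names are illustrative):

    import numpy as np

    def zplus_relevance(x, W, R_out, eps=1e-9):
        """Propagate relevance back through one dense ReLU layer with the
        z+ rule: R_i = sum_j (x_i * w_ij^+ / sum_i' x_i' * w_i'j^+) * R_j."""
        Wp = np.maximum(W, 0.0)   # keep only the positive part of the weights
        z = x @ Wp + eps          # total positive contribution per output unit
        s = R_out / z             # relevance per unit of positive contribution
        return x * (Wp @ s)       # redistribute relevance onto the inputs

    # Toy usage: a 2-input, 3-output layer.
    x = np.array([1.0, 2.0])                  # input activations (non-negative)
    W = np.array([[0.5, -1.0, 0.2],
                  [1.5,  0.3, -0.4]])
    R_out = np.array([1.0, 0.5, 0.25])        # relevance assigned to the outputs
    R_in = zplus_relevance(x, W, R_out)
    print(R_in, R_in.sum())                   # input relevance sums to ~1.75

Because each output's relevance is split in proportion to the positive contributions x_i * w_ij^+, total relevance is conserved (up to the stabilizing eps) from layer to layer; the diffusion effect studied in the thesis concerns how the redistributed relevance fails to separate between output classes, not this conservation property.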
Appears in Collection
CS-Theses_Master (석사논문, Master's Theses)
Files in This Item
There are no files associated with this item.
