DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Kim, Kee Eung | - |
dc.contributor.advisor | 김기응 | - |
dc.contributor.advisor | Cheong, Otfried | - |
dc.contributor.advisor | 정지원 | - |
dc.contributor.author | Rieger, Laura Simone | - |
dc.date.accessioned | 2019-09-04T02:45:55Z | - |
dc.date.available | 2019-09-04T02:45:55Z | - |
dc.date.issued | 2018 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=734104&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/267005 | - |
dc.description | Thesis (Master's) - 한국과학기술원 (KAIST): School of Computing, 2018.2, [iv, 46 p.] | - |
dc.description.abstract | In this thesis we examine an effect that occurs when Deep Taylor Decomposition is applied to neural networks. Deep Taylor Decomposition is a method that explains the decisions of a neural network by mapping the output decision back onto the input. When it is applied to a non-dominant class, the resulting heatmap looks identical to the heatmap for the dominant class, i.e. the class with the highest output, and is therefore not a meaningful explanation of the non-dominant class. We examine this diffusion of explanations by training neural networks with varying numbers of classes and varying network structures, and identify two potential causes. Subsequently, we explore and analyze strategies to counter these causes and reduce diffusion. All strategies follow one of two approaches: either the structure of the neural network is changed before training to encourage separation of the heatmaps, or the relevance propagation rules are changed. To objectively rate and compare these strategies, we introduce a metric that measures the amount of diffusion present and examine its validity against visible differences between explanations. This metric, alongside exemplary images, is used to judge the success of the mitigation strategies. For large numbers of classes, we found no efficient way to reduce diffusion by changing the network structure, but modified relevance propagation rules can be applied and lead to sensible explanations. For small numbers of classes, it is possible to change the network structure without large losses in performance or training cost and thereby acquire sensible explanations for non-dominant classes. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | neural networks; explainability; deep learning; interpretability; deep taylor | - |
dc.subject | neural networks; explainability; deep learning; interpretability; deep Taylor (Korean keywords) | - |
dc.title | Separable explanations of deep neural network decisions | - |
dc.title.alternative | 심층 신경망 의사 결정의 분리 가능한 설명 | - |
dc.type | Thesis (Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 (KAIST): School of Computing | - |
dc.contributor.alternativeauthor | 리거 로라 | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
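The abstract above refers to relevance propagation rules that map a network's output decision back onto its input. As a rough illustration of what such a rule does, the following is a minimal sketch of the z+ rule from Deep Taylor Decomposition for a single ReLU dense layer; the function name, the toy weights, and the activations are my own illustration and are not taken from the thesis itself.

```python
import numpy as np

def zplus_relevance(W, a, R_out, eps=1e-9):
    """Propagate relevance R_out back through one ReLU dense layer
    using the z+ rule from Deep Taylor Decomposition.
    W: weights (in_dim x out_dim), a: input activations (in_dim,),
    R_out: relevance assigned to the layer's outputs (out_dim,)."""
    Wp = np.maximum(W, 0.0)   # z+ rule: keep only positive weights
    z = a @ Wp + eps          # positive pre-activations per output neuron
    s = R_out / z             # relevance normalized by each neuron's input
    return a * (Wp @ s)       # redistribute relevance onto the inputs

# toy layer: 3 inputs, 2 output classes
W = np.array([[1.0, -0.5],
              [0.5,  1.0],
              [-1.0, 0.5]])
a = np.array([1.0, 2.0, 0.5])
R_out = np.array([1.0, 0.0])  # explain only the first output neuron

R_in = zplus_relevance(W, a, R_out)
print(R_in)  # per-input relevance; sums to (approximately) R_out.sum()
```

Selecting `R_out` as a one-hot vector for a non-dominant class is exactly the setting the thesis studies: applying the rule layer by layer yields the input heatmap, and the thesis observes that for non-dominant classes this heatmap collapses onto the dominant class's explanation.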