Interpretation of Lesional Detection via Counterfactual Generation

Cited 3 times in Web of Science; cited 0 times in Scopus.
To interpret the decisions of deep neural networks (DNNs), explainable artificial intelligence has been widely investigated. In particular, visualizing attribution maps is known as one of the most effective ways to explain a trained network's predictions. Applying existing visualization methods to medical images raises significant issues, since medical images commonly suffer from inherent class imbalance and data scarcity. To tackle these issues and provide more accurate explanations for medical images, in this paper we propose a new explainable framework, the Counterfactual Generative Network (CGN). We embed the counterfactual lesion predictions of DNNs into our explainable framework as prior conditions and guide it to generate various counterfactual lesional images from normal input sources, or vice versa. By doing so, CGN can represent detailed attribution maps and generate the corresponding normal images from lesional inputs. Extensive experiments on two chest X-ray datasets verify the effectiveness of our method.
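The mechanism the abstract describes (conditioning a generator on a target counterfactual class and reading the attribution map off the difference between input and counterfactual) can be sketched as follows. This is a minimal PyTorch illustration, not the paper's actual architecture: CounterfactualGenerator, attribution_map, and all layer choices are hypothetical stand-ins for the CGN described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CounterfactualGenerator(nn.Module):
    """Hypothetical conditional generator: maps an input X-ray plus a
    target-class condition to a counterfactual image of that class."""

    def __init__(self, channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.num_classes = num_classes
        # The class condition is broadcast as extra input channels.
        self.net = nn.Sequential(
            nn.Conv2d(channels + num_classes, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1),
            nn.Tanh(),  # images assumed normalized to [-1, 1]
        )

    def forward(self, x: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Tile the one-hot target label into spatial condition maps.
        b, _, h, w = x.shape
        onehot = F.one_hot(target, self.num_classes).float()
        cond = onehot.view(b, self.num_classes, 1, 1).expand(b, self.num_classes, h, w)
        return self.net(torch.cat([x, cond], dim=1))


def attribution_map(x: torch.Tensor, x_cf: torch.Tensor) -> torch.Tensor:
    """Pixel-wise absolute difference between the input and its
    counterfactual, used here as a stand-in for the attribution map."""
    return (x - x_cf).abs().sum(dim=1, keepdim=True)


# Usage: turn a "normal" image into a "lesional" counterfactual.
g = CounterfactualGenerator()
x = torch.rand(4, 1, 64, 64) * 2 - 1      # toy batch of X-rays in [-1, 1]
target = torch.ones(4, dtype=torch.long)  # condition on the lesion class
x_cf = g(x, target)
attr = attribution_map(x, x_cf)           # highlights the changed regions
print(attr.shape)                         # torch.Size([4, 1, 64, 64])
```

Training such a generator (adversarial losses, classifier guidance, etc.) is omitted; the sketch only shows how a counterfactual condition and a difference-based attribution map fit together.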
Publisher
IEEE Signal Processing Society
Issue Date
2021-09-20
Language
English
Citation
IEEE International Conference on Image Processing (ICIP), pp. 96-100
ISSN
1522-4880
DOI
10.1109/ICIP42928.2021.9506282
URI
http://hdl.handle.net/10203/287917
Appears in Collection
EE-Conference Papers
Files in This Item
There are no files associated with this item.
