Training Auxiliary Prototypical Classifiers for Explainable Anomaly Detection in Medical Image Segmentation

Machine learning-based algorithms using fully convolutional networks (FCNs) have been a promising option for medical image segmentation. However, such deep networks silently fail if input samples are drawn far from the training data distribution, causing critical problems in automatic data processing pipelines. To overcome such out-of-distribution (OoD) problems, we propose a novel OoD score formulation and its regularization strategy by applying an auxiliary add-on classifier to an intermediate layer of an FCN, where the auxiliary module is helpful for analyzing the encoder output features by taking their class information into account. Our regularization strategy trains the module along with the FCN via the principle of outlier exposure, so that the model learns to distinguish OoD samples from normal ones without modifying the original network architecture. Our extensive experimental results demonstrate that the proposed approach conducts effective OoD detection without loss of segmentation performance. In addition, our module can provide reasonable explanation maps along with OoD scores, enabling users to analyze the reliability of predictions.
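The prototypical-classifier idea behind the abstract can be illustrated with a minimal sketch: score each encoder feature by its distance to the nearest learned class prototype, so features far from every prototype look out-of-distribution. The function name, shapes, and the squared-Euclidean metric below are assumptions for illustration; the paper's exact score formulation and outlier-exposure regularization are not reproduced here.

```python
import numpy as np

def ood_score(features, prototypes):
    """Per-feature OoD score: distance to the nearest class prototype.

    features:   (N, D) flattened encoder features for N spatial positions
    prototypes: (C, D) one learned prototype per class

    A large minimum distance suggests the feature lies far from every
    known class, i.e. the input may be out-of-distribution.
    """
    # pairwise squared Euclidean distances via broadcasting, shape (N, C)
    d2 = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    # minimum over classes gives one score per feature, shape (N,)
    return d2.min(axis=1)

# toy example: 2 class prototypes in a 3-D feature space
prototypes = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]])
in_dist  = np.array([[0.9, 0.1, 0.0]])   # near the class-0 prototype
out_dist = np.array([[5.0, 5.0, 5.0]])   # far from both prototypes
print(ood_score(in_dist, prototypes))    # small score -> in-distribution
print(ood_score(out_dist, prototypes))   # large score -> likely OoD
```

In a real pipeline the scores would be computed over the whole feature map and could be upsampled to the input resolution, which is one way such a module can yield spatial explanation maps alongside a scalar OoD score.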
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2023-01
Language
English
Citation

23rd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2023, pp.2623 - 2632

DOI
10.1109/WACV56688.2023.00265
URI
http://hdl.handle.net/10203/305994
Appears in Collection
AI-Conference Papers (Conference Papers)
