Gaussian Soft Decision Trees for Interpretable Feature-Based Classification

How can we accurately classify feature-based data such that the learned model and its results are more interpretable? Interpretability is beneficial from various perspectives, such as checking for compliance with existing knowledge and gaining insights from decision processes. To gain in both accuracy and interpretability, we propose a novel tree-structured classifier called Gaussian Soft Decision Trees (GSDT). GSDT is characterized by multi-branched structures, Gaussian mixture-based decisions, and a hinge loss with path regularization. These three key features let it learn short trees in which the weight vector of each node is a prototype for the data mapped to that node. We show that GSDT achieves the best average accuracy compared to eight baselines. We also perform an ablation study of the covariance matrix structures in the Gaussian mixture nodes of GSDT, and demonstrate the interpretability of GSDT in a case study of classification on a breast cancer dataset.
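To illustrate the kind of routing the abstract describes, the sketch below shows soft, multi-branch routing at a single tree node using Gaussian-mixture responsibilities, where each branch's mean vector acts as a prototype. This is a hypothetical minimal sketch assuming diagonal covariances; the function name, the diagonal-covariance choice, and the responsibility normalization are illustrative assumptions, not the paper's exact formulation (which also involves a hinge loss and path regularization not shown here).

```python
import numpy as np

def gaussian_node_routing(x, means, variances, priors):
    """Soft routing at one multi-branch node (illustrative sketch).

    Each branch k has a prototype mean, a diagonal variance vector, and a
    prior weight; the branch probabilities are the Gaussian-mixture
    responsibilities of x under those components.
    """
    log_scores = []
    for mu, var, pi in zip(means, variances, priors):
        # log N(x | mu, diag(var)) up to the mixture weight
        log_lik = -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))
        log_scores.append(log_lik + np.log(pi))
    log_scores = np.array(log_scores)
    # softmax over branches, shifted for numerical stability
    log_scores -= log_scores.max()
    p = np.exp(log_scores)
    return p / p.sum()
```

A sample routed through such a node follows every branch with some probability, so a prediction aggregates leaf outputs weighted by path probabilities; an example near a branch's prototype mean is routed to that branch with probability close to 1, which is what makes the prototypes readable.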
Publisher
Springer Science and Business Media Deutschland GmbH
Issue Date
2021-05-13
Language
English
Citation

25th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2021), pp. 143-155

ISSN
2945-9133
DOI
10.1007/978-3-030-75765-6_12
URI
http://hdl.handle.net/10203/311556
Appears in Collection
EE-Conference Papers
Files in This Item
There are no files associated with this item.
