Self-explaining deep models with logic rule reasoning

Abstract
We present SELOR, a framework for integrating self-explaining capabilities into a given deep model to achieve both high prediction performance and human precision. By "human precision", we refer to the degree to which humans agree with the reasons models provide for their predictions. Human precision affects user trust and allows users to collaborate closely with the model. We demonstrate that logic rule explanations naturally satisfy human precision with the expressive power required for good predictive performance. We then illustrate how to enable a deep model to predict and explain with logic rules. Our method does not require predefined logic rule sets or human annotations, and can be learned efficiently and easily with widely-used deep learning modules in a differentiable way. Extensive experiments show that our method gives explanations closer to human decision logic than other methods while maintaining the performance of deep learning models.
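As a rough, hedged illustration of the abstract's core idea (a deep model that predicts and explains with logic rules learned "in a differentiable way"), the sketch below shows one way such a model could be wired up in PyTorch: a backbone encoder scores a vocabulary of atomic conditions, a straight-through Gumbel-softmax selects the rule's antecedent atoms differentiably, and a small network estimates the consequent label from the selected rule alone. Every name and design choice here (RuleExplainer, the Gumbel-softmax selection, the consequent estimator) is an assumption made for illustration, not the authors' SELOR implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RuleExplainer(nn.Module):
    """Illustrative self-explaining classifier: input -> logic rule -> label."""
    def __init__(self, encoder, num_atoms, rule_len, num_classes, dim=256):
        super().__init__()
        self.encoder = encoder                     # any backbone mapping input -> (B, dim)
        self.atom_scorer = nn.Linear(dim, num_atoms)
        self.atom_emb = nn.Embedding(num_atoms, dim)
        self.rule_len = rule_len                   # number of atomic conditions per rule
        # Consequent estimator: maps the selected rule to a label distribution,
        # so the prediction is forced to flow through the explanation.
        self.consequent = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, num_classes))

    def forward(self, x, tau=1.0):
        h = self.encoder(x)                        # (B, dim)
        logits = self.atom_scorer(h)               # (B, num_atoms)
        picks = []
        for _ in range(self.rule_len):
            # Straight-through Gumbel-softmax: discrete atom choice in the
            # forward pass, differentiable surrogate in the backward pass.
            picks.append(F.gumbel_softmax(logits, tau=tau, hard=True))
        rule = torch.stack(picks, 1) @ self.atom_emb.weight  # (B, rule_len, dim)
        y = self.consequent(rule.mean(dim=1))      # label predicted from the rule alone
        return y, picks                            # picks index human-readable conditions

# Toy usage (shapes only; the atom vocabulary would hold readable predicates):
enc = nn.Sequential(nn.Linear(32, 256), nn.ReLU())
model = RuleExplainer(enc, num_atoms=50, rule_len=3, num_classes=2)
y, picks = model(torch.randn(4, 32))
antecedent = [p.argmax(-1) for p in picks]        # chosen atom ids per example

Routing the prediction through the generated rule, rather than attaching a post-hoc explainer, is one common way to keep the explanation tied to the prediction; the abstract's "human precision" then measures whether humans agree with the rules the model gives.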
Publisher
Neural Information Processing Systems (NeurIPS)
Issue Date
2022-12
Language
English
Citation
36th Annual Conference on Neural Information Processing Systems (NeurIPS 2022)
URI
http://hdl.handle.net/10203/299698
Appears in Collection
CS-Conference Papers (conference papers)