Interpreting and Explaining Deep Neural Networks: A Perspective on Time Series Data

DC Field: Value (Language)
dc.contributor.author: Choi, Jaesik (ko)
dc.date.accessioned: 2021-07-07T01:10:15Z
dc.date.available: 2021-07-07T01:10:15Z
dc.date.created: 2021-07-07
dc.date.issued: 2020-08-23
dc.identifier.citation: 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2020, pp.3563 - 3564
dc.identifier.uri: http://hdl.handle.net/10203/286463
dc.description.abstract: Explainable and interpretable machine learning models and algorithms are important topics that have received growing attention from research, application, and administration communities. Many complex Deep Neural Networks (DNNs) are often perceived as black boxes. Researchers would like to interpret what a DNN has learned in order to identify biases and failure modes and to improve models. In this tutorial, we provide a comprehensive overview of methods for analyzing deep neural networks and insight into how these interpretable and explainable methods help us understand time series data.
dc.language: English
dc.publisher: Association for Computing Machinery
dc.title: Interpreting and Explaining Deep Neural Networks: A Perspective on Time Series Data
dc.type: Conference
dc.identifier.scopusid: 2-s2.0-85090401893
dc.type.rims: CONF
dc.citation.beginningpage: 3563
dc.citation.endingpage: 3564
dc.citation.publicationname: 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2020
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: Virtual
dc.identifier.doi: 10.1145/3394486.3406478
dc.contributor.localauthor: Choi, Jaesik
Appears in Collection: RIMS Conference Papers
Files in This Item: There are no files associated with this item.