Most Relevant Explanation in Bayesian Networks

Cited 42 times in Web of Science; cited 0 times in Scopus
  • Hit : 896
  • Download : 259
DC Field | Value | Language
dc.contributor.author | Yuan, Changhe | ko
dc.contributor.author | Lim, Heejin | ko
dc.contributor.author | Lu, Tsai-Ching | ko
dc.date.accessioned | 2013-03-11T18:40:24Z | -
dc.date.available | 2013-03-11T18:40:24Z | -
dc.date.created | 2012-05-15 | -
dc.date.issued | 2011 | -
dc.identifier.citation | JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, v.42, pp.309 - 352 | -
dc.identifier.issn | 1076-9757 | -
dc.identifier.uri | http://hdl.handle.net/10203/99944 | -
dc.description.abstract | A major inference task in Bayesian networks is explaining why some variables are observed in their particular states using a set of target variables. Existing methods for solving this problem often generate explanations that are either too simple (underspecified) or too complex (overspecified). In this paper, we introduce a method called Most Relevant Explanation (MRE) which finds a partial instantiation of the target variables that maximizes the generalized Bayes factor (GBF) as the best explanation for the given evidence. Our study shows that GBF has several theoretical properties that enable MRE to automatically identify the most relevant target variables in forming its explanation. In particular, conditional Bayes factor (CBF), defined as the GBF of a new explanation conditioned on an existing explanation, provides a soft measure on the degree of relevance of the variables in the new explanation in explaining the evidence given the existing explanation. As a result, MRE is able to automatically prune less relevant variables from its explanation. We also show that CBF is able to capture well the explaining-away phenomenon that is often represented in Bayesian networks. Moreover, we define two dominance relations between the candidate solutions and use the relations to generalize MRE to find a set of top explanations that is both diverse and representative. Case studies on several benchmark diagnostic Bayesian networks show that MRE is often able to find explanatory hypotheses that are not only precise but also concise. | -
dc.language | English | -
dc.publisher | AI ACCESS FOUNDATION | -
dc.subject | CONSULTATION | -
dc.subject | PROBABILITIES | -
dc.subject | INDEPENDENCE | -
dc.subject | SYSTEMS | -
dc.subject | MODEL | -
dc.title | Most Relevant Explanation in Bayesian Networks | -
dc.type | Article | -
dc.identifier.wosid | 000296924200001 | -
dc.identifier.scopusid | 2-s2.0-82355183915 | -
dc.type.rims | ART | -
dc.citation.volume | 42 | -
dc.citation.beginningpage | 309 | -
dc.citation.endingpage | 352 | -
dc.citation.publicationname | JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH | -
dc.contributor.localauthor | Lim, Heejin | -
dc.contributor.nonIdAuthor | Yuan, Changhe | -
dc.contributor.nonIdAuthor | Lu, Tsai-Ching | -
dc.description.isOpenAccess | Y | -
dc.type.journalArticle | Article | -
dc.subject.keywordPlus | CONSULTATION | -
dc.subject.keywordPlus | PROBABILITIES | -
dc.subject.keywordPlus | INDEPENDENCE | -
dc.subject.keywordPlus | SYSTEMS | -
dc.subject.keywordPlus | MODEL | -
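
The abstract above defines MRE as the partial instantiation x of the target variables that maximizes the generalized Bayes factor for the evidence e, GBF(x; e) = P(e | x) / P(e | ~x). The following is a minimal brute-force sketch of that objective, not the paper's search algorithms: the three-node explaining-away network (independent binary causes A and B, one effect E) and all of its probabilities are hypothetical, invented only to illustrate the scoring.

    from itertools import combinations, product

    # Hypothetical parameters (invented for illustration, not from the paper).
    P_A = 0.10                 # prior P(A=1)
    P_B = 0.15                 # prior P(B=1)
    P_E1 = {                   # P(E=1 | A=a, B=b), noisy-OR style
        (0, 0): 0.01,
        (0, 1): 0.60,
        (1, 0): 0.70,
        (1, 1): 0.90,
    }

    def joint(a, b, e):
        """Joint probability P(A=a, B=b, E=e) under the toy network."""
        pa = P_A if a else 1.0 - P_A
        pb = P_B if b else 1.0 - P_B
        pe = P_E1[(a, b)] if e else 1.0 - P_E1[(a, b)]
        return pa * pb * pe

    def prob(assignment, e):
        """P(E=e, x), where x is a partial assignment over the targets {A, B}."""
        return sum(
            joint(a, b, e)
            for a, b in product((0, 1), repeat=2)
            if all({"A": a, "B": b}[v] == val for v, val in assignment.items())
        )

    def gbf(assignment):
        """GBF(x; e) = P(e | x) / P(e | ~x) for the evidence e = {E=1}.

        Assumes 0 < P(x) < 1 so the Bayes factor is well defined.
        """
        p_x = prob(assignment, 1) + prob(assignment, 0)   # P(x)
        p_e_and_x = prob(assignment, 1)                   # P(e, x)
        p_e = prob({}, 1)                                 # P(e)
        return (p_e_and_x / p_x) / ((p_e - p_e_and_x) / (1.0 - p_x))

    # MRE by exhaustive search: score every non-empty partial instantiation
    # of the target variables and keep the GBF maximizer.
    targets = ("A", "B")
    candidates = [
        dict(zip(subset, values))
        for k in (1, 2)
        for subset in combinations(targets, k)
        for values in product((0, 1), repeat=k)
    ]
    best = max(candidates, key=gbf)
    print("MRE:", best, "GBF = %.2f" % gbf(best))

With the hypothetical numbers above, a single-cause explanation outscores the full conjunction {A=1, B=1}, which mirrors the behavior the abstract describes: GBF lets MRE prune variables that add little explanatory relevance.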
Appears in Collection
RIMS Journal Papers
Files in This Item
74936.pdf (501.18 kB)
