Why did the AI make that decision? Towards an explainable artificial intelligence (XAI) for autonomous driving systems

Cited 13 times in Web of Science · Cited 0 times in Scopus
  • Hits: 132
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Dong, Jiqian | ko
dc.contributor.author | Chen, Sikai | ko
dc.contributor.author | Miralinaghi, Mohammad | ko
dc.contributor.author | CHEN, Tiantian | ko
dc.contributor.author | Li, Pei | ko
dc.contributor.author | Labi, Samuel | ko
dc.date.accessioned | 2023-11-28T07:00:10Z | -
dc.date.available | 2023-11-28T07:00:10Z | -
dc.date.created | 2023-11-28 | -
dc.date.issued | 2023-11 | -
dc.identifier.citation | TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, v.156 | -
dc.identifier.issn | 0968-090X | -
dc.identifier.uri | http://hdl.handle.net/10203/315331 | -
dc.description.abstract | User trust has been identified as a critical issue pivotal to the success of autonomous vehicle (AV) operations, in which artificial intelligence (AI) is widely adopted. For such integrated AI-based driving systems, one promising way of building user trust is through explainable artificial intelligence (XAI), which requires the AI system to provide the user with an explanation for each decision it makes. Motivated by both the need to enhance user trust and the promise of novel XAI technology in addressing this need, this paper seeks to enhance trustworthiness in autonomous driving systems through the development of explainable deep learning (DL) models. First, the paper casts the decision-making process of the AV system not as a classification task (the traditional formulation) but as an image-based language generation (image captioning) task. The proposed approach thus makes driving decisions by first generating textual descriptions of the driving scenario, which serve as explanations that humans can understand. To this end, a novel multi-modal DL architecture is proposed to jointly model the correlation between an image (the driving scenario) and language (its description). It adopts a fully Transformer-based structure and therefore has the potential to perform global attention and effectively imitate the learning processes of human drivers. The results suggest that the proposed model generates legal and meaningful sentences describing a given driving scenario, and subsequently generates appropriate driving decisions for autonomous vehicles (AVs). The proposed model is also observed to significantly outperform multiple baseline models in generating both explanations and driving actions. From the end user's perspective, the proposed model can enhance user trust because it provides the rationale behind an AV's actions. From the AV developer's perspective, the explanations from this explainable system could serve as a "debugging" tool to detect potential weaknesses in the existing system and identify specific directions for improvement. | -
dc.language | English | -
dc.publisher | PERGAMON-ELSEVIER SCIENCE LTD | -
dc.title | Why did the AI make that decision? Towards an explainable artificial intelligence (XAI) for autonomous driving systems | -
dc.type | Article | -
dc.identifier.wosid | 001097618600001 | -
dc.identifier.scopusid | 2-s2.0-85173541940 | -
dc.type.rims | ART | -
dc.citation.volume | 156 | -
dc.citation.publicationname | TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES | -
dc.identifier.doi | 10.1016/j.trc.2023.104358 | -
dc.contributor.localauthor | CHEN, Tiantian | -
dc.contributor.nonIdAuthor | Dong, Jiqian | -
dc.contributor.nonIdAuthor | Chen, Sikai | -
dc.contributor.nonIdAuthor | Miralinaghi, Mohammad | -
dc.contributor.nonIdAuthor | Li, Pei | -
dc.contributor.nonIdAuthor | Labi, Samuel | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Explainable AI (XAI) | -
dc.subject.keywordAuthor | Autonomous driving | -
dc.subject.keywordAuthor | User trust | -
dc.subject.keywordAuthor | Computer vision | -
dc.subject.keywordAuthor | End-to-end transformer | -
dc.subject.keywordAuthor | Visual attention | -
dc.subject.keywordPlus | NEURAL-NETWORK | -
dc.subject.keywordPlus | ARCHITECTURE | -
dc.subject.keywordPlus | ATTENTION | -
dc.subject.keywordPlus | VISION | -
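This record carries no code. As a rough illustration only of the "global attention" property the abstract attributes to the fully Transformer-based architecture (every image patch attends to every other patch, unlike a CNN's local receptive field), here is a minimal NumPy sketch of scaled dot-product self-attention over toy image-patch embeddings; the shapes and names are hypothetical and not taken from the paper:

```python
import numpy as np


def scaled_dot_product_attention(Q, K, V):
    """Global self-attention: every query position weighs every key position."""
    d_k = Q.shape[-1]
    # Similarity of each query to all keys, scaled to stabilize the softmax.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights summing to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output: each position is a weighted mix of all value vectors.
    return weights @ V, weights


# Toy "driving scene": 4 patch embeddings of dimension 8 (random stand-ins).
rng = np.random.default_rng(0)
patches = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(patches, patches, patches)
```

In the paper's setting, such attention layers would sit inside an encoder over scene patches and a decoder that emits the explanation text and driving action; this sketch shows only the core attention operation.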
Appears in Collection
GT-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.