(The) conflict between explainability and responsibility in algorithmic decision-making

Artificial intelligence (AI) systems are becoming pervasive in high-risk environments. Some prominent examples include the use of AI in the legal, medical, hiring, and transportation domains. These algorithms, however, are far from perfect. Algorithms deployed in the real world have shown discriminatory tendencies against racial and gender minorities, and self-driving cars have even caused the death of pedestrians. A natural question that arises is who should be held responsible when AI systems cause harm to individuals. Scholars contend that the technological and organizational complexity involved in developing and deploying AI in high-risk scenarios makes determining who is responsible difficult, if not impossible, creating a "responsibility gap." One of the sources of this gap is the opacity of current AI systems. Users, policymakers, and even developers do not understand how algorithms make decisions, making it difficult to identify the root cause of the harm. This limitation has fueled research on Explainable AI (XAI), which attempts to develop systems that provide explanations for their decisions and actions, helping to establish who is responsible for AI-caused harm. In contrast to the prevalent perspective that XAI could help bridge the responsibility gap in the context of algorithmic decision-making, I argue that algorithms providing post-hoc explanations could complicate the search for a responsible entity. I first show how post-hoc explanations could create the perception that AI systems (i.e., decision-making algorithms) and those subjected to algorithmic decisions (i.e., patients) are responsible when things go wrong. This perception can be exploited by AI developers, who can use these mistaken intuitions to escape deserved responsibility by using algorithms and patients as scapegoats. I then present three empirical studies (N = 1,153) exploring whether providing simple post-hoc explanations impacts the extent to which laypeople blame AI systems, their users, and developers for AI-caused harm. The findings suggest that while explainability alone does not affect folk perceptions of responsibility, specific types of explanations influence whom laypeople blame for AI-caused harm. I also explore what determines laypeople's blame judgments of AI and show that moral judgments of algorithms have unique features that are not found in judgments of human actors. Finally, I discuss how the control that developers have over explainable algorithms may allow them to implement specific post-hoc explanations that shift perceived responsibility to other actors, potentially impacting how AI is regulated. I show how this conflict between explainability and responsibility may be dealt with during the development of AI systems and defend hard regulation to prevent scapegoating.
Advisors
Cha, Meeyoung (차미영)
Description
Korea Advanced Institute of Science and Technology (KAIST) : School of Computing
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST) : School of Computing, 2023.2, [v, 59 p.]

Keywords

responsibility; explainability; AI; decision-making; algorithm

URI
http://hdl.handle.net/10203/309579
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032988&flag=dissertation
Appears in Collection
CS-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
