The Conflict Between People's Urge to Punish AI and Legal Systems

Abstract
Regulating artificial intelligence (AI) has become necessary in light of its deployment in high-risk scenarios. This paper explores the proposal to extend legal personhood to AI and robots, which had not yet been examined from the perspective of the general public. We present two studies (N = 3,559) that elicit people's views of electronic legal personhood vis-à-vis existing liability models. Our studies reveal people's desire to punish automated agents even though these entities are not attributed any mental state. Furthermore, people did not believe that punishing automated agents would achieve deterrence or retribution, and they were unwilling to grant such agents the preconditions of legal punishment, namely physical independence and assets. Collectively, these findings suggest a conflict between the desire to punish automated agents and the perceived impracticability of doing so. We conclude by discussing how future design and legal decisions may influence how the public reacts to automated agents' wrongdoings.
Publisher
FRONTIERS MEDIA SA
Issue Date
2021-11
Language
English
Article Type
Article
Citation
FRONTIERS IN ROBOTICS AND AI, v.8
ISSN
2296-9144
DOI
10.3389/frobt.2021.756242
URI
http://hdl.handle.net/10203/289672
Appears in Collection
CS-Journal Papers (Journal Papers); STP-Journal Papers (Journal Papers)
Files in This Item
122546.pdf (1.44 MB)