The possibility of extending legal personhood to artificial intelligence (AI) and robots has raised many questions about how these agents could be held liable under existing legal doctrines. To promote a broader discussion, we conducted a survey (N=3315) asking online users about their impressions of electronic agents' liability. The results suggest the existence of what we call the punishment gap: the public demands that automated agents be punished for legal offenses, even though punishing them is not currently feasible. Participants were also reluctant to grant assets or physical independence to electronic agents, both of which are crucial requirements for liability. We discuss possible solutions to this punishment gap and present how legal systems might handle this contradiction while continuing to hold existing legal persons liable for the actions of automated agents.