This paper argues that interactions with chatbots are a form of engagement with fictional characters; thus, by comparing chatbots with novels and video games as media of fictional engagement, we can gain a clearer understanding of who, if anyone, is responsible when users’ interactions with chatbots lead to self-harm or harm to others. We explore the differences between novels, video games, and chatbots across four dimensions: the degree of creators’ control over the content and user experience, the nature of the fictional world, the type of engagement each medium fosters, and the structure of the engagement experience. We adopt a minimal account of what it takes to be morally responsible and consider how responsibility can be assigned when engagement with fictional worlds results in harm caused to or by users. We argue that because AI companies retain some control over chatbots after public release, and because they can monitor user engagement, they are morally responsible when chatbot use leads to harm, even if they cannot perfectly control chatbots’ outputs. In the final section, we outline what AI companies can do to mitigate chatbots’ negative influence on users.
Journal article
Springer Nature
Published: 2026-02-12
Pages: 1–14
Subject classification: 50 Philosophy and Religious Studies; 5003 Philosophy; Machine Learning and Artificial Intelligence