This paper argues that interactions with chatbots are a form of engagement with fictional characters; so, by comparing chatbots with novels and video games as mediums of fictional engagement, we can gain a clearer understanding of who, if anyone, is responsible when users’ interactions with chatbots lead to self-harm or harm to others. We explore the differences between novels, video games, and chatbots across four dimensions: the degree of creators’ control over the content and user experience, the nature of the fictional world, the type of engagement each medium fosters, and the structure of the engagement experience. We adopt a minimal account of what it takes to be morally responsible and consider how responsibility can be assigned when engagement with fictional worlds results in harm caused to or by users. We argue that because AI companies retain some control over chatbots after public release, and because they can monitor user engagement, they are morally responsible when chatbot use leads to harm, even if they cannot perfectly control chatbots’ outputs. In the last section, we point to what AI companies can do to mitigate chatbots’ negative influence on users.

Original publication

DOI: 10.1007/s11245-026-10371-z
Type: Journal article
Publisher: Springer Nature
Publication Date: 2026-02-12
Pages: 1–14
Total pages: 13
Keywords: 50 Philosophy and Religious Studies, 5003 Philosophy, Machine Learning and Artificial Intelligence