A lot of the language we use to refer to AI, including in healthcare, uses terminology that originally and literally applies to humans and human relationships. Such terminology includes both non-evaluative terms, like ‘learning’, ‘memory’, and ‘intelligence’, and evaluative terms, like ‘trust’ or ‘responsibility’. In this article I focus on the latter type and the way it is applied specifically to the case of medical AI. Focusing on the discussion of the ‘responsibility gaps’ that, according to some, AI generates, I will suggest that such terminology is revealing of the nature of healthcare professional obligations and responsibility prior to, and independently of, the assessment of the use of AI tools in healthcare. The point I make is generalizable to AI as used and discussed more broadly: the language used to refer to AI often tells us more about humans and human relationships than about AI itself and our relationship with it. In healthcare, whatever else AI will allow us to do, it can prompt us to reflect more thoroughly on professional responsibility and professional obligations.

Type

Journal article

Journal

Journal of Bioethical Inquiry

Publisher

Springer Verlag

Publication Date

28/01/2025