It is not about AI, it is about humans. Responsibility gaps and medical AI
GIUBILINI A.
Much of the language we use to refer to AI, including in healthcare, consists of terminology that originally and literally applies to humans and human relationships. Such terminology includes both non-evaluative terms, such as ‘learning’, ‘memory’, and ‘intelligence’, and evaluative terms, such as ‘trust’ or ‘responsibility’. In this article I focus on the latter type and the way it is applied specifically to medical AI. Focusing on the discussion of the ‘responsibility gaps’ that, according to some, AI generates, I will suggest that such terminology reveals something about the nature of healthcare professionals’ obligations and responsibilities prior to, and independently of, any assessment of the use of AI tools in healthcare. The point generalizes to AI as used and discussed more broadly: the language used to refer to AI often tells us more about humans and human relationships than about AI itself and our relationship with it. In healthcare, whatever else AI will allow us to do, it can prompt us to reflect more thoroughly on professional responsibility and professional obligations.