
THIS SEMINAR WILL BE RESCHEDULED.

Abstract

Technological innovations in healthcare, perhaps now more than ever, present decisive opportunities for improvements in diagnostics, treatment, and overall quality of life. The use of artificial intelligence and big data processing, in particular, stands to revolutionize healthcare systems as we know them. But what effect do these technologies have on human agency and moral responsibility in healthcare? How can patients, practitioners, and the general public best respond when responsibility becomes obscured? In this project, I investigate the social and ethical challenges arising from new medical technologies, specifically the ways in which artificially intelligent systems may enhance or threaten responsibility in the delivery of healthcare. I suggest that if our ability to locate responsibility is threatened, we are left with a difficult dilemma. In short, it might seem that we should exercise extreme caution, or even restraint, in our use of state-of-the-art systems, but thereby lose out on benefits such as improved quality of care. Alternatively, we might need to loosen our commitment to locating moral responsibility when patients come to harm. What is clear, at least, is that the shift toward artificial intelligence and big data calls for a significant shift in expectations about how, if at all, we might locate agency and responsibility in emerging models of healthcare.