What do we owe to novel synthetic beings and how can we be sure?
Dr. Alex McKeown, MRC Postdoctoral Research Fellow, NEUROSEC, Department of Psychiatry, and Wellcome Centre for Ethics and Humanities, University of Oxford
Wednesday, 11 March 2020, 11am to 12.30pm
Ethox and the Wellcome Centre for Ethics and Humanities are based at the Big Data Institute, University of Oxford, Li Ka Shing Centre for Health Information and Discovery, Old Road Campus, Oxford OX3 7FZ. The talk will be held in seminar room 0.
Abstract
Here I argue that embodiment has hitherto been given insufficient weight in debates concerning the moral status of novel synthetic beings (NSBs) such as sentient or sapient Artificial Intelligences (AIs), the focus of this paper. Discussion about the moral status of AIs and our obligations to them typically turns on whether they are conscious, i.e. on their cognitive sophistication or self-awareness. Even if this is sufficient for moral status in an AI, however, it does not exhaust what is morally relevant. Since moral agency encompasses what a sentient or sapient being wants or ought to do, the means by which it can translate thought into action and enact choices in the world, or is restricted from doing so, is a feature of such agency. As such, in determining the moral status of NSBs and our obligations to them, we must consider how their corporeality shapes their options, choices, preferences, and values, and is thus constitutive of their moral universe. By analysing the concept of embodiment in AI and the coupling between cognition and the world, I demonstrate the integral role that physical instantiation plays in defining the terms of moral agency. I use Peter Hacker’s critique of the language of cognitive neuroscience to show how the determination of moral status is only sensible at the level of the agent and not in terms of mental sophistication alone, and why failing to do so commits a mereological fallacy, leading to an impoverished and incomplete account of our obligations to NSBs such as AIs.