Abstract

Here I argue that embodiment has hitherto been given insufficient weight in debates concerning the moral status of novel synthetic beings (NSBs) such as sentient or sapient Artificial Intelligences (AIs), the focus of this paper. Discussion of the moral status of AIs and our obligations to them typically turns on whether they are conscious, i.e. on their cognitive sophistication or self-awareness. Even if this is sufficient for moral status in an AI, however, it does not exhaust what is morally relevant. Since moral agency encompasses what a sentient or sapient being wants or ought to do, the means by which it can translate thought into action and enact choices in the world, or is restricted from doing so, is a feature of that agency. In determining the moral status of NSBs and our obligations to them, we must therefore consider how their corporeality shapes their options, choices, preferences, and values, and is thus constitutive of their moral universe. By analysing the concept of embodiment in AI and the coupling between cognition and the world, I demonstrate the integral role that physical instantiation plays in defining the terms of moral agency. I use Peter Hacker's critique of the language of cognitive neuroscience to show how the determination of moral status is sensible only at the level of the agent, not in terms of mental sophistication alone, and why failing to do so commits a mereological fallacy, leading to an impoverished and incomplete account of our obligations to NSBs such as AIs.