It is sometimes suggested that new research in such areas as artificial intelligence, nanotechnology and genetic engineering should be halted or otherwise restricted because of concerns about possible catastrophic scenarios. Proponents of such restrictions typically invoke the precautionary principle, understood as a tool of policy formulation, as part of their case. Here I examine the application of the precautionary principle to possible catastrophic scenarios. I argue, along with Sunstein (Risk and Reason: Safety, Law and the Environment. Cambridge University Press, Cambridge, 2002) and Manson (Environmental Ethics, 24: 263-274, 2002), that variants of the precautionary principle that appear strong enough to support significant restrictions on future technologies actually lead to contradictory policy recommendations. Weaker versions of the precautionary principle, which do not have this feature, do not appear strong enough to support restrictions on future technologies. © Springer 2006.

Original publication

DOI: 10.1007/s10676-006-0007-1
Type: Journal article
Journal: Ethics and Information Technology
Publication Date: 01/09/2005
Volume: 7
Pages: 121-126