Glass Orb with Patterns. Photo credit: Michael Dziedzic @lazycreekimages on Unsplash.

Discrimination is at the heart of many contemporary debates in medical ethics. Racial, sexual, gender, age, ethnic and disability discrimination dominate debates over the implementation of genomic medicine, AI, pandemic response and the allocation of finite resources. It is clearly unjust, and prohibited by law, to discriminate directly on the basis of these “protected” categories. But the issue is more vexed when there is a correlation or causal relationship between these categories and the factors commonly employed to establish prognosis: the probability of a successful outcome, and the length and quality of life. There are different options, depending on the normative weight given to non-discrimination and the moral or political reasons to protect those who have a particular characteristic. For example, should those with a particular characteristic receive preferential treatment because of past injustice, even if that would yield a worse outcome overall? How should the structural causes of poor prognosis be accounted for?

One of the challenges is to explicitly marry a commitment to equality (egalitarianism) with a commitment to efficiency and bringing about the best outcome (utilitarianism). During the pandemic, the UK was reluctant to develop explicit decision procedures for the allocation of limited resources, partly out of fears of discrimination. Guidance was instead based on vague criteria such as “frailty”. Such concepts need careful explication and definition, and robust, explicit algorithms must be developed that allow context-sensitive, participatory decision-making capable of integration into AI – what we have called algorithmic ethics.

This project will define key normative concepts including justice, fairness and discrimination, and map these onto non-normative concepts such as probability of successful outcome, length of life, functional status, and the structural, institutional, societal and cultural factors contributing to these. In this way, we will link values to facts, ethics to science, broadly construed.



Research Team

Julian Savulescu

Centre Co-Director

Professor

Ilina Singh

Centre Co-Director

Professor

Alberto Giubilini

Senior Research Fellow