Rethinking Justice and Discrimination
Discrimination is at the heart of many contemporary debates in medical ethics. Racial, sexual, gender, age, ethnic and disability discrimination dominate the implementation of genomic medicine, AI, pandemic response and the allocation of finite resources. It is clearly unjust, and prohibited by law, to discriminate directly on the basis of these “protected” categories. But the issue is more vexed when there is a correlation or causal relationship between these categories and factors commonly employed to establish prognosis: probability of successful outcome, and length and quality of life. Different options arise, depending on the normative weight given to non-discrimination and the moral or political reasons to protect those who have a particular characteristic. For example, should those with a particular characteristic receive preferential treatment because of past injustice, even if that would yield a worse outcome overall? How should the structural causes of poor prognosis be accounted for?
One of the challenges is to explicitly marry a commitment to equality (egalitarianism) with a commitment to efficiency and to bringing about the best outcome (utilitarianism). During the pandemic, the UK was reluctant to develop explicit decision procedures for allocating limited resources, partly for fear of discrimination; guidance was instead based on vague criteria such as “frailty”. Such concepts need careful explication and definition, and robust, explicit algorithms must be developed that allow context-sensitive, participatory decision making capable of integration into AI – what we have called algorithmic ethics.
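To make the idea of an explicit, auditable decision procedure concrete, the following is a minimal illustrative sketch, not a proposed allocation policy. All names, fields and weights are hypothetical assumptions introduced for illustration; the point is only that the egalitarian/utilitarian trade-off can be expressed as explicit, inspectable parameters rather than buried in a vague criterion.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """Hypothetical prognostic inputs; field names are illustrative only."""
    survival_probability: float  # estimated probability of successful outcome, 0-1
    expected_life_years: float   # expected length of life gained if treated

def priority_score(c: Candidate,
                   utility_weight: float = 0.5,
                   equality_weight: float = 0.5) -> float:
    """Toy priority score blending a utilitarian benefit term with an
    egalitarian term. The weights make the normative trade-off explicit:
    raising equality_weight flattens differences between candidates,
    raising utility_weight ranks them by expected benefit."""
    # Utilitarian term: expected benefit, normalised to 0-1
    # (assumes a cap of 50 life-years for illustration)
    benefit = c.survival_probability * min(c.expected_life_years, 50.0) / 50.0
    # Egalitarian term: identical for every candidate (equal moral claim)
    equal_claim = 1.0
    return utility_weight * benefit + equality_weight * equal_claim
```

Because the weights and the normalisation are explicit, they can be debated, varied by context and audited, which is precisely what an undefined notion like "frailty" does not permit; a real procedure would of course need defensible inputs and participatory choice of weights.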
This project will define key normative concepts, including justice, fairness and discrimination, and map these onto non-normative concepts such as probability of successful outcome, length of life, functional status, and the structural, institutional, societal and cultural factors contributing to these. In this way, we will link values to facts, and ethics to science, broadly construed.