Risk prediction in emergency medicine (EM) poses unique challenges owing to urgency, blurred research-practice distinctions, and the high-pressure environment of emergency departments (EDs). Artificial intelligence (AI) risk prediction tools have been developed to streamline triage processes and mitigate perennial problems affecting EDs globally, such as overcrowding and delays. Implementing these tools is complicated by the risks of over-triage and under-triage, untraceable false positives, and the potential for healthcare professionals' biases toward technology to lead to incorrect use of such tools. This paper explores these risks through an analysis of a case study involving a machine learning triage tool, the Score for Emergency Risk Prediction (SERP), in Singapore. The tool estimates mortality risk at presentation to the ED. After two retrospective studies demonstrated SERP's strong predictive accuracy, the researchers decided that a pre-implementation randomised controlled trial (RCT) would not be feasible because the tool interacts with clinical judgement, complicating the blinded arm of the trial. This led them to consider other methods of testing SERP's real-world capabilities, such as ongoing-evaluation-type studies. We discuss the outcomes of a risk-benefit analysis and argue that the proposed implementation strategy is ethically appropriate and aligns with improvement-focused and systemic approaches to implementation, especially the learning health systems (LHS) framework, ensuring safety, efficacy, and ongoing learning.

Original publication

DOI: 10.1007/s41649-024-00348-8
Type: Journal
Journal: Asian Bioeth Rev
Publication Date: 01/2025
Volume: 17
Pages: 187-205
Keywords: Artificial intelligence, Implementation, Risk, Triage