Generative AI in healthcare education: How AI literacy gaps could compromise learning and patient safety
AIM: To examine the challenges and opportunities presented by generative artificial intelligence in healthcare education and explore how it can be used ethically to enhance rather than compromise future healthcare workforce competence. BACKGROUND: Generative artificial intelligence is fundamentally changing healthcare education, yet many universities and healthcare educators have failed to keep pace with its rapid development. DESIGN: A discussion paper. METHODS: Discussion and analysis of the challenges and opportunities presented by students' increasing use of generative artificial intelligence in healthcare education, with particular focus on assessment approaches, critical thinking development and artificial intelligence literacy. RESULTS: Students' widespread use of generative artificial intelligence threatens assessment integrity and may inhibit critical thinking, problem-solving skills and knowledge acquisition. Without adequate artificial intelligence literacy, there is a risk of eroding future healthcare workforce competence and compromising patient safety and professional integrity. CONCLUSION: While generative artificial intelligence presents significant challenges to healthcare education, it offers great promise if used carefully and with awareness of its limitations. The development of artificial intelligence literacy is crucial for maintaining professional standards, ensuring patient safety and mitigating the technology's potentially negative impact on the formation of critical thinking skills.
Distance caregiving using smart home technologies: balancing ethical priorities in family decision-making by only children
BACKGROUND: The parallel growth of population ageing and international migration has introduced a unique population of transnational caregivers in elder care. Particularly for only children, who face conflicting obligations and reduced caregiving resources, smart home devices could be technical tools to care for older parents from a distance. Research on the use of these technologies has unearthed ethical issues such as privacy, autonomy, stigma and beneficence, but these issues have not been fully explored in distance care. In this paper, we explore the ethical issues expressed by a group of only children towards integrating assistive, monitoring, and robotic technologies in their transnational care plans. METHODS: Purposive snowball sampling was used to recruit 26 distance caregivers aged between 28 and 45, who were their parents' only children. They had lived in Europe for at least 5 years, with at least one parent residing in the home country. In semi-structured interviews, participants discussed the ethical issues of wearable devices, ambient and visual remote monitoring technologies, as well as the possible use of an assistive robot in the context of distance caregiving for older parents. We used applied thematic analysis to analyze the data. RESULTS: We highlight two ethical considerations. First, participants saw the need for maximizing good outcomes in caring for their older parents and fulfilling their responsibilities to ensure their health and safety, balanced against respect for the parents' autonomy, dignity, and privacy. Second, they weighed the benefits and harms of using technologies to provide companionship and support at a distance against the intrinsic value placed on care received from one's only child. CONCLUSIONS: Discussions about involving technologies in elder care at a distance prompted complex decision-making processes in which caregivers balanced, weighed, and rationalized the ethical concerns they foresaw. Maximizing the health and safety of older parents came at an unavoidable cost to respect for autonomy, privacy, and dignity. Participants valued their own emotional connection and relationship to their parents, which they prioritized above the instrumental value of technological support. We further discuss our findings in relation to ethics of care theory and concepts from the transnational care literature to make sense of the broader ethical implications of this empirical study.
To Be Human is to Be Better: A Discussion with Julian Savulescu
In this paper, Julian Savulescu discusses humanity's trajectory – past, present, and future. As the world undergoes relentless transformation driven by technological advancements, some pressing questions arise: Is it time to provide modern solutions to old problems such as discrimination, inequality, and crime? Should people retain absolute autonomy over their decisions, even when their judgment may falter? What role is Artificial Intelligence going to play in our day-to-day lives, and how far could it go? This dialogue unveils a visionary blueprint for humanity, regarding how much could really be achieved with the help of technology, what difficult decisions we would have to make, and ultimately what it would look like if we tried to use the tools we have to create a society that values justice and equality above individual freedom.
AI preference prediction and policy making
Democratic decision-making is difficult. Representatives often fail to represent the preferences of their constituents, and directly consulting members of the public can be costly. Inspired by these difficulties, several scholars have discussed the use of artificial intelligence (AI) models to support democratic decision-making. One such application is the use of AI to represent public policy preferences by predicting them. In this paper, we analyze the different ways AI models can be used to represent public policy preferences. We distinguish between using AI as an epistemic tool and as part of a procedure; between group and individual predictions; and between predictions about preferences and inferences about values. We also describe how AI models can help policymakers screen policies for potential worries and objections, double-check their beliefs about the acceptability of their policies, and justify policy proposals. Finally, we consider a number of worries about the use of AI in policymaking and argue that these worries, while legitimate, can be mitigated or avoided given the way we propose AI be used.
Road Rage Against the Machine: Humans and LLMs Share a Blame Bias Against Driverless Cars
Human language reflects our social values, biases, and moral judgments. Large language models (LLMs) trained on extensive human texts may therefore learn or encode such information, allowing them to generate responses within moral and ethical domains. Investigating whether LLMs exhibit human-like (including potentially biased or skewed) moral judgments is therefore crucial. Recent moral psychology research suggests that humans tend to have stronger negative reactions toward, and attribute more blame to, intelligent autonomous machines than to fellow humans for identical harm. Here we examine whether LLMs (OpenAI’s GPT-3.5 and GPT-4) exhibit a similar bias against machines in the specific domain of driverless cars. We replicate experiments from two previous studies in the USA and China and find that GPT-4 (but not GPT-3.5), similar to human participants reported previously, consistently rates machine drivers as more blameworthy and causally responsible than human drivers for identical traffic harm (Study 1), while also rating machine versus human drivers’ identical actions as more harmful and morally wrong (preregistered Study 2). This asymmetry in moral judgments is replicated across both LLMs and human participants in a new crash scenario that is unlikely to have been included in the LLMs’ training sets (preregistered Study 3). We discuss whether the blame bias against machines might be morally justified, and also propose that its presence in humans and LLMs could be due to different mechanisms.
Relational Responsibility: Bringing the Wider Social Environment into the Analysis
Current conceptions of special responsibilities often adopt a narrow, individualistic lens that fails to consider the broader socio-relational context. In response to this gap, we propose a concept of relational responsibility that emphasises the interconnectedness of individuals and the wider societal context in which they exist. We posit that assigning relational responsibilities should not hinge solely on the voluntary nature of one's relationships, but rather on the intrinsic value of these connections, as determined by individuals who hold pertinent roles within those relationships or who would be affected by how that value is defined. Our account acknowledges that many responsibilities, especially in caregiving contexts, are not chosen freely, and that there should be normative limits to protect individuals from unreasonable burdens. Recognising the role of structural conditions in shaping responsibilities, we argue that collectives with the capacity and resources have an obligation to support individuals by mitigating these burdens and creating just conditions for care. This relational and structural reframing offers a more ethically attuned and practically responsive understanding of responsibility.
Practitioners’ attitudes and approaches to assessing comorbid depression among patients seeking assisted dying in New Zealand
Depressive disorders are prevalent among the terminally ill and often impact decision-making capacity. However, routine screening for depression is not currently included in assisted dying assessments. This qualitative study aimed to explore the attitudes and approaches of ten New Zealand assisted dying practitioners in assessing comorbid depression among patients seeking assisted dying. Four main themes emerged: (i) depression was viewed as a minor concern in patients seeking assisted dying, (ii) practitioners used informal approaches to assess depression, (iii) there was overlap between the symptoms of terminal illness and depression, and (iv) there was opposition to introducing new mandatory processes to assess depression. This study highlights a generally informal, non-systematised approach to depression screening as part of the assisted dying assessment process. Additions to the process, including routine depression screening, will require input from assisted dying stakeholders, given concerns about creating barriers or delays for patients seeking assisted dying.
Violence
The causal relationship between religion and violence is examined. It is argued that it is currently unclear whether religion is a significant cause of violence. Three types of argument relating religion to violence are then considered. It is sometimes argued that a lack of religion makes people less moral than they would otherwise be, and therefore more inclined to violence. It is sometimes argued that religion makes people tolerant, and it is sometimes argued that religion makes people intolerant. If people become more intolerant, they can reasonably be expected to become more violent; if people become more tolerant, they can reasonably be expected to become less violent. It is sometimes also argued that religion offers forms of justification for violence that are unavailable to atheists, and that this may lead the religious to cause more violence than is caused by atheists.
Metaphysics and the disunity of scientific knowledge
First published in 1998, this volume's primary concern is to demonstrate how a metaphysics can be developed which enables us to make do in an uncertain world and to develop a pragmatic alternative to postmodernism. Opposing the unificationist view of science, Clarke suggests that it needs to be understood in the context of the perceived threat of metaphysical disorder. He explores this through issues including epistemology, fundamentalism, pluralism and idealisation, and identifies a potential solution similar to the work of Otto Neurath.
Human-Like Epistemic Trust? A Conceptual and Normative Analysis of Conversational AI in Mental Healthcare
The attribution of human concepts to conversational artificial intelligence (CAI) simulating human characteristics and conversation in psychotherapeutic settings presents significant conceptual and normative challenges. First, this article analyzes the concept of epistemic trust, identifying the conditions that become problematic when the concept is attributed to CAI, and argues for a conceptual shift. We propose a conceptual, visual tool to navigate this shift. Second, three conceptualizations of AI are analyzed to understand how they influence the interpretation and evaluation of this conceptual shift of epistemic trust and its associated risks. We contrast two common AI conceptualizations from the literature: a dichotomic account, which distinguishes between AI's real and simulated abilities, and a relational account. Finally, we propose a novel approach: conceptualizing AI as a fictional character, which combines the strengths of both accounts and shifts the focus from merely simulating human abilities to addressing CAI's actual strengths and weaknesses. The article sheds light on the underlying theoretical assumptions that influence the ethical analysis of CAI.
“Looking at the Big Picture”: A Qualitative Study of Ethics in Science Communication and Engagement
Ethical issues arise in many communication and engagement settings. Such issues can, however, fall into the gaps between what is seen as "research" and what is seen as "dissemination." Semi-structured interviews (n = 17) and focus groups (n = 2) with researchers and science communication and public engagement specialists at U.K. academic institutions and in practice settings suggest that while normative principles for ethical science communication remain fluid, ethical questions are often an area of considerable reflection for those communicating, particularly when those questions reflect wider social issues and involve different people in the process: communities, researchers, and institutions.
When to create embryos or organoids for research
The development of brain organoids and the use of human embryonic neural structures for research each raise distinct ethical considerations that require careful analysis. We propose that rather than attempting to resolve longstanding debates about embryonic moral status, a more productive approach is to examine how different positions on this fundamental question lead to distinct conclusions about appropriate research strategies. For those who ground moral status in species membership or developmental potential, even early-stage embryo research may be ethically impermissible, suggesting a focus on carefully bounded organoid development. Conversely, for those who ground moral status in current capacities, embryonic neural tissue studied before the emergence of consciousness may offer significant advantages over organoids while raising fewer novel ethical concerns. Our analysis reveals inadequacies in current policies, particularly the 14-day rule, which appears difficult to justify under either ethical framework. We demonstrate how careful attention to the relationship between ethical premises and research implications can advance both scientific progress and ethical oversight, while suggesting specific policy reforms including capacity-based research guidelines and sophisticated monitoring protocols.