Department of Philosophy and Department of Clinical Medicine, Macquarie University, Sydney, New South Wales, Australia.
Health Sciences, Warwick Medical School, University of Warwick, Coventry, United Kingdom of Great Britain and Northern Ireland.
Bioethics. 2021 Sep;35(7):623-633. doi: 10.1111/bioe.12885. Epub 2021 May 28.
This paper is one of the first to analyse the ethical implications of specific healthcare artificial intelligence (AI) applications, and the first to provide a detailed analysis of AI-based systems for clinical decision support. AI is increasingly being deployed across multiple domains. In response, a plethora of ethical guidelines and principles for general AI use have been published, with some convergence about which ethical concepts are relevant to this new technology. However, few of these frameworks are healthcare-specific, and there has been limited examination of actual AI applications in healthcare. Our ethical evaluation identifies context- and case-specific healthcare ethical issues for two applications, and investigates the extent to which the general ethical principles for AI-assisted healthcare expressed in existing frameworks capture what is most ethically relevant from the perspective of healthcare ethics. We provide a detailed description and analysis of two AI-based systems for clinical decision support (PainChek and IDx-DR). Our results identify ethical challenges associated with potentially deceptive promissory claims, lack of patient and public involvement in healthcare AI development and deployment, and lack of attention to the impact of AIs on healthcare relationships. Our analysis also highlights the close connection between evaluation and technical development and reporting. Critical appraisal frameworks for healthcare AIs should include explicit ethical evaluation with benchmarks. However, each application will require scrutiny across the AI life-cycle to identify ethical issues specific to healthcare. This level of analysis requires more attention to detail than is suggested by current ethical guidance or frameworks.