School of Medicine, Trinity College Dublin, College Green, Dublin 2, Ireland.
Ir J Med Sci. 2022 Oct;191(5):1991-1994. doi: 10.1007/s11845-021-02853-3. Epub 2021 Nov 16.
Contemporary discourse on Artificial Intelligence (AI) in medicine is often sensationalised to the point of bearing no resemblance to its everyday impact and potential - either proselytising it as a saviour or condemning its perilous, amoral and sprawling reach. This report aims to unravel the paucity of understanding underpinning this hyperbolic duality, whilst addressing the potential that clearly defining its ethical use holds for the semi-public healthcare models of Ireland and Europe.
The report contrasts the challenge of regulating the breakneck development of AI with healthcare's need for stringent quality control in ethical technological development to ensure patients' well-being. Physical, practical and philosophical approaches to Artificial Intelligence in medicine are explored through Beauchamp and Childress' principles of delivering care with beneficence, non-maleficence, justice and autonomy. AI is scrutinised under Kantian deontological, Benthamite utilitarian and Rawlsian perspectives on health justice. Actor-Network Theory is used to explain the sociotechnical interactions governing the human stakeholders developing ethical AI. These analyses operate firstly to define AI concisely, then to ground it in its contemporary and future functions in healthcare. They highlight that aligning medical AI with accepted ethical standards is a necessity for its integrated use across healthcare.
This report concludes that balanced assessment of AI's role in healthcare requires progress in three areas: clarifying the definition of AI and its extant remit in medicine; aligning contemporary discourse on AI use with objective ethical, legal and systemic frameworks; and clearly identifying, for dismissal, the logical fallacies that deliberately sensationalise AI's potential.