Poltava Law Institute of Yaroslav Mudryi National Law University, Poltava, Ukraine.
Laboratory for the Study of National Security Problems in the Field of Public Health of Academician Stashis Scientific Research Institute for the Study of Crime Problems, Kharkiv, Ukraine.
Wiad Lek. 2020;73(12 cz 2):2722-2727.
The aim of the research is to identify the nature and specifics of AI in healthcare, to establish the complexities of AI implementation in healthcare, and to propose ways to eliminate them.
Materials and methods: This study was conducted during June-October 2020. Through a broad literature review and analysis of EU and USA regulatory acts, scientific research, and the opinions of forward-thinking experts in this sphere, this paper provides a guide to understanding the essence of AI in healthcare and the specifics of its regulation. It is based on dialectical, comparative, analytical, synthetic, and comprehensive methods.
Results: One of the first broad definitions of AI was: "Artificial Intelligence is the study of ideas which enable computers to do the things that make people seem intelligent ... The central goals of Artificial Intelligence are to make computers more useful and to understand the principles which make intelligence possible." Two terms compete to name this technology - "Artificial Intelligence" and "Augmented Intelligence." We prefer the more common term "Artificial Intelligence" because, from our point of view, the latter implies constant "human supervision" and would thus narrow the concept of AI as the technology inevitably develops. In current practice, AI is interpreted in three forms: AI as a simple electronic tool without any level of autonomy (an electronic assistant, a "calculator"), AI as an entity with some level of autonomy but under human control, and AI as an entity with broad autonomy that substitutes for human activity wholly or partly; we must admit that the first form cannot be considered AI at all at the current stage of scientific development. Descriptions of AI often focus on major technological products such as DeepMind (Google), Watson Health (IBM), and Edison (GE Healthcare), but in fact many smaller technologies also use AI in the healthcare field - smartphone applications, wearable health devices, and other examples of the Internet of Things. At the current stage of development, AI in medical practice exists in three technical forms - software, hardware, and mixed - using three main scientific-statistical approaches: the flowchart method, the database method, and the decision-making method. All of them are usable, but they are suited to AI implementation to different degrees.
The main issues of AI implementation in healthcare stem from the nature of the technology itself and from the complexities of legal support in terms of safety and efficiency, privacy, ethics, and liability.
Conclusion: The conducted analysis reveals a number of pros and cons of AI use in healthcare. Undoubtedly, this is a promising area with many gaps and grey zones to fill. Furthermore, the main challenge lies not in the technology itself, which is rapidly growing, evolving, and uncovering new areas of use, but in the legal framework, which clearly lacks appropriate regulation, and in certain necessary political, ethical, and financial transformations. Thus, the core questions are: Is this technology, by its nature, suitable for healthcare at all? Is the current legislative framework appropriate for regulating AI in terms of safety, efficiency, and premarket and postmarket monitoring? How should a model of liability for the use of AI technology in healthcare be constructed? How can privacy be ensured without restricting the use of AI technology? Should intellectual property rights prevail over public health concerns? There are many questions to address in order to keep pace with technological development and reap the benefits of its practical implementation.