Survey of Explainable AI Techniques in Healthcare.

Affiliations

School of Artificial Intelligence, Guilin University of Electronic Technology, Jinji Road, Guilin 541004, China.

The Laboratory for Imagery Vision and Artificial Intelligence, Ecole de Technologie Superieure, 1100 Rue Notre Dame O, Montreal, QC H3C 1K3, Canada.

Publication Information

Sensors (Basel). 2023 Jan 5;23(2):634. doi: 10.3390/s23020634.

Abstract

Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision carries risk. A doctor carefully judges whether a patient is ill and forms a reasoned explanation based on the patient's symptoms and/or examination results. Therefore, to be a viable and accepted tool, AI needs to mimic this human capacity for judgment and interpretation. Specifically, explainable AI (XAI) aims to expose the information behind deep learning's black-box models and reveal how their decisions are made. This paper surveys the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types, and highlight the algorithms used to increase interpretability in medical imaging. In addition, we focus on challenging XAI problems in medical applications and provide guidelines for developing better interpretations of deep learning models, using XAI concepts, in medical image and text analysis. Finally, this survey offers future directions to guide developers and researchers toward prospective investigations on clinical topics, particularly applications involving medical imaging.

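To make the idea of post-hoc explanation concrete, here is a minimal sketch of one widely used XAI technique for imaging models, occlusion sensitivity: mask one region of the input at a time and measure how much the model's score drops. The `toy_model`, image size, and patch size below are assumptions for illustration, not the survey's own method; in practice the model would be a trained classifier over a medical scan.

```python
import numpy as np

def occlusion_saliency(model, image, patch=4, baseline=0.0):
    """Occlusion sensitivity: slide a patch over the image and record
    how much the model's score drops when that region is masked out.
    Larger drops indicate regions the model relies on more."""
    h, w = image.shape
    heatmap = np.zeros((h // patch, w // patch))
    base_score = model(image)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heatmap[i // patch, j // patch] = base_score - model(occluded)
    return heatmap

# Hypothetical stand-in for a classifier: scores an 8x8 image by the
# brightness of its top-left quadrant only.
toy_model = lambda img: float(img[:4, :4].sum())

img = np.ones((8, 8))
hm = occlusion_saliency(toy_model, img, patch=4)
# Because the toy model only looks at the top-left quadrant, all of the
# attribution lands in heatmap cell (0, 0).
```

The resulting heatmap is the kind of region-level explanation a clinician could inspect to check whether a model is attending to clinically meaningful areas of a scan.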

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ee9e/9862413/b7e22e073325/sensors-23-00634-g001.jpg
