
On Interpretability of Artificial Neural Networks: A Survey.

Author Information

Fan Feng-Lei, Xiong Jinjun, Li Mengzhou, Wang Ge

Affiliations

Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA.

IBM Thomas J. Watson Research Center, Yorktown Heights, NY, 10598, USA.

Publication Information

IEEE Trans Radiat Plasma Med Sci. 2021 Nov;5(6):741-760. doi: 10.1109/trpms.2021.3066428. Epub 2021 Mar 17.

Abstract

Deep learning as represented by the artificial deep neural networks (DNNs) has achieved great success recently in many important areas that deal with text, images, videos, graphs, and so on. However, the black-box nature of DNNs has become one of the primary obstacles for their wide adoption in mission-critical applications such as medical diagnosis and therapy. Because of the huge potentials of deep learning, increasing the interpretability of deep neural networks has recently attracted much research attention. In this paper, we propose a simple but comprehensive taxonomy for interpretability, systematically review recent studies in improving interpretability of neural networks, describe applications of interpretability in medicine, and discuss possible future research directions of interpretability, such as in relation to fuzzy logic and brain science.
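As a concrete illustration (not taken from the survey itself), the following is a minimal sketch of one of the simplest post-hoc interpretability techniques in this literature, vanilla gradient saliency, applied to a toy two-layer ReLU network with arbitrary weights. All sizes and weights here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: x -> ReLU(W1 x) -> w2 . h  (scalar score)
W1 = rng.normal(size=(8, 16))   # hidden x input
w2 = rng.normal(size=8)         # output weights

def score(x):
    h = np.maximum(W1 @ x, 0.0)         # ReLU hidden layer
    return w2 @ h

def saliency(x):
    """d(score)/dx: how strongly each input feature moves the output."""
    pre = W1 @ x
    mask = (pre > 0).astype(float)      # ReLU gate (1 where active)
    return W1.T @ (w2 * mask)           # chain rule through both layers

x = rng.normal(size=16)
s = saliency(x)

# Sanity check: the analytic gradient should match a central
# finite-difference approximation away from ReLU kinks.
eps = 1e-6
fd = np.array([(score(x + eps * np.eye(16)[i]) - score(x - eps * np.eye(16)[i]))
               / (2 * eps) for i in range(16)])
```

In real models the same gradient is obtained via automatic differentiation rather than by hand; the point is that the saliency vector has one entry per input feature and can be visualized as an attribution map.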

Similar Articles

1
Signs and symptoms to determine if a patient presenting in primary care or hospital outpatient settings has COVID-19.
Cochrane Database Syst Rev. 2022 May 20;5(5):CD013665. doi: 10.1002/14651858.CD013665.pub3.
2
An Occupational Science Contribution to Camouflaging Scholarship: Centering Intersectional Experiences of Occupational Disruptions.
Autism Adulthood. 2025 May 28;7(3):238-248. doi: 10.1089/aut.2023.0070. eCollection 2025 Jun.
3
Comparison of self-administered survey questionnaire responses collected using mobile apps versus other methods.
Cochrane Database Syst Rev. 2015 Jul 27;2015(7):MR000042. doi: 10.1002/14651858.MR000042.pub2.
4
A Survey on Knowledge Editing of Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2025 Jul;36(7):11759-11775. doi: 10.1109/TNNLS.2024.3498935.

Cited By

1
Machine learning-assisted finite element modeling of additively manufactured meta-materials.
3D Print Med. 2025 Jul 14;11(1):36. doi: 10.1186/s41205-025-00286-7.
2
Applications of neural networks in liver transplantation.
ILIVER. 2022 Aug 9;1(2):101-110. doi: 10.1016/j.iliver.2022.07.002. eCollection 2022 Jun.
3
Building a neural network model to define DNA sequence specificity in V(D)J recombination.
Nucleic Acids Res. 2025 Jun 20;53(12). doi: 10.1093/nar/gkaf551.
4
Universal multilayer network embedding reveals a causal link between GABA neurotransmitter and cancer.
BMC Bioinformatics. 2025 Jun 2;26(1):149. doi: 10.1186/s12859-025-06158-5.
5
Beyond human perception: challenges in AI interpretability of orangutan artwork.
Primates. 2025 May;66(3):249-258. doi: 10.1007/s10329-025-01185-5. Epub 2025 Feb 24.
6
Cloud and IoT based smart agent-driven simulation of human gait for detecting muscles disorder.
Heliyon. 2025 Jan 20;11(2):e42119. doi: 10.1016/j.heliyon.2025.e42119. eCollection 2025 Jan 30.

References

1
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.
Nat Mach Intell. 2019 May;1(5):206-215. doi: 10.1038/s42256-019-0048-x. Epub 2019 May 13.
2
Deep Learning-based Image Segmentation on Multimodal Medical Imaging.
IEEE Trans Radiat Plasma Med Sci. 2019 Mar;3(2):162-169. doi: 10.1109/trpms.2018.2890359. Epub 2019 Jan 1.
3
Application and Construction of Deep Learning Networks in Medical Imaging.
IEEE Trans Radiat Plasma Med Sci. 2021 Mar;5(2):137-159. doi: 10.1109/trpms.2020.3030611. Epub 2020 Oct 13.
4
Human-in-the-Loop Interpretability Prior.
Adv Neural Inf Process Syst. 2018 Dec;31.
5
Competitive performance of a modularized deep neural network compared to commercial algorithms for low-dose CT image reconstruction.
Nat Mach Intell. 2019 Jun;1(6):269-276. doi: 10.1038/s42256-019-0057-9. Epub 2019 Jun 10.
6
Understanding the role of individual units in a deep neural network.
Proc Natl Acad Sci U S A. 2020 Dec 1;117(48):30071-30078. doi: 10.1073/pnas.1907375117. Epub 2020 Sep 1.
7
Neural Networks, Hypersurfaces, and the Generalized Radon Transform.
IEEE Signal Process Mag. 2020 Jul;37(4):123-133. doi: 10.1109/msp.2020.2978822. Epub 2020 Jun 29.
8
Multi-Organ Segmentation Over Partially Labeled Datasets With Multi-Scale Feature Abstraction.
IEEE Trans Med Imaging. 2020 Nov;39(11):3619-3629. doi: 10.1109/TMI.2020.3001036. Epub 2020 Oct 28.
9
GNNExplainer: Generating Explanations for Graph Neural Networks.
Adv Neural Inf Process Syst. 2019 Dec;32:9240-9251.
10
Shape and margin-aware lung nodule classification in low-dose CT images via soft activation mapping.
Med Image Anal. 2020 Feb;60:101628. doi: 10.1016/j.media.2019.101628. Epub 2019 Dec 12.
