Fan Feng-Lei, Xiong Jinjun, Li Mengzhou, Wang Ge
Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA.
IBM Thomas J. Watson Research Center, Yorktown Heights, NY, 10598, USA.
IEEE Trans Radiat Plasma Med Sci. 2021 Nov;5(6):741-760. doi: 10.1109/trpms.2021.3066428. Epub 2021 Mar 17.
Deep learning, as represented by artificial deep neural networks (DNNs), has recently achieved great success in many important areas dealing with text, images, videos, graphs, and so on. However, the black-box nature of DNNs has become one of the primary obstacles to their wide adoption in mission-critical applications such as medical diagnosis and therapy. Because of the huge potential of deep learning, improving the interpretability of deep neural networks has recently attracted much research attention. In this paper, we propose a simple but comprehensive taxonomy for interpretability, systematically review recent studies on improving the interpretability of neural networks, describe applications of interpretability in medicine, and discuss possible future research directions of interpretability, such as in relation to fuzzy logic and brain science.