Erik Jonsson School of Engineering & Computer Science, The University of Texas at Dallas, Richardson, TX, USA.
Computer Science, Oklahoma State University, Stillwater, OK, USA.
Artif Intell Med. 2021 May;115:102059. doi: 10.1016/j.artmed.2021.102059. Epub 2021 Mar 26.
In the healthcare domain, trust, confidence, and functional understanding are critical for decision support systems, which presents challenges for the prevalent use of black-box deep learning (DL) models. With recent advances in deep learning methods for classification tasks, deep learning is increasingly used in healthcare decision support systems, such as the detection and classification of abnormal electrocardiogram (ECG) signals. Domain experts seek to understand the functional mechanism of black-box models, with an emphasis on how these models arrive at a specific classification of patient medical data. In this paper, we focus on ECG data as the healthcare signal to be analyzed. Since ECG is one-dimensional time-series data, we target the 1D-CNN (one-dimensional Convolutional Neural Network) as the candidate DL model. The majority of existing interpretation and explanation research addresses 2D-CNN models in non-medical domains, leaving a gap in explaining CNN models applied to medical time-series data. Hence, we propose a modular framework, the CNN Explanations Framework for ECG Signals (CEFEs), for interpretable explanations. Each module of CEFEs provides users with a functional understanding of the underlying CNN model in terms of data descriptive statistics, feature visualization, feature detection, and feature mapping. The modules evaluate a model's capacity while inherently accounting for the correlation between learned features and raw signals, which translates to the correlation between a model's capacity to classify and its learned features. Explainable frameworks such as CEFEs can be evaluated in different ways: training one deep learning architecture on different volumes of the same dataset, training different architectures on the same dataset, or combining different CNN architectures and datasets. In this paper, we evaluate CEFEs extensively by training the same CNN architecture on different volumes of data. CEFEs' interpretations, in terms of quantifiable metrics and feature visualizations, explain the quality of a deep learning model where traditional performance metrics (such as precision, recall, and accuracy) do not suffice.
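As a concrete illustration of the kind of model CEFEs is designed to explain, the sketch below builds a minimal 1D-CNN ECG beat classifier and extracts its first-layer filters and activations, the raw material for feature-visualization-style analysis. The architecture, channel counts, kernel sizes, segment length, and class count here are illustrative assumptions, not the authors' published model or the CEFEs API.

```python
# A minimal sketch of a 1D-CNN ECG classifier of the kind the paper targets.
# All layer sizes and the 360-sample segment length are assumptions for
# illustration, not the authors' exact architecture.
import torch
import torch.nn as nn

class ECG1DCNN(nn.Module):
    def __init__(self, n_classes: int = 5, signal_len: int = 360):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),   # learn waveform-morphology filters
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Two pooling stages halve the length twice: signal_len // 4 samples remain.
        self.classifier = nn.Linear(32 * (signal_len // 4), n_classes)

    def forward(self, x):                       # x: (batch, 1, signal_len)
        f = self.features(x)
        return self.classifier(f.flatten(1))

model = ECG1DCNN()
beats = torch.randn(8, 1, 360)                  # batch of 8 single-lead ECG segments
logits = model(beats)                           # (8, 5) class scores

# Feature visualization in the spirit of CEFEs: inspect the learned
# first-layer kernels and their responses to the raw signal.
first_conv = model.features[0]
filters = first_conv.weight.detach()            # (16, 1, 7) learned 1-D kernels
activations = first_conv(beats).detach()        # (8, 16, 360) per-filter feature maps
```

Correlating such per-filter activations with fiducial ECG landmarks (e.g., QRS complexes) is the style of feature-to-signal mapping that CEFEs' feature detection and feature mapping modules aim to quantify.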