Department of Computer Science, CHRIST (Deemed to be University), Delhi NCR, 201003, India.
Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), 11432, Riyadh, Saudi Arabia.
Comput Biol Med. 2024 Sep;179:108874. doi: 10.1016/j.compbiomed.2024.108874. Epub 2024 Jul 15.
Smart healthcare has advanced the medical industry through the integration of data-driven approaches. Artificial intelligence and machine learning have enabled remarkable progress, but such applications often lack transparency and interpretability. Explainable AI (EXAI) offers a promising way to overcome these limitations. This paper applies EXAI to disease diagnosis for the advancement of smart healthcare. It combines transfer learning, a vision transformer, and explainable AI into an ensemble approach for predicting a disease and its severity. The approach is evaluated on an Alzheimer's disease dataset. The result analysis compares the performance of transfer learning models with an ensemble of transfer learning models and a vision transformer. For training, the InceptionV3, VGG19, ResNet50, and DenseNet121 transfer learning models were selected for ensembling with the vision transformer. The results compare two models on the ADNI dataset: a transfer learning (TL) model and an ensemble transfer learning (Ensemble TL) model combined with a vision transformer (ViT). The TL model achieves 58 % accuracy, 52 % precision, 42 % recall, and a 44 % F1-score, whereas the Ensemble TL model with ViT shows significantly improved performance: 96 % accuracy, 94 % precision, 90 % recall, and a 92 % F1-score on the ADNI dataset. This demonstrates the efficacy of the ensemble model over individual transfer learning models.
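The abstract does not specify how the backbone predictions are combined, so the following is only a minimal sketch of one plausible reading: ImageNet-pretrained transfer-learning backbones and a vision transformer, each given a new classification head and combined by soft voting (averaging class probabilities). The class count (4), the 224x224 input size, the soft-voting rule, and the omission of InceptionV3 (which expects 299x299 inputs) are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch only: ensemble of pretrained CNN backbones + ViT via soft voting.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumption: number of Alzheimer's severity classes


def build_backbones(num_classes: int = NUM_CLASSES):
    """Load ImageNet-pretrained backbones and replace their classifier heads."""
    resnet = models.resnet50(weights="IMAGENET1K_V1")
    resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)

    densenet = models.densenet121(weights="IMAGENET1K_V1")
    densenet.classifier = nn.Linear(densenet.classifier.in_features, num_classes)

    vgg = models.vgg19(weights="IMAGENET1K_V1")
    vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, num_classes)

    vit = models.vit_b_16(weights="IMAGENET1K_V1")
    vit.heads.head = nn.Linear(vit.heads.head.in_features, num_classes)

    # InceptionV3 is omitted here only because it expects 299x299 inputs;
    # the paper ensembles it alongside the models above.
    return [resnet, densenet, vgg, vit]


@torch.no_grad()
def ensemble_predict(model_list, images: torch.Tensor) -> torch.Tensor:
    """Soft-voting ensemble: average per-model class probabilities."""
    probs = []
    for m in model_list:
        m.eval()
        probs.append(torch.softmax(m(images), dim=1))
    return torch.stack(probs).mean(dim=0)  # shape: (batch, num_classes)


# Usage (after fine-tuning each backbone on the ADNI images):
#   preds = ensemble_predict(build_backbones(), batch_of_images).argmax(dim=1)
```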