Lamba Kamini, Rani Shalli, Shabaz Mohammad
Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India.
Model Institute of Engineering and Technology, Jammu, Jammu and Kashmir, India.
Sci Rep. 2025 Jul 1;15(1):20489. doi: 10.1038/s41598-025-07524-2.
Brain tumors have life-threatening consequences, so timely detection and accurate classification are critical for determining appropriate treatment plans and improving patient outcomes. However, conventional approaches to brain tumor diagnosis based on Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans are often labor-intensive, prone to human error, and entirely reliant on the expertise of radiologists. In recent years, the integration of advanced techniques such as Machine Learning (ML) and Deep Learning (DL) has transformed the healthcare sector through their ability to analyze medical images, demonstrating great potential for accurate and improved outcomes. Their black-box nature remains a drawback, however: understanding the reasoning behind their predictions is still a major challenge for healthcare professionals and raises concerns about trustworthiness, interpretability, and transparency in clinical settings. To overcome this, an explainable hybrid framework has been proposed that integrates the DenseNet201 network for deep feature extraction from MRI data with a supervised Support Vector Machine (SVM) classifier for robust binary classification of brain scans. A region-adaptive preprocessing pipeline is used to enhance tumor visibility and feature clarity. To address the need for interpretability, multiple explainable artificial intelligence (XAI) techniques, Grad-CAM, Integrated Gradients (IG), and Layer-wise Relevance Propagation (LRP), have been incorporated. Our comparative evaluation shows that LRP achieves the highest performance across all explainability metrics, with 98.64% accuracy, 0.74 F1-score, and 0.78 IoU. The proposed model provides transparent and highly accurate diagnostic predictions, offering a reliable clinical decision support tool. It achieves 0.9801 accuracy, 0.9223 sensitivity, 0.9909 specificity, 0.9154 precision, and 0.9360 F1-score, demonstrating strong potential for real-world brain tumor diagnosis and personalized treatment strategies.
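The hybrid design described above pairs a pretrained DenseNet201 backbone as a fixed deep-feature extractor with a conventional SVM for the final binary decision. The following is a minimal sketch of that pattern using Keras and scikit-learn; the 224x224 input size, ImageNet weights, RBF kernel, and placeholder arrays are illustrative assumptions, not the paper's reported configuration or region-adaptive preprocessing.

```python
import numpy as np
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras.applications.densenet import preprocess_input
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Pretrained DenseNet201 as a frozen feature extractor: include_top=False drops
# the ImageNet classifier head, and global average pooling collapses the final
# feature maps to a single feature vector per scan.
extractor = DenseNet201(weights="imagenet", include_top=False,
                        pooling="avg", input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) holding preprocessed MRI slices."""
    return extractor.predict(preprocess_input(images), verbose=0)

# Placeholder data standing in for preprocessed MRI slices and binary
# tumor / no-tumor labels (hypothetical, for illustration only).
X_train = np.random.rand(8, 224, 224, 3).astype("float32")
y_train = np.array([0, 1, 0, 1, 1, 0, 1, 0])
X_test, y_test = X_train.copy(), y_train.copy()

# The DenseNet201 deep features feed a classical SVM classifier.
svm = SVC(kernel="rbf", C=1.0)
svm.fit(extract_features(X_train), y_train)
pred = svm.predict(extract_features(X_test))
print("accuracy:", accuracy_score(y_test, pred))
```

Keeping the CNN frozen and training only the SVM is the usual motivation for this kind of hybrid: the deep network supplies rich image representations while the SVM provides a robust, low-parameter decision boundary on limited medical data.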