Rezk Nermeen Gamal, Alshathri Samah, Sayed Amged, El-Din Hemdan Ezz, El-Behery Heba
Department of Computer Science and Engineering, Faculty of Engineering, Kafrelsheikh University, Kafr El-Sheikh 6860404, Egypt.
Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia.
Bioengineering (Basel). 2024 Oct 12;11(10):1016. doi: 10.3390/bioengineering11101016.
Ensemble Learning (EL) has been used for almost ten years to classify heart diseases, but it is still difficult to grasp how the "black boxes", or non-interpretable models, behave internally. Predicting heart disease is crucial to healthcare, since it allows for prompt diagnosis and treatment of the patient's true condition. Nonetheless, forecasting the disease accurately remains difficult. In this study, we have suggested a framework for heart disease prediction based on Explainable Artificial Intelligence (XAI)-based hybrid EL models, such as the LightBoost and XGBoost algorithms. The main goals are to build predictive models and apply SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) analysis to improve the interpretability of the models. We carefully construct our systems and test different hybrid ensemble learning algorithms to determine which model is best for heart disease prediction (HDP). The approach promotes interpretability and transparency when examining these widespread health issues. By combining hybrid EL models with XAI, the important factors and risk signals that underpin the co-occurrence of heart disease are made visible. The accuracy, precision, and recall of such models were used to evaluate their efficacy. This study highlights how crucial it is for healthcare models to be transparent and recommends the inclusion of XAI to improve interpretability and medical decision-making.
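As a rough illustration of the kind of pipeline the abstract describes (not the authors' actual code), the Python sketch below trains an XGBoost classifier on a generic tabular heart-disease dataset, reports accuracy, precision, and recall, and then applies SHAP for global and LIME for local explanations. The file name "heart.csv", the "target" column, and all hyperparameters are hypothetical placeholders, not values taken from the paper.

```python
# Illustrative sketch only: gradient-boosted ensemble plus SHAP/LIME explanations.
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score
from xgboost import XGBClassifier

# Hypothetical dataset: tabular clinical features plus a binary "target" label.
df = pd.read_csv("heart.csv")
X, y = df.drop(columns=["target"]), df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Gradient-boosted ensemble; a LightGBM classifier could be swapped in similarly.
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

# Evaluate with the metrics named in the abstract.
pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall:", recall_score(y_test, pred))

# Global interpretability: SHAP values for the tree ensemble.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, show=False)

# Local interpretability: LIME explanation for a single patient record.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X.columns),
    class_names=["no disease", "disease"],
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_test.iloc[0].values, model.predict_proba, num_features=5
)
print(lime_exp.as_list())
```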