The application of eXplainable artificial intelligence in studying cognition: A scoping review.

Author Information

Mahmood Shakran, Teo Colin, Sim Jeremy, Zhang Wei, Muyun Jiang, Bhuvana R, Teo Kejia, Yeo Tseng Tsai, Lu Jia, Gulyas Balazs, Guan Cuntai

Affiliations

Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore.

Centre for Neuroimaging Research, Nanyang Technological University, Singapore, Singapore.

Publication Information

Ibrain. 2024 Sep 5;10(3):245-265. doi: 10.1002/ibra.12174. eCollection 2024 Fall.

Abstract

The rapid advancement of artificial intelligence (AI) has sparked renewed discussions on its trustworthiness and the concept of eXplainable AI (XAI). Recent research in neuroscience has emphasized the relevance of XAI in studying cognition. This scoping review aims to identify and analyze the XAI methods used to study the mechanisms and features of cognitive function and dysfunction. The collected evidence is qualitatively assessed to develop an effective framework for approaching XAI in cognitive neuroscience. Following the Joanna Briggs Institute and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, we searched for peer-reviewed articles on MEDLINE, Embase, Web of Science, the Cochrane Central Register of Controlled Trials, and Google Scholar. Two reviewers performed data screening, extraction, and thematic analysis in parallel. Twelve eligible experimental studies published in the past decade were included. The majority (75%) focused on normal cognitive functions such as perception, social cognition, language, executive function, and memory, while the remainder (25%) examined impaired cognition. The predominant XAI methods were intrinsic (58.3%), followed by attribution-based (41.7%) and example-based (8.3%) post hoc methods. Explainability was applied at a local (66.7%) or global (33.3%) scope. The findings, predominantly correlational, were anatomical (83.3%) or nonanatomical (16.7%). In conclusion, while these XAI techniques were lauded for their predictive power, robustness, testability, and plausibility, their limitations included oversimplification, confounding factors, and inconsistencies. The reviewed studies showcase the potential of XAI models while highlighting current challenges around causality and oversimplification, particularly the need for reproducibility.
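The taxonomy the abstract uses (intrinsic models versus post hoc attribution-based methods, explained at a local or global scope) can be made concrete with a small sketch. The Python example below is purely illustrative and not drawn from the reviewed studies: the dataset, feature names, and model are hypothetical. It contrasts a globally interpretable intrinsic model (a logistic regression whose weights are read directly) with a simple local post hoc attribution (input-times-weight contributions for a single subject).

```python
# Illustrative sketch only: contrasting an "intrinsic" XAI model (a linear
# classifier whose weights are directly readable, a global explanation) with
# a simple "attribution-based post hoc" local explanation (input x weight
# contributions). Data, feature names, and model are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: 200 subjects x 3 neuroimaging-style features.
feature_names = ["thickness_PFC", "volume_hippocampus", "fa_white_matter"]
X = rng.normal(size=(200, 3))
# Ground-truth rule: only the first two features drive the label.
y = (1.2 * X[:, 0] - 0.8 * X[:, 1]
     + rng.normal(scale=0.3, size=200) > 0).astype(float)

# Train logistic regression by plain gradient descent (no external libraries).
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)          # log-loss gradient w.r.t. weights
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# Intrinsic, global explanation: the learned weights themselves.
print("Global (intrinsic) weights:")
for name, wi in zip(feature_names, w):
    print(f"  {name}: {wi:+.3f}")

# Post hoc, local attribution for one subject: input-times-weight
# contributions (for a linear model this coincides with gradient saliency).
x = X[0]
print("\nLocal attribution for subject 0 (input x weight):")
for name, c in zip(feature_names, w * x):
    print(f"  {name}: {c:+.3f}")
```

Running this prints weights close to the generating rule (positive for thickness_PFC, negative for volume_hippocampus, near zero for the nuisance feature) plus a per-subject breakdown, mirroring the global/local distinction the review tallies.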


Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cce2/11427810/a53e891c4629/IBRA-10-245-g005.jpg
