Institute for Data Science, Cloud Computing, and IT Security, Furtwangen University, Furtwangen, Germany.
Stud Health Technol Inform. 2024 Aug 22;316:565-569. doi: 10.3233/SHTI240477.
This paper establishes requirements for assessing the usability of Explainable Artificial Intelligence (XAI) methods, focusing on non-AI experts like healthcare professionals. Through a synthesis of literature and empirical findings, it emphasizes achieving optimal cognitive load, task performance, and task time in XAI explanations. Key components include tailoring explanations to user expertise, integrating domain knowledge, and using non-propositional representations for comprehension. The paper highlights the critical role of relevance, accuracy, and truthfulness in fostering user trust. Practical guidelines are provided for designing transparent and user-friendly XAI explanations, especially in high-stakes contexts like healthcare. Overall, the paper's primary contribution lies in delineating clear requirements for effective XAI explanations, facilitating human-AI collaboration across diverse domains.