University of Washington, Department of Biological Structure, Seattle, WA, USA; University of Washington, Graduate Program in Neuroscience, Seattle, WA, USA. Electronic address: https://twitter.com/NastaciaGoodwin.
University of Washington, Department of Biological Structure, Seattle, WA, USA. Electronic address: https://twitter.com/nilssonsro.
Curr Opin Neurobiol. 2022 Apr;73:102544. doi: 10.1016/j.conb.2022.102544. Epub 2022 Apr 26.
The use of rigorous ethological observation via machine learning techniques to understand brain function (computational neuroethology) is a rapidly growing approach that is poised to significantly change how behavioral neuroscience is commonly performed. With the development of open-source platforms for automated tracking and behavioral recognition, these approaches are now accessible to a wide array of neuroscientists, regardless of budget or computational experience. Importantly, this adoption has moved the field toward a common understanding of behavior and brain function through the removal of manual bias and the identification of previously unknown behavioral repertoires. Although less apparent, another consequence of this movement is the introduction of analytical tools that increase the explainability, transparency, and universality of machine-based behavioral classifications both within and between research groups. Here, we focus on three main applications of such machine-model explainability tools and metrics in the drive toward behavioral (i) standardization, (ii) specialization, and (iii) explainability. We provide a perspective on the use of explainability tools in computational neuroethology and detail why this is a necessary next step in the expansion of the field. Specifically, as a possible solution in behavioral neuroscience, we propose the use of Shapley values via Shapley Additive Explanations (SHAP) as a diagnostic resource toward the explainability of human annotation as well as of supervised and unsupervised behavioral machine learning analyses.
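To make the closing proposal concrete, the sketch below shows one way SHAP values could be computed for a supervised, frame-wise behavior classifier. It is a minimal, hypothetical illustration rather than the authors' pipeline: the feature names, the synthetic data, and the choice of a random-forest classifier are assumptions, though tree ensembles are common in supervised behavioral pipelines and pair naturally with SHAP's TreeExplainer, which computes exact Shapley values for tree models.

```python
# Minimal sketch (assumptions labeled below), not the paper's implementation.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical pose-derived, frame-wise features; names are placeholders.
feature_names = ["nose_to_nose_distance", "centroid_velocity", "body_angle"]
X = rng.normal(size=(500, len(feature_names)))
# Hypothetical binary frame label, e.g., "behavior present" vs. "absent".
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Train a frame-wise behavior classifier on the synthetic features.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)

# The return type differs across shap versions; select the positive class.
if isinstance(shap_values, list):        # older shap: list of per-class arrays
    vals = shap_values[1]
elif shap_values.ndim == 3:              # newer shap: (samples, features, classes)
    vals = shap_values[:, :, 1]
else:                                    # single-output case
    vals = shap_values

# Summary plot: which features drive the classifier's behavioral calls.
shap.summary_plot(vals, X, feature_names=feature_names)
```

The same summary plot can be generated for a classifier trained on real pose-estimation features, and per-frame SHAP values can be inspected to diagnose which features drive an individual behavioral classification, within or between research groups.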