Division of Applied Regulatory Science, Office of Clinical Pharmacology, Office of Translational Sciences, Center for Drug Evaluation and Research, U.S. Food and Drug Administration, WO Bldg 64 Rm 2084, 10903 New Hampshire Ave, Silver Spring, MD, 20993, USA.
Sci Rep. 2024 May 27;14(1):12082. doi: 10.1038/s41598-024-59378-9.
Deep learning neural networks are often described as black boxes because it is difficult to trace model outputs back to model inputs when the internal mechanisms are opaque. This is true even for neural networks designed to emulate mechanistic models: such networks simply learn a mapping between the inputs and outputs of the mechanistic model while ignoring the underlying processes. Using a mechanistic model of the pharmacological interaction between opioids and naloxone as a proof-of-concept example, we demonstrated that reorganizing a neural network's layers to mimic the structure of the mechanistic model achieves faster training and higher prediction accuracy than the previously proposed black-box neural networks, while maintaining the interpretability of the mechanistic simulations. Our framework can be used to emulate mechanistic models across a large parameter space and offers an example of the utility of increasing the interpretability of deep learning networks.
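To make the architectural idea concrete, here is a minimal sketch (not the authors' code) of a mechanism-mimicking emulator in PyTorch: instead of one black-box network mapping all simulator parameters straight to the output, separate submodules mirror the stages of the mechanistic model, so intermediate activations retain a mechanistic meaning. All module names, layer sizes, and the split of inputs into opioid and naloxone parameters are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MechanismMimickingNet(nn.Module):
    """Emulator whose layers mirror the stages of a mechanistic
    opioid-naloxone model (illustrative structure only)."""

    def __init__(self, n_opioid_params=4, n_naloxone_params=3, hidden=32):
        super().__init__()
        # Stage 1: separate "pharmacokinetic" branches, one per drug,
        # emulating each drug's concentration behavior.
        self.opioid_pk = nn.Sequential(
            nn.Linear(n_opioid_params, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.naloxone_pk = nn.Sequential(
            nn.Linear(n_naloxone_params, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        # Stage 2: an "interaction" block merging the two branches,
        # analogous to competitive receptor binding in the
        # mechanistic model.
        self.interaction = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.Tanh(),
        )
        # Stage 3: map the receptor-level representation to the
        # simulated endpoint.
        self.effect = nn.Linear(hidden, 1)

    def forward(self, opioid_params, naloxone_params):
        o = self.opioid_pk(opioid_params)
        n = self.naloxone_pk(naloxone_params)
        x = self.interaction(torch.cat([o, n], dim=-1))
        return self.effect(x)

# Trained against input/output pairs generated by the mechanistic
# simulator, exactly as a black-box emulator would be.
net = MechanismMimickingNet()
out = net(torch.randn(8, 4), torch.randn(8, 3))
print(out.shape)  # torch.Size([8, 1])
```

Because each submodule corresponds to a stage of the mechanistic model, its intermediate outputs can be inspected and compared against the simulator's internal quantities, which is the interpretability benefit the abstract describes.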