R&D Center in Electronic and Information Technology, Federal University of Amazonas, Manaus 69077-000, Brazil.
Sensors (Basel). 2023 Apr 30;23(9):4409. doi: 10.3390/s23094409.
Human Activity Recognition (HAR) is a complex problem in deep learning, and One-Dimensional Convolutional Neural Networks (1D CNNs) have emerged as a popular approach for addressing it. These networks efficiently learn features from data that can be utilized to classify human activities with high performance. However, understanding and explaining the features learned by these networks remains a challenge. This paper presents a novel eXplainable Artificial Intelligence (XAI) method for generating visual explanations of the features learned by one-dimensional CNNs during training, utilizing t-Distributed Stochastic Neighbor Embedding (t-SNE). By applying this method, we provide insights into the decision-making process by visualizing the information obtained from the model's deepest layer before classification. Our results demonstrate that the features learned from one dataset can be applied to differentiate human activities in other datasets. Our trained networks achieved high performance on two public databases, with 0.98 accuracy on the SHO dataset and 0.93 accuracy on the HAPT dataset. The visualization method proposed in this work offers a powerful means to detect bias issues or explain incorrect predictions. This work introduces a new type of XAI application, enhancing the reliability and practicality of CNN models in real-world scenarios.
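The core idea described above, projecting the activations of the layer just before the classifier into two dimensions with t-SNE, can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: the feature matrix here is synthetic stand-in data, and the shapes (300 windows, 64 features, 6 activity classes) are assumptions for the sake of the example.

```python
# Hypothetical sketch: visualizing penultimate-layer features of a 1D CNN
# with t-SNE. The data below is synthetic; in the paper's setting, `features`
# would hold activations extracted from the model's deepest layer before
# classification, one row per input window.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
labels = rng.integers(0, 6, size=300)                    # 6 activity classes (assumed)
# Toy features with mild class-dependent separation, mimicking learned embeddings.
features = rng.normal(size=(300, 64)) + labels[:, None]

# Project to 2-D for visual inspection; perplexity must be smaller than
# the number of samples.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
print(embedding.shape)  # (300, 2)
```

Plotting `embedding` colored by `labels` (e.g., with matplotlib's `scatter`) then reveals whether the learned features cluster by activity, which is the kind of visual explanation the abstract refers to.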