
LEDPatNet19: Automated Emotion Recognition Model based on Nonlinear LED Pattern Feature Extraction Function using EEG Signals.

Author information

Tuncer Turker, Dogan Sengul, Subasi Abdulhamit

Affiliations

Department of Digital Forensics Engineering, Technology Faculty, Firat University, Elazig, Turkey.

Institute of Biomedicine, Faculty of Medicine, University of Turku, 20520 Turku, Finland.

Publication information

Cogn Neurodyn. 2022 Aug;16(4):779-790. doi: 10.1007/s11571-021-09748-0. Epub 2021 Nov 25.

Abstract

Electroencephalography (EEG) signals collected from human brains have generally been used to diagnose diseases. Moreover, EEG signals can be used in several areas, such as emotion recognition and driving-fatigue detection. This work presents a new emotion recognition model that uses EEG signals. The primary aim of this model is to provide a highly accurate emotion recognition framework that combines hand-crafted feature generation with a deep classifier. The presented framework uses a multilevel fused feature generation network with three primary phases: tunable Q-factor wavelet transform (TQWT), statistical feature generation, and nonlinear textural feature generation. TQWT is applied to the EEG data to decompose the signals into sub-bands and create a multilevel feature generation network. In the nonlinear feature generation phase, the S-box of the LED block cipher is used to create a pattern, named the LED pattern. Moreover, statistical features are extracted using widely used statistical moments. The proposed LED pattern and statistical feature extraction functions are applied to 18 TQWT sub-bands and the original EEG signal. Therefore, the proposed hand-crafted learning model is named LEDPatNet19. To select the most informative features, a ReliefF and iterative Chi2 (RFIChi2) feature selector is deployed. The proposed model was developed on two EEG emotion datasets, GAMEEMO and DREAMER. Our proposed hand-crafted learning network achieved 94.58%, 92.86%, and 94.44% classification accuracies for the arousal, dominance, and valence cases of the DREAMER dataset. Furthermore, the best classification accuracy of the proposed model on the GAMEEMO dataset is 99.29%. These results clearly illustrate the success of the proposed LEDPatNet19.
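The following is a minimal, self-contained sketch (Python with NumPy/SciPy) of the kind of pipeline the abstract describes: decompose the signal, compute statistical moments and an S-box-based textural histogram for each band, and concatenate everything into one feature vector. It is illustrative only and not the authors' implementation: `tqwt_decompose` is a hypothetical placeholder for a real TQWT routine, `LED_SBOX` is a dummy 4-bit table rather than the actual LED cipher S-box, and the difference-quantization coding is a simplified stand-in for the paper's LED pattern; the RFIChi2 feature selection and the classifier are omitted.

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Placeholder 4-bit substitution table (identity); the real LED S-box differs.
LED_SBOX = np.arange(16)

def statistical_features(x: np.ndarray) -> np.ndarray:
    """Widely used statistical moments of a 1-D signal segment."""
    return np.array([x.mean(), x.std(), skew(x), kurtosis(x),
                     np.median(x), x.min(), x.max()])

def led_pattern_features(x: np.ndarray) -> np.ndarray:
    """Toy textural descriptor: quantize consecutive sample differences to
    4 bits, map them through the S-box, and histogram the resulting codes."""
    d = np.diff(x)
    edges = np.quantile(d, np.linspace(0.0, 1.0, 17)[1:-1])  # 15 inner edges -> 16 bins
    codes = LED_SBOX[np.clip(np.digitize(d, edges), 0, 15)]
    hist, _ = np.histogram(codes, bins=16, range=(0, 16), density=True)
    return hist

def tqwt_decompose(x: np.ndarray, levels: int = 18) -> list:
    """Hypothetical placeholder for the tunable Q-factor wavelet transform;
    it simply repeats the input so the sketch runs end to end."""
    return [x for _ in range(levels)]

def extract_features(eeg_segment: np.ndarray) -> np.ndarray:
    """Original signal + 18 (placeholder) sub-bands, each described by
    statistical moments and the toy LED-pattern histogram."""
    bands = [eeg_segment] + tqwt_decompose(eeg_segment, levels=18)
    feats = [np.concatenate([statistical_features(b), led_pattern_features(b)])
             for b in bands]
    return np.concatenate(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    segment = rng.standard_normal(1024)      # stand-in for one EEG channel segment
    print(extract_features(segment).shape)   # (19 * (7 + 16),) = (437,)
```

In the actual model, the 18 sub-bands would come from a genuine TQWT decomposition, and the concatenated feature vector would then pass through ReliefF and iterative Chi2 selection before classification.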


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/efb1/9279545/22321ba2108c/11571_2021_9748_Fig1_HTML.jpg
