
Investigating the Use of Pretrained Convolutional Neural Network on Cross-Subject and Cross-Dataset EEG Emotion Recognition



Abstract

The electroencephalogram (EEG) is highly attractive in emotion recognition studies because of its resistance to deceptive human actions. This is one of the most significant advantages of brain signals over visual or speech signals in the emotion recognition context. A major challenge in EEG-based emotion recognition is that EEG recordings exhibit varying distributions for different people, as well as for the same person at different time instances. This nonstationary nature of EEG limits its accuracy when subject independence is the priority. The aim of this study is to increase subject-independent recognition accuracy by exploiting pretrained state-of-the-art Convolutional Neural Network (CNN) architectures. Unlike similar studies that extract spectral band power features from the EEG readings, our study uses raw EEG data after applying windowing, pre-adjustments, and normalization. Removing manual feature extraction from the training system avoids the risk of discarding hidden features in the raw data and helps leverage the deep neural network's power to uncover unknown features. To improve classification accuracy further, a median filter is used to eliminate false detections along a prediction interval of emotions. This method yields a mean cross-subject accuracy of 86.56% and 78.34% on the Shanghai Jiao Tong University Emotion EEG Dataset (SEED) for two and three emotion classes, respectively. It also yields a mean cross-subject accuracy of 72.81% on the Database for Emotion Analysis using Physiological Signals (DEAP) and 81.8% on the Loughborough University Multimodal Emotion Dataset (LUMED) for two emotion classes. Furthermore, the recognition model trained on the SEED dataset was tested with the DEAP dataset, yielding a mean prediction accuracy of 58.1% across all subjects and emotion classes. The results show that, in terms of classification accuracy, the proposed approach is superior to, or on par with, the reference subject-independent EEG emotion recognition studies identified in the literature, and has limited complexity due to the elimination of the need for feature extraction.
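The abstract describes the preprocessing pipeline (windowing, pre-adjustments, normalization) only at a high level. The sketch below illustrates one plausible way to slice a raw multi-channel EEG recording into normalized windows suitable as CNN input; the window length, overlap, and per-channel z-scoring are illustrative assumptions, not the authors' published parameters.

```python
import numpy as np

def window_and_normalize(eeg, fs, win_sec=1.0, step_sec=0.5):
    """Slice a raw EEG recording (channels x samples) into overlapping
    windows and z-score each window per channel.

    Hypothetical helper: the window length, step, and normalization
    scheme are assumptions for illustration only.
    """
    win, step = int(win_sec * fs), int(step_sec * fs)
    windows = []
    for start in range(0, eeg.shape[1] - win + 1, step):
        seg = eeg[:, start:start + win].astype(np.float32)
        mu = seg.mean(axis=1, keepdims=True)
        sd = seg.std(axis=1, keepdims=True) + 1e-8  # guard against flat channels
        windows.append((seg - mu) / sd)
    return np.stack(windows)  # shape: (n_windows, n_channels, win)

# Example: a 62-channel recording at 200 Hz, 10 s of synthetic data.
x = np.random.randn(62, 2000)
print(window_and_normalize(x, fs=200).shape)  # (19, 62, 200)
```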
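The median-filter post-processing of per-window predictions is likewise described without implementation details. A minimal sketch of that idea, assuming integer class labels per window and an arbitrarily chosen odd kernel size:

```python
import numpy as np
from scipy.signal import medfilt

def smooth_predictions(labels, kernel_size=5):
    """Median-filter a sequence of per-window emotion predictions to
    suppress isolated false detections. The kernel size is an
    illustrative assumption and must be odd; note that scipy's medfilt
    zero-pads at the sequence boundaries."""
    return medfilt(np.asarray(labels, dtype=float), kernel_size).astype(int)

# A single spurious label inside a stable emotion interval is removed:
preds = [1, 1, 1, 0, 1, 1, 1, 1, 2, 2, 2, 2]
print(smooth_predictions(preds))  # [1 1 1 1 1 1 1 1 2 2 2 2]
```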
