Future Generation Computer Systems

Deep Learning for EEG motor imagery classification based on multi-layer CNNs feature fusion



Abstract

Electroencephalography (EEG) motor imagery (MI) signals have recently gained a lot of attention, as these signals encode a person's intent to perform an action. Researchers have used MI signals to help disabled persons control devices such as wheelchairs, and even for autonomous driving. Accurately decoding these signals is therefore important for a Brain-Computer Interface (BCI) system. However, EEG decoding is a challenging task because of its complexity, dynamic nature, and low signal-to-noise ratio. Convolutional neural networks (CNNs) have been shown to extract spatial and temporal features from EEG, but improved CNN models are needed to learn the dynamic correlations present in MI signals. CNNs can extract good features with both shallow and deep models, indicating that relevant features can be extracted at different levels. Fusion of multiple CNN models has not previously been explored for EEG data. In this work, we propose a multi-layer CNN method that fuses CNNs with different characteristics and architectures to improve EEG MI classification accuracy. Our method utilizes different convolutional features to capture spatial and temporal features from raw EEG data. We demonstrate that our novel MCNN and CCNN fusion methods outperform state-of-the-art machine learning and deep learning techniques for EEG classification. We performed various experiments on public datasets to evaluate the performance of the proposed CNN fusion methods. The proposed MCNN method achieves 75.7% and 95.4% accuracy on the BCI Competition IV-2a dataset and the High Gamma Dataset, respectively. The proposed CCNN method, based on autoencoder cross-encoding, achieves more than a 10% improvement for cross-subject EEG classification. (C) 2019 Elsevier B.V. All rights reserved.
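The fusion idea described in the abstract can be illustrated with a minimal sketch: CNN branches of different depths process the same raw EEG trial, and their flattened feature maps are concatenated before a shared classifier. The PyTorch module below is an illustrative assumption of that multi-branch pattern; the branch depths, kernel sizes, and the ShallowBranch/DeepBranch/FusionCNN names are hypothetical and not the authors' exact MCNN or CCNN architectures.

# Minimal multi-branch CNN feature-fusion sketch for EEG MI classification.
# Two CNN branches with different depths share the same raw EEG input; their
# features are concatenated and fed to one classifier. Sizes are illustrative.
import torch
import torch.nn as nn


class ShallowBranch(nn.Module):
    """Shallow branch: one temporal + one spatial convolution over raw EEG."""
    def __init__(self, n_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25)),           # temporal filtering
            nn.Conv2d(16, 16, kernel_size=(n_channels, 1)),  # spatial filtering
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 15), stride=(1, 15)),
        )

    def forward(self, x):
        return torch.flatten(self.net(x), start_dim=1)


class DeepBranch(nn.Module):
    """Deeper branch: stacked convolution blocks for higher-level temporal features."""
    def __init__(self, n_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 11)),
            nn.Conv2d(16, 16, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(16), nn.ELU(),
            nn.MaxPool2d(kernel_size=(1, 3)),
            nn.Conv2d(16, 32, kernel_size=(1, 11)),
            nn.BatchNorm2d(32), nn.ELU(),
            nn.MaxPool2d(kernel_size=(1, 3)),
        )

    def forward(self, x):
        return torch.flatten(self.net(x), start_dim=1)


class FusionCNN(nn.Module):
    """Concatenate branch features and classify (4 MI classes, as in BCI IV-2a)."""
    def __init__(self, n_channels: int = 22, n_samples: int = 1000, n_classes: int = 4):
        super().__init__()
        self.shallow = ShallowBranch(n_channels)
        self.deep = DeepBranch(n_channels)
        with torch.no_grad():  # infer the fused feature size with a dummy trial
            dummy = torch.zeros(1, 1, n_channels, n_samples)
            fused_dim = self.shallow(dummy).shape[1] + self.deep(dummy).shape[1]
        self.classifier = nn.Sequential(nn.Dropout(0.5), nn.Linear(fused_dim, n_classes))

    def forward(self, x):  # x: (batch, 1, channels, time)
        fused = torch.cat([self.shallow(x), self.deep(x)], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = FusionCNN()
    eeg = torch.randn(8, 1, 22, 1000)  # 8 trials, 22 channels, 1000 time samples
    print(model(eeg).shape)            # torch.Size([8, 4])

In this sketch the fused feature dimension is inferred with a dummy forward pass, so branch architectures can be swapped without recomputing feature sizes by hand; the paper's actual fusion strategy and the CCNN autoencoder cross-encoding are not reproduced here.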
