IEEE Journal of Biomedical and Health Informatics

Attention-Based Parallel Multiscale Convolutional Neural Network for Visual Evoked Potentials EEG Classification


Abstract

Electroencephalography (EEG) decoding is an important part of Visual Evoked Potentials-based Brain-Computer Interfaces (BCIs), and it directly determines BCI performance. However, prolonged attention to repetitive visual stimuli can cause physical and psychological fatigue, resulting in weaker reliable responses and stronger noise interference, which exacerbates the difficulty of Visual Evoked Potentials EEG decoding. In this state, subjects cannot concentrate sufficiently, and the frequency response of their brains becomes less reliable. To solve these problems, we propose an attention-based parallel multiscale convolutional neural network (AMS-CNN). Specifically, the AMS-CNN first extracts robust temporal representations via two parallel convolutional layers with small and large temporal filters, respectively. Then, we employ two sequential convolution blocks for spatial fusion and temporal fusion to extract advanced feature representations. Further, we use an attention mechanism to weight the features at different time steps according to their relevance to the output. Finally, we employ a fully connected layer with a softmax activation function for classification. Two fatigue datasets collected in our lab are used to validate the superior classification performance of the proposed method relative to state-of-the-art methods. Analysis reveals the competitiveness of the multiscale convolution and the attention mechanism. These results suggest that the proposed framework is a promising solution for improving the decoding performance of Visual Evoked Potential BCIs.
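The pipeline described in the abstract — parallel temporal convolutions at two scales, attention weighting over time steps, then a softmax classifier — can be sketched in minimal NumPy form. This is an illustrative toy, not the authors' implementation: the signal length, filter sizes, feature dimensions, class count, and all parameters (`k_small`, `k_large`, `w_att`, `W_out`) are hypothetical placeholders, and the spatial/temporal fusion blocks are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernel):
    """Same-length 1-D convolution along the time axis."""
    return np.convolve(x, kernel, mode="same")

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# --- parallel multiscale temporal convolution (hypothetical sizes) ---
T = 64                              # time samples in one EEG epoch
x = rng.standard_normal(T)          # toy single-channel EEG signal
k_small = rng.standard_normal(5)    # small temporal filter: fine scale
k_large = rng.standard_normal(25)   # large temporal filter: coarse scale
branch_a = conv1d(x, k_small)
branch_b = conv1d(x, k_large)
feats = np.stack([branch_a, branch_b], axis=1)   # (T, 2) fused features

# --- attention over time steps ---
w_att = rng.standard_normal(2)      # hypothetical scoring vector
alpha = softmax(feats @ w_att)      # one weight per time step, sums to 1
pooled = alpha @ feats              # attention-weighted temporal summary

# --- fully connected softmax head ---
W_out = rng.standard_normal((2, 3)) # 3 hypothetical classes
probs = softmax(pooled @ W_out)
pred = int(np.argmax(probs))
```

The attention step is the key idea: instead of averaging all time steps equally, each step's feature vector is scored, the scores are normalized with softmax, and the weighted sum emphasizes the moments most relevant to the decision.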
