Journal: Sensors

Validating Deep Neural Networks for Online Decoding of Motor Imagery Movements from EEG Signals



Abstract

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject's motor intention into control signals by classifying the EEG patterns evoked by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. Extracting such features is difficult because of the high non-stationarity of EEG signals, and this difficulty is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN). Results were evaluated on our own publicly available EEG data collected from 20 subjects and on the existing 2b EEG dataset from "BCI Competition IV". Overall, the deep learning models achieved better classification performance than state-of-the-art machine learning techniques, which could chart a route ahead for developing new, robust techniques for EEG signal decoding. We underpin this point by demonstrating successful real-time control of a robotic arm using our CNN-based BCI.
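To illustrate the end-to-end approach the abstract describes, the sketch below shows a minimal LSTM classifier in PyTorch that maps a window of raw multi-channel EEG samples directly to a motor-imagery class, with no hand-crafted features. This is not the authors' published architecture; the channel count, window length, and layer sizes are illustrative assumptions (three channels at 250 Hz roughly match the BCI Competition IV 2b recordings).

```python
# Illustrative sketch only: an LSTM that classifies a raw EEG window into a
# binary motor-imagery label (e.g., left vs. right hand). All sizes here are
# assumptions for demonstration, not the paper's reported configuration.
import torch
import torch.nn as nn

class EEGLSTMClassifier(nn.Module):
    def __init__(self, n_channels=3, hidden_size=64, n_classes=2):
        super().__init__()
        # The LSTM reads the EEG one time step at a time; each step is a
        # vector of per-channel voltage samples.
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden_size,
                            batch_first=True)
        self.classifier = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time_steps, n_channels) raw EEG window
        _, (h_n, _) = self.lstm(x)       # h_n: (1, batch, hidden_size)
        return self.classifier(h_n[-1])  # class logits, shape (batch, n_classes)

if __name__ == "__main__":
    # Dummy batch: 8 trials, a 4 s window at 250 Hz = 1000 samples, 3 channels
    model = EEGLSTMClassifier()
    x = torch.randn(8, 1000, 3)
    print(model(x).shape)  # torch.Size([8, 2])
```

The spectrogram-based CNN variant mentioned in the abstract would differ mainly in its input representation: each EEG window would first be converted to a time-frequency image (e.g., via a short-time Fourier transform) and then fed to convolutional layers instead of an LSTM.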

