Journal of neural engineering

Auditory attention tracking states in a cocktail party environment can be decoded by deep convolutional neural networks



Abstract

Objective. A deep convolutional neural network (CNN) is a deep learning (DL) method. It has a powerful ability to automatically extract features and is widely used in classification tasks with scalp electroencephalogram (EEG) signals. However, the small number of samples and the low signal-to-noise ratio of scalp EEG, together with its low spatial resolution, constitute a limitation that might restrict potential brain-computer interface (BCI) applications based on the CNN model. In the present study, a novel CNN model with source-spatial feature images (SSFIs) as the input is proposed to decode auditory attention tracking states in a cocktail party environment.

Approach. We first extract SSFIs using rhythm entropy and weighted minimum norm estimation. Next, we develop a CNN model with three convolutional layers. Furthermore, we evaluate the proposed model via its generalized performance, via alternative models in which components are deleted or replaced, and via loss curves. Finally, we apply a deep transfer model with fine-tuning to the low (poor) behavioral performance group (L-group).

Main results. Based on cortical activity reconstructed from the scalp EEG, the classification accuracy (CA) of the proposed model is 80.4% (chance level: 52.5%), which is superior to that achieved with scalp EEG alone. Additionally, the performance of the proposed model is more stable than that of alternative models in which specific components are deleted or replaced. The proposed model identifies the difference between the two auditory attention tracking states (successful versus unsuccessful) at an early stage, within a short time window (250 ms after target offset). Furthermore, we propose a deep transfer learning model to improve classification for the L-group; with this model, the CA of the L-group increases significantly, by 5.3%.

Significance. Our proposed model improves the performance of a decoder for auditory attention tracking, which could help relieve the difficulty of attentional modulation of an individual's neural responses. It provides a novel communication channel with an auditory cognitive BCI for patients with attention and hearing impairments.
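The abstract names rhythm entropy as one of the two operations used to build the SSFIs but does not define it. As a minimal sketch of one plausible reading, the snippet below computes the Shannon entropy of the normalized power distribution across the classical EEG rhythm bands; the band edges, the naive DFT, and the function names `band_powers` and `rhythm_entropy` are all assumptions for illustration, not the paper's actual procedure.

```python
import math

def band_powers(signal, fs=250):
    """Crude band-power estimates via a direct DFT (O(n^2); demo only)."""
    n = len(signal)
    spec = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spec.append((re * re + im * im) / n)
    freqs = [k * fs / n for k in range(n // 2)]
    # classical EEG rhythm bands in Hz: delta, theta, alpha, beta, gamma
    bands = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 45)]
    return [sum(p for f, p in zip(freqs, spec) if lo <= f < hi)
            for lo, hi in bands]

def rhythm_entropy(signal, fs=250):
    """Shannon entropy (bits) of the normalized rhythm-band power distribution."""
    powers = band_powers(signal, fs)
    total = sum(powers)
    probs = [p / total for p in powers if p > 0]
    return -sum(p * math.log2(p) for p in probs)
```

Under this reading, a signal whose power is concentrated in a single rhythm (e.g. a pure 10 Hz alpha oscillation) yields an entropy near 0, while power spread evenly over all five bands approaches log2(5) ≈ 2.32 bits.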
