Journal: IEEE Transactions on Neural Networks and Learning Systems

Neural Encoding and Decoding With Distributed Sentence Representations


Abstract

Building computational models that account for the cortical representation of language plays an important role in understanding the human linguistic system. Recent progress in distributed semantic models (DSMs), especially transformer-based methods, has driven advances in many language understanding tasks, making DSMs a promising methodology for probing language processing in the brain. DSMs have been shown to reliably explain cortical responses to word stimuli. However, brain activity during sentence processing has been far less thoroughly characterized with DSMs, especially deep neural network-based ones. What is the relationship between cortical sentence representations and DSMs? Which linguistic features captured by a DSM best explain its correlation with the brain activity evoked by sentence stimuli? Could distributed sentence representations help reveal the semantic selectivity of different brain areas? We address these questions through the lens of neural encoding and decoding, fueled by the latest developments in natural language representation learning. We begin by evaluating the ability of 12 diverse DSMs to predict and decode functional magnetic resonance imaging (fMRI) images from humans reading sentences. Most models deliver high accuracy in the left middle temporal gyrus (LMTG) and left occipital complex (LOC). Notably, encoders trained with transformer-based DSMs consistently outperform the other unsupervised structured models and all unstructured baselines. Through probing and ablation tasks, we further find that differences in how well the DSMs model brain activity can be at least partially explained by the granularity of their semantic representations. We also illustrate the DSMs' selectivity for concept categories and show that topics are represented by spatially overlapping, distributed cortical patterns. Our results corroborate and extend previous findings on the relation between DSMs and neural activation patterns and contribute to building robust brain-machine interfaces with deep neural network representations.
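To make the encoding setup described above concrete, here is a minimal sketch, assuming a standard linear encoding analysis rather than the authors' actual pipeline: sentence embeddings (e.g., produced by a transformer-based DSM) are mapped to fMRI voxel responses with ridge regression, and held-out prediction accuracy is scored per voxel. The array shapes, the random placeholder data, and the helper pearson_per_voxel are illustrative assumptions only.

```python
# Minimal sentence-level neural encoding sketch (illustrative assumptions,
# not the paper's pipeline): ridge regression from sentence embeddings to
# fMRI voxel responses, scored by per-voxel correlation on held-out data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Assumed data: one embedding per sentence stimulus and the matching voxel
# responses; real embeddings would come from a DSM, real Y from fMRI.
n_sentences, emb_dim, n_voxels = 240, 768, 5000
X = rng.standard_normal((n_sentences, emb_dim))   # sentence embeddings
Y = rng.standard_normal((n_sentences, n_voxels))  # voxel responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

encoder = Ridge(alpha=100.0)  # one regularized linear map, all voxels jointly
encoder.fit(X_tr, Y_tr)
Y_hat = encoder.predict(X_te)

def pearson_per_voxel(a, b):
    """Pearson correlation between predicted and observed responses, per voxel."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / (np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0))

r = pearson_per_voxel(Y_hat, Y_te)
print(f"mean encoding accuracy across voxels: {r.mean():.3f}")
```

In this kind of analysis, voxels whose held-out correlation is reliably above chance (here it will hover near zero because the data are random) are the ones reported as well predicted by a given DSM; comparing these maps across models is what supports the regional and model-family comparisons summarized in the abstract.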