Conference on Empirical Methods in Natural Language Processing

Extracting Syntactic Trees from Transformer Encoder Self-Attentions



Abstract

Interpreting neural networks is a popular topic, and many works focus on analyzing networks with respect to learning syntax (Shi et al., 2016; Linzen et al., 2016; Blevins et al., 2018). In particular, Vaswani et al. (2017) showed that the self-attentions in their Transformer architecture may be directly interpreted as syntactic dependencies between tokens. However, a potential problem lies in the fact that the attention mechanism in deeper layers operates on previous-layer neurons, which already comprise mixed information from multiple source tokens.
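To make the idea concrete, below is a minimal, illustrative sketch of one way self-attention weights can be turned into a tree over tokens: treat a single head's attention matrix as a weighted graph and extract a maximum spanning tree. This is an assumption for demonstration only; the attention matrix is synthetic, and spanning-tree extraction stands in for whatever tree-building procedure the paper actually uses.

```python
# Illustrative sketch (assumptions: synthetic attention matrix; maximum spanning
# tree used as one plausible attention-to-tree extraction, not the paper's method).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

tokens = ["The", "cat", "sat", "on", "the", "mat"]
n = len(tokens)

# Hypothetical self-attention weights A[i, j]: how much token i attends to token j.
rng = np.random.default_rng(0)
A = rng.random((n, n))
A = A / A.sum(axis=1, keepdims=True)  # each row sums to 1, like a softmax output

# Symmetrize so the graph is undirected, and drop self-attention (no self-loops).
sym = A + A.T
np.fill_diagonal(sym, 0.0)

# scipy computes a *minimum* spanning tree, so negate the weights to obtain the
# tree that maximizes total attention weight between connected tokens.
mst = minimum_spanning_tree(-sym).toarray()

# Report the extracted (undirected) tree edges as candidate syntactic links.
for i, j in zip(*np.nonzero(mst)):
    print(f"{tokens[i]} -- {tokens[j]}  (attention {sym[i, j]:.2f})")
```

Note that on deeper layers the same extraction would operate on attentions computed over already-mixed representations, which is exactly the caveat the abstract raises.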


