1st EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 2018

Extracting Syntactic Trees from Transformer Encoder Self-Attentions

Abstract

Interpreting neural networks is a popular topic, and there are many works focusing on analyzing networks with respect to learning syntax (Shi et al., 2016; Linzen et al., 2016; Blevins et al., 2018). In particular, Vaswani et al. (2017) showed that the self-attentions in their Transformer architecture may be directly interpreted as syntactic dependencies between tokens. However, there is a potential problem in the fact that the attention mechanism on deeper layers operates on the previous-layer neurons, which already comprise mixed information from multiple source tokens.
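To make the underlying idea concrete, the following is a minimal, hypothetical sketch, not the extraction procedure proposed in the paper: it treats one self-attention matrix as a table of token-to-token dependency scores and extracts a tree over the tokens with a maximum spanning tree. The function name attention_to_tree, the symmetrization of the attention weights, and the Prim-style spanning-tree construction are all illustrative assumptions, not details taken from the paper.

# Hypothetical sketch: attention weights as dependency scores + maximum spanning tree.
# This is NOT the extraction procedure from the paper, only an illustration of the idea.
import numpy as np

def attention_to_tree(attn):
    """Given an (n, n) self-attention matrix (rows = attending tokens,
    columns = attended-to tokens), return the edges of an undirected
    maximum spanning tree over the n tokens, using symmetrized
    attention weights as edge scores."""
    n = attn.shape[0]
    scores = (attn + attn.T) / 2.0      # make edge scores direction-independent
    in_tree = {0}                       # grow the tree from token 0 (Prim-style)
    edges = []
    while len(in_tree) < n:
        best, best_score = None, -np.inf
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and scores[i, j] > best_score:
                    best, best_score = (i, j), scores[i, j]
        edges.append(best)
        in_tree.add(best[1])
    return edges

# Toy usage: 4 tokens, each row sums to 1 as softmax attention would.
toy_attn = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.60, 0.20, 0.10, 0.10],
    [0.10, 0.10, 0.20, 0.60],
    [0.10, 0.10, 0.70, 0.10],
])
print(attention_to_tree(toy_attn))      # three edges forming a tree over the 4 tokens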
