1st EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 2018

Unsupervised Token-wise Alignment to Improve Interpretation of Encoder-Decoder Models



Abstract

Developing a method for understanding the inner workings of black-box neural methods is an important research endeavor. Conventionally, many studies have used an attention matrix to interpret how Encoder-Decoder-based models translate a given source sentence to the corresponding target sentence. However, recent studies have empirically revealed that an attention matrix is not optimal for token-wise translation analyses. We propose a method that explicitly models the token-wise alignment between the source and target sequences to provide a better analysis. Experiments show that our method can acquire token-wise alignments that are superior to those of an attention mechanism.
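The abstract contrasts the proposed explicit alignment model with the conventional practice of reading token-wise alignments off an attention matrix. As a minimal sketch of that conventional attention-based analysis (the approach the authors argue is suboptimal), the snippet below derives a hard alignment by taking, for each target token, the source token with the highest attention weight. The function name and toy data are illustrative and not from the paper:

```python
import numpy as np

def alignments_from_attention(attn, src_tokens, tgt_tokens):
    """Derive a hard token-wise alignment from a soft attention matrix.

    attn: array of shape (len(tgt_tokens), len(src_tokens)); each row holds
    the attention weights one target token places over the source tokens.
    Returns (tgt_token, src_token) pairs, aligning each target token to the
    source token it attends to most strongly.
    """
    src_ids = attn.argmax(axis=1)  # index of the max-weight source token per row
    return [(tgt, src_tokens[j]) for tgt, j in zip(tgt_tokens, src_ids)]

# Toy 3-target x 3-source attention matrix (rows sum to 1).
attn = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.1, 0.7],
])
pairs = alignments_from_attention(attn, ["le", "chat", "dort"],
                                  ["the", "cat", "sleeps"])
print(pairs)  # [('the', 'le'), ('cat', 'chat'), ('sleeps', 'dort')]
```

Because attention weights are trained for translation quality rather than for alignment, such argmax alignments can be diffuse or wrong, which is the gap the paper's explicit token-wise alignment model aims to close.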
