1st EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 2018

Analysing the potential of seq-to-seq models for incremental interpretation in task-oriented dialogue

Abstract

We investigate how encoder-decoder models trained on a synthetic dataset of task-oriented dialogues process disfluencies, such as hesitations and self-corrections. We find that, contrary to earlier results, disfluencies have very little impact on the task success of seq-to-seq models with attention. Using visualisations and diagnostic classifiers, we analyse the representations that are incrementally built by the model, and discover that models develop little to no awareness of the structure of disfluencies. However, adding disfluencies to the data appears to help the model create clearer representations overall, as evidenced by the attention patterns the different models exhibit.
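The diagnostic classifiers mentioned above are linear probes trained on a model's hidden states to test whether some property is encoded in them. The paper's actual models, data, and probed properties are not given on this page, so the following is only a minimal PyTorch sketch of the general technique under assumed names and sizes: a frozen stand-in LSTM encoder, random stand-in token/label data, and a per-token binary property such as "inside a disfluency".

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative sizes; not from the paper.
VOCAB, HIDDEN, SEQ, BATCH = 100, 64, 12, 32

# Stand-in encoder (the paper studies seq-to-seq models with attention;
# any trained encoder whose states we want to probe would go here).
embedding = nn.Embedding(VOCAB, HIDDEN)
encoder = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)

# Diagnostic classifier: a simple linear probe from hidden state to label.
probe = nn.Linear(HIDDEN, 2)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random stand-in data: token ids and per-token "inside a disfluency" labels.
tokens = torch.randint(0, VOCAB, (BATCH, SEQ))
labels = torch.randint(0, 2, (BATCH, SEQ))

# Freeze the encoder: only the probe is trained, so probe accuracy
# measures what the representations already contain.
with torch.no_grad():
    states, _ = encoder(embedding(tokens))  # (BATCH, SEQ, HIDDEN)

for step in range(200):
    logits = probe(states)                  # (BATCH, SEQ, 2)
    loss = loss_fn(logits.reshape(-1, 2), labels.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

accuracy = (probe(states).argmax(dim=-1) == labels).float().mean().item()
print(f"probe accuracy: {accuracy:.2f}")
```

If such a probe cannot beat chance on held-out data, the property is taken not to be linearly decodable from the hidden states, which is the style of evidence behind the abstract's claim that the models develop little to no awareness of disfluency structure.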
