Workshop on NLP for Conversational AI

Probing Neural Dialog Models for Conversational Understanding


Abstract

The predominant approach to open-domain dialog generation relies on end-to-end training of neural models on chat datasets. However, this approach provides little insight as to what these models learn (or do not learn) about engaging in dialog. In this study, we analyze the internal representations learned by neural open-domain dialog systems and evaluate the quality of these representations for learning basic conversational skills. Our results suggest that standard open-domain dialog systems struggle with answering questions, inferring contradiction, and determining the topic of conversation, among other tasks. We also find that the dyadic, turn-taking nature of dialog is not fully leveraged by these models. By exploring these limitations, we highlight the need for additional research into architectures and training methods that can better capture high-level information about dialog.
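The abstract does not spell out how the internal representations are evaluated. As a rough, hypothetical illustration of a probing setup of the kind it describes, the sketch below trains a small linear probe on frozen representations from a stand-in dialog encoder for a conversational-skill classification task (for example, determining the topic of conversation). The encoder, the synthetic data, and the class count are placeholders, not the paper's actual models or tasks.

```python
# Minimal probing sketch: a frozen encoder produces dialog representations,
# and a lightweight linear probe is trained on top of them for a downstream
# conversational-skill task. All names and data here are illustrative.
import torch
import torch.nn as nn


class DialogEncoder(nn.Module):
    """Stand-in for a pretrained open-domain dialog model's encoder (kept frozen)."""

    def __init__(self, vocab_size: int = 1000, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    @torch.no_grad()  # representations are probed, not fine-tuned
    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        _, h = self.rnn(self.embed(token_ids))
        return h.squeeze(0)  # (batch, hidden_dim)


class LinearProbe(nn.Module):
    """Lightweight classifier trained on the frozen representations."""

    def __init__(self, hidden_dim: int, num_classes: int):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, num_classes)

    def forward(self, reps: torch.Tensor) -> torch.Tensor:
        return self.linear(reps)


if __name__ == "__main__":
    torch.manual_seed(0)
    encoder = DialogEncoder()
    probe = LinearProbe(hidden_dim=128, num_classes=4)  # e.g. 4 conversation topics
    optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Synthetic stand-in for (utterance, label) probing data.
    tokens = torch.randint(0, 1000, (32, 20))
    labels = torch.randint(0, 4, (32,))

    for step in range(5):
        reps = encoder(tokens)    # frozen dialog representations
        logits = probe(reps)      # probe predictions
        loss = loss_fn(logits, labels)
        optimizer.zero_grad()
        loss.backward()           # only the probe's parameters receive gradients
        optimizer.step()
        print(f"step {step}: probe loss = {loss.item():.4f}")
```

Under this kind of setup, the probe's accuracy on a held-out set is read as a measure of how much task-relevant information the frozen representations encode; the encoder itself is never updated.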
