Conference on Empirical Methods in Natural Language Processing

TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue



Abstract

The underlying difference of linguistic patterns between general text and task-oriented dialogue makes existing pre-trained language models less useful in practice. In this work, we unify nine human-human and multi-turn task-oriented dialogue datasets for language modeling. To better model dialogue behavior during pre-training, we incorporate user and system tokens into the masked language modeling. We propose a contrastive objective function to simulate the response selection task. Our pre-trained task-oriented dialogue BERT (TOD-BERT) outperforms strong baselines like BERT on four downstream task-oriented dialogue applications, including intention recognition, dialogue state tracking, dialogue act prediction, and response selection. We also show that TOD-BERT has a stronger few-shot ability that can mitigate the data scarcity problem for task-oriented dialogue.
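To make the two pre-training signals concrete, the sketch below shows, in PyTorch/Transformers, how dialogue turns might be serialized with speaker tokens before masked language modeling, and how an in-batch contrastive loss over [CLS] embeddings can simulate response selection. This is a minimal illustration under stated assumptions, not the released TOD-BERT code; the "[usr]"/"[sys]" token names, the bert-base-uncased checkpoint, and the serialize helper are assumptions made for the example.

```python
# Minimal sketch (not the authors' code) of the two pre-training signals
# described in the abstract: speaker-token serialization for masked LM,
# and an in-batch contrastive objective for response selection.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# 1) Add speaker tokens so the masked LM sees who utters each turn.
#    The token names here are illustrative assumptions.
tokenizer.add_special_tokens({"additional_special_tokens": ["[usr]", "[sys]"]})
model.resize_token_embeddings(len(tokenizer))

def serialize(turns):
    """Flatten (speaker, utterance) pairs into one string with speaker tokens;
    the result would feed a standard masked-LM objective during pre-training."""
    return " ".join(f"[{spk}] {utt}" for spk, utt in turns)

contexts = [
    serialize([("usr", "i need a cheap hotel"), ("sys", "for which nights?")]),
    serialize([("usr", "book me a table for two")]),
]
responses = ["friday and saturday please", "sure, which restaurant?"]

# 2) Contrastive response selection: encode contexts and candidate responses
#    with the same encoder; other responses in the batch act as negatives.
ctx = tokenizer(contexts, return_tensors="pt", padding=True, truncation=True)
rsp = tokenizer(responses, return_tensors="pt", padding=True, truncation=True)

ctx_emb = model(**ctx).last_hidden_state[:, 0]  # [CLS] embedding per context
rsp_emb = model(**rsp).last_hidden_state[:, 0]  # [CLS] embedding per response

logits = ctx_emb @ rsp_emb.T                    # similarity of every context/response pair
labels = torch.arange(logits.size(0))           # matching responses sit on the diagonal
loss = F.cross_entropy(logits, labels)          # contrastive response-selection loss
```

In this dual-encoder formulation, each context is trained to score its own response above every other response in the batch, which is one common way to realize the response-selection objective the abstract refers to.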
