Structural Pre-training for Dialogue Comprehension

Abstract

Pre-trained language models (PrLMs) have demonstrated superior performance due to their strong ability to learn universal language representations from self-supervised pre-training. However, even with the help of powerful PrLMs, it is still challenging to effectively capture task-related knowledge from dialogue texts, which are enriched by correlations among speaker-aware utterances. In this work, we present SPIDER, Structural Pre-traIned DialoguE Reader, to capture dialogue-exclusive features. To simulate dialogue-like features, we propose two training objectives in addition to the original LM objectives: 1) utterance order restoration, which predicts the order of permuted utterances in the dialogue context; 2) sentence backbone regularization, which regularizes the model to improve the factual correctness of summarized subject-verb-object triplets. Experimental results on widely used dialogue benchmarks verify the effectiveness of the newly introduced self-supervised tasks.
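
As a rough illustration of the first objective, the sketch below shows how training examples for utterance order restoration could be constructed: the utterances of a dialogue are permuted and their original positions kept as labels for the model to predict. The helper name and the example dialogue are hypothetical, not the paper's actual implementation.

```python
import random

def make_order_restoration_example(utterances, rng=random.Random(0)):
    """Permute a dialogue's utterances; return the permuted list plus
    each slot's original index as the prediction target (a sketch of an
    utterance-order-restoration objective, not the paper's exact recipe)."""
    order = list(range(len(utterances)))
    rng.shuffle(order)
    permuted = [utterances[i] for i in order]
    # The model is trained to recover, for each permuted slot, the
    # utterance's original position in the dialogue.
    labels = order
    return permuted, labels

# Toy dialogue purely for demonstration.
dialogue = [
    "A: Hi, is the apartment still available?",
    "B: Yes, would you like to schedule a viewing?",
    "A: Sure, how about Saturday morning?",
]
permuted, labels = make_order_restoration_example(dialogue)
print(permuted)
print(labels)
```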
