Computer speech and language

Learning what to say and how to say it: Joint optimisation of spoken dialogue management and natural language generation



Abstract

This paper argues that the problems of dialogue management (DM) and Natural Language Generation (NLG) in dialogue systems are closely related and can be fruitfully treated statistically, in a joint optimisation framework such as that provided by Reinforcement Learning (RL). We first review recent results and methods in automatic learning of dialogue management strategies for spoken and multimodal dialogue systems, and then show how these techniques can also be used for the related problem of Natural Language Generation. This approach promises a number of theoretical and practical benefits such as fine-grained adaptation, generalisation, and automatic (global) optimisation, and we compare it to related work in statistical/trainable NLG. A demonstration of the proposed approach is then developed, showing combined DM and NLG policy learning for adaptive information presentation decisions. A joint DM and NLG policy learned in this framework shows a statistically significant 27% relative increase in reward over a baseline policy, which is learned in the same way but without the joint optimisation. We thereby show that NLG problems can be approached statistically, in combination with dialogue management decisions, and we show how to jointly optimise NLG and DM using Reinforcement Learning.
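To make the joint-optimisation idea concrete, the sketch below frames DM and NLG choices as a single joint action space and learns a policy over it with tabular Q-learning against a simulated reward. It is a minimal illustration only: the state names, the DM/NLG action labels, and the hand-written reward are hypothetical stand-ins, not the dialogue environment, user simulation, or RL setup reported in the paper.

# Illustrative sketch only: a toy joint DM x NLG action space optimised with
# tabular Q-learning against a hand-written simulated reward. The states,
# actions, and reward below are hypothetical stand-ins, not the system or
# experiments described in the paper.
import random
from collections import defaultdict
from itertools import product

DM_ACTIONS = ["present_info", "ask_constraint"]         # "what to say"
NLG_ACTIONS = ["summary", "compare", "recommend"]       # "how to say it"
JOINT_ACTIONS = list(product(DM_ACTIONS, NLG_ACTIONS))  # joint DM+NLG decisions
STATES = ["few_matches", "many_matches"]                # toy dialogue states

def simulated_reward(state, action):
    # Hypothetical user-simulation reward: summarising suits many matching
    # items, recommending directly suits few matching items.
    dm, nlg = action
    if state == "many_matches" and (dm, nlg) == ("present_info", "summary"):
        return 1.0
    if state == "few_matches" and (dm, nlg) == ("present_info", "recommend"):
        return 1.0
    return -0.1

def learn_joint_policy(episodes=5000, alpha=0.1, epsilon=0.1):
    # Single-step episodes, so no discounting term is needed in the update.
    Q = defaultdict(float)
    for _ in range(episodes):
        state = random.choice(STATES)
        if random.random() < epsilon:
            action = random.choice(JOINT_ACTIONS)
        else:
            action = max(JOINT_ACTIONS, key=lambda a: Q[(state, a)])
        reward = simulated_reward(state, action)
        Q[(state, action)] += alpha * (reward - Q[(state, action)])
    return Q

if __name__ == "__main__":
    Q = learn_joint_policy()
    for state in STATES:
        best = max(JOINT_ACTIONS, key=lambda a: Q[(state, a)])
        print(state, "-> best joint (DM, NLG) action:", best)

Running the sketch prints, for each toy state, the joint (DM, NLG) action with the highest learned Q-value, which is the kind of combined "what to say / how to say it" decision a jointly optimised policy makes.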
