Combining Hierarchical Reinforcement Learning and Bayesian Networks for Natural Language Generation in Situated Dialogue



Abstract

Language generators in situated domains face a number of content selection, utterance planning and surface realisation decisions, which can be strictly interdependent. We therefore propose to optimise these processes jointly using Hierarchical Reinforcement Learning. To this end, we induce a reward function for content selection and utterance planning from data using the PARADISE framework, and suggest a novel method for inducing a reward function for surface realisation from corpora, based on generation spaces represented as Bayesian Networks. Results in terms of task success and human-likeness suggest that our unified approach outperforms a baseline optimised in isolation as well as greedy and random baselines, and it receives human ratings close to those given to human authors.
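To make the two ingredients of the abstract concrete, the sketch below pairs a reinforcement-learning agent with a Bayesian-network reward over surface realisation choices. It is purely illustrative and not the authors' implementation: a flat tabular Q-learner stands in for the hierarchical agent, a two-node network with invented conditional probability tables stands in for a corpus-induced generation space, and all states, actions and reward constants are hypothetical.

```python
# Illustrative sketch only: flat Q-learning plus a toy Bayesian-network reward.
# All CPT values, states, actions and the task bonus are invented for this example.
import random
from collections import defaultdict

# --- Toy "generation space" Bayesian network -------------------------------
# Realisations that are likely under the (hypothetical) CPTs get higher reward.
P_VERBOSITY = {"terse": 0.4, "verbose": 0.6}
P_REFEXPR_GIVEN_VERBOSITY = {
    "terse":   {"pronoun": 0.7, "full_np": 0.3},
    "verbose": {"pronoun": 0.2, "full_np": 0.8},
}

def realisation_reward(verbosity, ref_expr):
    """Reward for a surface realisation = its joint probability under the network."""
    return P_VERBOSITY[verbosity] * P_REFEXPR_GIVEN_VERBOSITY[verbosity][ref_expr]

# --- Tabular Q-learning over content-selection / planning actions ----------
STATES = ["start", "route_given"]
ACTIONS = {
    "start":       [("give_route", "terse", "pronoun"),
                    ("give_route", "verbose", "full_np")],
    "route_given": [("confirm", "terse", "pronoun"),
                    ("confirm", "verbose", "full_np")],
}
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
Q = defaultdict(float)

def step(state, action):
    """Toy environment: realisation reward from the network plus a task bonus."""
    _, verbosity, ref_expr = action
    reward = realisation_reward(verbosity, ref_expr)
    next_state = "route_given" if state == "start" else None  # None = terminal
    if next_state is None:
        reward += 1.0  # hypothetical task-success bonus
    return next_state, reward

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS[state])
    return max(ACTIONS[state], key=lambda a: Q[(state, a)])

for _ in range(2000):  # training episodes
    state = "start"
    while state is not None:
        action = choose(state)
        next_state, reward = step(state, action)
        best_next = max((Q[(next_state, a)] for a in ACTIONS.get(next_state, [])),
                        default=0.0)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

for state in STATES:
    print(state, "->", max(ACTIONS[state], key=lambda a: Q[(state, a)]))
```

In the paper the network parameters would be estimated from corpus counts and the agent would be decomposed into a hierarchy of subtasks; the constants and the flat learner above are placeholders for those components.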
