Sentence Compression with Reinforcement Learning

Abstract

Deletion-based sentence compression is frequently formulated as a constrained optimization problem and solved by integer linear programming (ILP). However, ILP methods, which search for the best compression over the space of all possible compressions, become intractable for very long sentences with many constraints. Moreover, the hard constraints of ILP restrict the space of feasible solutions, a problem that becomes even more severe in the presence of parsing errors. As an alternative, we formulate this task in a reinforcement learning framework, where the hard constraints are applied in a soft manner, as rewards. Experimental results show that our method achieves competitive performance with a large improvement in speed.
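The core idea is that constraints which an ILP solver would enforce as hard inequalities (e.g., a kept token's syntactic head must also be kept) are instead folded into the reward signal of an RL policy that decides, token by token, what to delete. The following is a minimal illustrative sketch of such a reward, not the paper's implementation; the token-level F1 objective, the dependency-based constraint, and all function names are assumptions made for the example.

```python
# Hypothetical sketch: hard ILP-style constraints turned into soft penalties
# inside the RL reward for deletion-based sentence compression.
from typing import List, Set


def constraint_violations(kept: Set[int], head_of: List[int]) -> int:
    """Count kept tokens whose syntactic head was deleted.

    In an ILP formulation this would be a hard constraint
    (keep(i) <= keep(head_of[i])); here it only adds a penalty.
    """
    return sum(1 for i in kept if head_of[i] >= 0 and head_of[i] not in kept)


def reward(kept: Set[int], gold_kept: Set[int], head_of: List[int],
           penalty: float = 0.5) -> float:
    """Token-level F1 against the reference compression, minus soft penalties."""
    if not kept:
        return 0.0
    tp = len(kept & gold_kept)
    if tp == 0:
        return -penalty * constraint_violations(kept, head_of)
    precision = tp / len(kept)
    recall = tp / max(len(gold_kept), 1)
    f1 = 2 * precision * recall / (precision + recall)
    return f1 - penalty * constraint_violations(kept, head_of)


# Toy sentence: "Police(0) said(1) yesterday(2) the(3) suspect(4) fled(5)"
head_of = [1, -1, 1, 4, 5, 1]          # dependency heads, -1 = root
gold    = {0, 1, 3, 4, 5}              # reference compression keeps these tokens
sampled = {0, 1, 4, 5}                 # a compression sampled by the policy
print(reward(sampled, gold, head_of))  # a REINFORCE-style update would use this
```

Because violated constraints only reduce the reward rather than rule a compression out, the policy can still produce a usable output when the parse is noisy, and decoding is a single forward pass instead of an ILP search.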