Conference on Empirical Methods in Natural Language Processing

Maximum Margin Reward Networks for Learning from Explicit and Implicit Supervision



Abstract

Neural networks have achieved state-of-the-art performance on several structured-output prediction tasks when trained in a fully supervised fashion. However, annotated examples in structured domains are often costly to obtain, which limits the applicability of neural networks. In this work, we propose Maximum Margin Reward Networks, a neural network-based framework that learns from both explicit supervision (full structures) and implicit supervision signals (delayed feedback on the correctness of the predicted structure). On named entity recognition and semantic parsing, our model outperforms previous systems on the benchmark datasets CoNLL-2003 and WebQuestionsSP.
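The abstract describes a margin-based objective in which reward feedback plays the role of the margin. A minimal sketch in LaTeX, assuming a scoring network $s_\theta(x, y)$ over candidate structures $y \in \mathcal{Y}(x)$ and a reward function $R(x, y)$ measuring the correctness of a prediction; this is an illustrative reward-augmented structured hinge loss, not necessarily the paper's exact formulation:

$$
\mathcal{L}(\theta) = \max_{y \in \mathcal{Y}(x)} \Big[\, s_\theta(x, y) + R(x, y^{*}) - R(x, y) \,\Big] - s_\theta(x, y^{*}),
\qquad
y^{*} = \operatorname*{arg\,max}_{y \in \mathcal{Y}(x)} R(x, y).
$$

Under explicit supervision the gold structure supplies $y^{*}$ and its reward directly; under implicit supervision $y^{*}$ is taken to be the highest-reward structure found by search over $\mathcal{Y}(x)$, so the same objective can be trained from delayed feedback on predicted structures.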
