Robust Semantic Parsing with Adversarial Learning for Domain Generalization



Abstract

This paper addresses the issue of generalization for Semantic Parsing in an adversarial framework. Building models that are more robust to inter-document variability is crucial for integrating Semantic Parsing technologies into real applications. The underlying question throughout this study is whether adversarial learning can be used to train models at a higher level of abstraction in order to increase their robustness to lexical and stylistic variations. We propose to perform Semantic Parsing with a domain classification adversarial task, without explicit knowledge of the domain. The strategy is first evaluated on a French corpus of encyclopedic documents annotated with FrameNet, from an information retrieval perspective, and then on the PropBank Semantic Role Labeling task on the CoNLL-2005 benchmark. We show that adversarial learning increases the generalization capabilities of all models on both in-domain and out-of-domain data.
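The abstract describes training the parser jointly with an adversarial domain classification task, so that the shared encoder is pushed toward representations from which the domain is hard to recover. The sketch below illustrates that general idea in the style of domain-adversarial training with a gradient reversal layer; it is not the authors' implementation. All class names, dimensions, and the use of explicit domain labels are illustrative assumptions (the paper itself proposes operating without explicit knowledge of the domain).

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; negates and scales gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class AdversarialSemanticParser(nn.Module):
    """Hypothetical model: a shared encoder feeds (1) a semantic-parsing head and
    (2) a domain classifier reached through gradient reversal, encouraging
    domain-invariant representations."""

    def __init__(self, vocab_size, hidden_dim, num_labels, num_domains, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.encoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.parser_head = nn.Linear(2 * hidden_dim, num_labels)    # per-token role labels
        self.domain_head = nn.Linear(2 * hidden_dim, num_domains)   # sentence-level domain guess

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))     # (batch, seq, 2 * hidden_dim)
        role_logits = self.parser_head(h)           # main semantic parsing task
        pooled = h.mean(dim=1)                      # simple sentence representation
        domain_logits = self.domain_head(GradientReversal.apply(pooled, self.lambd))
        return role_logits, domain_logits


def joint_loss(role_logits, role_gold, domain_logits, domain_gold):
    """Sum of the parsing loss and the adversarial domain loss; the reversed
    gradient from the domain term discourages domain-specific encoder features."""
    ce = nn.CrossEntropyLoss()
    return ce(role_logits.flatten(0, 1), role_gold.flatten()) + ce(domain_logits, domain_gold)
```

During training, both losses are minimized jointly: the gradient reversal makes the encoder worse at predicting the domain while the parsing head keeps the task-relevant information, which is one common way to realize the kind of domain-adversarial objective the abstract refers to.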
