Workshop on Gender Bias in Natural Language Processing, at the Annual Meeting of the Association for Computational Linguistics

Look Again at the Syntax: Relational Graph Convolutional Network for Gendered Ambiguous Pronoun Resolution



Abstract

Gender bias has been found in existing coreference resolvers. To eliminate it, the gender-balanced Gendered Ambiguous Pronouns (GAP) dataset was released, on which the best baseline model achieves only 66.9% F1. Bidirectional Encoder Representations from Transformers (BERT) has broken several NLP task records and can be applied to the GAP dataset; however, fine-tuning BERT on a specific task is computationally expensive. In this paper, we propose an end-to-end resolver that combines pre-trained BERT with a Relational Graph Convolutional Network (R-GCN). The R-GCN digests structural syntactic information and learns better task-specific embeddings. Empirical results demonstrate that, under explicit syntactic supervision and without fine-tuning BERT, the R-GCN's embeddings outperform the original BERT embeddings on the coreference task. Our work significantly improves the snippet-context baseline F1 score on the GAP dataset from 66.9% to 80.3%. We participated in the Gender Bias in Natural Language Processing 2019 shared task, and our code is available online.
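The abstract only names the architecture, not its equations. For orientation, a single R-GCN layer (in the sense of Schlichtkrull et al.) computes, for each node, a self-loop transform plus a degree-normalized sum of relation-specific transforms of its neighbors. The sketch below is a minimal, dependency-free illustration of that message-passing rule in NumPy; the function name, shapes, and the use of one adjacency matrix per syntactic relation are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def rgcn_layer(H, adj_per_relation, W_rel, W_self):
    """One R-GCN layer.

    h_i' = ReLU( W_self h_i + sum_r sum_{j in N_r(i)} (1/c_{i,r}) W_r h_j )

    H                : (num_nodes, d_in) input node features (e.g. BERT token embeddings)
    adj_per_relation : list of (num_nodes, num_nodes) float adjacency matrices,
                       one per edge type (e.g. one per dependency relation)
    W_rel            : list of (d_in, d_out) relation-specific weight matrices
    W_self           : (d_in, d_out) self-loop weight matrix
    """
    out = H @ W_self  # self-loop term
    for A, W in zip(adj_per_relation, W_rel):
        # c_{i,r}: number of r-neighbors of node i (avoid division by zero)
        deg = A.sum(axis=1, keepdims=True)
        norm = np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)
        out += (norm @ H) @ W  # normalized relation-specific messages
    return np.maximum(out, 0.0)  # ReLU

# Toy usage: 4 nodes, 3-dim features, one relation type.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)
W_self = rng.standard_normal((3, 2))
W_rel = [rng.standard_normal((3, 2))]
out = rgcn_layer(H, [A], W_rel, W_self)
```

In practice the paper stacks such layers over a syntactic graph of the snippet and reads off embeddings for the pronoun and candidate mentions; the relation-specific weights are what let the layer treat different dependency arcs differently.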
