Annual Meeting of the Association for Computational Linguistics

Exploiting Domain Knowledge via Grouped Weight Sharing with Application to Text Categorization

Abstract

A fundamental advantage of neural models for NLP is their ability to learn representations from scratch. However, in practice this often means ignoring existing external linguistic resources, e.g., WordNet or domain-specific ontologies such as the Unified Medical Language System (UMLS). We propose a general, novel method for exploiting such resources via weight sharing. Prior work on weight sharing in neural networks has considered it largely as a means of model compression. In contrast, we treat weight sharing as a flexible mechanism for incorporating prior knowledge into neural models. We show that this approach consistently yields improved performance on classification tasks compared to baseline strategies that do not exploit weight sharing.
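
The abstract does not spell out the mechanics, but the core idea can be illustrated concretely. Below is a minimal PyTorch sketch of grouped weight sharing, assuming each vocabulary word maps to at most one external group (e.g., a WordNet synset or a UMLS concept): each word's embedding is a learned, gated mix of a word-specific vector and a vector shared by every word in its group. The class name GroupedSharedEmbedding, the sigmoid gate, and the word_to_group mapping are illustrative assumptions, not necessarily the paper's exact parameterization.

import torch
import torch.nn as nn

class GroupedSharedEmbedding(nn.Module):
    # Hypothetical sketch of grouped weight sharing: words in the same
    # external group (WordNet synset, UMLS concept, ...) share one group
    # vector; a per-word sigmoid gate learns how much to rely on it.
    def __init__(self, vocab_size, num_groups, dim, word_to_group):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)       # per-word weights
        self.group_emb = nn.Embedding(num_groups + 1, dim)  # shared weights (+1 slot for "no group")
        self.gate = nn.Embedding(vocab_size, 1)             # per-word mixing scalar
        # word_to_group[i] = group id of word i, or num_groups if ungrouped
        self.register_buffer("word_to_group", word_to_group)

    def forward(self, token_ids):
        w = self.word_emb(token_ids)                        # (batch, seq, dim)
        g = self.group_emb(self.word_to_group[token_ids])   # shared group vectors
        alpha = torch.sigmoid(self.gate(token_ids))         # (batch, seq, 1), in (0, 1)
        return alpha * w + (1 - alpha) * g

A downstream text classifier would use this layer in place of a standard nn.Embedding. Because grouped words pull on the same shared vector, a gradient update driven by any one word in a group also moves the representation of all its groupmates, which is one way the external resource's prior knowledge can regularize the learned representations.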
