Workshop on Knowledge Extraction and Integration for Deep Learning Architectures

KW-ATTN: Knowledge Infused Attention for Accurate and Interpretable Text Classification



Abstract

Text classification has wide-ranging applications in various domains. While neural network approaches have drastically advanced performance in text classification, they tend to require large amounts of training data, and interpretability is often an issue. As a step towards better accuracy and interpretability, especially on small data, in this paper we present a new knowledge-infused attention mechanism, called KW-ATTN (KnoWledge-infused ATTentioN), to incorporate high-level concepts from external knowledge bases into neural network models. We show that KW-ATTN outperforms both baseline models that use only words and other concept-based approaches in classification accuracy, which indicates that high-level concepts help model prediction. Furthermore, crowdsourced human evaluation suggests that the additional concept information helps the interpretability of the model.
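The abstract describes attending over high-level concepts from an external knowledge base alongside words. The paper's exact formulation is not given here, but the general idea can be sketched as dot-product attention over a pool that mixes word vectors with concept vectors, so the resulting attention weights expose which words *and* which concepts drove a prediction. Everything below (function names, the single shared query vector, the toy dimensions) is a hypothetical minimal sketch, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def kw_attention(word_vecs, concept_vecs, query):
    """Attend jointly over token vectors and KB-concept vectors.

    A sketch of knowledge-infused attention: concept embeddings are
    stacked with word embeddings so both compete for attention mass.
    """
    units = np.vstack([word_vecs, concept_vecs])  # (n_words + n_concepts, d)
    scores = units @ query                        # dot-product relevance
    weights = softmax(scores)                     # one distribution over words + concepts
    context = weights @ units                     # weighted sum -> classifier features
    return context, weights

# Toy example: 3 words, 2 KB concepts, 4-dim embeddings (all random here).
rng = np.random.default_rng(0)
words = rng.normal(size=(3, 4))
concepts = rng.normal(size=(2, 4))
query = rng.normal(size=4)
ctx, w = kw_attention(words, concepts, query)
```

Because `w` is a single distribution over words and concepts, inspecting it shows how much the model leaned on external knowledge versus surface text, which is the kind of signal the human interpretability evaluation in the abstract would probe.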
