1st EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 2018

Learning and Evaluating Sparse Interpretable Sentence Embeddings



Abstract

Previous research on word embeddings has shown that sparse representations, which can either be learned on top of existing dense embeddings or obtained through model constraints at training time, offer improved interpretability: to some degree, each dimension can be understood by a human and associated with a recognizable feature in the data. In this paper, we transfer this idea to sentence embeddings and explore several approaches to obtaining a sparse representation. We further introduce a novel, quantitative, and automated evaluation metric for sentence embedding interpretability, based on topic coherence methods. We observe an increase in interpretability compared to dense models on a dataset of movie dialogs and on the scene descriptions from the MS COCO dataset.
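The abstract mentions learning sparse representations on top of existing dense embeddings. The paper explores several approaches; as a minimal illustrative sketch (not the paper's actual method), one generic way to sparsify dense sentence embeddings is top-k sparsification, which keeps only the k largest-magnitude dimensions per embedding and zeroes the rest:

```python
import numpy as np

def top_k_sparsify(embeddings, k):
    """Keep the k largest-magnitude entries per row; zero out the rest.

    A generic post-hoc sparsification baseline, assumed here for
    illustration only -- the paper compares several approaches.
    """
    emb = np.asarray(embeddings, dtype=float)
    sparse = np.zeros_like(emb)
    # Indices of the k largest-magnitude entries in each row.
    idx = np.argsort(-np.abs(emb), axis=1)[:, :k]
    rows = np.arange(emb.shape[0])[:, None]
    sparse[rows, idx] = emb[rows, idx]
    return sparse

# Toy data: 4 "sentence embeddings" of dimension 16.
rng = np.random.default_rng(0)
dense = rng.standard_normal((4, 16))
sparse = top_k_sparsify(dense, k=3)
# Each row of `sparse` now has at most 3 nonzero dimensions,
# making it easier to associate dimensions with features.
```

Interpretability can then be probed per dimension, e.g. by inspecting which sentences activate a given dimension most strongly; the paper's evaluation metric automates this idea via topic coherence.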
