
Learning Context-Sensitive Word Embeddings with Neural Tensor Skip-Gram Model



Abstract

Distributed word representations have attracted rising interest in the NLP community. Most existing models assume only one vector for each individual word, which ignores polysemy and thus degrades their effectiveness for downstream tasks. To address this problem, some recent work adopts multi-prototype models that learn multiple embeddings per word type. In this paper, we distinguish the different senses of each word by their latent topics. We present a general architecture that learns word and topic embeddings efficiently; it extends the Skip-Gram model and models the interaction between words and topics simultaneously. Experiments on word similarity and text classification tasks show that our model outperforms state-of-the-art methods.
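The abstract describes an extension of Skip-Gram in which a word vector is combined with a latent-topic vector through a tensor interaction before scoring context words. The sketch below illustrates one plausible way such a neural-tensor scoring layer could look; it is a hedged illustration under my own assumptions, not the authors' implementation, and the class name NeuralTensorSkipGram, the slice count, and the exact combination formula are all hypothetical.

```python
# Illustrative sketch only: a context-sensitive word embedding formed by a
# bilinear (tensor) interaction between a word vector and a topic vector,
# scored against a context word as in Skip-Gram. All names and dimensions
# here are assumptions, not the paper's published model.
import torch
import torch.nn as nn


class NeuralTensorSkipGram(nn.Module):
    def __init__(self, vocab_size, num_topics, dim, slices=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)    # input word vectors
        self.topic_emb = nn.Embedding(num_topics, dim)   # latent-topic vectors
        self.ctx_emb = nn.Embedding(vocab_size, dim)     # output (context) vectors
        # A 3-way tensor: each slice mixes word and topic dimensions bilinearly.
        self.tensor = nn.Parameter(torch.randn(slices, dim, dim) * 0.01)
        self.proj = nn.Linear(slices, dim, bias=False)   # map slice scores back to dim

    def contextual_embedding(self, word_ids, topic_ids):
        w = self.word_emb(word_ids)                      # (batch, dim)
        t = self.topic_emb(topic_ids)                    # (batch, dim)
        # Bilinear interaction w^T T[k] t for every tensor slice k.
        inter = torch.einsum('bi,kij,bj->bk', w, self.tensor, t)
        return self.proj(torch.tanh(inter)) + w + t      # context-sensitive vector

    def score(self, word_ids, topic_ids, context_ids):
        v = self.contextual_embedding(word_ids, topic_ids)
        c = self.ctx_emb(context_ids)
        return (v * c).sum(-1)                           # higher = more likely context
```

In practice a model of this shape would be trained with the same negative-sampling objective as Skip-Gram, with score() replacing the plain word-vector dot product, so that the same word receives different context-sensitive embeddings under different latent topics.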


