
Querying Word Embeddings for Similarity and Relatedness


Abstract

Word embeddings obtained from neural network models such as Word2Vec Skipgram have become popular representations of word meaning and have been evaluated on a variety of word similarity and relatedness norming data. Skipgram generates a set of word and context embeddings, the latter typically discarded after training. We demonstrate the usefulness of context embeddings in predicting asymmetric association between words from a recently published dataset of production norms (Jouravlev and McRae, 2016). Our findings suggest that humans respond with words closer to the cue within the context embedding space (rather than the word embedding space) when asked to generate thematically related words.
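The abstract contrasts similarity measured within the word embedding space against similarity measured between a cue's word vector and a response's context (output) vector, the matrix Skipgram normally discards after training. The sketch below is a rough illustration of that contrast, not the authors' code: it assumes gensim's Word2Vec, whose syn1neg matrix holds the context vectors when negative sampling is used, and the toy corpus and helper function names are invented for the example.

```python
import numpy as np
from gensim.models import Word2Vec

# Tiny illustrative corpus (an assumption for the example, not from the paper).
corpus = [
    ["doctor", "treats", "patient", "in", "hospital"],
    ["nurse", "helps", "doctor", "in", "hospital"],
    ["patient", "visits", "doctor", "for", "treatment"],
] * 50

# sg=1 selects Skipgram; with negative sampling, the context (output) embeddings
# are kept in model.syn1neg alongside the word embeddings in model.wv.
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1,
                 sg=1, negative=5, epochs=20)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def word_word_sim(cue, response):
    # Similarity within the word embedding space (the usual Word2Vec similarity).
    return cosine(model.wv[cue], model.wv[response])

def word_context_sim(cue, response):
    # Similarity between the cue's word vector and the response's *context* vector,
    # i.e. the output embedding typically discarded after training.
    ctx = model.syn1neg[model.wv.key_to_index[response]]
    return cosine(model.wv[cue], ctx)

print(word_word_sim("doctor", "hospital"))
print(word_context_sim("doctor", "hospital"))
print(word_context_sim("hospital", "doctor"))  # swapping cue and response changes the score
```

Note that word_context_sim is asymmetric in cue and response, which is what makes the word-to-context space a candidate for modeling the directed cue-response associations in production norms.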
