Learning Word Meta-Embeddings by Autoencoding

International Conference on Computational Linguistics

Abstract

Distributed word embeddings have shown superior performance in numerous Natural Language Processing (NLP) tasks. However, their performance varies significantly across different tasks, implying that the word embeddings learnt by those methods capture complementary aspects of lexical semantics. Therefore, we believe that it is important to combine the existing word embeddings to produce more accurate and complete meta-embeddings of words. We model the meta-embedding learning problem as an autoencoding problem, where we would like to learn a meta-embedding space that can accurately reconstruct all source embeddings simultaneously. The meta-embedding space is thereby forced to capture the complementary information in the different source embeddings via a coherent common embedding space. We propose three flavours of autoencoded meta-embeddings motivated by different requirements that must be satisfied by a meta-embedding. Our experimental results on a series of benchmark evaluations show that the proposed autoencoded meta-embeddings outperform the existing state-of-the-art meta-embeddings in multiple tasks.
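The abstract gives only the high-level idea, so the following is a minimal, hypothetical sketch of the core mechanism: a shared layer produces the meta-embedding of each word and is trained to reconstruct all source embeddings at once. The class name `MetaEmbeddingAutoencoder`, the tanh encoder, the layer sizes, and the two-source setup are illustrative assumptions, not the authors' exact architecture; the paper's three flavours likely differ in how the sources are combined, and this sketch shows only one concatenation-based possibility.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaEmbeddingAutoencoder(nn.Module):
    """A single encoder maps the concatenated source embeddings of a word to a
    shared meta-embedding; one decoder per source reconstructs that source space."""
    def __init__(self, dim_a: int, dim_b: int, meta_dim: int):
        super().__init__()
        self.encoder = nn.Linear(dim_a + dim_b, meta_dim)
        self.decoder_a = nn.Linear(meta_dim, dim_a)  # reconstructs source A
        self.decoder_b = nn.Linear(meta_dim, dim_b)  # reconstructs source B

    def forward(self, x_a, x_b):
        z = torch.tanh(self.encoder(torch.cat([x_a, x_b], dim=-1)))  # meta-embedding
        return z, self.decoder_a(z), self.decoder_b(z)

# Toy training loop: random vectors stand in for two pre-trained embedding sets
# (e.g. GloVe and word2vec) aligned row-by-row on a common vocabulary.
model = MetaEmbeddingAutoencoder(dim_a=300, dim_b=300, meta_dim=300)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_a, x_b = torch.randn(1000, 300), torch.randn(1000, 300)
for step in range(200):
    z, rec_a, rec_b = model(x_a, x_b)
    # The meta-embedding must reconstruct *all* sources simultaneously,
    # so the loss is the sum of per-source reconstruction errors.
    loss = F.mse_loss(rec_a, x_a) + F.mse_loss(rec_b, x_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
meta_embeddings = z.detach()  # one meta-embedding row per vocabulary word
```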
