International Joint Conference on Neural Networks

A low-dimensional vector representation for words using an extreme learning machine



Abstract

Word embeddings are low-dimensional vector representations of words that incorporate context. Two popular methods are word2vec and global vectors (GloVe). Word2vec is a single-hidden-layer feedforward neural network (SLFN) with an auto-encoder influence that computes a word-context matrix, using backpropagation for training. GloVe computes the word-context matrix first and then performs matrix factorization on it to arrive at word embeddings. Backpropagation is the typical training method for SLFNs, but it is time-consuming and requires iterative tuning. Extreme learning machines (ELMs) have the universal approximation capability of SLFNs, based on a randomly generated hidden-layer weight matrix in lieu of backpropagation. In this research, we propose an efficient method for generating word embeddings that uses an ELM-based auto-encoder architecture operating on a word-context matrix. Word similarity is evaluated using the cosine similarity measure on a dozen different words, and the results are reported.
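The abstract's pipeline can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the vocabulary, dimensions, ridge parameter, and use of `tanh` activations are all assumptions, since the abstract does not specify them. It shows the key ELM idea, a randomly generated hidden-layer weight matrix combined with a closed-form solve for the output weights instead of backpropagation, applied as an auto-encoder over a word-context matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy word-context (co-occurrence) matrix; in practice this would be
# built from corpus statistics. Vocabulary and counts are illustrative.
vocab = ["king", "queen", "man", "woman", "apple"]
V, d = len(vocab), 3  # vocabulary size, embedding dimension

X = rng.poisson(2.0, size=(V, V)).astype(float)

# ELM hidden layer: weights are drawn at random and never trained.
W = rng.standard_normal((V, d))
b = rng.standard_normal(d)
H = np.tanh(X @ W + b)  # hidden activations, shape (V, d)

# Auto-encoder output weights solved in closed form (ridge regression):
# beta = (H^T H + lam*I)^{-1} H^T X, so that H @ beta approximates X.
lam = 1e-2
beta = np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ X)

# The hidden representation H serves as the low-dimensional embeddings.
def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim = cosine(H[vocab.index("king")], H[vocab.index("queen")])
```

The closed-form ridge solve is what makes ELM training fast relative to iterative backpropagation: the only learned parameters are `beta`, obtained from one linear system.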
