
NTUA-SLP at SemEval-2018 Task 2: Predicting Emojis using RNNs with Context-aware Attention



Abstract

In this paper we present a deep-learning model that competed at SemEval-2018 Task 2, "Multilingual Emoji Prediction". We participated in subtask A, in which the goal is to predict the most likely associated emoji for English tweets. The proposed architecture relies on a Long Short-Term Memory network augmented with an attention mechanism that conditions the weight of each word on a "context vector", taken as the aggregation of the tweet's meaning. Moreover, we initialize the embedding layer of our model with word2vec word embeddings pretrained on a dataset of 550 million English tweets. Finally, our model does not rely on hand-crafted features or lexicons and is trained end-to-end with back-propagation. We ranked 2nd out of 48 teams.
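The abstract describes an attention layer whose word weights are conditioned on a context vector summarizing the whole tweet. Below is a minimal NumPy sketch of that idea; the weight matrices, the mean-pooled context vector, and all dimensions are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def context_aware_attention(H, Wh, Wc, v):
    """Weight each word's LSTM hidden state h_i with an attention score
    conditioned on a context vector c. Here c is the mean of all hidden
    states, a stand-in for the aggregated meaning of the tweet."""
    c = H.mean(axis=0)                     # context vector (tweet summary)
    scores = np.tanh(H @ Wh + c @ Wc) @ v  # e_i = v^T tanh(Wh h_i + Wc c)
    a = np.exp(scores - scores.max())
    a /= a.sum()                           # softmax -> attention weights
    return a @ H, a                        # weighted sum = tweet representation

T, d, da = 5, 8, 6                         # words, hidden size, attention size
H = rng.standard_normal((T, d))            # mock LSTM outputs, one row per word
Wh = rng.standard_normal((d, da))          # projection of the hidden states
Wc = rng.standard_normal((d, da))          # projection of the context vector
v = rng.standard_normal(da)                # scoring vector

r, a = context_aware_attention(H, Wh, Wc, v)
```

The resulting representation `r` would then feed a softmax classifier over the candidate emojis; in the real model, the parameters are learned end-to-end by back-propagation rather than drawn at random.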

