International Joint Conference on Rough Sets

Acr2Vec: Learning Acronym Representations in Twitter



Abstract

Acronyms are common in Twitter and bring new challenges to social media analysis. Distributed representations have been applied successfully in natural language processing, but an acronym differs from a single word: it is generally defined by several words. To this end, we present Acr2Vec, an algorithmic framework for learning continuous representations of acronyms in Twitter. First, a Twitter ACRonym (TACR) dataset is constructed automatically, in which each acronym is expressed by one or more definitions. Then, three acronym embedding models are proposed: MPDE (Max Pooling Definition Embedding), APDE (Average Pooling Definition Embedding), and PLAE (Paragraph-Like Acronym Embedding). Both the qualitative experimental results (similarity measurement) and the quantitative experimental results (acronym polarity classification) show that MPDE and APDE are superior to PLAE.
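Judging from the model names alone (the abstract does not give the exact formulas), MPDE and APDE can be sketched as elementwise max and average pooling over the pre-trained word vectors of a definition's words. The function names, the combination of multiple definitions by averaging, and the toy vectors below are all illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def mpde(definition_vectors):
    """Max Pooling Definition Embedding (sketch): elementwise max
    over the word vectors of one definition's words."""
    return np.max(np.stack(definition_vectors), axis=0)

def apde(definition_vectors):
    """Average Pooling Definition Embedding (sketch): elementwise
    mean over the word vectors of one definition's words."""
    return np.mean(np.stack(definition_vectors), axis=0)

def acronym_embedding(definitions, pool=mpde):
    """An acronym in TACR may have several definitions; here we
    simply average the pooled definition embeddings (an assumption
    made for illustration)."""
    return np.mean([pool(d) for d in definitions], axis=0)

# Toy 4-dimensional word vectors for the definition "laughing out loud".
lol_def = [np.array([0.1, 0.4, -0.2, 0.0]),
           np.array([0.3, -0.1, 0.5, 0.2]),
           np.array([0.0, 0.2, 0.1, -0.3])]

print(mpde(lol_def))  # elementwise max: [0.3, 0.4, 0.5, 0.2]
print(apde(lol_def))  # elementwise mean of the three vectors
```

PLAE, by its name, would instead treat an acronym's definitions like a paragraph in a doc2vec-style model, which is a different (learned rather than pooled) construction.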
