IEEE International Conference on Data Engineering Workshops

Distilling Knowledge from User Information for Document Level Sentiment Classification

Abstract

Combining global user and product characteristics with local review information provides a powerful mechanism for predicting the sentiment a user expresses in a review of a product on online review sites such as Amazon, Yelp, and IMDB. However, user information is not always available in practice, for example, for newly registered users or on sites that allow commenting without logging in. To address this issue, we introduce a novel knowledge distillation (KD) learning paradigm that transfers user characteristics into the weights of a student neural network that uses only product and review information. The teacher model transfers its predictive distributions over the training data to the student model, so user profiles are required only during the training stage. Experimental results on several sentiment classification datasets show that the proposed learning framework enables student models to achieve improved performance.
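
To make the transfer mechanism concrete, below is a minimal PyTorch sketch of a standard soft-label distillation objective of the kind the abstract describes: the student is trained against both the ground-truth sentiment labels and the teacher's softened predictive distribution. The temperature, the mixing weight alpha, and the teacher/student model interfaces are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Teacher's softened predictive distribution (soft targets).
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL term scaled by T^2 so its gradient magnitude stays comparable
    # to the hard-label term.
    kd = F.kl_div(log_soft_student, soft_targets,
                  reduction="batchmean") * temperature ** 2
    # Hard-label cross-entropy on the ground-truth sentiment classes.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Hypothetical training step: the teacher consumes user, product, and
# review features; the student sees only product and review, so user
# profiles are needed at training time only.
with torch.no_grad():
    teacher_logits = teacher(user_ids, product_ids, review_tokens)
student_logits = student(product_ids, review_tokens)
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()

Because the teacher, which consumes user profiles, is discarded after training, the deployed student needs only product and review inputs, which is what allows inference without any user information.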