Chinese Conference on Pattern Recognition and Computer Vision
Multimodal Joint Representation for User Interest Analysis on Content Curation Social Networks

Abstract

Content curation social networks (CCSNs), where users share their interests through images and the text descriptions attached to them, are a booming class of social networks. To fully utilize user-generated content for analyzing user interests on CCSNs, we propose a framework that learns multimodal joint representations of pins for user interest analysis. First, images are automatically annotated with category distributions, which exploit the characteristics of the network and reflect users' interests. Image representations are then extracted from an intermediate layer of a multilabel convolutional neural network (CNN) fine-tuned on these distributions, and text representations are obtained from a trained Word2Vec model. Finally, a multimodal deep Boltzmann machine (DBM) is trained to fuse the two modalities. Experiments on a dataset collected from Huaban demonstrate that fine-tuning the CNN with category distributions rather than single-category labels significantly improves the quality of the image representations, and that the multimodal joint representations outperform either unimodal representation.
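For concreteness, below is a minimal Python sketch of the three stages the abstract describes, using PyTorch, gensim, and NumPy. Everything not stated in the abstract is an illustrative assumption: the ResNet-50 backbone, the number of categories, averaging Word2Vec vectors over tokens, and the layer shapes and mean-field schedule of the DBM, whose parameters are taken as already learned by the usual pretraining of a multimodal DBM (Srivastava and Salakhutdinov, 2012); biases are omitted for brevity.

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

NUM_CATEGORIES = 32  # hypothetical; the real number of CCSN categories is not given

# Stage 1: fine-tune a CNN as a multilabel classifier whose targets are
# category distributions (soft labels) instead of one-hot single categories.
cnn = models.resnet50(weights="IMAGENET1K_V2")
cnn.fc = nn.Linear(cnn.fc.in_features, NUM_CATEGORIES)

def soft_label_loss(logits, target_dist):
    # Cross-entropy against a soft target distribution, i.e. the KL
    # objective up to a constant that depends only on the targets.
    return -(target_dist * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

def image_representation(images):
    # Stage 2 (image side): read an intermediate layer of the fine-tuned
    # CNN -- here the pooled features feeding the classifier head.
    backbone = nn.Sequential(*list(cnn.children())[:-1])
    with torch.no_grad():
        return backbone(images).flatten(1)  # shape (batch, 2048)

def text_representation(tokens, w2v):
    # Stage 2 (text side): w2v is a trained gensim Word2Vec model; averaging
    # its vectors over the description's tokens is one common choice.
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def joint_representation(v_img, v_txt, W_i1, W_i2, W_t1, W_t2, n_steps=10):
    # Stage 3: mean-field inference in a two-pathway multimodal DBM. Each
    # modality has one hidden layer; both connect to a shared top layer.
    # Weights are assumed already learned (RBM pretraining + joint training).
    h_i = sigmoid(v_img @ W_i1)             # image pathway, bottom-up pass
    h_t = sigmoid(v_txt @ W_t1)             # text pathway, bottom-up pass
    h_j = sigmoid(h_i @ W_i2 + h_t @ W_t2)  # shared joint layer
    for _ in range(n_steps):                # fixed-point mean-field updates
        h_i = sigmoid(v_img @ W_i1 + h_j @ W_i2.T)
        h_t = sigmoid(v_txt @ W_t1 + h_j @ W_t2.T)
        h_j = sigmoid(h_i @ W_i2 + h_t @ W_t2)
    return h_j  # fused multimodal representation of the pin

Under these assumptions, the joint-layer activations h_j would serve as the pin's multimodal representation for downstream user interest analysis.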
