IEEE Transactions on Image Processing

Understanding Deep Representations Learned in Modeling Users Likes


Abstract

Automatically understanding and discriminating different users' liking for an image is a challenging problem, because the relationship between image features (even semantic ones extracted by existing tools, viz., faces, objects, and so on) and users' likes is non-linear and influenced by several subtle factors. This paper presents a deep bi-modal knowledge representation of images based on their visual content and associated tags (text). A mapping step between the different levels of visual and textual representations allows semantic knowledge to be transferred between the two modalities. Feature selection is applied before learning the deep representation, to identify the features that are important for a user to like an image. The proposed representation is shown to be effective both in discriminating users based on the images they like and in recommending images that a given user will like, outperforming state-of-the-art feature representations by up to 20%. Beyond this test-set performance, an attempt is made to qualitatively understand the representations learned by the deep architecture used to model user likes.
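The abstract describes three ingredients: per-modality features, feature selection before representation learning, and a mapping step that transfers knowledge between the textual and visual modalities. The following minimal sketch illustrates that pipeline shape with toy numpy data; the feature counts, the variance-based selection criterion, and the least-squares linear mapping are all illustrative assumptions, not the paper's actual architecture (which uses a deep network).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two modalities (shapes are illustrative):
# 100 images, 64-d visual features and 32-d tag (text) features.
visual = rng.normal(size=(100, 64))
tags = rng.normal(size=(100, 32))

# Feature selection before representation learning: keep the k visual
# features with the highest variance (a simple proxy for "important
# for a user to like an image").
k = 16
top = np.argsort(visual.var(axis=0))[-k:]
visual_sel = visual[:, top]

# Mapping step between modalities: a least-squares linear map from tag
# space into the selected visual space, so knowledge expressed in one
# modality can be carried over to the other.
W, *_ = np.linalg.lstsq(tags, visual_sel, rcond=None)
tags_mapped = tags @ W

# Bi-modal representation: concatenate selected visual features with
# the mapped tag features; a downstream model would consume this.
bimodal = np.concatenate([visual_sel, tags_mapped], axis=1)
print(bimodal.shape)  # (100, 32)
```

In the paper this role is played by a deep architecture rather than a linear map; the sketch only shows where selection and cross-modal mapping sit relative to each other in the pipeline.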
