
Enhancing the Robustness of Neural Collaborative Filtering Systems Under Malicious Attacks



Abstract

Recommendation systems have become ubiquitous in online shopping in recent decades because of their power to reduce the excessive choices facing customers and industries. Recent collaborative filtering methods based on deep neural networks have shown promising results, owing to their ability to learn hidden representations of users and items. However, such methods have been shown to be vulnerable to malicious user attacks: with knowledge of the collaborative filtering algorithm and its parameters, an attacker can easily degrade the performance of the recommendation system. Unfortunately, this problem has not been well addressed, and the study of defenses for recommendation systems remains insufficient. In this paper, we aim to improve the robustness of recommendation systems based on two concepts: stage-wise hints training and randomness. To protect a target model, we introduce noise layers into its training to increase its resistance to adversarial perturbations. To reduce the noise layers' impact on model performance, we use intermediate layer outputs from a teacher model as hints to regularize the intermediate layers of the student target model. We consider white-box attacks, under which attackers have full knowledge of the target model. The generalizability and robustness of our method are examined analytically in experiments and discussions, and its computational cost is comparable to that of training a standard neural network-based collaborative filtering model. Our investigation shows that the proposed defense reduces the success rate of malicious user attacks while keeping prediction accuracy comparable to that of standard neural recommendation systems.
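The defense described in the abstract combines two mechanisms: noise layers that perturb intermediate activations during training, and hint-based regularization in which a teacher model's intermediate outputs constrain the corresponding layers of the student target model. Below is a minimal PyTorch sketch of this idea; the architecture, layer sizes, noise scale, and the `hint_loss` helper are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyNCF(nn.Module):
    """MLP-based neural collaborative filtering model with noise layers:
    Gaussian noise is added to hidden activations at training time only.
    Sizes and depth are illustrative, not the paper's exact setup."""
    def __init__(self, n_users, n_items, dim=32, noise_std=0.1):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.fc1 = nn.Linear(2 * dim, 64)
        self.fc2 = nn.Linear(64, 32)
        self.out = nn.Linear(32, 1)
        self.noise_std = noise_std

    def _noise(self, h):
        # Noise layer: random perturbation of intermediate activations,
        # active only during training (disabled in eval mode).
        if self.training and self.noise_std > 0:
            h = h + self.noise_std * torch.randn_like(h)
        return h

    def forward(self, users, items):
        x = torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1)
        h1 = self._noise(F.relu(self.fc1(x)))
        h2 = self._noise(F.relu(self.fc2(h1)))
        # Return hidden states so they can be matched against teacher hints.
        return self.out(h2).squeeze(-1), (h1, h2)

def hint_loss(student_hints, teacher_hints):
    # Pull each student intermediate layer toward the corresponding
    # teacher layer output (the "hints").
    return sum(F.mse_loss(s, t) for s, t in zip(student_hints, teacher_hints))

def train_step(student, teacher, optimizer, users, items, ratings, lam=0.5):
    # One training step: rating prediction loss plus the hint regularizer.
    # `teacher` is assumed to be a pre-trained model of the same shape
    # (noise is inactive in eval mode, so its hints are noise-free).
    student.train()
    teacher.eval()
    pred, s_hints = student(users, items)
    with torch.no_grad():
        _, t_hints = teacher(users, items)
    loss = F.mse_loss(pred, ratings) + lam * hint_loss(s_hints, t_hints)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

For brevity this sketch folds everything into a single combined objective; in the paper's stage-wise hints training, the regularization would instead be applied per training stage.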

Bibliographic Information

  • Source
    IEEE Transactions on Multimedia | 2019, Issue 3 | pp. 555-565 | 11 pages
  • Author affiliations

    Univ Technol Sydney, Fac Engn & Informat Technol, Ultimo, NSW 2007, Australia|Univ Sydney, Fac Engn & Informat Technol, UBTECH Sydney Artificial Intelligence Ctr, Darlington, NSW 2008, Australia|Univ Sydney, Fac Engn & Informat Technol, Sch Informat Technol, Darlington, NSW 2008, Australia|Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen Key Lab Virtual Real & Human Interact Te, Shenzhen 518055, Peoples R China;

    Tencent AI Lab, Shenzhen 518057, Peoples R China;

    JD AI Res, Beijing 100020, Peoples R China;

    Univ Sydney, Fac Engn & Informat Technol, UBTECH Sydney Artificial Intelligence Ctr, Darlington, NSW 2008, Australia|Univ Sydney, Fac Engn & Informat Technol, Sch Informat Technol, Darlington, NSW 2008, Australia;

    Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen Key Lab Virtual Real & Human Interact Te, Shenzhen 518055, Peoples R China|Chinese Univ Hong Kong, Hong Kong, Peoples R China;

    Univ Sydney, Fac Engn & Informat Technol, UBTECH Sydney Artificial Intelligence Ctr, Darlington, NSW 2008, Australia|Univ Sydney, Fac Engn & Informat Technol, Sch Informat Technol, Darlington, NSW 2008, Australia;

  • Indexing information
  • Original format: PDF
  • Language: eng
  • Chinese Library Classification (CLC)
  • Keywords

    Recommendation systems; adversarial learning; collaborative filtering; malicious attacks;


