IEEE/CAA Journal of Automatica Sinica

Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning



Abstract

A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. To address this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating the convergence rate by integrating momentum effects into the training process. Building on it, an MPSGD-based latent factor (MLF) model is obtained, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RSs indicate that, owing to the MPSGD algorithm, an MLF model outperforms existing state-of-the-art models in both computational efficiency and scalability.
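To make the core idea concrete, the following is a minimal serial sketch of momentum-incorporated SGD for a latent factor model over a sparse rating matrix. All function names and hyperparameter values here are illustrative, not taken from the paper, and the parallel data-splitting strategy (partitioning the rating matrix into disjoint blocks so that workers can update independent rows of the factor matrices concurrently) is only noted in a comment; this sketch shows the momentum-accelerated update alone.

```python
import numpy as np

def mpsgd_factorize(ratings, n_users, n_items, rank=4, lr=0.01,
                    momentum=0.9, reg=0.02, epochs=200, seed=0):
    """Illustrative momentum-incorporated SGD for latent factor analysis.

    ratings: list of (user, item, value) triples from a sparse matrix.
    Returns factor matrices P (n_users x rank) and Q (n_items x rank)
    such that P[u] @ Q[i] approximates the observed rating r_{ui}.

    In a true MPSGD-style parallel scheme, `ratings` would be split into
    disjoint blocks touching non-overlapping rows of P and Q, and each
    block would be processed by a separate worker; this sketch is serial.
    """
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, rank))
    Q = 0.1 * rng.standard_normal((n_items, rank))
    vP = np.zeros_like(P)  # momentum (velocity) term for P
    vQ = np.zeros_like(Q)  # momentum (velocity) term for Q
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]               # prediction error on one entry
            gP = -err * Q[i] + reg * P[u]       # gradient of the regularized
            gQ = -err * P[u] + reg * Q[i]       # squared loss w.r.t. P[u], Q[i]
            vP[u] = momentum * vP[u] - lr * gP  # accumulate momentum, then step
            vQ[i] = momentum * vQ[i] - lr * gQ
            P[u] += vP[u]
            Q[i] += vQ[i]
    return P, Q

def rmse(ratings, P, Q):
    """Root-mean-square error over the observed entries only."""
    se = [(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]
    return float(np.sqrt(np.mean(se)))
```

With momentum near 0.9, each update effectively averages recent gradients, which damps the oscillation of plain SGD on sparse data and is the source of the faster convergence the abstract claims; setting `momentum=0` recovers ordinary regularized SGD.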
