Future of Information and Communication Conference

GPU_MF_SGD: A Novel GPU-Based Stochastic Gradient Descent Method for Matrix Factorization



Abstract

Recommender systems are used in most of today's applications. Providing accurate real-time suggestions is one of the most crucial challenges they face. Matrix factorization (MF) is an effective technique for recommender systems because it improves accuracy. Stochastic Gradient Descent (SGD) is the most popular approach used to speed up MF. SGD is a sequential algorithm that is not trivial to parallelize, especially for large-scale problems. Recently, many studies have proposed parallel methods for SGD. In this research, we propose GPU_MF_SGD, a novel GPU-based method for large-scale recommender systems. GPU_MF_SGD utilizes Graphics Processing Unit (GPU) resources by ensuring load balancing and linear scalability, and it achieves coalesced access to global memory without a preprocessing phase. Our method demonstrates a 3.1X-5.4X speedup over the state-of-the-art GPU method, CuMF_SGD.
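The sequential SGD baseline that methods like GPU_MF_SGD parallelize can be illustrated with a minimal sketch. The code below is not the paper's method; it is a plain single-threaded SGD update for the factorization R ≈ P·Qᵀ, where the function name, hyperparameters, and rating-triple format are illustrative assumptions. It shows why the algorithm is hard to parallelize: each rating update reads and writes shared rows of P and Q.

```python
import random

def sgd_mf(ratings, n_users, n_items, k=4, lr=0.05, reg=0.01,
           epochs=500, seed=0):
    """Sequential SGD for matrix factorization (illustrative sketch).

    ratings: list of (user, item, rating) triples.
    Returns factor matrices P (n_users x k) and Q (n_items x k).
    """
    rng = random.Random(seed)
    P = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        rng.shuffle(ratings)  # stochastic visiting order
        for u, i, r in ratings:
            # Prediction error for this single observed rating.
            err = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                # Updates touch shared rows P[u] and Q[i]: two ratings
                # sharing a user or item conflict if run concurrently.
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

def predict(P, Q, u, i):
    """Dot product of the learned user and item factor rows."""
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))
```

Because consecutive updates may read factors written by the previous step, naive parallel execution races on P and Q; GPU approaches must schedule updates so that concurrent threads avoid such conflicts while keeping memory accesses coalesced.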


