Limited-Memory Fast Gradient Descent Method for Graph Regularized Nonnegative Matrix Factorization

Abstract

Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X into the product of two lower-rank nonnegative factor matrices W and H, i.e., X ≈ WH, and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or the Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method has been proposed for optimizing NMF, which accelerates MUR by searching for the optimal step size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory, and 2) the Hessian inverse operator and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient when searching for the optimal step size in MFGD. Preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance when optimizing KL-divergence based GNMF on two popular face image datasets, ORL and PIE, and two text corpora, Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with representative GNMF solvers.
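
The key device described in the abstract is replacing the explicit inverse-Hessian-times-gradient product in MFGD's Newton-type step-size search with a limited-memory approximation built from a short history of curvature pairs. Below is a minimal NumPy sketch of the standard L-BFGS two-loop recursion that computes such an approximation; the function name and all implementation details are illustrative assumptions, not code from the paper.

import numpy as np

def lbfgs_two_loop(grad, s_list, y_list):
    # Approximate (inverse Hessian) @ grad with the standard L-BFGS two-loop
    # recursion. s_list / y_list hold the last m curvature pairs
    # s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k, most recent last.
    q = np.array(grad, dtype=float)
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        q -= alpha * y
        alphas.append(alpha)
    # Scale by a diagonal initial Hessian guess gamma * I.
    if s_list:
        gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    # Second loop: oldest pair to newest.
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s
    return r  # approximates inverse-Hessian times gradient; no Hessian is formed

# Toy check on a quadratic with Hessian A: the result should be close to A^{-1} g.
rng = np.random.default_rng(0)
A = np.diag([1.0, 5.0, 10.0])
xs = [rng.standard_normal(3) for _ in range(4)]
s_list = [xs[i + 1] - xs[i] for i in range(3)]
y_list = [A @ s for s in s_list]  # exact curvature pairs for a quadratic
g = rng.standard_normal(3)
print(lbfgs_two_loop(g, s_list, y_list))
print(np.linalg.solve(A, g))

Only the m stored curvature pairs and a few working vectors are needed, which is what keeps the memory footprint limited compared with forming and inverting the dense Hessian in MFGD.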