Neural Processing Letters

Parallel Implementation of the Nonlinear Semi-NMF Based Alternating Optimization Method for Deep Neural Networks

Abstract

For computing the weights of deep neural networks (DNNs), the backpropagation (BP) method has been widely used as the de facto standard algorithm. Because the BP method is a stochastic gradient descent method based on derivatives of the objective function, it can be difficult to find appropriate hyperparameters, such as the learning rate. As an alternative approach to computing weight matrices, we recently proposed an alternating optimization method using linear and nonlinear semi-nonnegative matrix factorizations (semi-NMFs). In this paper, we propose a parallel implementation of the nonlinear semi-NMF based method. The experimental results show that our nonlinear semi-NMF based method and its parallel implementation are competitive with conventional DNNs trained by the BP method.
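To make the alternating-optimization idea concrete, the following is a minimal NumPy sketch assuming a two-layer model Y ≈ U·ReLU(WX): a linear step refits the output weights U by least squares, and a nonlinear semi-NMF style step refits W against a nonnegatively projected hidden target. The pseudo-inverse updates, the projection, and all names (U, W, alternating_semi_nmf) are illustrative assumptions, not the paper's exact update rules or its parallel implementation.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def alternating_semi_nmf(X, Y, hidden=64, iters=50, seed=0):
    """Hedged sketch: fit Y ~ U @ relu(W @ X) by alternating
    least-squares-style updates instead of backpropagation.
    X: (n_in, n_samples), Y: (n_out, n_samples)."""
    rng = np.random.default_rng(seed)
    n_out, n_in = Y.shape[0], X.shape[0]
    W = rng.standard_normal((hidden, n_in)) * 0.1
    U = rng.standard_normal((n_out, hidden)) * 0.1
    for _ in range(iters):
        H = relu(W @ X)            # hidden activations (nonnegative)
        U = Y @ np.linalg.pinv(H)  # linear step: least squares for U
        # Nonlinear semi-NMF-style step (illustrative assumption):
        # pick a target A for the hidden outputs, project it onto
        # ReLU's range, then refit W by least squares against it.
        A = np.linalg.pinv(U) @ Y
        A = np.maximum(A, 0.0)
        W = A @ np.linalg.pinv(X)  # refit W so relu(W @ X) ~ A
    return U, W
```

In this sketch each iteration solves two least-squares subproblems rather than taking gradient steps, which is why no learning rate appears; that is the property the abstract contrasts with the BP method.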
机译:为了计算深度神经网络(DNN)的权重,反向传播(BP)方法已被广泛用作事实上的标准算法。由于BP方法基于使用目标函数导数的随机梯度下降方法,因此BP方法很难找到合适的参数,例如学习率。作为计算权重矩阵的另一种方法,我们最近提出了一种使用线性和非线性半负矩阵分解(semi-NMFs)的交替优化方法。在本文中,我们提出了基于非线性半NMF的方法的并行实现。实验结果表明,我们的基于非线性半NMF的方法及其并行实现与使用BP方法的常规DNN具有竞争优势。
