AAAI Conference on Artificial Intelligence

Expected Tensor Decomposition with Stochastic Gradient Descent



Abstract

In this study, we investigate expected CP decomposition, a special case of CP decomposition in which the tensor to be decomposed is given as the sum or average of tensor samples X^(t) for t = 1, ..., T. To determine this decomposition, we develop stochastic-gradient-descent-type algorithms with four appealing features: efficient memory use, the ability to work in an online setting, robustness to parameter tuning, and simplicity. Our theoretical analysis shows that the solutions do not diverge to infinity for any initial value or step size. Experimental results confirm that our algorithms significantly outperform all existing methods in terms of accuracy. We also show that they can successfully decompose a large tensor containing billions of nonzero elements.
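To make the setting concrete, below is a minimal sketch (not the paper's algorithms) of how stochastic gradient descent can fit rank-R CP factors to the average of tensor samples: each update uses a single sample X^(t), whose gradient is an unbiased estimate of the gradient with respect to the averaged tensor. All names here (expected_cp_sgd, khatri_rao, the learning rate lr, and so on) are illustrative assumptions, and the sketch does not reproduce the memory efficiency, online guarantees, or step-size robustness claimed in the abstract.

    import numpy as np

    def khatri_rao(U, V):
        """Column-wise Khatri-Rao product of U (m x R) and V (n x R) -> (m*n x R)."""
        m, R = U.shape
        n, _ = V.shape
        return (U[:, None, :] * V[None, :, :]).reshape(m * n, R)

    def unfold(X, mode):
        """Mode-`mode` unfolding of a 3-way array into a matrix."""
        return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

    def expected_cp_sgd(samples, rank, lr=1e-3, epochs=20, seed=0):
        """Fit rank-`rank` CP factors (A, B, C) to the average of `samples` by SGD.

        Each stochastic gradient is computed from one sample X^(t); its
        expectation equals the gradient of the fit to the averaged tensor.
        """
        rng = np.random.default_rng(seed)
        I, J, K = samples[0].shape
        A = 0.1 * rng.standard_normal((I, rank))
        B = 0.1 * rng.standard_normal((J, rank))
        C = 0.1 * rng.standard_normal((K, rank))

        for _ in range(epochs):
            for t in rng.permutation(len(samples)):
                X = samples[t]
                # Khatri-Rao factors matching each mode-wise unfolding.
                MA, MB, MC = khatri_rao(B, C), khatri_rao(A, C), khatri_rao(A, B)
                # Gradients of 0.5 * ||X^(t) - [[A, B, C]]||_F^2 per factor.
                gA = -(unfold(X, 0) - A @ MA.T) @ MA
                gB = -(unfold(X, 1) - B @ MB.T) @ MB
                gC = -(unfold(X, 2) - C @ MC.T) @ MC
                A -= lr * gA
                B -= lr * gB
                C -= lr * gC
        return A, B, C

Calling expected_cp_sgd(samples, rank=5) on a list of equally shaped numpy arrays returns the three factor matrices; processing one sample per update is what allows an online, memory-light treatment of the averaged tensor, which is the setting the paper studies.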

