IEEE Transactions on Signal Processing

Error Preserving Correction: A Method for CP Decomposition at a Target Error Bound

Abstract

In CANDECOMP/PARAFAC tensor decomposition, degeneracy often occurs in difficult scenarios, especially when the rank exceeds the tensor dimension, when the loading components are highly collinear in several or all modes, or when the CPD does not have an optimal solution. In such cases, the norms of some rank-1 tensors become significantly large and cancel each other. This causes algorithms to get stuck in local minima, and running a huge number of iterations does not improve the decomposition. In this paper, we propose an error preserving correction method to deal with this problem. Our aim is to seek an alternative tensor which preserves the approximation error but whose rank-1 tensor components have minimized norms. Alternating and all-at-once correction algorithms have been developed for the problem. In addition, we propose a novel CPD with a bound constraint on the norms of the rank-one tensors. The method can be useful for decomposing tensors that traditional algorithms fail to handle. Finally, we demonstrate an application of the proposed method to image denoising and to decomposition of the weight tensors in convolutional neural networks.
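
The abstract describes the correction as preserving the approximation error while shrinking the rank-one components. A minimal sketch of that formulation for a third-order tensor, written in our own notation (the tensor \(\mathcal{Y}\), target error \(\delta\), rank \(R\), and component vectors \(\mathbf{a}_r, \mathbf{b}_r, \mathbf{c}_r\) are illustrative assumptions, not taken verbatim from the paper):

\[
\min_{\{\mathbf{a}_r,\mathbf{b}_r,\mathbf{c}_r\}} \; \sum_{r=1}^{R} \big\|\mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r\big\|_F^2
\quad \text{subject to} \quad
\Big\|\mathcal{Y} - \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r\Big\|_F \le \delta ,
\]

where \(\delta\) is the error of the current approximation. The companion bounded-norm CPD mentioned in the abstract can be read as the converse problem: minimize the approximation error subject to an upper bound on the norms of the rank-one terms.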
