International Symposium on Neural Networks

Optimal Calculation of Tensor Learning Approaches



Abstract

Most algorithms have been extended to the tensor space to create versions that accept tensor inputs directly. Unfortunately, essentially all objective functions of algorithms in the tensor space are non-convex. However, the sub-problems constructed by fixing all modes but one are often convex and easy to solve. This alternating scheme can still run into convergence difficulties: the iterations sometimes get stuck in a local minimum and fail to reach the global solution. Here, we propose a computational framework for constrained and unconstrained tensor methods. Using our methods, the convergence behavior of such algorithms can be improved to some extent and better solutions obtained. We applied our technique to Uncorrelated Multilinear Principal Component Analysis (UMPCA), Tensor Rank-One Discriminant Analysis (TR1DA), and Support Tensor Machines (STM); experimental results show the effectiveness of our method.
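The alternating scheme the abstract describes (fix all modes but one, solve the resulting convex sub-problem, and cycle) can be illustrated on the simplest tensor problem of this kind: a best rank-one approximation of a third-order tensor. This is a minimal sketch of the general technique, not the paper's proposed framework; the tensor shape and iteration count are arbitrary choices for illustration.

```python
import numpy as np

# Illustrative alternating optimization (not the paper's framework):
# approximate X by lam * (u1 outer u2 outer u3). With two mode vectors
# fixed, the update for the remaining one is a closed-form solution of
# a convex least-squares sub-problem (a contraction of X, normalized).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))

# Random unit-norm initialization of the three mode vectors.
u1 = rng.standard_normal(4); u1 /= np.linalg.norm(u1)
u2 = rng.standard_normal(5); u2 /= np.linalg.norm(u2)
u3 = rng.standard_normal(6); u3 /= np.linalg.norm(u3)

for _ in range(100):  # alternate over the modes until (local) convergence
    u1 = np.einsum('ijk,j,k->i', X, u2, u3); u1 /= np.linalg.norm(u1)
    u2 = np.einsum('ijk,i,k->j', X, u1, u3); u2 /= np.linalg.norm(u2)
    u3 = np.einsum('ijk,i,j->k', X, u1, u2); u3 /= np.linalg.norm(u3)

# Rank-one weight and residual of the resulting approximation.
lam = np.einsum('ijk,i,j,k->', X, u1, u2, u3)
approx = lam * np.einsum('i,j,k->ijk', u1, u2, u3)
err = np.linalg.norm(X - approx)
```

Because the objective is non-convex overall, the fixed point reached depends on the initialization, which is exactly the local-minimum difficulty the abstract refers to; restarting from several random initializations and keeping the best `lam` is a common practical remedy.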
