International Conference on Algorithmic Learning Theory

Accelerated Training of Max-Margin Markov Networks with Kernels


Abstract

Structured output prediction is an important machine learning problem in both theory and practice, and the max-margin Markov network (M³N) is an effective approach to it. All state-of-the-art algorithms for optimizing M³N objectives take at least O(1/ε) iterations to find an ε-accurate solution. [1] broke this barrier by proposing an excessive gap reduction (EGR) technique which converges in O(1/√ε) iterations. However, it is restricted to Euclidean projections, which consequently require an intractable amount of computation per iteration when applied to M³N. In this paper, we show that by extending EGR to Bregman projections, this faster rate of convergence is retained and, more importantly, the updates can be performed efficiently by exploiting graphical model factorization. Further, we design a kernelized procedure which allows all computations per iteration to be performed at the same cost as in the state-of-the-art approaches.
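The central claim is the improved iteration complexity: O(1/√ε) for an accelerated EGR-style scheme versus O(1/ε) for standard first-order methods. The sketch below is not the paper's Bregman-projection EGR for M³N; it is a minimal illustration of that rate gap, assuming a toy smooth quadratic objective and using a generic Nesterov-accelerated gradient method in place of EGR. All problem data and function names here are illustrative assumptions.

```python
# Illustrative only: contrast the O(1/eps) rate of plain gradient descent with
# the O(1/sqrt(eps)) rate of an accelerated (Nesterov-style) method on an
# assumed toy quadratic. This is NOT the paper's Bregman-projection EGR for M^3N.
import numpy as np


def make_problem(n=50, seed=0):
    """Build a random convex quadratic f(x) = 0.5 x'Qx - b'x (assumed toy data)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    Q = A.T @ A / n          # positive semidefinite, almost surely positive definite
    b = rng.standard_normal(n)
    return Q, b


def grad(Q, b, x):
    return Q @ x - b


def plain_gd(Q, b, steps, lr):
    """Plain gradient descent: suboptimality shrinks like O(1/k)."""
    x = np.zeros(len(b))
    for _ in range(steps):
        x = x - lr * grad(Q, b, x)
    return x


def accelerated(Q, b, steps, lr):
    """Nesterov-accelerated gradient: suboptimality shrinks like O(1/k^2),
    i.e. an eps-accurate solution in O(1/sqrt(eps)) iterations."""
    x = np.zeros(len(b))
    y = x.copy()
    t = 1.0
    for _ in range(steps):
        x_next = y - lr * grad(Q, b, y)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x


if __name__ == "__main__":
    Q, b = make_problem()
    x_star = np.linalg.solve(Q, b)               # exact minimizer for reference
    f = lambda x: 0.5 * x @ Q @ x - b @ x
    L = np.linalg.eigvalsh(Q).max()              # Lipschitz constant of the gradient
    for steps in (50, 200, 800):
        gap_gd = f(plain_gd(Q, b, steps, 1.0 / L)) - f(x_star)
        gap_acc = f(accelerated(Q, b, steps, 1.0 / L)) - f(x_star)
        print(f"steps={steps:4d}  GD gap={gap_gd:.2e}  accelerated gap={gap_acc:.2e}")
```

Running the script prints the optimality gap at several iteration counts; the accelerated variant's gap decays markedly faster, which is the same qualitative gap the paper exploits, with EGR and Bregman projections standing in for the momentum step here.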
