Optimization methods & software

Trust-region algorithms for training responses: machine learning methods using indefinite Hessian approximations



Abstract

Machine learning (ML) problems are often posed as highly nonlinear and nonconvex unconstrained optimization problems. Methods based on stochastic gradient descent scale easily to very large problems but may require fine-tuning many hyper-parameters. Quasi-Newton approaches based on the limited-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) update typically do not require manual hyper-parameter tuning, but they approximate a potentially indefinite Hessian with a positive-definite matrix. Hessian-free methods exploit the ability to compute Hessian-vector products without forming the entire Hessian matrix, but each iteration is significantly more expensive than in quasi-Newton methods. In this paper we propose an alternative approach for solving ML problems, based on a quasi-Newton trust-region framework for large-scale optimization that allows for indefinite Hessian approximations. Numerical experiments on a standard test data set show that, under a fixed computational time budget, the proposed methods achieve better results than the traditional limited-memory BFGS and Hessian-free methods.
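The abstract contrasts BFGS, which forces a positive-definite model, with trust-region methods that tolerate indefinite Hessian approximations. As an illustration only, not the paper's actual algorithm, the sketch below pairs a symmetric rank-one (SR1) quasi-Newton update, which can yield an indefinite approximation, with a dense trust-region subproblem solver; the Rosenbrock function stands in for a nonconvex training loss, and all function names and parameter values here are our own choices.

```python
import numpy as np

def rosenbrock(x):
    """Toy nonconvex objective and gradient (stand-in for an ML loss)."""
    f = 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
    g = np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])
    return f, g

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one update; unlike BFGS, B may become indefinite."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) > tol * np.linalg.norm(r) * np.linalg.norm(s):
        B = B + np.outer(r, r) / denom
    return B

def tr_subproblem(B, g, delta):
    """Minimize g^T p + 0.5 p^T B p subject to ||p|| <= delta.

    Dense eigendecomposition plus bisection on the Lagrange multiplier;
    the so-called 'hard case' is not handled, which is fine for a sketch."""
    lam, Q = np.linalg.eigh(B)
    gq = Q.T @ g
    pnorm = lambda sigma: np.linalg.norm(gq / (lam + sigma))
    if lam[0] > 0 and pnorm(0.0) <= delta:
        sigma = 0.0                      # interior Newton-like step
    else:
        lo = max(0.0, -lam[0]) + 1e-12   # shift making B + sigma*I definite
        hi = lo + 1.0
        while pnorm(hi) > delta:         # bracket the boundary solution
            hi *= 2.0
        for _ in range(200):             # bisection on ||p(sigma)|| = delta
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if pnorm(mid) > delta else (lo, mid)
        sigma = hi
    return -Q @ (gq / (lam + sigma))

def sr1_trust_region(x, delta=1.0, max_iter=500, gtol=1e-6):
    """Basic trust-region loop driven by an SR1 (possibly indefinite) model."""
    B = np.eye(len(x))
    f, g = rosenbrock(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < gtol:
            break
        p = tr_subproblem(B, g, delta)
        f_new, g_new = rosenbrock(x + p)
        pred = -(g @ p + 0.5 * p @ B @ p)   # model-predicted decrease
        rho = (f - f_new) / max(pred, 1e-16)
        if rho > 0.75 and np.linalg.norm(p) > 0.8 * delta:
            delta *= 2.0                     # model is trustworthy: grow region
        elif rho < 0.1:
            delta *= 0.5                     # model is poor: shrink region
        s, y = p, g_new - g
        if rho > 1e-4:                       # accept only sufficient decrease
            x, f, g = x + p, f_new, g_new
        B = sr1_update(B, s, y)              # SR1 can update on rejected steps
    return x

x_star = sr1_trust_region(np.array([-1.2, 1.0]))
```

The trust-region mechanism is what makes the indefinite SR1 model safe to use: the step is bounded by the region radius even when the model has negative curvature, whereas a line-search BFGS method must force positive-definiteness to guarantee descent directions.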
