Annual Conference on Neural Information Processing Systems

Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods

Abstract

We consider derivative-free algorithms for stochastic optimization problems that use only noisy function values rather than gradients, analyzing their finite-sample convergence rates. We show that if pairs of function values are available, algorithms that use gradient estimates based on random perturbations suffer a factor of at most √d in convergence rate over traditional stochastic gradient methods, where d is the problem dimension. We complement our algorithmic development with information-theoretic lower bounds on the minimax convergence rate of such problems, which show that our bounds are sharp with respect to all problem-dependent quantities: they cannot be improved by more than constant factors.
