Annual Conference on Neural Information Processing Systems

Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods

Abstract

We consider derivative-free algorithms for stochastic optimization problems that use only noisy function values rather than gradients, analyzing their finite-sample convergence rates. We show that if pairs of function values are available, algorithms that use gradient estimates based on random perturbations suffer a factor of at most √d in convergence rate over traditional stochastic gradient methods, where d is the problem dimension. We complement our algorithmic development with information-theoretic lower bounds on the minimax convergence rate of such problems, which show that our bounds are sharp with respect to all problem-dependent quantities: they cannot be improved by more than constant factors.

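As a concrete illustration of the pair-of-function-values setting the abstract describes, here is a minimal sketch of a two-point gradient estimator driving stochastic gradient descent. The names (`two_point_gradient_estimate`, `zero_order_sgd`, `F`, `delta`) and the specific choices (a uniform random direction on the sphere, 1/√t step sizes, no projection onto a constraint set) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def two_point_gradient_estimate(F, x, xi, delta, rng):
    """Estimate a gradient from a pair of noisy function values.

    F(x, xi) returns a noisy objective value at x under randomness xi;
    both evaluations below share the same xi, matching the setting in
    which pairs of function values are available. The factor d makes
    the estimate roughly unbiased for the gradient of a delta-smoothed
    objective, since E[u u^T] = I/d for u uniform on the unit sphere.
    """
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # random perturbation direction on the unit sphere
    return d * (F(x + delta * u, xi) - F(x - delta * u, xi)) / (2 * delta) * u

def zero_order_sgd(F, sample_xi, x0, steps, delta, step_size, rng):
    """Stochastic gradient descent driven only by function evaluations."""
    x = x0.astype(float).copy()
    for t in range(1, steps + 1):
        xi = sample_xi(rng)
        g = two_point_gradient_estimate(F, x, xi, delta, rng)
        x -= step_size / np.sqrt(t) * g  # decaying 1/sqrt(t) step sizes
    return x

if __name__ == "__main__":
    # Toy problem: minimize E[0.5 * ||x - xi||^2] with xi ~ N(x_star, I),
    # whose minimizer is x_star. Only noisy function values are observed.
    rng = np.random.default_rng(0)
    d = 10
    x_star = np.ones(d)
    F = lambda x, xi: 0.5 * np.sum((x - xi) ** 2)
    sample_xi = lambda rng: x_star + rng.standard_normal(d)
    x_hat = zero_order_sgd(F, sample_xi, np.zeros(d), 20000, 1e-3, 0.2, rng)
    print(np.linalg.norm(x_hat - x_star))  # residual distance; shrinks with more steps
```

The rescaling by d is where the dimension dependence enters: the estimator's second moment grows with d, which is consistent with the factor of at most √d in convergence rate that the abstract establishes relative to methods with access to exact stochastic gradients.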