JMLR: Workshop and Conference Proceedings

Variants of RMSProp and Adagrad with Logarithmic Regret Bounds

Abstract

Adaptive gradient methods have recently become very popular, in particular because they have been shown to be useful in the training of deep neural networks. In this paper we analyze RMSProp, originally proposed for the training of deep neural networks, in the context of online convex optimization and show $\sqrt{T}$-type regret bounds. Moreover, we propose two variants, SC-Adagrad and SC-RMSProp, for which we show logarithmic regret bounds for strongly convex functions. Finally, we demonstrate in experiments that these new variants outperform other adaptive gradient techniques or stochastic gradient descent in the optimization of strongly convex functions as well as in the training of deep neural networks.
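For orientation, the sketch below contrasts a standard Adagrad step with a hypothetical SC-Adagrad-style step of the kind the abstract describes for strongly convex objectives. It is an illustrative assumption, not the paper's exact algorithm: the function names, the constants `xi1` and `xi2`, and the decaying damping schedule are placeholders chosen for the example.

```python
import numpy as np

def adagrad_step(x, g, v, lr=0.01, eps=1e-8):
    """Standard Adagrad: divide by the square root of the accumulated
    squared gradients (associated with O(sqrt(T)) regret bounds)."""
    v = v + g * g
    return x - lr * g / (np.sqrt(v) + eps), v

def sc_adagrad_step(x, g, v, lr=0.01, xi1=0.1, xi2=1.0):
    """Hypothetical SC-Adagrad-style step: the square root in the
    denominator is dropped and a decaying per-coordinate damping term
    is used, the kind of modification the paper associates with
    logarithmic regret for strongly convex functions. Constants and
    the damping schedule here are assumptions for illustration."""
    v = v + g * g
    delta = xi2 * np.exp(-xi1 * v)  # assumed decaying damping
    return x - lr * g / (v + delta), v

# Toy usage on the strongly convex quadratic f(x) = 0.5 * ||x||^2.
x, v = np.ones(3), np.zeros(3)
for t in range(1000):
    g = x.copy()                    # gradient of the quadratic
    x, v = sc_adagrad_step(x, g, v)
print(x)                            # approaches the minimizer 0
```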
