IEEE Annual Conference on Decision and Control

A random monotone operator framework for strongly convex stochastic optimization



Abstract

Analysis of every algorithm for stochastic optimization seems to require a different convergence proof. It would be desirable to have a unified mathematical framework within which a proof of convergence and its rate can be obtained with minimal extra effort. We present a unified convergence analysis framework, based on random monotone operators, for iterative algorithms for strongly convex stochastic optimization. The framework offers both versatility and simplicity, and allows for a clean and straightforward analysis of many algorithms for stochastic convex minimization, saddle-point problems, and variational inequalities. We show convergence of the random operator to a probabilistic fixed point and obtain non-asymptotic rates of convergence. The analysis technique relies on a novel stochastic dominance argument.
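
The abstract describes viewing each iteration of a stochastic optimization algorithm as the application of a random operator and studying convergence toward a probabilistic fixed point. As a rough illustration only, and not the paper's construction, the sketch below casts plain stochastic gradient descent on a strongly convex regularized least-squares problem as the iteration x_{k+1} = T_{xi_k}(x_k); the problem data, step size, and the monitored distance to the minimizer are all assumptions of this toy example.

```python
# Illustrative sketch (assumed toy setup, not the paper's framework):
# SGD on f(x) = (1/n) * sum_i 0.5*(a_i^T x - b_i)^2 + (mu/2)*||x||^2,
# written as the random-operator iteration x_{k+1} = T_{xi_k}(x_k),
# where T_i(x) = x - eta * grad f_i(x) uses a single sampled index i.
import numpy as np

rng = np.random.default_rng(0)
n, d, mu, eta = 500, 5, 0.5, 0.05

A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

# The objective is mu-strongly convex, so its unique minimizer has a closed form.
x_star = np.linalg.solve(A.T @ A / n + mu * np.eye(d), A.T @ b / n)

def T(i, x):
    """Random operator T_i: one stochastic-gradient step using sample i only."""
    grad_i = (A[i] @ x - b[i]) * A[i] + mu * x
    return x - eta * grad_i

x = np.zeros(d)
for k in range(2000):
    i = rng.integers(n)   # draw the random index xi_k
    x = T(i, x)           # apply the random operator T_{xi_k}
    if (k + 1) % 500 == 0:
        print(f"iter {k + 1:5d}   ||x_k - x*|| = {np.linalg.norm(x - x_star):.4f}")
```

With a constant step size the printed distance typically decreases and then settles in a neighborhood of the minimizer rather than converging exactly; loosely, this is the kind of limiting behavior that a probabilistic-fixed-point statement is meant to capture.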
