SIAM Journal on Optimization: A Publication of the Society for Industrial and Applied Mathematics

VARIANCE-BASED EXTRAGRADIENT METHODS WITH LINE SEARCH FOR STOCHASTIC VARIATIONAL INEQUALITIES


Abstract

In this paper, we propose dynamic sampled stochastic approximated (DS-SA) extragradient methods for stochastic variational inequalities (SVIs) that are robust with respect to an unknown Lipschitz constant L. To the best of our knowledge, we propose the first provably convergent robust SA method with variance reduction, either for SVIs or stochastic optimization, assuming just an unbiased stochastic oracle within a large-sample regime. This widens the applicability and improves, up to constants, the desired efficient acceleration of previous variance reduction methods, all of which still assume knowledge of L (and, hence, are not robust against its estimate). Precisely, compared to the iteration and oracle complexities of O(ε⁻²) of previous robust methods with a small stepsize policy, our robust method uses a DS-SA line search scheme obtaining the faster iteration complexity of O(ε⁻¹) with oracle complexity of (ln L)·O(d·ε⁻²) (up to log factors on ε⁻¹) for a d-dimensional space. This matches, up to constants, the sample complexity of the sample average approximation estimator, which does not assume additional problem information (such as L). Differently from previous robust methods for ill-conditioned problems, we allow an unbounded feasible set and an oracle with multiplicative noise (MN) whose variance is not necessarily uniformly bounded. These properties are reflected in our complexity estimates, which depend only on L and local variances or fourth moments at solutions. The robustness and variance reduction properties of our DS-SA line search scheme come at the expense of nonmartingale-like dependencies (NMDs) due to the needed inner statistical estimation of a lower bound for L. In order to handle an NMD and an MN, our proofs rely on a novel iterative localization argument based on empirical process theory.
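To make the core idea concrete, the following is a minimal sketch of a stochastic extragradient iteration with a backtracking line search that adapts the stepsize to an unknown Lipschitz constant L, in the spirit of the abstract. It is an illustrative toy on a 2-dimensional monotone operator F(x) = Ax + b with an unbiased mini-batch oracle, not the paper's DS-SA scheme; all names, constants, and the acceptance test (a standard local Lipschitz check) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy monotone operator F(x) = A x + b: skew-dominant A with a small
# positive-definite symmetric part, so the VI has a unique solution F(x*) = 0.
A = np.array([[0.1, 1.0], [-1.0, 0.1]])
b = np.array([1.0, -1.0])

def oracle(x, batch):
    """Unbiased stochastic oracle: mini-batch average of noisy F evaluations."""
    noise = rng.normal(scale=0.1, size=(batch, x.size)).mean(axis=0)
    return A @ x + b + noise

def extragradient_ls(x, iters=200, batch=64, alpha0=1.0, beta=0.5, theta=0.9):
    """Extragradient with backtracking line search (hypothetical sketch).

    The line search accepts a stepsize alpha once the local Lipschitz test
    alpha * ||F(z) - F(x)|| <= theta * ||z - x|| holds, which is satisfied
    whenever alpha * L <= theta -- no prior knowledge of L is needed.
    """
    for _ in range(iters):
        g = oracle(x, batch)
        alpha = alpha0
        while True:
            z = x - alpha * g                    # extrapolation step
            gz = oracle(z, batch)
            if (alpha * np.linalg.norm(gz - g) <= theta * np.linalg.norm(z - x)
                    or alpha < 1e-8):            # guard against noise near x*
                break
            alpha *= beta                        # backtrack: shrink stepsize
        x = x - alpha * gz                       # update with extrapolated gradient
    return x

x_star = np.linalg.solve(A, -b)                  # exact solution of F(x) = 0
x_hat = extragradient_ls(np.zeros(2))
```

Note the characteristic two-oracle-call structure per accepted step (at x and at the extrapolated point z); the variance reduction in the paper comes from growing the batch size dynamically, which this fixed-batch sketch does not attempt.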
