Devising efficient algorithms to solve continuously-varying strongly convex optimization programs is key in many applications, from control systems to signal processing and machine learning. In this context, solving means finding and tracking the optimizer trajectory of the continuously-varying convex optimization program. Recently, a novel prediction-correction methodology has been put forward to set up iterative algorithms that sample the continuously-varying optimization program at discrete time steps and perform a limited amount of computations, both to correct their approximate optimizer with the newly sampled problem and to predict how the optimizer will change at the next time step. Prediction-correction algorithms have been shown to outperform more classical strategies, i.e., correction-only methods. Typically, prediction-correction methods have asymptotic tracking errors of order $h^2$, where $h$ is the sampling period, whereas classical strategies have errors of order $h$. Up to now, prediction-correction algorithms have been developed in the primal space, both for unconstrained and simply constrained convex programs. In this paper, we show how to tackle linearly constrained continuously-varying problems by prediction-correction in the dual space, and we prove asymptotic error bounds similar to those of their primal counterparts.
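As a minimal illustration of the prediction-correction idea described above (this sketch is not from the paper; the scalar objective $f(x,t)=\tfrac12(x-\sin t)^2$, the step size, and the function name `track` are all illustrative assumptions), one can compare a correction-only tracker against a prediction-correction tracker on a time-varying quadratic whose optimizer trajectory is $x^\star(t)=\sin t$. The prediction step uses the mixed derivative $\nabla_{tx} f = -\cos t$ and the Hessian $\nabla_{xx} f = 1$ to extrapolate the optimizer forward by one sampling period $h$; the correction step is a single inexact gradient step at the new sampled time:

```python
import math

def track(h=0.1, steps=200, predict=True):
    """Track x*(t) = sin(t) for min_x 0.5*(x - sin t)^2, sampled every h seconds.

    Returns the worst tracking error over the second half of the run
    (after transients have died out).
    """
    x, t = 0.0, 0.0
    errs = []
    for _ in range(steps):
        if predict:
            # Prediction: x <- x - h * (Hxx)^{-1} * Hxt.
            # Here Hxx = 1 and Hxt = -cos(t), so this extrapolates
            # the optimizer along its trajectory.
            x += h * math.cos(t)
        t += h
        # Correction: one inexact gradient step on f(., t_{k+1})
        # with step size 0.5 (a limited amount of computation).
        x -= 0.5 * (x - math.sin(t))
        errs.append(abs(x - math.sin(t)))
    return max(errs[steps // 2:])
```

Running `track(predict=False)` gives a steady tracking error on the order of $h$, while `track(predict=True)` gives an error on the order of $h^2$, matching the asymptotic rates quoted in the abstract for this toy problem.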