We consider the solution of a stochastic convex optimization problem, the minimization of E[f(x; θ*, ξ)] in x over a closed and convex set X, in a regime where θ* is unavailable. Instead, θ* may be learnt by minimizing a suitable metric E[g(θ; η)] in θ over a closed and convex set Θ. We present a coupled stochastic approximation scheme for the associated stochastic optimization problem with imperfect information. The scheme is shown to possess almost-sure convergence properties in regimes where the function f is strongly convex as well as merely convex. Rate estimates are provided in both the strongly convex and the merely convex regime; in the latter, the use of averaging facilitates the development of a bound.
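A minimal sketch of such a coupled scheme, under illustrative assumptions not taken from the paper: f(x; θ, ξ) = ½‖x − θ‖² + ξᵀx (strongly convex in x, minimized at θ* once θ is learnt), g(θ; η) = ½‖θ − η‖² with noisy observations η = θ* + noise, and X = Θ a box. Each iteration takes one projected stochastic gradient step for the learning problem and one for the optimization problem, both with the same diminishing step size.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_star = np.array([1.0, -2.0])  # hypothetical true parameter, unknown to the scheme

def grad_g(theta):
    # Stochastic gradient of g(theta; eta) = 0.5*||theta - eta||^2,
    # where eta = theta_star + noise, so E[g] is minimized at theta_star.
    eta = theta_star + 0.1 * rng.standard_normal(2)
    return theta - eta

def grad_f(x, theta):
    # Stochastic gradient of f(x; theta, xi) = 0.5*||x - theta||^2 + xi @ x,
    # evaluated at the current (possibly inexact) estimate theta.
    xi = 0.1 * rng.standard_normal(2)
    return (x - theta) + xi

def project_box(z, lo=-5.0, hi=5.0):
    # Euclidean projection onto the box X = Theta = [lo, hi]^2
    return np.clip(z, lo, hi)

x = np.zeros(2)
theta = np.zeros(2)
for k in range(1, 20001):
    gamma = 1.0 / k  # diminishing step-size sequence
    # Learning step: drive theta toward theta_star.
    theta = project_box(theta - gamma * grad_g(theta))
    # Computation step: minimize f at the current estimate theta.
    x = project_box(x - gamma * grad_f(x, theta))

print(x, theta)  # both iterates approach theta_star = [1, -2]
```

With strong convexity of both f and g, the coupled iterates converge almost surely despite x being updated against an inexact θ throughout; in the merely convex case one would instead track the averaged iterate.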