In this paper we consider nonconvex optimization and learning over a network of distributed nodes. We develop a Proximal Primal-Dual Algorithm (Prox-PDA), which enables the network nodes to distributedly and collectively compute the set of first-order stationary solutions in a global sublinear manner [with a rate of $O(1/r)$, where $r$ is the iteration counter]. To the best of our knowledge, this is the first algorithm that enables distributed nonconvex optimization with global rate guarantees. Our numerical experiments also demonstrate the effectiveness of the proposed algorithm.
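To make the setting concrete, the following is a minimal, hedged sketch of distributed nonconvex consensus optimization in the spirit of a proximal primal-dual method. It is not the paper's exact Prox-PDA update: the ring network, the local costs $f_i(x) = \log(1 + (x - a_i)^2)$, and all step sizes are illustrative assumptions. Each node holds a smooth nonconvex cost, and a primal step plus a dual ascent step on the consensus constraint $Ax = 0$ (with $A$ the edge-node incidence matrix) drive the nodes toward a common first-order stationary point.

```python
import numpy as np

# Illustrative sketch (NOT the paper's exact Prox-PDA): a gradient
# primal-dual method for distributed nonconvex consensus optimization.
# The ring network, local costs, and step sizes are assumptions for the demo.
# Each node i holds a smooth nonconvex local cost
#   f_i(x) = log(1 + (x - a_i)^2),
# and the network minimizes sum_i f_i(x_i) subject to A x = 0 (consensus).

n = 4
a = np.arange(n, dtype=float)          # local data held by each node

# Incidence matrix of the ring graph: one row per edge (i, i+1 mod n).
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 1.0
    A[i, (i + 1) % n] = -1.0
L = A.T @ A                            # graph Laplacian (neighbor-sparse)

def grad(x):
    # node-wise gradient of the nonconvex local costs
    return 2.0 * (x - a) / (1.0 + (x - a) ** 2)

alpha, beta, rho = 0.05, 1.0, 0.05     # primal step, penalty, dual step
x = a.copy()                           # each node starts at its own data
mu = np.zeros(n)                       # one dual variable per edge

for _ in range(8000):
    # primal step: local gradient + dual pressure + consensus penalty
    x = x - alpha * (grad(x) + A.T @ mu + beta * (L @ x))
    # dual ascent on the consensus residual A x
    mu = mu + rho * (A @ x)

print(np.ptp(x), x.mean())             # disagreement shrinks; mean near 1.5
```

Note that every matrix-vector product above ($A x$, $A^\top \mu$, $L x$) only mixes a node's value with those of its ring neighbors, so each iteration is implementable with purely local communication, which is the point of a distributed primal-dual scheme.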