Decentralized distributed optimization over time-varying graphs (networks) is currently a very active branch of research in optimization and consensus theory. Applications of this field include drone and satellite networks, as well as distributed machine learning. However, the first theoretical results in this area appeared only a few years ago. In this paper, we propose a simple method that alternates between gradient steps and special communication procedures. Our approach is based on reformulating the distributed optimization problem as a problem with linear constraints and then replacing the linear constraints with a penalty term.
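The penalty-based scheme described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the quadratic local losses, the specific time-varying topologies, and the parameter choices are all assumptions made for the example. The consensus constraint x_1 = ... = x_n is encoded through the graph Laplacian L_k of the current communication graph, the constraint is replaced by the penalty (1/(2r)) X^T L_k X, and each gradient step on the penalized objective splits into a local gradient step plus one neighbor-communication (Laplacian-multiplication) round.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3                        # 5 nodes, 3-dimensional decision variable
targets = rng.normal(size=(n, d))  # node i holds f_i(x) = 0.5 * ||x - targets[i]||^2

def local_grad(i, x):
    """Gradient of the (assumed) local quadratic loss f_i at node i."""
    return x - targets[i]

def laplacian(edges, n):
    """Graph Laplacian; multiplying by it only needs neighbor exchanges."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    return L

# Time-varying graphs: alternate between two connected 5-node topologies.
graphs = [[(i, (i + 1) % n) for i in range(n)],
          [(i, (i + 2) % n) for i in range(n)]]

X = np.zeros((n, d))   # stacked iterates x_1, ..., x_n (one row per node)
eta, r = 0.1, 0.5      # step size and penalty parameter (illustrative values)
for k in range(500):
    L = laplacian(graphs[k % 2], n)
    grads = np.stack([local_grad(i, X[i]) for i in range(n)])
    # Gradient step on sum_i f_i(x_i) + (1/(2r)) X^T L X:
    # local gradient step alternated with a communication (L @ X) round.
    X = X - eta * (grads + (1.0 / r) * L @ X)

# The node average converges to the minimizer of sum_i f_i (mean of targets);
# individual nodes reach approximate consensus, controlled by the penalty r.
print(np.max(np.abs(X.mean(axis=0) - targets.mean(axis=0))))
```

As in any penalty method, the nodes reach only approximate consensus for a fixed r; driving r toward zero tightens the constraint at the cost of worse conditioning, which is the trade-off the penalty reformulation has to manage.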