IEEE Journal on Selected Areas in Communications

Deep Learning for Distributed Optimization: Applications to Wireless Resource Management

Abstract

This paper studies a deep learning (DL) framework to solve distributed non-convex constrained optimizations in wireless networks where multiple computing nodes, interconnected via backhaul links, desire to determine an efficient assignment of their states based on local observations. Two different configurations are considered: First, an infinite-capacity backhaul enables nodes to communicate in a lossless way, thereby obtaining the solution by centralized computations. Second, a practical finite-capacity backhaul leads to the deployment of distributed solvers equipped with quantizers for communication through the capacity-limited backhaul. The distributed nature and the non-convexity of the optimizations render the identification of the solution unwieldy. To handle these difficulties, deep neural networks (DNNs) are introduced to accurately approximate the unknown computation of the solution. In consequence, the original problems are transformed into training tasks for the DNNs subject to non-convex constraints, to which existing DL libraries do not extend straightforwardly. A constrained training strategy is developed based on the primal-dual method. For distributed implementation, a novel binarization technique at the output layer is developed for quantization at each node. Our proposed distributed DL framework is examined in various network configurations of wireless resource management. Numerical results verify the effectiveness of our proposed approach over existing optimization techniques.
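
The primal-dual constrained training strategy described in the abstract can be illustrated with a small sketch. The following is a minimal, hypothetical example and not the authors' code: it assumes a K-user interference channel, a sum-rate objective, an average total-power constraint, and PyTorch; the network architecture, constants, and channel model are illustrative assumptions only.

```python
# Minimal sketch (not the authors' code) of primal-dual training of a DNN under a
# non-convex resource-allocation constraint. All modeling choices are assumptions:
# a K-user interference channel, a sum-rate objective, and an average total-power
# constraint, implemented with PyTorch for illustration.
import torch
import torch.nn as nn

K = 4         # transmitter-receiver pairs (assumed)
P_MAX = 1.0   # per-node peak power (assumed)
P_TOT = 2.0   # average total-power budget across nodes (assumed)
SIGMA2 = 1.0  # noise power (assumed)

# DNN mapping the flattened K x K channel-gain matrix to per-node powers in [0, P_MAX].
policy = nn.Sequential(
    nn.Linear(K * K, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, K), nn.Sigmoid(),
)

def sum_rate(h, p):
    """Average sum rate for batched channel gains h (B, K, K) and powers p (B, K)."""
    signal = torch.diagonal(h, dim1=1, dim2=2) * p           # desired-link power at each receiver
    interference = (h * p.unsqueeze(1)).sum(dim=2) - signal  # cross-link power at each receiver
    return torch.log2(1.0 + signal / (interference + SIGMA2)).sum(dim=1).mean()

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
lam = torch.tensor(0.0)   # dual variable (Lagrange multiplier) for the power constraint
dual_step = 1e-2

for _ in range(5000):
    # Synthetic Rayleigh-style fading gains stand in for training data.
    h = torch.randn(256, K, K).pow(2)
    p = P_MAX * policy(h.reshape(256, -1))

    rate = sum_rate(h, p)
    violation = p.sum(dim=1).mean() - P_TOT   # constraint: average total power <= P_TOT

    # Primal step: minimize the negative Lagrangian -(rate - lam * violation) over the DNN weights.
    loss = -(rate - lam * violation)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Dual step: projected gradient ascent on the multiplier.
    with torch.no_grad():
        lam = torch.clamp(lam + dual_step * violation.detach(), min=0.0)
```

In the paper's finite-capacity backhaul setting, a binarization technique at the DNN output layer additionally quantizes what each node communicates over the backhaul; that component, and the distributed message-passing structure, are not reproduced in this sketch.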
