DQ Scheduler: Deep Reinforcement Learning Based Controller Synchronization in Distributed SDN

IEEE International Conference on Communications


Abstract

In distributed software-defined networks (SDN), multiple physical SDN controllers, each managing a network domain, are deployed to balance centralized control, scalability, and reliability requirements. In such a networking paradigm, controllers synchronize with each other to maintain a logically centralized network view. Despite various proposals for distributed SDN controller architectures, most existing works simply assume that such a logically centralized network view can be achieved with some synchronization design; the question of how exactly controllers should synchronize with each other to maximize the benefits of synchronization under eventual consistency assumptions is largely overlooked. To this end, we formulate the controller synchronization problem as a Markov Decision Process (MDP) and apply reinforcement learning techniques combined with a deep neural network to train a smart controller synchronization policy, which we call the Deep-Q (DQ) Scheduler. Evaluation results show that the DQ Scheduler outperforms the anti-entropy algorithm implemented in the ONOS controller by up to 95.2% for inter-domain routing tasks.
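The abstract gives no implementation details, but as a rough illustration of the deep Q-learning idea it describes, the sketch below shows a generic DQN-style agent that scores candidate peer domains and picks which controller to synchronize with next. The network architecture, state features, reward signal, and all hyperparameters here are assumptions made for illustration only, not the authors' design.

```python
# Illustrative sketch only: a generic DQN-style synchronization scheduler.
# All names, state/action encodings, and hyperparameters are assumptions,
# not details taken from the paper.
import random
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    def __init__(self, state_dim: int, num_domains: int):
        super().__init__()
        # State: hypothetical features of the local controller's view,
        # e.g. the staleness of each peer domain's information.
        # Output: one Q-value per candidate peer domain to sync with.
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_domains),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def select_sync_target(q_net: QNetwork, state: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice of the next domain controller to synchronize with."""
    num_domains = q_net.net[-1].out_features
    if random.random() < epsilon:
        return random.randrange(num_domains)
    with torch.no_grad():
        return int(q_net(state).argmax().item())


def dqn_update(q_net, target_net, optimizer, batch, gamma: float = 0.99) -> float:
    """One temporal-difference update on a batch of (s, a, r, s') transitions.

    The reward r would reflect the benefit of synchronizing, e.g. fewer
    sub-optimal inter-domain routing decisions (an assumption here).
    """
    states, actions, rewards, next_states = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + gamma * target_net(next_states).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```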
