Journal: Software

A deep recurrent Q network towards self-adapting distributed microservice architecture



Abstract

One desired aspect of microservice architecture is the ability to self-adapt its own architecture and behavior in response to changes in the operational environment. To achieve the desired high levels of self-adaptability, this research implements a distributed microservice architecture model running on a swarm cluster, as informed by the Monitor, Analyze, Plan, and Execute over a shared Knowledge (MAPE-K) model. The proposed architecture employs multiple adaptation agents supported by a centralized controller, which can observe the environment and execute a suitable adaptation action. Adaptation planning is managed by a deep recurrent Q-learning network (DRQN). It is argued that such integration between DRQN and Markov decision process (MDP) agents in a MAPE-K model gives a distributed microservice architecture self-adaptability along with high availability and scalability. Integrating DRQN into the adaptation process improves the effectiveness of the adaptation and reduces adaptation risks, including resource overprovisioning and thrashing. The performance of DRQN is evaluated against deep Q-learning and policy gradient algorithms, including (1) a deep Q-learning network (DQN), (2) a dueling DQN (DDQN), (3) a policy gradient neural network, and (4) deep deterministic policy gradient (DDPG). The DRQN implementation in this paper outperforms these algorithms, achieving higher total reward, shorter adaptation time, lower error rates, and faster convergence and training. We strongly believe that DRQN is better suited to driving adaptation in distributed service-oriented architectures and offers better performance than other dynamic decision-making algorithms.
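The MAPE-K loop the abstract describes can be sketched in pure Python. The metric names, the three-action set, and the tabular epsilon-greedy planner below are illustrative assumptions, not the paper's implementation: the paper replaces the Plan phase with a deep recurrent Q network, while this sketch uses one-step tabular Q-learning to keep the loop self-contained.

```python
import random

# Hypothetical adaptation actions for a swarm cluster.
ACTIONS = ["scale_up", "scale_down", "no_op"]

class MapeKAgent:
    """Minimal MAPE-K loop: Monitor, Analyze, Plan, Execute over shared Knowledge.
    The planner here is tabular Q-learning; the paper uses a DRQN instead."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.knowledge = {}  # shared Knowledge base: state -> {action: Q-value}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def monitor(self, metrics):
        # Discretize a raw metric (here: CPU load) into a coarse state symbol.
        cpu = metrics["cpu"]
        return "high" if cpu > 0.8 else "low" if cpu < 0.2 else "normal"

    def analyze(self, state):
        # Adaptation is needed whenever the cluster leaves its normal band.
        return state != "normal"

    def plan(self, state):
        # Epsilon-greedy action choice over Q-values stored in Knowledge.
        q = self.knowledge.setdefault(state, {a: 0.0 for a in ACTIONS})
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(q, key=q.get)

    def execute(self, state, action, reward, next_state):
        # One-step Q-learning update written back into shared Knowledge.
        q = self.knowledge.setdefault(state, {a: 0.0 for a in ACTIONS})
        nq = self.knowledge.setdefault(next_state, {a: 0.0 for a in ACTIONS})
        q[action] += self.alpha * (reward + self.gamma * max(nq.values()) - q[action])

agent = MapeKAgent(epsilon=0.0)       # greedy for a deterministic demo
state = agent.monitor({"cpu": 0.95})  # overloaded cluster -> "high"
if agent.analyze(state):
    action = agent.plan(state)
    agent.execute(state, action, reward=1.0, next_state="normal")
```

In the paper's setting the Q-table is replaced by a recurrent network, whose hidden state lets the planner condition on a history of partial observations rather than a single discretized snapshot.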
