Venue: IEEE International Conference on Communications

Deep Multi-Agent Reinforcement Learning Based Cooperative Edge Caching in Wireless Networks
Abstract

The growing demand for high-quality, low-latency multimedia services has generated much interest in edge caching techniques. Motivated by this, in this paper we consider edge caching at base stations when content popularity distributions are unknown. To solve the dynamic control problem of making caching decisions, we propose a deep actor-critic reinforcement learning based multi-agent framework that aims to minimize the overall average transmission delay. To evaluate the proposed framework, we compare its learning-based performance with three other caching policies: least recently used (LRU), least frequently used (LFU), and first-in-first-out (FIFO). Simulation results show that the proposed framework outperforms these three caching algorithms and demonstrate its superior ability to adapt to varying environments.
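The three baseline policies mentioned in the abstract differ only in their eviction rule: LRU evicts the item accessed longest ago, LFU the item requested least often, and FIFO the item cached earliest. A minimal sketch of these baselines (class and method names are illustrative, not from the paper):

```python
from collections import OrderedDict, Counter, deque

class LRUCache:
    """Evicts the least recently used item when the cache is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def request(self, item):
        if item in self.store:
            self.store.move_to_end(item)   # mark as most recently used
            return True                    # cache hit
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False) # evict least recently used
        self.store[item] = True
        return False                       # cache miss

class LFUCache:
    """Evicts the item with the fewest requests so far."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = Counter()
        self.items = set()

    def request(self, item):
        self.counts[item] += 1
        if item in self.items:
            return True
        if len(self.items) >= self.capacity:
            victim = min(self.items, key=lambda x: self.counts[x])
            self.items.discard(victim)     # evict least frequently used
        self.items.add(item)
        return False

class FIFOCache:
    """Evicts the item that entered the cache earliest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.items = set()

    def request(self, item):
        if item in self.items:
            return True
        if len(self.queue) >= self.capacity:
            self.items.discard(self.queue.popleft())  # evict oldest entry
        self.queue.append(item)
        self.items.add(item)
        return False
```

For example, on the request sequence `a, b, a, c, b` with capacity 2, LRU and FIFO disagree on the last request: after `c` arrives, LRU has evicted `b` (least recently used) while FIFO has evicted `a` (first in), so FIFO hits on the final `b` where LRU misses. The learning-based framework in the paper instead adapts its eviction decisions to the (unknown) popularity distribution.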
