2019 International Conference on High Performance Big Data and Intelligent Systems

Content-centric Caching Using Deep Reinforcement Learning in Mobile Computing



Abstract

In the era of the Internet, the number of connected devices has increased remarkably along with the growth of network-based services. Both service quality and user experience suffer from latency when large volumes of concurrent user requests are made in mobile computing contexts. Deploying caching techniques at base stations or edge nodes is one way to mitigate this latency. However, traditional caching policies, e.g. Least Recently Used (LRU) and Least Frequently Used (LFU), cannot efficiently resolve the latency caused by complex, content-oriented popularity distributions. In this paper, we propose a Deep Reinforcement Learning (DRL)-based approach that makes the caching storage adaptable to dynamic and complicated mobile networking environments. The proposed mechanism requires no prior knowledge of the popularity distribution, so it offers greater adaptability and flexibility in practice than LRU and LFU. Our evaluation also compares the proposed approach with other deep learning methods, and the results suggest that our approach achieves higher accuracy.
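The abstract contrasts the proposed DRL approach with the LRU baseline. For reference, the LRU eviction policy it mentions can be sketched in a few lines of Python using `collections.OrderedDict`; the class and method names here are illustrative and not taken from the paper:

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used eviction: when the cache overflows,
    drop the item that has gone longest without being accessed."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # iteration order tracks recency

    def get(self, key):
        if key not in self.store:
            return None  # cache miss
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
```

For example, with capacity 2, inserting `a` and `b`, reading `a`, then inserting `c` evicts `b`: recency alone decides, regardless of how popular `b` might become later. That fixed heuristic is exactly what a learned, popularity-aware policy such as the paper's DRL agent aims to improve on.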
