IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

Q -Value Prediction for Reinforcement Learning Assisted Garbage Collection to Reduce Long Tail Latency in SSD



Abstract

Garbage collection (GC), an essential operation in flash storage systems, causes long tail latency, one of the key problems in real-time and quality-critical systems. In this article, we leverage reinforcement learning (RL) to reduce long tail latency. In particular, we propose two novel techniques: 1) a Q-table cache (QTC) and 2) Q-value prediction. The QTC allows us to retain appropriate and frequently recurring key states at a small memory cost. We propose a neural network, called the Q-value prediction network (QP Net), that predicts the initial Q-value of a new state entering the QTC. The integrated solution of QTC and QP Net lets us benefit from both the short-term (via QTC) and long-term (via QP Net) history of system behavior to reduce long tail latency. Experimental results demonstrate that the proposed scheme achieves significant (25%-37%) reductions in the long tail latency of storage-intensive workloads compared with a state-of-the-art solution that adopts an RL-assisted GC scheduler.
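The core idea the abstract describes — a small, bounded Q-table whose missing entries are initialized by a learned predictor rather than zeros — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the class and function names (`QTableCache`, `toy_predictor`) are assumptions, LRU is used as a stand-in eviction policy for "frequently recurring key states", and a trivial heuristic replaces the QP Net so the example stays self-contained.

```python
from collections import OrderedDict

class QTableCache:
    """Sketch of a Q-table cache (QTC): a bounded Q-table, LRU-evicted,
    whose entries for unseen states are initialized by a predictor
    (standing in for QP Net) instead of zeros. Illustrative only."""

    def __init__(self, capacity, n_actions, predict_q):
        self.capacity = capacity
        self.n_actions = n_actions
        self.predict_q = predict_q      # long-term history model (QP Net stand-in)
        self.table = OrderedDict()      # short-term history: recently used states

    def get(self, state):
        if state in self.table:
            self.table.move_to_end(state)   # mark as recently used
            return self.table[state]
        # Cache miss: initialize Q-values from the predictor, so a new
        # state starts from long-term experience rather than zero.
        q = list(self.predict_q(state))
        if len(self.table) >= self.capacity:
            self.table.popitem(last=False)  # evict least recently used state
        self.table[state] = q
        return q

    def update(self, state, action, reward, next_state, alpha=0.1, gamma=0.9):
        # Standard one-step Q-learning update on the cached entry.
        q = self.get(state)
        q_next = max(self.get(next_state))
        q[action] += alpha * (reward + gamma * q_next - q[action])

# Stand-in predictor: in the paper this role is played by a neural
# network (QP Net); a trivial heuristic keeps the sketch runnable.
def toy_predictor(state):
    return [0.5 * state % 1.0, 0.0]

qtc = QTableCache(capacity=4, n_actions=2, predict_q=toy_predictor)
qtc.update(state=3, action=0, reward=1.0, next_state=5)
print(qtc.get(3)[0])
```

The design choice illustrated here is the division of labor: the bounded table captures short-term, frequently recurring states cheaply, while the predictor encodes long-term behavior and supplies warm-start Q-values on a cache miss.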


