IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

DeepPrefetcher: A Deep Learning Framework for Data Prefetching in Flash Storage Devices


Abstract

In today's information-driven world, data access latency accounts for much of the cost of processing user requests. One potential remedy is prefetching: speculatively moving data for future requests closer to the processing unit. However, the block access requests received by the storage device exhibit poor spatial locality, because most file-level locality is absorbed in the higher layers of the memory hierarchy, including the CPU cache and main memory. Moreover, multithreading interleaves access requests from different threads, making prefetching at the storage level harder for existing techniques. To address this, we propose and evaluate DeepPrefetcher, a novel deep-neural-network-inspired, context-aware prefetching method that adapts to arbitrary memory access patterns. DeepPrefetcher learns block access pattern contexts using distributed representations and leverages a long short-term memory (LSTM) model for context-aware data prefetching. Instead of using logical block address (LBA) values directly, we model the differences between successive access requests, which contain more exploitable patterns than raw LBA values. By targeting the access pattern sequence in this manner, DeepPrefetcher can learn the vital context from a long input LBA sequence and predict both previously seen and unseen access patterns. Experimental results show that DeepPrefetcher increases average prefetch accuracy, coverage, and speedup by 21.5%, 19.5%, and 17.2%, respectively, compared with baseline prefetching strategies. Overall, the proposed approach surpasses the other schemes on all benchmarks, and the outcomes are promising.
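The abstract's key modeling choice is to train on the differences (deltas) between successive LBAs rather than the raw addresses. The sketch below illustrates that transformation only; the helper names are hypothetical and the LSTM predictor itself is omitted, so this is a minimal illustration of the input encoding, not the paper's implementation.

```python
# Hypothetical sketch of DeepPrefetcher-style input modeling: map an LBA
# access trace to delta space, where repeated strides become a small,
# learnable vocabulary, then map predicted deltas back to absolute blocks.

def lba_to_deltas(lbas):
    """Sequence of differences between successive logical block addresses."""
    return [b - a for a, b in zip(lbas, lbas[1:])]

def deltas_to_lbas(start, deltas):
    """Reconstruct absolute addresses from a start LBA and predicted deltas."""
    out, cur = [], start
    for d in deltas:
        cur += d
        out.append(cur)
    return out

# A stride-8 scan interrupted by one large jump: in delta space the
# repeated stride is obvious, while the raw LBAs all look distinct.
trace = [100, 108, 116, 5000, 5008, 5016]
deltas = lba_to_deltas(trace)                  # [8, 8, 4884, 8, 8]

# If a sequence model predicts the next two deltas are [8, 8], the
# prefetcher issues reads for the corresponding absolute blocks.
prefetch = deltas_to_lbas(trace[-1], [8, 8])   # [5024, 5032]
```

Working in delta space is what lets a single learned pattern (e.g. "stride 8") generalize to address ranges never seen during training, which is how the abstract's claim of predicting unseen access patterns becomes plausible.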
