IEEE International Conference on Parallel and Distributed Systems

Efficient GPU-Based Query Processing with Pruned List Caching in Search Engines



Abstract

There are two inherent obstacles to effectively using Graphics Processing Units (GPUs) for query processing in search engines: (a) the highly restricted GPU memory space, and (b) the CPU-GPU transfer latency. Previously, Ao et al. presented a GPU method for lists intersection, an essential component in AND-based query processing. However, this work assumes the whole inverted index can be stored in GPU memory and does not address document ranking. In this paper, we describe and analyze a GPU query processing method which incorporates both lists intersection and top-k ranking. We introduce a parameterized pruned posting list GPU caching method where the parameter determines how much GPU memory is used for caching. This method allows list caching for large inverted indexes using the limited GPU memory, thereby making a qualitative improvement over previous work. We also give a mathematical model which can identify an approximately optimal choice of the parameter. Experimental results indicate that this GPU approach under the pruned list caching policy achieves better query throughput than its CPU counterpart, even when the inverted index size is much larger than the GPU memory space.
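The two core ideas in the abstract — pruning each posting list down to a cacheable prefix, and intersecting doc-id-sorted lists — can be illustrated with a minimal sketch. This is not the paper's implementation (which runs on the GPU and includes top-k ranking); it only shows, on the CPU, how a pruning parameter p controls how much of each posting list is retained for caching, with all names here (`prune_posting_list`, `intersect`) being illustrative assumptions:

```python
def prune_posting_list(postings, p):
    """Keep the top-p fraction of a posting list by impact score.

    `postings` is a list of (doc_id, score) pairs. The retained entries
    are returned sorted by doc_id so they can be intersected directly.
    A larger p caches more of each list and so uses more (GPU) memory.
    """
    k = max(1, int(len(postings) * p))
    top = sorted(postings, key=lambda e: e[1], reverse=True)[:k]
    return sorted(top)  # re-sort by doc_id for merge-style intersection


def intersect(a, b):
    """Merge-style intersection of two doc_id-sorted posting lists,
    as used in AND-based query processing; returns matching doc_ids."""
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i][0] == b[j][0]:
            out.append(a[i][0])
            i += 1
            j += 1
        elif a[i][0] < b[j][0]:
            i += 1
        else:
            j += 1
    return out
```

In this framing, the parameter p plays the role of the paper's caching parameter: it trades GPU memory consumption against how often a query can be answered entirely from the cached (pruned) lists rather than transferring full lists over the CPU-GPU link.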
