IEEE International Conference on Big Data

Making caches work for graph analytics

Abstract

Large-scale applications implemented in today's high-performance graph frameworks heavily underutilize modern hardware systems. While many graph frameworks have made substantial progress in optimizing these applications, we show that it is still possible to achieve up to 5× speedups over the fastest frameworks by greatly improving cache utilization. Previous systems have applied out-of-core processing techniques from the memory/disk boundary to the cache/DRAM boundary. However, we find that blindly applying such techniques is ineffective because the much smaller performance gap between cache and DRAM requires new designs for achieving scalable performance and low overhead. We present Cagra, a cache-optimized in-memory graph framework. Cagra uses a novel technique, CSR Segmenting, to break the vertices into segments that fit in the last-level cache, and partitions the graph into subgraphs based on the segments. Random accesses in each subgraph are limited to one segment at a time, eliminating the much slower random accesses to DRAM. The intermediate updates from each subgraph are written into buffers sequentially and later merged using a low-overhead parallel cache-aware merge. Cagra achieves speedups of up to 5× for PageRank, Collaborative Filtering, Label Propagation, and Betweenness Centrality over the best published results from state-of-the-art graph frameworks, including GraphMat, Ligra, and GridGraph.
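To make the CSR Segmenting idea from the abstract concrete, below is a minimal Python sketch of one pull-direction, PageRank-style iteration over a segmented graph. The function names, the segment-size parameter, and the edge representation are illustrative assumptions for exposition, not Cagra's actual data structures or API; in particular, the final merge is shown as a serial loop, whereas Cagra performs it with a low-overhead parallel cache-aware merge.

```python
import numpy as np

def build_segmented_subgraphs(src, dst, num_vertices, seg_size):
    """Group edges by the segment of their source vertex.

    seg_size (in vertices) is an assumed tuning parameter, chosen so that
    one segment of vertex data fits in the last-level cache.
    """
    num_segs = (num_vertices + seg_size - 1) // seg_size
    subgraphs = [[] for _ in range(num_segs)]
    for s, d in zip(src, dst):
        subgraphs[s // seg_size].append((s, d))
    return subgraphs

def pagerank_iteration(subgraphs, contrib, num_vertices, damping=0.85):
    """One pull-direction iteration; the caller precomputes
    contrib[v] = rank[v] / out_degree[v]."""
    # Phase 1: process one subgraph at a time. Random reads of `contrib`
    # are confined to a single cache-sized segment, and intermediate
    # (destination, value) updates are appended to a buffer sequentially.
    buffers = []
    for edges in subgraphs:
        buf = []
        for s, d in edges:
            buf.append((d, contrib[s]))
        buffers.append(buf)
    # Phase 2: merge the per-segment buffers into the new rank vector.
    # Shown serially here for clarity; Cagra uses a parallel cache-aware
    # merge for this step.
    new_rank = np.full(num_vertices, (1.0 - damping) / num_vertices)
    for buf in buffers:
        for d, val in buf:
            new_rank[d] += damping * val
    return new_rank
```

In a real implementation the per-subgraph edges would be stored as segmented CSR arrays rather than Python tuples, but the two-phase structure shown here, cache-resident random reads followed by a sequential buffer merge, is the core of the technique described in the abstract.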
