International Conference on Neural Information Processing

Accelerating Core Decomposition in Large Temporal Networks Using GPUs



Abstract

Many real-world networks are naturally modeled as temporal networks, such as neural connections in biological networks evolving over time or interactions between friends at different times in social networks. To visualize and analyze these temporal networks, core decomposition is an efficient strategy for distinguishing the relative "importance" of nodes. Existing work mostly focuses on core decomposition in non-temporal networks and pursues efficient CPU-based approaches. However, applying these approaches to temporal networks makes core decomposition, already a computationally expensive task, even more costly. In this paper, we propose two novel acceleration methods for core decomposition in large temporal networks that exploit the high parallelism of GPUs. In our evaluation, the proposed acceleration methods achieve a maximum of 4.1 billion TEPS (traversed edges per second), corresponding to up to a 26.6× speedup over single-threaded CPU execution.
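For context on the problem the paper accelerates: classic k-core decomposition on a static graph can be computed with the standard "peeling" algorithm, which repeatedly removes a minimum-degree node; a node's core number is the highest degree threshold in effect when it is removed. The sketch below is this textbook static-graph baseline only, not the authors' temporal or GPU method:

```python
from collections import defaultdict

def core_decomposition(edges):
    """Standard peeling algorithm for k-core decomposition on an
    undirected static graph, given as a list of (u, v) edges."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    core = {}
    remaining = set(adj)
    k = 0
    while remaining:
        # Peel the node with the smallest remaining degree.
        v = min(remaining, key=degree.get)
        k = max(k, degree[v])  # core numbers are non-decreasing as we peel
        core[v] = k
        remaining.remove(v)
        for w in adj[v]:
            if w in remaining:
                degree[w] -= 1
    return core

# Example: a triangle {1, 2, 3} with a pendant node 4 attached to 3.
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(core_decomposition(edges))  # node 4 has core number 1; nodes 1, 2, 3 have 2
```

This sequential loop is inherently hard to parallelize because each peel depends on the previous one, which is precisely why GPU-friendly reformulations such as those proposed in the paper are needed at scale.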
