IEEE Region 10 Symposium

GPU Accelerated Indexing for High Order Tensors in Google Colab



Abstract

Processing large volumes of multi-dimensional data, or high-order tensors, can be challenging because of prohibitively long execution times. The processing time can be improved through parallel computation on a suitable indexing scheme, and many software and hardware implementations are available for applying such parallelism. We use the Graphics Processing Unit (GPU) as the parallel device, following the idea of General-Purpose GPU (GPGPU) computing. In this paper, we implement an Index Partitioning Algorithm (IPA) to speed up tensor indexing, in which the tensor indices are divided into smaller segments that are executed concurrently. The algorithm ensures load balancing by distributing the segments equally across GPU threads. We also embed a Scalable Tensor Structure (STS) to transform the n-dimensional index into a 2-dimensional index. This structure gives better performance through efficient memory utilization and reduced index computation. We deploy our algorithms on the "Google Colab" platform for experimental purposes. Given the current rate of data creation, our algorithm is expected to contribute heavily to large-scale data processing in many research domains.
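The abstract does not include code, but the two ideas it describes can be illustrated with a short sketch. The following is a minimal Python/Numba example, assuming a CUDA-capable runtime such as a Colab GPU instance; the names (`flatten_index`, `ipa_kernel`), the segment size, and the per-element work are illustrative assumptions and are not taken from the paper, and the sketch flattens the high-order index to a 1-D offset rather than the 2-D layout of the paper's STS.

```python
import numpy as np
from numba import cuda


def flatten_index(idx, dims):
    """Collapse an n-dimensional index into a single row-major offset.
    (The paper's STS maps to a 2-D index; a 1-D offset keeps the sketch short.)"""
    flat = 0
    for i, d in zip(idx, dims):
        flat = flat * d + i
    return flat


@cuda.jit
def ipa_kernel(data, out, segment_size):
    """Each thread handles one equally sized segment of the flattened
    index space, mirroring the equal-distribution idea of the IPA."""
    tid = cuda.grid(1)
    start = tid * segment_size
    end = min(start + segment_size, data.size)
    for k in range(start, end):
        out[k] = data[k] * 2.0  # placeholder per-element work


# A 4th-order tensor stored as a flat array.
dims = (8, 8, 8, 8)
tensor = np.arange(np.prod(dims), dtype=np.float64)
result = np.empty_like(tensor)

segment_size = 256  # equal segment per thread (illustrative choice)
n_segments = (tensor.size + segment_size - 1) // segment_size
threads_per_block = 128
blocks = (n_segments + threads_per_block - 1) // threads_per_block

d_in = cuda.to_device(tensor)
d_out = cuda.to_device(result)
ipa_kernel[blocks, threads_per_block](d_in, d_out, segment_size)
result = d_out.copy_to_host()

# Access one element of the high-order tensor through the flattened index.
print(result[flatten_index((3, 2, 5, 1), dims)])
```

Distributing fixed-size segments rather than individual elements is one way to keep every thread's workload equal, which is the load-balancing property the abstract claims for the IPA.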
