Journal of Parallel and Distributed Computing

A multi-GPU biclustering algorithm for binary datasets



Abstract

Graphics Processing Unit (GPU) technology and the CUDA architecture are among the most widely used options for adapting machine learning techniques to the huge amounts of complex data currently generated. Biclustering techniques are useful for discovering local patterns in datasets, and those that have been implemented to exploit GPU resources in parallel have improved their computational performance. However, this does not guarantee that they can successfully process large datasets: important issues must be taken into account, such as data transfers between CPU and GPU memory and the balanced distribution of workload across GPU resources. In this paper, a GPU version of one of the fastest biclustering solutions, BiBit, is presented. This implementation, named gBiBit, has been designed to take full advantage of the computational resources offered by GPU devices. Whether using a single GPU device or in its multi-GPU mode, gBiBit is able to process large binary datasets. The experimental results show that gBiBit improves on the computational performance of BiBit, of a parallel CPU version called ParBiBit, and of an early GPU version called CUBiBit.
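To make the abstract's reference to BiBit concrete: BiBit encodes each binary row of the dataset as a bit pattern and, for every pair of rows, takes the bitwise AND to obtain a candidate column pattern; all rows that contain that pattern then form a bicluster. The sketch below is a minimal sequential Python version of this idea (the function name and threshold parameters are illustrative, not taken from the paper); the GPU implementations discussed here parallelize the pairwise AND and row-matching steps across CUDA threads and devices.

```python
from itertools import combinations

def bibit_sketch(matrix, min_rows=2, min_cols=2):
    """BiBit-style biclustering sketch for a binary matrix (list of 0/1 rows)."""
    # Encode each binary row as an integer bit pattern (leftmost column = highest bit).
    patterns = [int("".join(map(str, row)), 2) for row in matrix]
    biclusters = set()
    for i, j in combinations(range(len(patterns)), 2):
        # Candidate pattern: columns where both rows of the pair have a 1.
        cand = patterns[i] & patterns[j]
        if bin(cand).count("1") < min_cols:
            continue  # too few shared columns to form a bicluster
        # Collect every row that contains the candidate pattern.
        rows = frozenset(r for r, p in enumerate(patterns) if p & cand == cand)
        if len(rows) >= min_rows:
            biclusters.add((rows, cand))  # set membership deduplicates repeats
    return biclusters
```

In the GPU versions, each row pair maps naturally to an independent thread, which is what makes the balanced distribution of these pairs across GPU resources (and across devices in multi-GPU mode) the central design concern mentioned in the abstract.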
