International Conference on Computer Science and Service System (CSSS 2012)

An Efficient Method for Incremental Learning of GMM Using CUDA



Abstract

Incremental learning algorithms for the Gaussian Mixture Model (GMM) find applications in a variety of scenarios. This paper proposes a CUDA-based method to accelerate incremental learning of GMMs. Unlike existing GPU methods for GMMs, our method aims to hide data transfer latency rather than accelerate the algorithm itself. Because incremental learning applications are inherently memory-critical, loading data from external storage and copying it from host to device inevitably contribute to the overall time consumption. The CUDA capabilities of concurrent execution and overlapped data transfer are leveraged to implement incremental GMM learning in a pipelined fashion. The efficiency of our method is validated through preliminary experiments, which demonstrate improved performance over the non-pipelined method.
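The abstract gives no implementation details, so the following is a minimal CUDA sketch of the pipelined pattern it describes: page-locked host buffers, cudaMemcpyAsync, and multiple streams, so that the host-to-device copy of one data chunk can overlap with the kernel processing another chunk. The gmmUpdateKernel stub, the two-stream double buffering, and all sizes are illustrative assumptions, not the authors' implementation.

#include <cuda_runtime.h>
#include <cstdio>

// Stand-in for a GMM update kernel: it only accumulates a dummy statistic
// so that the pipeline structure itself is runnable.
__global__ void gmmUpdateKernel(const float* chunk, int n, float* stats) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        atomicAdd(stats, chunk[i]);  // placeholder for E-step sufficient statistics
    }
}

int main() {
    const int kChunk   = 1 << 20;   // samples per incremental chunk (assumed)
    const int kChunks  = 8;         // number of chunks streamed in (assumed)
    const int kStreams = 2;         // double-buffered pipeline

    // Pinned (page-locked) host buffers are required for truly asynchronous copies.
    float* hBuf[kStreams];
    float* dBuf[kStreams];
    cudaStream_t streams[kStreams];
    for (int s = 0; s < kStreams; ++s) {
        cudaMallocHost(&hBuf[s], kChunk * sizeof(float));
        cudaMalloc(&dBuf[s], kChunk * sizeof(float));
        cudaStreamCreate(&streams[s]);
    }

    float* dStats;
    cudaMalloc(&dStats, sizeof(float));
    cudaMemset(dStats, 0, sizeof(float));

    for (int c = 0; c < kChunks; ++c) {
        int s = c % kStreams;
        // Wait until this buffer's previous copy/kernel have finished before reuse.
        cudaStreamSynchronize(streams[s]);

        // Stage the next chunk in pinned memory (dummy data here; in practice this
        // is where data would be loaded from external storage).
        for (int i = 0; i < kChunk; ++i) hBuf[s][i] = 1.0f;

        // Asynchronous copy and kernel launch in the same stream: the copy for this
        // chunk overlaps with the kernel processing the chunk in the other stream.
        cudaMemcpyAsync(dBuf[s], hBuf[s], kChunk * sizeof(float),
                        cudaMemcpyHostToDevice, streams[s]);
        int threads = 256;
        int blocks  = (kChunk + threads - 1) / threads;
        gmmUpdateKernel<<<blocks, threads, 0, streams[s]>>>(dBuf[s], kChunk, dStats);
    }
    cudaDeviceSynchronize();

    float stats = 0.0f;
    cudaMemcpy(&stats, dStats, sizeof(float), cudaMemcpyDeviceToHost);
    printf("accumulated statistic: %f\n", stats);

    for (int s = 0; s < kStreams; ++s) {
        cudaFreeHost(hBuf[s]);
        cudaFree(dBuf[s]);
        cudaStreamDestroy(streams[s]);
    }
    cudaFree(dStats);
    return 0;
}

With two streams, the device can execute a kernel in one stream while the copy engine services the memcpy in the other, which is the latency-hiding effect the abstract refers to; on hardware without concurrent copy and execute the code still runs, but without the overlap.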
