IEEE International Symposium on Signal Processing and Information Technology

A GPU Parallel Algorithm for Non Parametric Tensor Learning

Abstract

One rich source of large data sets is high-dimensional data stored in the multi-way format known as tensors. Compared to the vector case, learning with tensors is inherently more complex and requires high-performance computing. The aim of this paper is to investigate tensor-based algorithms for regression and classification, i.e. tensor learning, that are suitable for implementation on parallel architectures to handle large data sets. To this end, a tensor learning model based on a general theoretical framework for approximating a generic tensor function is established. A parallel version of the model is then derived to benefit from GPU resources. Finally, extensive experiments on large data sets, using both CPU and GPU, have been carried out to validate the proposed approach.
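The paper's own model and GPU kernels are not reproduced here; the sketch below is only a rough illustration of the general idea the abstract describes: tensor-based regression run on either CPU or GPU from the same code. It assumes, purely for illustration, a rank-R CP-structured weight tensor and synthetic data, and uses PyTorch so the device switch mirrors the paper's CPU-vs-GPU comparison; none of these choices come from the paper itself.

```python
# A minimal sketch, NOT the paper's algorithm: low-rank tensor regression
# trained by gradient descent. PyTorch runs the same code on CPU or GPU.
# The CP-structured weight W = sum_r a_r o b_r o c_r is an assumption
# made for illustration only.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Synthetic data: N samples, each a third-order tensor of shape (I, J, K).
N, I, J, K, R = 1024, 8, 8, 8, 4
X = torch.randn(N, I, J, K, device=device)

# Targets generated from hidden CP factors, so the model is recoverable.
At, Bt, Ct = (torch.randn(d, R, device=device) for d in (I, J, K))
y = torch.einsum("nijk,ir,jr,kr->n", X, At, Bt, Ct)

# Learnable CP factors of the weight tensor.
A = torch.randn(I, R, device=device, requires_grad=True)
B = torch.randn(J, R, device=device, requires_grad=True)
C = torch.randn(K, R, device=device, requires_grad=True)

opt = torch.optim.Adam([A, B, C], lr=1e-2)
for step in range(500):
    # Inner product <X_n, W> computed without ever materializing W.
    pred = torch.einsum("nijk,ir,jr,kr->n", X, A, B, C)
    loss = torch.mean((pred - y) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 100 == 0:
        print(f"step {step:4d}  mse {loss.item():.4f}")
```

Running the script on a CUDA-capable machine executes every step on the GPU; on a machine without one, the identical code falls back to the CPU, which is the kind of like-for-like comparison the abstract's experiments report.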