Journal: 電子情報通信学会技術研究報告. ニューロコンピューティング (IEICE Technical Report. Neurocomputing)

Parallel learning of neural networks on PC-cluster systems using mini-batch learning schema

Abstract

Due to the continuing development of hardware and network technology, it has become common to have huge data sets. When processing such data with neural networks, learning takes a long time. In this paper, we propose parallel learning that distributes the task across two or more CPUs. In parallel learning, a mini-batch learning schema is indispensable, because the entire data set must be divided into subsets, each of which is allocated to one of the parallelized neural networks. In parallel learning, the redundancy of the data is a key concept. When there is no redundancy in the data, learning accuracy might decrease because of the bias in each subset. On the other hand, when the data has much redundancy, the bias in each subset is small and parallel learning might yield high efficiency. In this paper, we study a parallel learning procedure using the mini-batch learning schema and investigate the relationship between the efficiency of parallel learning and data redundancy. It is quite common for huge data sets to have some redundancy, and we expect our procedure to work in a variety of applications.
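The procedure the abstract describes amounts to data-parallel mini-batch gradient descent: split the data set into disjoint subsets, let each worker compute a mini-batch gradient on its own subset, and combine the local gradients into one shared update. Below is a minimal single-machine sketch of that schema in NumPy; the linear model, the gradient-averaging step, and all names are illustrative assumptions, since the abstract does not specify the paper's network model or cluster communication.

# Minimal single-machine simulation of data-parallel mini-batch learning:
# the data set is split into disjoint subsets, one per simulated worker
# (CPU); each worker computes a gradient on a mini-batch drawn from its
# own subset, and the local gradients are averaged into a shared update.
# The linear-regression model and all names here are illustrative; the
# paper's actual network model and PC-cluster communication are not shown.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, fairly redundant data: many samples of one linear target.
X = rng.normal(size=(4000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 3.0, -1.0]) + 0.01 * rng.normal(size=4000)

n_workers = 4
# Divide the entire data set into one subset per worker.
subsets = np.array_split(rng.permutation(len(X)), n_workers)

w = np.zeros(5)                 # shared model parameters
lr, batch, epochs = 0.05, 32, 20

for _ in range(epochs):
    for _ in range(len(X) // (n_workers * batch)):
        grads = []
        for idx in subsets:                      # each worker, in parallel on a cluster
            mb = rng.choice(idx, size=batch, replace=False)
            err = X[mb] @ w - y[mb]              # mini-batch prediction error
            grads.append(X[mb].T @ err / batch)  # local mini-batch gradient
        w -= lr * np.mean(grads, axis=0)         # average gradients, shared update

print("learned weights:", np.round(w, 2))

The sketch also hints at the redundancy argument: when the data is highly redundant, each subset is statistically similar to the whole set, so every local gradient approximates the full-data gradient and averaging loses little; without redundancy, each local gradient is biased toward its own subset.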
