電子情報通信学会技術研究報告. ニューロコンピューティング (IEICE Technical Report, Neurocomputing)

Parallel learning of neural networks on PC-cluster systems using mini-batch learning schema


Abstract

With the continuing advances in hardware and network technology, huge data sets have become common. When such data are processed by neural networks, learning takes a long time. In this paper, we propose parallel learning that distributes the task across two or more CPUs. In parallel learning, the mini-batch learning schema is indispensable, because the entire data set must be divided into subsets, each of which is allocated to one of the parallelized neural networks. Data redundancy is a key concept in parallel learning. When the data contain no redundancy, learning accuracy might decrease because of the bias present in each subset. On the other hand, when the data are highly redundant, the bias in each subset would be small and parallel learning might yield high efficiency. In this paper, we study a parallel learning procedure using the mini-batch learning schema and investigate the relationship between the efficiency of parallel learning and data redundancy. It is quite common for huge data sets to have some redundancy, so we expect our procedure to work in a variety of applications.
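The abstract outlines a data-parallel scheme: divide the data set into subsets, let each CPU learn from its own subset, and combine the workers' results. The paper itself does not give the combination rule, so below is only a minimal sketch of one plausible synchronous variant with gradient averaging, using a plain linear model in place of a neural network. The worker count, learning rate, and all function names are illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def worker_gradient(w, X, y):
    # Gradient of 0.5 * mean squared error on one worker's subset.
    return X.T @ (X @ w - y) / len(y)

# Synthetic, deliberately redundant data: each sample is repeated 10
# times, so every subset stays close to representative of the whole set,
# mirroring the high-redundancy case the abstract describes.
X = np.repeat(rng.normal(size=(200, 5)), 10, axis=0)
true_w = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = X @ true_w + 0.01 * rng.normal(size=len(X))

n_workers = 4  # stand-ins for the CPUs of the PC cluster (assumed count)
w = np.zeros(5)
subsets = np.array_split(rng.permutation(len(X)), n_workers)

for epoch in range(100):
    # On a real cluster each worker would compute its gradient in
    # parallel on its own subset; this loop stands in for that step.
    grads = [worker_gradient(w, X[idx], y[idx]) for idx in subsets]
    w -= 0.1 * np.mean(grads, axis=0)  # synchronous averaging update

print("learned weights:", np.round(w, 2))  # should approach true_w
```

With redundant data, each subset's gradient is close to the full-batch gradient, so the averaged update loses little accuracy; removing the redundancy (e.g. dropping the `np.repeat`) makes each subset's gradient noisier, which is the bias effect the abstract warns about.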

