An improved mini-batching technique: Sample-and-Learn

Abstract

Artificial neural networks suffer from prolonged training times, a problem that intensifies as the volume of data grows. Mini-batching has become the standard approach to training neural networks: by reducing the amount of data used in each training iteration, it greatly shortens training time, even in big-data environments. Techniques such as parallel computation have made mini-batching a necessity for complex neural network models. Owing to their simplicity, however, mini-batching methods have a number of inherent disadvantages that can affect a model's accuracy and convergence. In this work, we focus on the ordering of the samples presented to a neural network and propose a random sampling approach for generating mini-batches in linear time. Experimental results show that networks using our proposed Sample-and-Learn approach converge in fewer iterations while providing comparable or better accuracy.
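
The page carries only the abstract, so the sketch below is not the authors' Sample-and-Learn algorithm; it merely illustrates the idea the abstract names, generating randomly ordered mini-batches in linear time, here via a single Fisher-Yates shuffle of the sample indices. The function name `sample_minibatches` and its parameters are hypothetical, not taken from the paper.

```python
import random

def sample_minibatches(data, batch_size):
    """Yield mini-batches of `data` in a freshly randomized order.

    One in-place shuffle of the index list costs O(n) (Fisher-Yates),
    and slicing out consecutive batches afterwards touches each sample
    exactly once, so a full epoch of mini-batches is produced in
    linear time. (Illustrative only; not the paper's exact method.)
    """
    indices = list(range(len(data)))
    random.shuffle(indices)  # Fisher-Yates shuffle: O(n)
    for start in range(0, len(indices), batch_size):
        yield [data[i] for i in indices[start:start + batch_size]]

# Usage sketch: one epoch of training on randomly ordered mini-batches.
dataset = list(range(1000))  # stand-in for real training samples
for batch in sample_minibatches(dataset, batch_size=32):
    pass  # the forward/backward pass on `batch` would go here
```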
