JMLR: Workshop and Conference Proceedings

Batch-Expansion Training: An Efficient Optimization Framework

Abstract

We propose Batch-Expansion Training (BET), a framework for running a batch optimizer on a gradually expanding dataset. As opposed to stochastic approaches, batches do not need to be resampled i.i.d. at every iteration, thus making BET more resource efficient in a distributed setting, and when disk access is constrained. Moreover, BET can be easily paired with most batch optimizers, does not require any parameter-tuning, and compares favorably to existing stochastic and batch methods. We show that when the batch size grows exponentially with the number of outer iterations, BET achieves the optimal O(1/ε) data-access convergence rate for strongly convex objectives. Experiments in parallel and distributed settings show that BET performs better than standard batch and stochastic approaches.
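As a rough illustration of the scheme the abstract describes, the Python sketch below runs a batch optimizer on a data prefix that grows geometrically across outer iterations, rather than resampling a fresh batch at every step. This is not the authors' implementation: the function name `bet`, the initial batch fraction, the inner step count and learning rate, and the use of plain gradient descent as the stand-in batch optimizer are all assumptions made for illustration.

```python
import numpy as np

def bet(grad_fn, x0, data, inner_steps=20, lr=0.1, growth=2.0):
    # Batch-Expansion Training, sketched: run a batch optimizer on a data
    # prefix that expands geometrically each outer iteration.  Plain gradient
    # descent stands in here for an arbitrary batch optimizer (assumption).
    x = np.array(x0, dtype=float)
    n = len(data)
    batch_size = max(1, n // 64)       # illustrative initial batch size (assumption)
    while True:
        batch = data[:batch_size]      # fixed prefix: no i.i.d. resampling per step
        for _ in range(inner_steps):   # inner loop of the batch optimizer
            x = x - lr * grad_fn(x, batch)
        if batch_size >= n:            # full dataset reached: stop expanding
            return x
        batch_size = min(n, int(growth * batch_size))  # exponential expansion

# Hypothetical usage on least squares, with grad_fn the average gradient:
# A, b = np.random.randn(1000, 10), np.random.randn(1000)
# data = np.hstack([A, b[:, None]])
# grad = lambda x, d: d[:, :-1].T @ (d[:, :-1] @ x - d[:, -1]) / len(d)
# x_hat = bet(grad, np.zeros(10), data)
```

Because each outer iteration reuses the same stored prefix, the inner optimizer can read the data sequentially from disk or keep it resident across workers, which is where the claimed resource efficiency in distributed and disk-constrained settings comes from.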