International Conference on High Performance Computing and Simulation

Enhancing machine learning optimization algorithms by leveraging memory caching


Abstract

Searching a solution space with Stochastic Gradient Descent (SGD) depends on the examples picked at each iteration of the algorithm. Best practice therefore suggests randomizing the order in which training points are visited after every epoch. This random selection is typically implemented as a random shuffle of the training vectors rather than a genuinely random selection of training points. Because the shuffle is performed after every epoch, temporal locality of access to the training set is extremely low: each training point is used exactly once per epoch, and not again before all the other training points have been visited. As a result, a cache layer in the memory hierarchy of a modern HPC system offers little benefit to the algorithm unless the entire training set fits inside that cache.
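As a concrete illustration of the access pattern described above, the following is a minimal sketch of single-example SGD with per-epoch shuffling, written here in plain NumPy; the function name, the least-squares gradient, and all parameters are illustrative assumptions, not code from the paper.

import numpy as np

def sgd_epoch_shuffle(X, y, w, lr=0.01, epochs=3):
    # Illustrative sketch (not the paper's code): plain SGD where the
    # visit order is reshuffled after every epoch, as described above.
    n = X.shape[0]
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        order = rng.permutation(n)  # fresh shuffle each epoch
        for i in order:
            # Single-example least-squares gradient, chosen only for
            # concreteness; any per-example loss shows the same pattern.
            grad = (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    return w

The point of the sketch is the loop structure: `order` cycles through all n indices before any index repeats, so the reuse distance of every training vector equals the size of the dataset, and a cache smaller than X sees essentially no hits.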
