Operating Systems Review

Taming Hyper-parameters in Deep Learning Systems


Abstract

Deep learning (DL) systems expose many tuning parameters ("hyper-parameters") that affect the performance and accuracy of trained models. Increasingly, users struggle to configure hyper-parameters, and a substantial portion of training time is spent tuning them empirically. We argue that future DL systems should be designed to help manage hyper-parameters. We describe how a distributed DL system can (i) remove the impact of hyper-parameters on both performance and accuracy, thus making it easier to decide on a good setting, and (ii) support more powerful dynamic policies for adapting hyper-parameters, which take monitored training metrics into account. We report results from prototype implementations that show the practicality of DL system designs that are hyper-parameter-friendly.
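To make the idea of a "dynamic policy" concrete, here is a minimal, illustrative sketch (not the paper's actual design; the class name, thresholds, and plateau heuristic are all assumptions) of a policy that adapts one hyper-parameter, the learning rate, based on a monitored training metric:

```python
# Illustrative sketch only: a dynamic hyper-parameter policy that
# adapts the learning rate when a monitored training loss plateaus.
# All names and defaults here are hypothetical, not from the paper.

class PlateauPolicy:
    """Halve the learning rate when the monitored loss stops improving."""

    def __init__(self, lr=0.1, patience=3, factor=0.5, min_delta=1e-4):
        self.lr = lr
        self.patience = patience    # epochs without improvement before adapting
        self.factor = factor        # multiplicative learning-rate decay
        self.min_delta = min_delta  # minimum change that counts as improvement
        self.best = float("inf")
        self.stale = 0

    def update(self, loss):
        """Feed one monitored loss value; return the (possibly adapted) lr."""
        if loss < self.best - self.min_delta:
            self.best = loss
            self.stale = 0
        else:
            self.stale += 1
            if self.stale >= self.patience:
                self.lr *= self.factor
                self.stale = 0
        return self.lr


policy = PlateauPolicy(lr=0.1, patience=3)
for observed_loss in [1.0, 0.9, 0.9, 0.9, 0.9]:
    lr = policy.update(observed_loss)
# After three stale epochs the policy halves the learning rate to 0.05.
```

The point of the sketch is the feedback loop: the training system reports a monitored metric to the policy each epoch, and the policy returns the hyper-parameter value to use next, rather than the user fixing it up front.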
