Improving the Efficient Neural Architecture Search via Rewarding Modifications

Abstract

A current challenge for the scientific community in deep learning is to design architectural models that obtain the best performance on specific datasets. Building effective models is not a trivial task and can be very time-consuming when done manually. Neural Architecture Search (NAS) has achieved remarkable results in deep learning applications in the past few years. It trains a recurrent neural network (RNN) controller with Reinforcement Learning (RL) to automatically generate architectures. Efficient Neural Architecture Search (ENAS) was created to address the prohibitively expensive computational cost of NAS through weight sharing. In this paper we propose Improved-ENAS (I-ENAS), a further improvement of ENAS that augments the reinforcement learning training method by modifying the reward of each tested architecture according to the results obtained by previously tested architectures. We have conducted many experiments on different public-domain datasets and demonstrated that I-ENAS in the worst case reaches the performance of ENAS, while in many other cases it surpasses ENAS in the convergence time needed to achieve better accuracies.
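The abstract does not spell out the exact reward-modification rule. The sketch below illustrates one plausible reading in plain Python: the controller reward for a newly sampled architecture is its validation accuracy, adjusted against the accuracies of previously tested architectures. The function name `shape_reward`, the choice of the best accuracy seen so far as the comparison baseline, and the `bonus` coefficient are all hypothetical and not taken from the paper.

```python
from typing import List

def shape_reward(val_acc: float, history: List[float], bonus: float = 0.1) -> float:
    """Return a shaped controller reward for one sampled architecture.

    val_acc : validation accuracy of the newly tested architecture
    history : accuracies of previously tested architectures
    bonus   : hypothetical coefficient weighting the adjustment
    """
    if not history:
        return val_acc                    # no previous results yet
    best_so_far = max(history)            # one possible summary of past results
    # Raise the reward when the new architecture beats the best previous one,
    # lower it otherwise, nudging the controller toward improvements.
    return val_acc + bonus * (val_acc - best_so_far)

# Usage in an ENAS-style controller loop (accuracies below are made up):
history: List[float] = []
for val_acc in [0.71, 0.74, 0.73, 0.78]:
    reward = shape_reward(val_acc, history)
    history.append(val_acc)
    # `reward` would then drive the REINFORCE update of the RNN controller.
```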
