International Conference on Machine Learning

TaskNorm: Rethinking Batch Normalization for Meta-Learning



Abstract

Modern meta-learning approaches for image classification rely on increasingly deep networks to achieve state-of-the-art performance, making batch normalization an essential component of meta-learning pipelines. However, the hierarchical nature of the meta-learning setting presents several challenges that can render conventional batch normalization ineffective, giving rise to the need to rethink normalization in this setting. We evaluate a range of approaches to batch normalization for meta-learning scenarios, and develop a novel approach that we call TaskNorm. Experiments on fourteen datasets demonstrate that the choice of batch normalization has a dramatic effect on both classification accuracy and training time for both gradient-based and gradient-free meta-learning approaches. Importantly, TaskNorm is found to consistently improve performance. Finally, we provide a set of best practices for normalization that will allow fair comparison of meta-learning algorithms.
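The core idea described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' exact implementation: normalize each feature using a blend of moments computed from the current task's support (context) set and a set of stored running moments, weighted by a blend factor `alpha`. In the paper the blend factor is learned and tied to the context-set size; here it is simply passed in as a scalar, and all names are placeholders.

```python
import numpy as np

def tasknorm_sketch(x, running_mean, running_var, alpha, eps=1e-5):
    """Hypothetical TaskNorm-style normalization sketch.

    x:            support-set activations, shape (n_examples, n_features)
    running_mean: stored moments from meta-training, shape (n_features,)
    running_var:  stored moments from meta-training, shape (n_features,)
    alpha:        blend factor in [0, 1]; alpha=1 recovers transductive
                  batch norm over the task, alpha=0 uses only running stats.
    """
    # Moments computed from the current task's support set.
    task_mean = x.mean(axis=0)
    task_var = x.var(axis=0)

    # Convex combination of task-level and stored moments.
    mean = alpha * task_mean + (1.0 - alpha) * running_mean
    var = alpha * task_var + (1.0 - alpha) * running_var

    # Standard normalization with the blended moments.
    return (x - mean) / np.sqrt(var + eps)
```

With `alpha=1.0` this reduces to conventional batch normalization computed over the support set alone, which makes the prediction transductive; smaller values of `alpha` shift weight toward the non-transductive stored statistics, which is the trade-off the paper's evaluation explores.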

